- Front Matter: Volume 6511
- Small Animal Imaging
- Optical Imaging
- Image Analysis I
- Image Analysis II
- Virtual Endoscopy I: Virtual Bronchoscopy and Related Methods
- Virtual Endoscopy II: CT Colonography
- Lung Imaging
- MRI Brain Analysis
- Mechanical Properties and Elastography
- Vessel Imaging and Dynamics
- Cardiac and Aortic Imaging
- Poster Session
Front Matter: Volume 6511
This PDF file contains the front matter associated with SPIE Proceedings Volume 6511, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Small Animal Imaging
In vivo small animal imaging for early assessment of therapeutic efficacy of photodynamic therapy for prostate cancer
We are developing in vivo small animal imaging techniques that can measure early effects of photodynamic therapy
(PDT) for prostate cancer. PDT is an emerging therapeutic modality that continues to show promise in the treatment
of cancer. At our institution, a new second-generation photosensitizing drug, the silicon phthalocyanine Pc 4, has been
developed and evaluated at the Case Comprehensive Cancer Center. In this study, we are developing magnetic
resonance imaging (MRI) techniques that provide therapy monitoring and early assessment of tumor response to PDT.
We generated human prostate cancer xenografts in athymic nude mice. For the imaging experiments, we used a high-field
9.4-T small animal MR scanner (Bruker Biospec). High-resolution MR images were acquired from the treated
and control tumors pre- and post-PDT and 24 hr after PDT. We utilized multi-slice multi-echo (MSME) MR
sequences. During imaging acquisitions, the animals were anesthetized with a continuous supply of 2% isoflurane in
oxygen and were continuously monitored for respiration and temperature. After imaging experiments, we manually
segmented the tumors on each image slice for quantitative image analyses. We computed three-dimensional T2 maps
for the tumor regions from the MSME images. We plotted the histograms of the T2 maps for each tumor pre- and
post-PDT and 24 hr after PDT. After the imaging and PDT experiments, we dissected the tumor tissues and used the
histologic slides to validate the MR images. In this study, six mice with human prostate cancer tumors were imaged
and treated at the Case Center for Imaging Research. The T2 values of treated tumors increased by 24 ± 14% at 24 hr
after the therapy. The control tumors did not demonstrate significant changes in T2 values. Inflammation and
necrosis were observed within the treated tumors 24 hr after the treatment. Preliminary results show that Pc 4-PDT
is effective for the treatment of human prostate cancer in mice. The small animal MR imaging provides a useful tool
to evaluate early tumor response to photodynamic therapy in mice.
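The per-voxel T2 mapping described above can be sketched in a few lines. This is an illustrative log-linear least-squares fit, not the authors' implementation; the array shapes, echo times, and noise floor are assumptions.

```python
import numpy as np

def fit_t2_map(echoes, echo_times):
    """Voxel-wise T2 estimation from multi-slice multi-echo (MSME) data.

    echoes: (n_echoes, ny, nx) magnitude images; echo_times: seconds.
    Uses the log-linear model ln S(TE) = ln S0 - TE/T2, solved by
    ordinary least squares independently at every voxel.
    """
    te = np.asarray(echo_times, dtype=float)
    s = np.log(np.maximum(echoes, 1e-12)).reshape(len(te), -1)
    design = np.vstack([te, np.ones_like(te)]).T
    coef, *_ = np.linalg.lstsq(design, s, rcond=None)
    slope = coef[0]                      # slope = -1/T2
    with np.errstate(divide="ignore"):
        t2 = np.where(slope < 0, -1.0 / slope, np.nan)
    return t2.reshape(echoes.shape[1:])
```

A T2 histogram per tumor, as in the study, would then be a `np.histogram` over the map restricted to the segmented tumor mask.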
In vivo-CT system with respiratory and cardiac gating using synchrotron radiation
The interest in using small animal models of human disease has produced a need to design a CT system with microscopic resolution comparable to that achievable with clinical CT in humans. In this study, we developed a high-resolution in vivo CT system with respiratory and cardiac gating using synchrotron radiation. The system was constructed at BL20B2 at SPring-8, a third-generation synchrotron radiation source in Hyogo, Japan, which provides much higher X-ray flux than a laboratory X-ray source. Another advantage of synchrotron monochromatic CT is the minimization of beam-hardening effects, which pose serious problems when using white X-rays. Since the X-ray beam from the synchrotron source is parallel, each horizontal line corresponds to a slice position along the rotation axis, and multiple slices are easily obtained in one rotation (3D-CT). For in vivo scanning, the X-ray mechanical shutter and CCD electrical shutter were synchronized with airway pressure (respiratory) and electrocardiographic (ECG) signals. Synchronization reduced the motion artifacts caused by respiration and heartbeat, markedly improving visualization of the edges of the heart, ribs and diaphragm. In particular, small airways (diameter > 300 µm) and cerebral blood vessels were visualized clearly. This system is very useful for evaluating lung physiology and cardiovascular mechanics in vivo.
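The double (respiratory plus cardiac) gating decision can be illustrated with a minimal function; the pressure threshold, cardiac delay, and acquisition window below are invented parameters, not the settings used at SPring-8.

```python
def exposure_allowed(pressure, r_peak_times, t, p_thresh, delay, window):
    """Double-gating decision for one exposure at time t.

    Opens the shutters only when the airway pressure is below p_thresh
    (quiet end-expiration) AND the time since the most recent ECG
    R-peak lies in [delay, delay + window], i.e. a fixed phase of the
    cardiac cycle. All thresholds are illustrative.
    """
    past = [r for r in r_peak_times if r <= t]
    if not past:
        return False
    since_r = t - past[-1]
    return pressure < p_thresh and delay <= since_r <= delay + window
```

In the real system this decision drives the X-ray mechanical shutter and the CCD electrical shutter simultaneously.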
Computer-based analysis of microvascular alterations in a mouse model for Alzheimer's disease
Vascular factors associated with Alzheimer's disease (AD) have recently gained increased attention. To investigate changes in vascular, particularly microvascular architecture, we developed a hierarchical imaging framework to obtain large-volume, high-resolution 3D images from brains of transgenic mice modeling AD. In this paper, we present imaging and data analysis methods which allow compiling unique characteristics from several hundred gigabytes of image data. Image acquisition is based on desktop micro-computed tomography (µCT) and local synchrotron-radiation µCT (SRµCT) scanning with a nominal voxel size of 16 µm and 1.4 µm, respectively. Two visualization approaches were implemented: stacks of Z-buffer projections for fast data browsing, and progressive-mesh based surface rendering for detailed 3D visualization of the large datasets. In a first step, image data was assessed visually via a Java client connected to a central database. Identified characteristics of interest were subsequently quantified using global morphometry software. To obtain even deeper insight into microvascular alterations, tree analysis software was developed providing local morphometric parameters such as the number of vessel segments or vessel tortuosity. In the context of ever-increasing image resolution and large datasets, computer-aided analysis has proven both powerful and indispensable. The hierarchical approach maintains the context of local phenomena, while proper visualization and morphometry provide the basis for detailed analysis of structure-related pathology. Beyond the analysis of microvascular changes in AD, this framework will have significant impact, considering that vascular changes are involved in other neurodegenerative diseases as well as in cancer, cardiovascular disease, asthma, and arthritis.
Three-dimensional murine airway segmentation in micro-CT images
Thoracic imaging for small animals has emerged as an important tool for monitoring pulmonary disease progression
and therapy response in genetically engineered animals. Micro-CT is becoming the standard thoracic
imaging modality in small animal imaging because it can produce high-resolution images of the lung parenchyma,
vasculature, and airways. Segmentation, measurement, and visualization of the airway tree is an important step
in pulmonary image analysis. However, manual analysis of the airway tree in micro-CT images can be extremely
time-consuming since a typical dataset is usually on the order of several gigabytes in size. Automated and
semi-automated tools for micro-CT airway analysis are desirable. In this paper, we propose an automatic airway
segmentation method for in vivo micro-CT images of the murine lung and validate our method by comparing
the automatic results to manual tracing. Our method is based primarily on grayscale morphology. The results
show good visual matches between manually segmented and automatically segmented trees. The average true
positive volume fraction compared to manual analysis is 91.61%. The overall runtime for the automatic method
is on the order of 30 minutes per volume compared to several hours to a few days for manual analysis.
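A minimal sketch of grayscale-morphology airway detection follows, assuming the lumen appears dark against brighter surrounding tissue; the structuring-element size, threshold, and largest-component heuristic are illustrative choices, not the authors' published pipeline.

```python
import numpy as np
from scipy import ndimage as ndi

def airway_candidates(volume, size=5, thresh=100):
    """Candidate airway lumen from a grayscale black top-hat.

    Grayscale closing fills dark (air-filled) tubes narrower than the
    structuring element; closing minus original (the black top-hat)
    therefore responds strongly inside the lumen. The largest connected
    component of the thresholded top-hat is kept as the airway tree.
    """
    closed = ndi.grey_closing(volume, size=(size,) * 3)
    tophat = closed.astype(np.int32) - volume.astype(np.int32)
    mask = tophat > thresh
    labels, n = ndi.label(mask)
    if n == 0:
        return mask
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```

On a real micro-CT volume this would only be the first stage; validation against manual tracing, as in the paper, is still required.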
Automated segmentation of the ex vivo mouse brain
In biological image processing the segmentation of a volume is, although tedious, required for many applications, such as the comparison of structures and annotation. To automate this process, we present a segmentation method for various structures of the mouse brain. The segmentation consists of two parts: first, a rough affine atlas-based registration was performed and, second, the edges between structures were refined by an adapted Markov random field clustering approach. The segmentation results were compared to manual segmentations from two experts. The presented automatic segmentation method is quick, intuitive and suitable for registration purposes, but also for biological objectives such as comparison and annotation.
Optical Imaging
Modulated imaging: a novel method for quantifying tissue chromophores in evolving cerebral ischemia
The authors report the results of utilizing spatially-modulated near infrared light using Modulated Imaging (MI) technology in imaging cerebral ischemia. MI images of the left parietal somatosensory cortex were obtained post-occlusion and up to three hours following middle cerebral artery occlusion. Tissue chromophore maps were obtained to demonstrate spatiotemporal changes in the distribution of oxy-, deoxy-, and total hemoglobin and oxygen saturation. MI recorded a decrease in oxyhemoglobin concentration and tissue oxygen saturation and an increase in tissue deoxyhemoglobin concentration following occlusion. Optical intrinsic signal was used to detect functional activation of the somatosensory barrel cortex to whisker stimulation. This activation was completely lost following occlusion. Imaging findings in a transient ischemic attack induced by photothrombosis are also demonstrated.
Image Analysis I
Retinal oxygen saturation evaluation by multi-spectral fundus imaging
Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye.
Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed with a commercial fundus camera equipped with a liquid crystal tunable filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Images at separate wavelengths that were slightly misaligned by eye motion were corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue between the artery/vein pairs were evaluated by an algorithm previously described, but now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script.
Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects.
Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans.
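The saturation computation can be illustrated, in simplified form, as a linear unmixing of oxy- and deoxyhemoglobin extinction spectra; the paper's actual algorithm (including its blood-volume correction) differs, and the coefficients below are placeholders, not real extinction values.

```python
import numpy as np

def oxygen_saturation(od, eps_hbo2, eps_hb):
    """Estimate SO2 from multi-wavelength optical densities.

    od: measured optical density at each wavelength (Beer-Lambert);
    eps_hbo2 / eps_hb: extinction coefficients of oxy- and
    deoxyhemoglobin at the same wavelengths. Solves the unmixing
    OD = c1*eps_hbo2 + c2*eps_hb by least squares;
    SO2 = c1 / (c1 + c2).
    """
    a = np.column_stack([eps_hbo2, eps_hb])
    (c1, c2), *_ = np.linalg.lstsq(a, np.asarray(od, float), rcond=None)
    return c1 / (c1 + c2)
```

Applying this at every registered pixel would yield a saturation map of the kind described above.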
Quantifying mucosal blood volume fraction from multispectral images of the colon
One of the common physiological changes associated with cancer is the formation of a dense, irregular and leaky network of new blood vessels, which results in an increase of the blood volume fraction (BVF) at the site of a tumour. Such changes are not always obvious on visual inspection by direct observation, an endoscopic device or colour photography. This paper presents a method for deriving quantitative estimates of the BVF of the colon mucosa from multispectral images of the colon. The method has two stages. In the first ("forward") stage a physics-based model of light propagation computes the spectra corresponding to a range of instances of the colon tissue, and in particular the spectral changes resulting from changes in blood volume fraction, haemoglobin saturation, the size and density of scattering particles, and tissue thickness. In the second ("model inversion") stage the spectra obtained from the image data are used to derive the values of the above histological parameters. Parametric maps of blood content are created by storing at every pixel the BVF value recovered through the model inversion. In a pilot study multispectral images of ex vivo samples of the colon were acquired from 8 patients. The samples contained histologically confirmed instances of adenocarcinoma and other pathologies. The parametric maps of BVF showed a significant increase in blood volume fraction (up to 75% above that of the surrounding normal tissue). A Mann-Whitney test with Bonferroni correction showed that all but one of the differences (a benign neoplastic polyp) were significant (p < 0.00015).
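The two-stage forward/inversion scheme can be sketched with a deliberately simplified one-parameter forward model; the absorption curve `mu` below is invented for illustration and is not the paper's physics-based light-propagation model.

```python
import numpy as np

def invert_spectrum(measured, param_grid, forward):
    """Model inversion by exhaustive search over a parameter grid.

    forward(p) -> predicted spectrum for blood volume fraction p.
    Returns the grid value whose prediction is closest (L2) to the
    measured spectrum, mirroring the forward/inversion split on a
    single parameter.
    """
    errs = [np.linalg.norm(forward(p) - measured) for p in param_grid]
    return param_grid[int(np.argmin(errs))]

# Hypothetical single-absorber forward model (Beer-Lambert-like):
# attenuation grows with blood volume fraction; mu is made up.
wavelengths = np.linspace(500, 600, 20)
mu = 1.0 + 0.01 * (wavelengths - 500)

def toy_forward(bvf):
    return np.exp(-bvf * mu)
```

Storing the recovered value at every pixel gives the parametric BVF maps described above.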
Discrete color-based Euclidean-invariant signatures for feature tracking in a DIET breast cancer screening system
A Digital Image-based Elasto-Tomography (DIET) system for breast cancer screening has been proposed in
which the elastic properties of breast tissue are recovered by solving an inverse problem on the surface motion of
a breast under low frequency (50-100 Hz) mechanical actuation. The proposed means for capturing the surface
motion of the breast in 3D is to use a stroboscope to capture images from multiple digital cameras at preselected
phase angles. Photogrammetric techniques are then used to reconstruct matched point features in 3D.
Since human skin lacks high contrast visual features, it is necessary to introduce artificial fiducials which can
be easily extracted from digital images. The chosen fiducials are points in three different colours in differing
proportions randomly applied to the skin surface. A three-dimensional signature which is invariant to locally
Euclidean transformations between images is defined on the points of the lowest proportion colour. The approximate local Euclidean invariance between adjacent frames enables these points to be matched using this signature.
The remaining points are matched by interpolating the transformation of the matched points. This algorithm
has significant performance gains over conventional gradient-based tracking algorithms because it utilises the
intrinsic problem geometry.
Successful results are presented for simulated image sequences and for images of a mechanically actuated
viscoelastic gel phantom with tracking errors within 3 pixels. The errors in the phantom sequence correspond to
less than 0.3 mm error in space, which is more than sufficient accuracy for the DIET system.
Estimating number of fiber directions per voxel for ICA DTI tractography
Recently, we have shown that Independent Component Analysis (ICA) can be used to recover up to three distinct fiber directions per voxel from diffusion MRI data with 25 gradient directions. One prerequisite of our ICA approach is that the number of fiber directions per voxel be known. In this paper, we present a method to extract voxels with zero to three fiber directions and classify them accordingly using diffusion MRI data. The approach relies on SPM-segmented white matter images as well as diffusion anisotropy values per voxel. K-means segmentation and constrained non-linear optimization techniques are used to classify voxels into one to three fiber directions. The diffusion model for optimization is based on the hierarchy of diffusion characteristics. The method was tested on a healthy human subject, and the resulting fiber maps are consistent with the underlying brain anatomy.
Neural mass model parameter identification for MEG/EEG
Electroencephalography (EEG) and magnetoencephalography (MEG) have excellent time resolution. However,
the poor spatial resolution and small number of sensors do not permit reconstruction of a general spatial activation
pattern. Moreover, the low signal to noise ratio (SNR) makes accurate reconstruction of a time course also
challenging. We therefore propose to use constrained reconstruction, modeling the relevant part of the brain
using a neural mass model: there is a small number of zones that are treated as entities, and neurons within a zone
are assumed to be activated simultaneously. The location and spatial extent of the zones, as well as the interzonal
connection pattern can be determined from functional MRI (fMRI), diffusion tensor MRI (DTMRI), and
other anatomical and brain mapping observation techniques. The observation model is linear, its deterministic
part is known from EEG/MEG forward modeling, the statistics of the stochastic part can be estimated. The
dynamics of the neural model is described by a moderate number of parameters that can be estimated from the
recorded EEG/MEG data. We explicitly model the long-distance communication delays. Our parameters have
physiological meaning and their plausible range is known. Since the problem is highly nonlinear, a quasi-Newton
optimization method with random sampling and automatic success evaluation is used. The actual connection
topology can be identified from several possibilities. The method was tested on synthetic data as well as on true
MEG somatosensory-evoked field (SEF) data.
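The quasi-Newton optimization with random sampling can be sketched as a multi-start scheme over the physiologically plausible parameter box; the paper's success-evaluation step is reduced here to keeping the best converged optimum, and the deterministic midpoint start is an added assumption.

```python
import numpy as np
from scipy.optimize import minimize

def identify_parameters(objective, bounds, n_starts=20, seed=0):
    """Quasi-Newton parameter identification with random sampling.

    Starts are drawn uniformly from the plausible box 'bounds' (plus
    one deterministic start at the box midpoint); L-BFGS-B refines
    each, and the best optimum found is kept.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    starts = [(lo + hi) / 2.0] + [rng.uniform(lo, hi) for _ in range(n_starts)]
    best = None
    for x0 in starts:
        res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best
```

For the real problem, `objective` would be the misfit between simulated and recorded EEG/MEG time courses under a given connection topology.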
Image Analysis II
Automated hierarchical partitioning of anatomical trees
A robust, fast, and generally applicable algorithm is presented for splitting anatomical trees, such as vessel and airway trees, into meaningful subtrees. It relies on a straightforward mathematical objective function and produces subjectively very satisfactory results. The algorithm is applicable to unstructured 2D or 3D voxel sets or to undirected graphs of centerlines with unknown anatomical root point, as produced by unsupervised segmentation algorithms. The automated tree splitting improves clinical tree segmentation tasks by replacing tedious manual three-dimensional navigation and editing.
Clinical applications of three-dimensional tortuosity metrics
The measurement of abnormal vascular tortuosity is important in the diagnosis of many diseases. Metrics based on three-dimensional (3-D) curvature, using approximate polynomial spline-fitting to "data balls" centered along the mid-line of the vessel, minimize digitization errors and give tortuosity values largely independent of the resolution of the imaging system. In order to establish their clinical validity we applied them to a number of clinical vascular systems, using both 2-D datasets (standard angiograms and retinal images) and 3-D datasets (from computed tomography angiography (CTA) and magnetic resonance angiography (MRA)). Using the abdominal aortograms we found that the metrics correlated well with the ranking of an expert panel of three vascular surgeons. Both the mean curvature and the root-mean-square curvature provided good discrimination between vessels of different tortuosity, and using a data-ball size of one-quarter of the local vessel radius in the spline fitting gave consistent results. Tortuous retinal vessels resulting from retinitis or diabetes, but not from vasculitis, could be distinguished from normal vessels. Tortuosity values based on 3-D data sets were higher than those of their 2-D projections, and could easily be computed in automatic measurement. They produced values sufficiently discriminating to assess the relative utility of arteries for endoluminal repair of aneurysms.
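The curvature-based metrics can be sketched with a single parametric spline standing in for the paper's data-ball fitting; the evaluation density and smoothing below are illustrative choices.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def tortuosity_metrics(points, smooth=0.0, n_eval=500):
    """Mean and root-mean-square curvature of a 3-D vessel mid-line.

    points: (3, N) mid-line coordinates. A parametric smoothing spline
    is fitted and the curvature k = |r' x r''| / |r'|^3 is evaluated
    densely along it (the formula is parameterization-invariant).
    """
    tck, _ = splprep([points[0], points[1], points[2]], s=smooth)
    u = np.linspace(0.0, 1.0, n_eval)
    d1 = np.array(splev(u, tck, der=1))     # (3, n_eval)
    d2 = np.array(splev(u, tck, der=2))
    k = np.linalg.norm(np.cross(d1.T, d2.T), axis=1) \
        / np.linalg.norm(d1, axis=0) ** 3
    return k.mean(), np.sqrt((k ** 2).mean())
```

For a circular arc of radius R both metrics should return approximately 1/R, which makes a convenient sanity check.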
Analysis of trabecular bone architectural changes induced by osteoarthritis in rabbit femur using 3D active shape model and digital topology
Osteoarthritis (OA) is the most common chronic joint disease, which causes the cartilage between the bone joints to
wear away, leading to pain and stiffness. Currently, progression of OA is monitored by measuring joint space width
using x-ray or cartilage volume using MRI. However, OA affects all periarticular tissues, including cartilage and bone.
It has been shown previously that in animal models of OA, trabecular bone (TB) architecture is particularly affected.
Furthermore, relative changes in architecture are dependent on the depth of the TB region with respect to the bone
surface and main direction of load on the bone. The purpose of this study was to develop a new method for accurately
evaluating 3D architectural changes induced by OA in TB. Determining the TB test domain that represents the same
anatomic region across different animals is crucial for studying disease etiology, progression and response to therapy. It
also represents a major technical challenge in analyzing architectural changes. Here, we solve this problem using a new
active shape model (ASM)-based approach. A new and effective semi-automatic landmark selection approach has been
developed for rabbit distal femur surface that can easily be adopted for many other anatomical regions. It has been
observed that, on average, a trained operator can complete the user-interaction part of the landmark specification process in
less than 15 minutes for each bone data set. Digital topological analysis and fuzzy distance transform derived
parameters are used for quantifying TB architecture. The method has been applied on micro-CT data of excised rabbit
femur joints from anterior cruciate ligament transected (ACLT) (n = 6) and sham (n = 9) operated groups collected at
two and two-to-eight weeks post-surgery, respectively. An ASM of the rabbit right distal femur has been generated from
the sham group micro-CT data. The results suggest that, in conjunction with ASM, digital topological parameters are
suitable for analyzing architectural changes induced by OA.
Benchmarking nonrigid techniques for hepato-pulmonary motion mapping
Physiological activities like respiration, as well as interventional procedures, non-linearly alter the structural and
functional configuration of the hepato-pulmonary system. Structurally, respiration-induced motion poses a significant
obstacle to precise target localization for minimally invasive hepato-pulmonary procedures. Current motion-compensating
approaches with image-guided advance-and-check intraoperative systems are inadequate. Spatiotemporal
augmentation of intraoperative images with motion maps derived from preoperative scans will provide a reliable
roadmap for successful intervention. However, judicious choice of deformable techniques is required to accurately
capture the organ specific motion. In this paper, we evaluate a number of oft-cited deformable registration techniques in
terms of deformation quality, algorithmic convergence and per-iteration cost. Recommendations are proposed based on
the convergence measures and smoothness of the motion maps.
Quantification of glomerular filtration rate by measurement of gadobutrol clearance from the extracellular fluid volume: comparison of a TurboFLASH and a TrueFISP approach
Purpose: As the MR contrast-medium gadobutrol is completely eliminated via glomerular filtration, the glomerular filtration rate (GFR) can be quantified after bolus-injection of gadobutrol and complete mixing in the extracellular fluid volume (ECFV) by measuring the signal decrease within the liver parenchyma. Two different navigator-gated single-shot saturation-recovery sequences have been tested for suitability of GFR quantification: a TurboFLASH and a TrueFISP readout technique.
Materials and Methods: Ten healthy volunteers (mean age 26.1±3.6 years) were equally divided into two subgroups. After bolus-injection of 0.05 mmol/kg gadobutrol, coronal single-slice images of the liver were recorded every 4-5 seconds during free breathing using either the TurboFLASH or the TrueFISP technique. Time-intensity curves were determined from manually drawn regions-of-interest over the liver parenchyma. Both sequences were subsequently evaluated regarding signal-to-noise ratio (SNR) and the behaviour of the signal intensity curves. The calculated GFR values were compared to an iopromide clearance gold standard.
Results: The TrueFISP sequence exhibited a 3.4-fold higher SNR than the TurboFLASH sequence and markedly lower variability of the recorded time-intensity curves. The calculated mean GFR values were 107.0±16.1 ml/min/1.73m2 (iopromide: 92.1±14.5 ml/min/1.73m2) for the TrueFISP technique and 125.6±24.1 ml/min/1.73m2 (iopromide: 97.7±6.3 ml/min/1.73m2) for the TurboFLASH approach. The mean paired difference from the iopromide standard was lower with TrueFISP (15.0 ml/min/1.73m2) than with the TurboFLASH method (27.9 ml/min/1.73m2).
Conclusion: The global GFR can be quantified via measurement of gadobutrol clearance from the ECFV. A saturation-recovery TrueFISP sequence allows for more reliable GFR quantification than a saturation-recovery TurboFLASH technique.
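The clearance computation underlying both sequences can be sketched as a mono-exponential fit, assuming the measured signal is proportional to gadobutrol concentration after mixing in the ECFV; the numbers in use would come from the liver time-intensity curves.

```python
import numpy as np

def gfr_from_clearance(times, conc, ecfv_ml):
    """GFR from tracer wash-out of the extracellular fluid volume.

    After complete mixing, gadobutrol concentration decays as
    C(t) = C0 * exp(-k t); since glomerular filtration is the only
    elimination route, GFR = k * ECFV. k is obtained by a log-linear
    least-squares fit. times in min, ecfv_ml in ml -> GFR in ml/min.
    """
    t = np.asarray(times, float)
    y = np.log(np.asarray(conc, float))
    slope, _ = np.polyfit(t, y, 1)      # slope = -k
    return -slope * ecfv_ml
```

Normalizing the result to 1.73 m2 body surface area, as in the reported values, would be a final scaling step.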
Virtual Endoscopy I: Virtual Bronchoscopy and Related Methods
A model of respiratory airway motion for real-time tracking of an ultrathin bronchoscope
Deformable registration of chest CT scans taken of a subject at various phases of respiration provides a direct
measure of the spatially varying displacements that occur in the lung due to breathing. This respiratory motion
was studied as part of the development of a CT-based guidance system for a new electromagnetically tracked
ultrathin bronchoscope. Fifteen scans of an anesthetized pig were acquired at five distinct lung pressures between
full expiration and full inspiration. Deformation fields were computed by non-rigid registration using symmetric
"demons" forces followed by Gaussian regularization in a multi-resolution framework. Variants of the registration
scheme were tested including: initial histogram matching of input images, degree of field smoothing during
regularization, and applying an adaptive smoothing method that weights elements of the smoothing kernel by
the magnitude of the image gradient. Registration quality was quantified and compared using inverse and
transitive consistency metrics. After optimizing the algorithm parameters, deformation fields were computed by
registering each image in the set to a baseline image. Registration of the baseline image at full inspiration to
an image at full expiration produced the maximum deformation. Two hypotheses were made: first, that each
deformation could be modeled as a mathematical sub-multiple of the maximum deformation, and second, that
the deformation scales linearly with respiratory pressure. The discrepancy between the deformation measured by
image registration and that predicted by the linear model was 1.25 mm on average. At maximum deformation,
this motion compensation constitutes an 87% reduction in respiration-induced localization error.
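The two hypotheses above combine into a one-line model; the helper names and field shapes here are illustrative assumptions.

```python
import numpy as np

def predicted_field(p, p_max, d_max):
    """Linear respiratory-motion model: the deformation field at airway
    pressure p is the scalar sub-multiple (p / p_max) of the
    full-inspiration field d_max (the paper's second hypothesis)."""
    return (p / p_max) * d_max

def mean_discrepancy_mm(d_measured, d_model):
    """Mean voxel-wise displacement error between a registration-derived
    and a model-predicted deformation field of shape (..., 3)."""
    return float(np.linalg.norm(d_measured - d_model, axis=-1).mean())
```

Evaluating `mean_discrepancy_mm` at each intermediate pressure is what yields the 1.25 mm average discrepancy quoted above.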
A method for bronchoscope tracking using position sensor without fiducial markers
This paper proposes a method for tracking a bronchoscope using a position sensor without fiducial markers.
Recently, a very small electromagnetic position sensor has become available that can be inserted into the bronchoscope's
working channel to obtain bronchoscope camera motion. In most tracking methods using position
sensors, registration is performed using the positions of fiducial markers attached to a patient's body. However,
these methods need to measure the positions of fiducial markers on both the actual patient's body and the
reference image, such as a CT image of the patient. Therefore, we propose a method for bronchoscope tracking
without fiducial markers that estimates a transformation matrix between the actual patient's body and the CT
image taken prior to bronchoscope examination. This estimation is performed by computing the correspondences
between the outputs of the position sensor and the bronchi regions extracted from the CT image. We applied the proposed method to a rubber bronchial model. Experimental results showed that the average target registration error over the five bronchial branches was about 3.0 mm at minimum, and that the proposed method tracked the bronchoscope camera in real time.
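Once correspondences between sensor outputs and CT-space centerline points are established, the body-to-CT transformation can be estimated in closed form. The Kabsch solver below is a generic stand-in for that estimation step, not the paper's exact method, and assumes the correspondences are already known.

```python
import numpy as np

def rigid_align(sensor_pts, ct_pts):
    """Closed-form rigid alignment (Kabsch) of corresponding point sets.

    sensor_pts, ct_pts: (N, 3) corresponding points. Returns (R, t)
    such that ct ~= R @ sensor + t, with a determinant check to
    exclude reflections.
    """
    p = np.asarray(sensor_pts, float)
    q = np.asarray(ct_pts, float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, qc - r @ pc
```

The hard part of the paper's marker-free approach is finding those correspondences from the sensor trajectory and the extracted bronchial tree; the alignment itself is then standard.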
Method for continuous guidance of endoscopy
Previous research has indicated that use of guidance systems during endoscopy can improve the performance
and decrease the skill variation of physicians. Current guidance systems, however, rely on
computationally intensive registration techniques or costly and error-prone electromagnetic (E/M)
registration techniques, neither of which fit seamlessly into the clinical workflow. We have previously
proposed a real-time image-based registration technique that addresses both of these problems. We
now propose a system-level approach that incorporates this technique into a complete paradigm for
real-time image-based guidance in order to provide a physician with continuously-updated navigational
and guidance information. At the core of the system is a novel strategy for guidance of endoscopy. Additional
elements such as global surface rendering, local cross-sectional views, and pertinent distances
are also incorporated into the system to provide additional utility to the physician. Phantom results
were generated using bronchoscopy performed on a rapid prototype model of a human tracheobronchial
airway tree. The system has also been tested in ongoing live human tests. Thus far, ten such tests,
focused on bronchoscopic intervention of pulmonary patients, have been run successfully.
Airway wall thickness assessment: a new functionality in virtual bronchoscopy investigation
While classic virtual bronchoscopy offers visualization facilities for investigating the shape of the inner airway
wall surface, it provides no information regarding the local thickness of the wall. Such information may be
crucial for evaluating the severity of remodeling of the bronchial wall in asthma and to guide bronchial biopsies
for staging of lung cancers. This paper develops a new virtual bronchoscopy functionality that estimates the
bronchus wall thickness, maps it onto the lumen wall surface, and displays it
as coded colors during endoluminal navigation. The local bronchus wall thickness estimation relies on a
new automated 3D segmentation approach using strong 3D morphological filtering and model-fitting. Such
an approach reconstructs the inner/outer airway wall surfaces from multi-detector CT data as follows. First,
the airway lumen is segmented and its surface geometry reconstructed using either a restricted Delaunay or a
Marching Cubes based triangulation approach. The lumen mesh is then locally deformed in the surface normal
direction under specific force constraints which stabilize the model evolution at the level of the outer bronchus
wall surface. The developed segmentation approach was validated against both 3D mathematically simulated
image phantoms of bronchus-vessel subdivisions and state-of-the-art cross-section area estimation
techniques when applied to clinical data. The investigation in virtual bronchoscopy mode is further enhanced by
encoding the local wall thickness at each vertex of the lumen surface mesh and displaying it during navigation,
according to a specific color map.
3D adaptive model-based segmentation of human vessels
We introduce an adaptive model fitting approach for the segmentation of vessels from 3D tomographic images.
With this approach the shape and size of the 3D region-of-interest (ROI) used for model fitting are automatically
adapted to the local width, curvature, and orientation of a vessel to increase the robustness and accuracy. The
approach uses a 3D cylindrical model and has been successfully applied to segment human vessels from 3D
MRA image data. Our experiments show that the new adaptive scheme yields superior segmentation results in
comparison to using a fixed size ROI. Moreover, a validation of the approach based on ground-truth provided
by a radiologist confirms its accuracy. In addition, we also performed an experimental comparison of the new
approach with a previous scheme.
Virtual Endoscopy II: CT Colonography
Colonoscopy simulation
Show abstract
Effective colonoscopic screening for polyps with optical or virtual means requires adequate visualization of the entire colon surface. The purpose of this study is to investigate by simulation the degree of colon surface coverage during a routine optical colonoscopy (OC). To simulate OC, a generic wide angle and fisheye camera model is used to calibrate the fisheye lens of an Olympus endoscope with a field of view of 140 degrees. Then, the colonoscopy procedure is simulated using volume rendering fly-through along the hugging corner path in the retrograde direction. This shortest path is computed using the segmented and cleansed colon CT datasets. A large number of virtual fisheye cameras are placed along the shortest path to simulate the OC. At each camera position, a discrete volumetric ray-casting method is used to determine which triangles can be seen from the camera. Then, the percentage of the covered colon surface of the OC simulation is computed. Surface coverage at this point may serve as a rough estimate of readily visualized mucosa in a standard OC examination. We also compute the percentage of the covered colon surface for the virtual colonoscopy (VC) by placing virtual pinhole cameras on the central path of the colon and flying in only the antegrade direction as well as flying in both antegrade and retrograde directions. Our simulation study reveals that about 23% of the colon surface is missed in the standard OC examination and about 9% of the colon surface is missed in the VC examination when navigating in both directions.
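The surface-coverage computation described above can be sketched in simplified form. The snippet below replaces the paper's discrete volumetric ray casting with a plain field-of-view and front-facing test on triangle centroids (no occlusion handling); the function names and toy geometry are hypothetical:

```python
import numpy as np

def coverage_fraction(tri_centroids, tri_normals, cam_positions, cam_dirs, fov_deg=140.0):
    """Estimate colon-surface coverage: a triangle counts as 'seen' if it
    falls inside some camera's field of view and faces that camera.
    Simplified stand-in for volumetric ray casting (ignores occlusion)."""
    half_fov = np.deg2rad(fov_deg) / 2.0
    seen = np.zeros(len(tri_centroids), dtype=bool)
    for pos, d in zip(cam_positions, cam_dirs):
        v = tri_centroids - pos                      # camera-to-triangle vectors
        dist = np.linalg.norm(v, axis=1)
        v_hat = v / dist[:, None]
        in_fov = np.arccos(np.clip(v_hat @ d, -1, 1)) < half_fov
        facing = np.einsum('ij,ij->i', tri_normals, -v_hat) > 0  # front-facing only
        seen |= in_fov & facing
    return seen.mean()
```

In the study, cameras are placed densely along the computed path, and the fraction of triangles never marked as seen gives the missed-surface percentage (about 23% for OC, 9% for bidirectional VC).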
Colonic wall thickness using level sets for CT virtual colonoscopy visual assessment and polyp detection
Show abstract
The detection of polyps in virtual colonoscopy is an active area of research. One of the critical elements in detecting
cancerous polyps using virtual colonoscopy, especially in conjunction with computer-aided detection, is the accurate
segmentation of the colon wall. The large CT attenuation difference between the lumen and inner, mucosal layer of the
colon wall makes the segmentation of the lumen easily performed by traditional threshold segmentation techniques.
However, determining the location of the colon outer wall is often difficult due to the low contrast difference between
the colon wall's outer serosal layer and the fat surrounding the colon. We have developed an automatic, level set based
method to determine from a CT colonography scan the location of the colon inner boundary and the colon outer wall
boundary. From the location of the inner and outer colon wall boundaries, the wall thickness throughout the colon can
be computed. Color mapping of the wall thickness on the colon surface allows for easy visual determination of
potential regions of interest. Since the colon wall tends to be thicker at polyp locations, potential polyps also can be
detected automatically at sites of increased colon wall thickness. This method was validated on several CT
colonography scans containing optical colonoscopy-proven polyps. The method accurately determined thicker colonic
wall regions in areas where polyps are present in the ground truth datasets and detected the polyps at a false positive rate
between 44.4% and 82.8% lower than a state-of-the-art curvature-based method for initial polyp detection.
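With the inner and outer wall boundaries located, the wall thickness reported above reduces to a nearest-boundary distance at each inner-surface vertex. A minimal sketch using sampled boundary points in place of the paper's level-set representation (names hypothetical):

```python
import numpy as np

def wall_thickness(inner_pts, outer_pts):
    """For each vertex on the inner (mucosal) boundary, approximate local wall
    thickness as the distance to the nearest point on the outer (serosal)
    boundary. Simplified stand-in for the level-set formulation."""
    d = np.linalg.norm(inner_pts[:, None, :] - outer_pts[None, :, :], axis=2)
    return d.min(axis=1)
```

Thickness values can then be color-mapped onto the colon surface, and sites of locally increased thickness flagged as polyp candidates.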
Slice-based guided navigation for colonography
Show abstract
A simple and efficient method for guiding 2D-image reading for colon screening is proposed. It provides visual
feedback by highlighting the region of interest in the current 2D cross section and indicates the direction in which to
scroll based on the anatomical structure of the colon given by the centerline. Unobserved areas are calculated using a
region growing algorithm and displayed in a 3D view to guarantee a complete inspection. This technique is intended to
significantly reduce any chance of inadvertently skipping over portions of the colon in the inspection process and to
generate faster examination times. The visual feedback can also be used as a guided learning tool for inexperienced
radiologists.
Using the teniae coli as a registration tool in CT colonography
Show abstract
We have found greater difficulty achieving desirable sensitivities and specificities with our computer-aided detection
(CAD) system on polyps sized 6-9 mm. Missed polyps in our ground truth CAD training datasets could be one possible
cause. Most CT colonography (CTC) protocols require supine and prone scans; therefore, the number of polyps visible to
a radiologist in at least one scan may increase. However, registration of a specific polyp visible in both scans can prove
difficult without a uniform coordinate system. Using a teniae coli registration tool, we hypothesized that we could register
and find a statistically significant number of 6-9 mm polyps believed to be not findable in one scan, thereby
reducing error in the training data and enabling better training of our CAD system. Database queries yielded 20 polyps
initially believed to be not findable in one scan. The teniae coli navigation and registration system allowed us to identify
30% (6/20) of the polyps as matches with confidence in both scans (rating 1) and 10% (2/20) of the polyps with a
potential match with some uncertainty (rating 2). No convincing match was found for 60% (12/20) of polyps (rating 3).
We conclude that this teniae coli registration tool is an effective means of identifying and reducing ground truth data
errors in 6-9 mm polyps initially believed not findable in one scan. The use of this tool has the potential to improve the
performance of a CAD system on the more difficult 6-9 mm polyps.
Gain by mixture-based image segmentation for virtual colonoscopy with colonic material tagging
Show abstract
Computed tomography-based virtual colonoscopy or CT colonography (CTC) currently utilizes oral contrast solution to
differentiate the colonic fluid and possibly residual stool from the colon wall. The enhanced image density of the tagged
colonic materials causes a significant partial volume (PV) effect into the colon wall as well as the lumen space (air or
CO2). The PV effect into the colon wall can "bury" polyps of small size by increasing their image densities to a
noticeable level, resulting in false negatives. It can also create false positives when the PV effect extends into the lumen space.
Modeling the PV effect for mixture-based image segmentation has been a research topic for many years. This paper
presents the practical implementation of our newly developed statistical image segmentation framework, which utilizes
the EM (expectation-maximization) algorithm to estimate (1) tissue fractions in each image voxel and (2) statistical
model parameters of the image under the principle of maximum a posteriori probability (MAP). This partial-volume
expectation-maximization (PV-EM) mixture-based MAP image segmentation pipeline was tested on 52 CTC datasets
downloaded from the website of the VC Screening Resource Center, with each dataset consisting of two scans of supine
and prone positions, resulting in 104 CT volume images. The cleansed lumens by the automated PV-EM image
segmentation algorithm were visualized with comparison to our previous work, with the gain achieved mainly in the
following three aspects: (1) the tissue fraction information of voxels with PV effect has been well preserved, (2)
the problem of incomplete cleansing of tagged materials in our previous work has been mitigated, and (3) the
interference caused by the small bowel was significantly reduced.
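The mixture idea behind PV-EM can be illustrated with a toy two-class EM that alternates between estimating per-voxel class fractions (E-step) and re-estimating model parameters (M-step). This is a minimal Gaussian-mixture sketch, not the paper's full MAP formulation; the fixed sigma and initial means are illustrative assumptions:

```python
import numpy as np

def em_two_class(intensities, mu=(0.0, 1.0), sigma=0.3, n_iter=50):
    """Toy EM for a two-class Gaussian mixture: returns per-voxel posterior
    'tissue fractions' and the re-estimated class means. A minimal sketch of
    the mixture idea behind PV-EM, not the paper's full MAP pipeline."""
    mu = np.array(mu, dtype=float)
    pi = np.array([0.5, 0.5])
    x = intensities[:, None]
    for _ in range(n_iter):
        # E-step: responsibilities (soft tissue fractions per voxel)
        lik = pi * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        frac = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and class means
        pi = frac.mean(axis=0)
        mu = (frac * x).sum(axis=0) / frac.sum(axis=0)
    return frac, mu
```

The per-voxel fractions are what allow "cleansing" without destroying partial-volume information at the lumen-wall interface.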
Electronic stool subtraction using quadratic regression, morphological operations, and distance transforms
Show abstract
CT colonography (CTC) is being extensively studied for its potential value in colon examinations, since it offers
many advantages such as lower risk and less patient discomfort. However, CTC, like all other types of full structural
colorectal examinations to date, requires complete bowel preparation. The inconvenience and discomfort associated
with this preparation is an important obstacle to compliance with currently recommended colorectal screening
guidelines. To maximize compliance, CTC would ideally be performed on an unprepared colon. However, in an
unprepared colon residual stool and fluid can mimic soft tissue density and thus confound the identification of
polyps. An alternative is to tag the stool with an opacifying agent so that it is brighter than soft tissue and thus easily
recognized automatically and then reset to air values. However, such electronic stool subtraction in a totally
unprepared colon is difficult to perform accurately for several reasons, including poorly labeled areas of stool, the
need to accurately quantify partial volume effects, and noise. In this study the qualitative performance of a novel
stool subtraction algorithm was assessed in unprepared CT colonography screening exams of 26 consecutive
volunteers. Results showed that nearly all stool was removed in 62% of the cases, fold erosion was mild or non-existent
in 75% of the cases, and wall erosion was mild or non-existent in 100% of cases. Although further study
and refinement of the stool subtraction process is required, CT colonography of the unprepared colon with electronic
stool subtraction is feasible.
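A heavily simplified sketch of the subtraction step: threshold the bright tagged material and reset it, plus a one-voxel partial-volume rim, to air values. The actual algorithm uses quadratic regression, morphological operations, and distance transforms; the threshold and rim width here are illustrative assumptions:

```python
import numpy as np

def subtract_tagged_stool(vol, tag_thresh=200, air_hu=-1000):
    """Minimal sketch of electronic stool subtraction: threshold the bright
    tagged material, then dilate the mask by one voxel along each axis so the
    partial-volume rim around the stool is also reset to air. Threshold and
    rim width are illustrative, not the paper's values."""
    stool = vol > tag_thresh
    rim = stool.copy()
    for axis in range(vol.ndim):                  # 1-voxel dilation per axis
        rim |= np.roll(stool, 1, axis) | np.roll(stool, -1, axis)
    out = vol.copy()
    out[rim] = air_hu
    return out
```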
Lung Imaging
In vivo quantification of human lung dose response relationship
Show abstract
Purpose: To implement a new non-invasive in-vivo assay to compute the dose-response relationship following radiation-induced
injury to normal lung tissue, using computed tomography (CT) scans of the chest.
Methods and Materials: Follow-up volumetric CT scans were acquired in patients with metastatic tumors to the lung
treated using stereotactic radiation therapy. The images reveal a focal region of fibrosis corresponding to the high-dose
region and no observable long-term damage in distant sites. For each pixel in the follow-up image the treatment dose
and the change in apparent tissue density was compiled. For each of 12 pre-selected dose levels the average pixel tissue
density change was computed and fit to a two-parameter dose-response model. The sensitivity of the resulting fits to
registration error was also quantified.
Results: Complete in vivo dose-response relationships in human normal lung tissue were computed. Increasing radiation
sensitivity was found with larger treatment volume. Radiation sensitivity increased also over time up to 12 months, but
decreased at later time points. The time-course of dose response correlated with the time-course of levels of circulating
IL-1α, TGFβ and MCP-1. The method was found to be robust to registration errors up to 3 mm.
Conclusions: This approach for the first time enables the quantification of the full range dose response relationship in
human subjects. The method may be used to assess quantitatively the efficacy of various agents thought to elicit
radiation protection to the lung.
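The per-dose-level fitting step might look like the following sketch. The abstract states only a two-parameter model, so the saturating-exponential form d(D) = a(1 - exp(-bD)) below is an assumption, and the coarse grid search stands in for a proper nonlinear least-squares fit:

```python
import numpy as np

def fit_dose_response(dose, delta_density):
    """Fit a two-parameter saturating model d(D) = a * (1 - exp(-b * D)) to
    mean density change per dose level by a coarse grid search. The model
    form is illustrative; the paper only states a two-parameter model."""
    best = (np.inf, None, None)
    for a in np.linspace(1, 100, 100):
        for b in np.linspace(0.01, 1.0, 100):
            resid = delta_density - a * (1 - np.exp(-b * dose))
            sse = float(resid @ resid)
            if sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]
```

In the study this fit is repeated per follow-up time point and treatment volume, which is how the time- and volume-dependence of radiation sensitivity is quantified.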
Surface based cardiac and respiratory motion extraction for pulmonary structures from multi-phase CT
Show abstract
During medical imaging and therapeutic interventions, pulmonary structures are in general subject to cardiac
and respiratory motion. This motion leads potentially to artefacts and blurring in the resulting image material
and to uncertainties during interventions. This paper presents a new automatic approach for surface based
motion tracking of pulmonary structures and reports on the results for cardiac and respiratory induced motion.
The method applies an active shape approach to ad-hoc generated surface representations of the pulmonary
structures for phase-to-phase surface tracking. The input to the method is multi-phase CT data, either cardiac- or
respiratory-gated. The iso-surface representing the transition from air or lung parenchyma to soft tissue
is triangulated for a selected phase p0. An active shape procedure is initialised in the image of phase p1 using
the generated surface in p0. The used internal energy term penalizes shape deformation as compared to p0.
The process is iterated for all phases pi to pi+1 of the complete cycle. Since the mesh topology is the same for
all phases, the vertices of the triangular mesh can be treated as pseudo-landmarks defining tissue trajectories.
A dense motion field is interpolated. The motion field was especially designed to estimate the error margins
for radiotherapy. In the case of respiratory motion extraction, a validation on ten biphasic thorax CT images
(2.5mm slice distance) was performed with expert landmarks placed at vessel bifurcations. The mean error on
landmark position was below 2.6mm. We further applied the method to ECG gated images and estimated the
influence of the heart beat on lung tissue displacement.
The effect of lung orientation on functional imaging of blood flow
Show abstract
Advancing technology has enabled rapid improvements in imaging and image processing techniques providing
increasing amounts of structural and functional information. While these imaging modalities now offer a wealth of
information about function within the body in health and disease certain limitations remain. We believe these can
largely be addressed through a combined medical imaging - computational modeling approach. For example, imaging
may only be performed in the prone or supine postures but humans function naturally in the upright position. We have
developed an image-based computational model of coupled tissue mechanics and pulmonary blood flow to enable
predictions of pulmonary perfusion in various postures and lung volumes. Lung and vascular geometries are derived
using a combination of imaging reconstruction and computational algorithms. Solution of finite deformation equations
provides predictions of tissue deformation and internal pressure distributions within the lung parenchyma. By
embedding vascular models within the lung volume we obtain a coupled model of blood vessel deformation as a result
of changes in lung volume. A 1D form of the Navier-Stokes flow equations is solved within the vascular model to
predict perfusion. Tissue pressures calculated from the mechanics model are incorporated into the vascular constitutive
pressure-radius relationship. Results demonstrated a relatively consistent flow distribution in all postures indicating the
large influence of branching structure on flow distribution. It is hoped that this modeling approach may provide insights
to enable interpolation of imaging measurements in alternate postures and lung volumes and enable an increased
understanding of the mechanisms influencing pulmonary perfusion distribution.
Automated detection of mucus plugs within bronchial tree in MSCT images
Show abstract
Pulmonary diseases characterized by chronic airway inflammation, such as Chronic Obstructive Pulmonary Disease (COPD),
result in abnormal bronchial wall thickening, lumen dilatation and mucus plugs. Multi-Slice Computed Tomography
(MSCT) allows for assessment of these abnormalities, even in airways that are obliquely oriented to the scan plane.
Chronic airway inflammation typically results in limitations of airflow, allowing for the accumulation of mucus,
especially in the distal airways. In addition to obstructing airways, retained secretions make the airways prone to
infection. Patients with chronic airway disease are clinically followed over time to assess disease progression and
response to treatment. In this regard, the ability to obtain an automatic standardized method to rapidly and objectively
assess the entire airway tree morphologically, including the extent of mucus plugging, would be of particular clinical
value. We have developed a method to automatically detect the presence and location of mucus plugs within the
peripheral airways. We first start with segmentation of the bronchial tree using a previously developed method. The
skeleton-based tree structure is then computed and each terminal branch is individually extended using an adaptive
threshold algorithm. We compute a local 2-dimensional model, based on airway luminal diameter and wall thickness.
We then select a few points along the principal axis beyond the terminal branches, to extract 2D cross sections for
correlation with a model of mucus plugging. Airway shape is validated with a correlation value, and the lumen
distribution is analyzed and compared to the model. A high correlation indicates the presence of a mucus plug. We tested
our method on 5 datasets containing a total of 40 foci of mucoid impaction. Preliminary results show sensitivity of
77.5% with a specificity of 98.2% and positive predictive value of 66%.
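The correlation step against the mucus-plug model can be sketched as a normalized cross-correlation between a 2D cross-section and a template; the disc template below is a hypothetical stand-in for the paper's lumen model:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a 2D airway cross-section and a
    mucus-plug template; values near 1 flag a likely plug. A minimal sketch
    of the correlation step, with a hypothetical disc template."""
    a = patch - patch.mean()
    b = template - template.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def disc_template(size=15, radius=5):
    """Hypothetical filled-disc template for an airway lumen cross-section."""
    y, x = np.mgrid[:size, :size] - size // 2
    return (x**2 + y**2 <= radius**2).astype(float)
```

A correlation above some threshold at points beyond a terminal branch would indicate a plug-like lumen distribution.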
Novel method and applications for labeling and identifying lymph nodes
Show abstract
The lymphatic system comprises a series of interconnected lymph nodes that are commonly distributed along branching
or linearly oriented anatomic structures. Physicians must evaluate lymph nodes when staging cancer and planning
optimal paths for nodal biopsy. This process requires accurately determining the lymph node's position with respect to
major anatomical landmarks. In an effort to standardize lung cancer staging, The American Joint Committee on Cancer
(AJCC) has classified lymph nodes within the chest into 4 groups and 14 sub-groups. We present a method for
automatically labeling lymph nodes according to this classification scheme, in order to improve the speed and accuracy
of staging and biopsy planning. Lymph nodes within the chest are clustered around the major blood vessels and the
airways. Our fully automatic labeling method determines the nodal group and sub-group in chest CT data by use of
computed airway and aorta centerlines to produce features relative to a given node location. A classifier then determines
the label based upon these features. We evaluate the efficacy of the method on 10 chest CT datasets containing 86
labeled lymph nodes. The results are promising with 100% of the nodes assigned to the correct group and 76% to the
correct sub-group. We anticipate that additional features and training data will further improve the results. In addition to
labeling, other applications include automated lymph node localization and visualization. Although we focus on chest
CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
MRI Brain Analysis
Partial correlation mapping of brain functional connectivity with resting state fMRI
Show abstract
The methods to detect resting state functional connectivity presented so far mainly focus on Pearson correlation
analysis which calculates Pearson Product Moment correlation coefficient between the time series of two distinct
voxels or regions to measure the functional dependency between them. Due to artifacts and noise in the
data, functional connectivity maps resulting from Pearson correlation analysis risk reflecting the
correlation of interfering signals rather than the neural sources. In this paper, partial correlation analysis is
proposed to map resting state functional connectivity. By eliminating the contributions of interfering signals
to pairwise correlations between different voxels or regions, partial correlation analysis allows us to measure the
real functional connectivity induced by neural activity. Experiments with real fMRI data demonstrate that
mapping functional connectivity with partial correlation analysis removes a considerable part
of the networks obtained with Pearson correlation analysis, leaving small but
consistent networks. The results indicate that partial correlation analysis could provide a better mapping of
brain functional connectivity than Pearson correlation analysis.
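The core computation is standard: regress the interfering signal out of both time series, then correlate the residuals. A minimal single-confound sketch:

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation between time series x and y after regressing out
    the interfering signal z from both; a minimal sketch of the idea in the
    abstract (single confound, residual-based formulation)."""
    def residual(a, b):
        b = np.column_stack([b, np.ones(len(b))])   # confound + intercept
        coef, *_ = np.linalg.lstsq(b, a, rcond=None)
        return a - b @ coef
    rx, ry = residual(x, z), residual(y, z)
    return float(np.corrcoef(rx, ry)[0, 1])
```

When two voxels correlate only because both carry the same interfering signal, the Pearson coefficient is high but the partial correlation collapses toward zero.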
A novel approach to analyzing fMRI and SNP data via parallel independent component analysis
Jingyu Liu,
Godfrey Pearlson,
Vince Calhoun,
et al.
Show abstract
There is current interest in understanding genetic influences on brain function in both the healthy and the disordered
brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper
and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The
method aims to identify the independent components of each modality and the relationship between the two modalities.
We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy
controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component
consists of activations in cingulate gyrus, multiple frontal gyri, and superior temporal gyrus. The related SNP
component is contributed to significantly by 9 SNPs located in sets of genes, including those coding for apolipoproteins
A-I and C-III, malate dehydrogenase 1, and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in
the presence of this SNP component is found between the SZ group (SZ patients and their relatives) and the control
group. In summary, we constructed a framework to identify the interactions between brain functional and genetic
information; our findings provide new insight into understanding genetic influences on brain function in a common
mental disorder.
Real-time fMRI-based activation analysis and stimulus control
Show abstract
The real-time analysis of brain activation using functional MRI data offers a wide range of new experiments such
as investigating self-regulation or learning strategies. However, besides special data acquisition and real-time data
analysing techniques, such examinations require dynamic and adaptive stimulus paradigms and self-optimising
MRI sequences.
This paper presents an approach that enables the unified handling of parameters influencing the different software
systems involved in the acquisition and analysing process. By developing a custom-made Experiment Description
Language (EDL) this concept is used for a fast and flexible software environment which treats aspects like
extraction and analysis of activation as well as the modification of the stimulus presentation. We describe how
extracted real-time activation is subsequently evaluated by comparing activation patterns to previously acquired
templates representing activated regions of interest for different predefined conditions. According to these results,
the stimulus presentation is adapted.
The results showed that the developed system in combination with EDL is able to reliably detect and evaluate
activation patterns in real-time. With a processing time for data analysis of about one second the approach is
only limited by the natural time course of the hemodynamic response function of the brain activation.
Detection of fine-scale activity patterns by integration of information in local regions
Show abstract
The widely used statistical parametric mapping approach standardly performs spatial smoothing of the data with a Gaussian
kernel (GK) to improve signal-to-noise ratio and statistical power. However, the best filtering depends
on the shape of the activation regions, which is irregular in nature and not well matched by a constant GK. As a
result, smoothing the data with a GK will obscure fine-scale patterns of weak effects that contain neuroscientifically relevant information. To improve the sensitivity of activation detection, in the present work a multivariate
statistical technique (PCA) and a univariate statistical technique (GLM) were combined to discover
fine-grained activity patterns. The time courses from each locally homogeneous region were first integrated with
PCA; then, a GLM was used to construct the statistics of interest. The approach implicitly takes account
of the structure of both the BOLD signal and the noise present in local regions. Therefore, it can highlight details
of different regions. Experiments with real fMRI data demonstrate that the proposed technique can dramatically
increase the sensitivity of detection of fine-scale brain activity patterns that contain subtle information
about the experimental conditions.
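The two-stage analysis can be sketched directly: summarize a region's voxel time courses by their first principal component, then fit a GLM to that summary. Note that the sign of a principal component is arbitrary, so only the magnitude of the effect is meaningful in this toy version:

```python
import numpy as np

def region_activation(region_ts, regressor):
    """Integrate a local region's voxel time courses with PCA (first principal
    component), then fit a one-regressor GLM to the component; a minimal
    sketch of the PCA-then-GLM combination described above."""
    ts = region_ts - region_ts.mean(axis=0)       # time x voxels, demeaned
    u, s, vt = np.linalg.svd(ts, full_matrices=False)
    pc1 = u[:, 0] * s[0]                          # regional summary time course
    x = np.column_stack([regressor, np.ones(len(regressor))])
    beta, *_ = np.linalg.lstsq(x, pc1, rcond=None)
    return beta[0]                                # sign is arbitrary (PCA)
```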
Dimensionality estimation for group fMRI data reduction at multiple levels
Sharon Chen,
Thomas J. Ross,
Keh-Shih Chuang,
et al.
Show abstract
Current techniques substantially overestimate the dimensionality of group fMRI data, and this problem worsens
when principal component analysis (PCA) based data reductions are applied at multiple levels. In this paper, the
mechanism of the overestimation is investigated, and a new method is developed for more reliable dimensionality
estimation for group fMRI data at multiple levels. Simulation suggests that small variation of the signal components
within a group is a major cause of dimensionality overestimation. To obtain an improved estimation, appropriate
colored noise is added into the group fMRI data in order to blur the signal component variations. The noise
parameters are estimated from the original fMRI data, and the improved dimensionality is determined by applying a
first-order autoregressive (AR(1)) noise fitting technique to the PCA spectrum. The proposed method was tested on
group resting-state fMRI datasets acquired from 14 normal human subjects in 5 different sessions. The PCA-based
data reductions were performed at 3 levels in either "individual-session-subject" or "individual-subject-session"
order. Results indicate that the proposed method significantly reduces the dimensionality overestimation for multiple
level data reductions. Consistency of the estimated dimensionalities is observed with different group orders of the
data reduction.
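The colored-noise step can be illustrated by estimating AR(1) parameters from a series via its lag-1 autocorrelation and drawing a matched noise sample. This is a sketch of the idea only, not the paper's fitting of an AR(1) noise model to the PCA spectrum:

```python
import numpy as np

def ar1_fit_and_sample(x, n, rng):
    """Estimate AR(1) parameters (phi, innovation std) from a series by its
    lag-1 autocorrelation, then draw a matched noise series; a sketch of the
    'appropriate colored noise' step in the abstract."""
    xc = x - x.mean()
    phi = float((xc[:-1] @ xc[1:]) / (xc @ xc))               # lag-1 autocorr
    sigma = float(np.std(xc) * np.sqrt(max(1.0 - phi**2, 1e-12)))
    out = np.zeros(n)
    for t in range(1, n):
        out[t] = phi * out[t - 1] + sigma * rng.standard_normal()
    return phi, out
```

Adding such matched noise blurs the small within-group variations of the signal components that otherwise inflate the estimated dimensionality.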
Mechanical Properties and Elastography
Rapid 3D isotropic cartilage assessment with VIPR MRI
Show abstract
While current MRI technology is adequate for imaging severe cartilage degeneration, significant increases in resolution
are necessary to image early changes and defects in cartilage. Though MRI advocates often tout its 3D capabilities, most
clinical scans consist of a series of 2D thick slices with gaps in between. Partial volume artifact can cause several low
grade lesions to be missed or incompletely characterized. Robust fat suppression is also necessary to provide high
contrast between bone and cartilage. Commonly available clinical 3D techniques are largely based on sequences which
spend considerable amounts of scan time suppressing fat instead of imaging.
We present a method that provides a comprehensive 3D evaluation of cartilage in the knee with isotropic resolution and
bright fluid through T2-like contrast. Termed VIPR-SSFP, the method separates fat and water and thus spends the entire
exam imaging cartilage and relevant joint tissues. A single VIPR-SSFP scan may be reformatted into multiple
orthogonal or oblique reformats where the variable thickness of the reformat allows a trade-off between SNR and partial
volume artifact.
The radial trajectory in VIPR-SSFP is ideally suited to exploit larger coil arrays using the parallel imaging strategy
known as PILS. Relative to our previous work, we have reduced voxel volume by 100%, demonstrating 0.56 mm
isotropic resolution at 1.5T and 0.33 mm at 3.0T in a five minute scan, using a new eight channel coil. Improved
cartilage assessment is demonstrated in a study of nearly 100 patients through reduction in partial volume artifact.
Diffusion tensor imaging of the lower leg musculature during exercise
Show abstract
Echoplanar diffusion tensor imaging of musculature was performed using an adapted sequence with stimulated echo
preparation and eddy current compensation. Reliable diffusion tensor data were obtained in short measuring time of 2
minutes. Image distortion problems due to eddy currents arising from long lasting diffusion sensitizing gradients could
be overcome by insertion of additional gradient pulses in the TM interval of the stimulated echo preparation. In addition,
a T2-weighted multi-contrast spin-echo sequence with seven echoes was applied for assessment of changes in T2 during
exercise. The diffusion tensor and T2 in the musculature of the lower leg were investigated in 4 healthy subjects and
maps of the trace and the three eigenvalues of the diffusion tensor, fractional anisotropy maps, and angle maps were
calculated from examinations before and after 90 seconds of exhausting tiptoe exercises. For both fractional anisotropy
and muscle fibre orientation, clear differences among the various muscle groups could be observed, whereas the
eigenvalues of the diffusion tensor were found to be rather homogeneous across the whole calf musculature. All eigenvalues of the
diffusion tensor of loaded muscles were significantly increased by 7-17% immediately after the exercise. Maximum
increase (14-17%) was found in the smallest eigenvalue in gastrocnemius lateralis and soleus muscle.
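The quantities mapped in this study follow directly from the eigen-decomposition of the diffusion tensor; the standard fractional anisotropy definition is:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Compute the eigenvalues and fractional anisotropy of a 3x3 diffusion
    tensor: FA = sqrt(3/2 * sum((lam - MD)^2) / sum(lam^2)), where MD is the
    mean diffusivity. Standard definition; fibre orientation is given by the
    eigenvector of the largest eigenvalue."""
    lam = np.linalg.eigvalsh(tensor)
    md = lam.mean()
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return lam, fa
```

FA is 0 for isotropic diffusion and approaches 1 when diffusion is confined to a single fibre direction.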
Time reversal principles for wave optimization in multiple driver magnetic resonance elastography
Show abstract
Magnetic Resonance Elastography (MRE) quantitatively maps the stiffness of tissues by imaging propagating
shear waves induced by mechanical transducers. It has been shown that by using multiple drivers, certain limitations of
conventional single driver MRE can be reduced, and that by suitably adjusting the waveforms applied to these drivers,
any arbitrary region of interest can be optimally illuminated (wave optimization). Typically these adjustments were
derived from wave response data collected for each transducer individually, which increases the total scan time. To
address this issue, we investigated the use of time reversal principles to calculate the appropriate waveforms and their
potential advantages in MRE exams. A phased array acoustic driver system with four independent 'daughter' transducers
was used. An additional shear 'parent' transducer was used to create shear waves at the ROI, and wave propagation data
was collected with MRE both in continuous and transient wave mode. From these single source wave data, the
appropriate phase and time offset relationships between the daughter transducers were derived. Separate experiments
were then carried out driving the daughter transducers with these calculated motions, and wave optimization was
achieved in both continuous and transient wave MRE. We conclude that time reversal principles could be used for wave
optimization with multiple drivers and could potentially reduce the total scan time.
The effects of interstitial tissue pressure on the measured shear modulus in vivo
Show abstract
It is well known that many pathologic processes, like cancer, result in increased tissue
stiffness but the biologic mechanisms which cause pathologies to be stiffer than normal tissues
are largely unknown. Increased collagen density has been presumed to be largely responsible
because it has been shown to cause variations in normal tissue stiffness. However, other effects
such as increased tissue pressure are also thought to be significant. We examined the effects of
tissue pressure on shear modulus measured using MR elastography (MRE) by comparing the
shear modulus in the pre-mortem, edematous and post-mortem porcine brain and found that the
measured shear modulus increases with tissue pressure as expected. The slope of a linear fit to
this preliminary data varied from 0.3 kPa/mmHg to 0.1 kPa/mmHg. These results represent the
first in vivo demonstration of tissue pressure affecting intrinsic mechanical properties and have
several implications. First, if the linear relationship described is correct, tissue pressure could
contribute significantly (~20%) to the increase in stiffness observed in cancer. Second, tissue
pressure effects must be considered when in vitro mechanical properties are extrapolated to in
vivo settings. Moreover, MRE might provide a means to characterize pathologic conditions
associated with increased or decreased tissue pressure, such as edema and ischemia, in a diverse
set of diseases including cancer, diabetes, stroke, and transplant rejection.
3D finite element solution to the dynamic poroelasticity problem for use in MR elastography
Show abstract
Magnetic Resonance Elastography (MRE) has emerged as a noninvasive, quantitative physical means of examining
the elastic properties of biological tissues. While it is common to assume simplified elasticity models for
purposes of MRE image reconstruction, it is well-accepted that many soft tissues display complex time-dependent
behavior not described by linear elasticity. Understanding how the mechanical properties of biological materials
change with the frequency of the applied stresses and strains is paramount to the reconstructive imaging
techniques used in steady-state MRE. Alternative continuum models, such as consolidation theory, offer the
ability to model tissue and other materials comprised of two distinct phases, generally consisting of an elastic
solid phase and an infiltrating fluid. For these materials, the time-dependent response under a given load is a
function not only of the elastic properties of the solid matrix, but also of the rate at which fluid can flow through
the matrix under a pressure gradient. To better study the behavior of the dynamic poroelasticity equations, a
three-dimensional finite element model was constructed. Confined, time-harmonic excitation of simulated soil
and tissue-like columns was performed to determine material deformation and pore pressure distributions, as
well as to identify the influence of the key model parameters under loading conditions and frequencies relevant
in steady-state MRE. The results show that the finite element implementation is able to represent the analytical
behavior with errors on the order of 1% over a broad range of frequencies. Further, differences between poroelastic
and elastic responses in the column can be significant over the frequency range relevant to MRE depending
on the value of hydraulic conductivity assumed for the medium.
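For reference, the dynamic poroelastic system described above is commonly written in the coupled displacement-pressure form below; this is the standard textbook Biot statement, which may differ in sign conventions and detail from the formulation implemented in the paper:

```latex
\nabla\cdot\left[\mu\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{\mathsf{T}}\right)
  + \lambda\,(\nabla\cdot\mathbf{u})\,\mathbf{I}\right]
  - \alpha\,\nabla p \;=\; -\,\omega^{2}\rho\,\mathbf{u},
\qquad
i\omega\left(\alpha\,\nabla\cdot\mathbf{u} + \frac{p}{M}\right)
  - \nabla\cdot\left(\kappa\,\nabla p\right) \;=\; 0,
```

where \(\mathbf{u}\) is the solid displacement, \(p\) the pore pressure, \(\mu\) and \(\lambda\) the Lamé parameters of the solid matrix, \(\alpha\) the Biot-Willis coefficient, \(M\) the Biot modulus, \(\rho\) the bulk density, and \(\kappa\) the hydraulic conductivity whose value, as noted above, governs how far the poroelastic response departs from the purely elastic one.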
An elastography framework for use in dermoscopy
Show abstract
Multiple skin conditions exist which involve clinically significant changes in elastic properties.
Early detection of such changes may prove critical in formulating a proper treatment plan. However,
most diagnoses still rely primarily on visual inspection followed by biopsy for histological analysis. As a
result, there would be considerable clinical benefit if a noninvasive technology to study the skin were
available. The primary hypothesis of this work is that skin elasticity may serve as an important method
for assisting diagnosis and treatment. Perhaps the most apparent application would be for the
differentiation of skin cancers, which are a growing health concern in the United States as total annual
cases are now being reported in the millions by the American Cancer Society. In this paper, we use our
novel modality independent elastography (MIE) method to perform dermoscopic skin elasticity
evaluation. The framework involves applying a lateral stretching to the skin in which dermoscopic
images are acquired before and after mechanical excitation. Once collected, an iterative elastographic
reconstruction method is used to generate images of tissue elastic properties and is based on a two-dimensional (2-D) membrane model framework. Simulation studies are performed that show the effects
of three-dimensional data, varying subdermal tissue thickness, and nonlinear large deformations on the
framework. In addition, a preliminary in vivo reconstruction is demonstrated. The results are
encouraging and indicate good localization with satisfactory degrees of elastic contrast resolution.
Vessel Imaging and Dynamics
Longitudinal vascular imaging using a novel nano-encapsulated CT and MR contrast agent
Show abstract
Contrast agents are widely employed in medical imaging for improved visualization of anatomy and disease
characterization. In recent years, there has been increasing interest in developing novel contrast agents and using their tissue
accumulation and clearance patterns to obtain physiological information. The goal of this investigation is to assess the
utility of a long circulating dual modality liposomal contrast agent for longitudinal imaging applications in computed
tomography (CT) and magnetic resonance (MR) imaging. It was demonstrated that this high molecular weight contrast
agent is retained in healthy vasculature (circulation half-life of ~20 hours in mice and ~100 hours in rabbits), but it is
able to leak through abnormal tumor vasculature into the tumor interstitium. The rate of its differential tumor uptake was
monitored in CT and MR longitudinally over a 48-hour period and a map of the rate of change of contrast enhancement
was produced. This contrast agent has shown potential for anatomic and physiological imaging of healthy and abnormal
blood vessels in CT and MR. It may become a useful tool for tumor vasculature assessment before, during, and after anti-tumor treatments.
Qualitative comparison of intra-aneurysmal flow structures determined from conventional and virtual angiograms
Show abstract
In this study we qualitatively compare the flow structures observed in cerebral aneurysms using conventional
angiography and virtual angiograms produced from patient-specific computational fluid dynamics (CFD) models. For
this purpose, high frame rate biplane angiograms were obtained during a rapid injection of contrast agent in three
patients with intracranial aneurysms. Patient-specific CFD models were then constructed from 3D rotational
angiography images of each aneurysm. Time dependent flow fields were obtained from the numerical solution of the
incompressible Navier-Stokes equations under pulsatile flow conditions derived from phase-contrast magnetic
resonance measurements performed on normal subjects. These flow fields were subsequently used to simulate the
transport of a contrast agent by solving the advection-diffusion equation. Both the fluid and transport equations were
solved with an implicit finite element formulation on unstructured grids. Virtual angiograms were then constructed by
volume rendering of the simulated dye concentration field. The flow structures observed in the conventional and virtual
angiograms were then qualitatively compared. It was found that the finite element models showed distinct flow types
for each aneurysm, ranging from simple to complex. The virtual angiograms showed good agreement with the images
from the conventional angiograms for all three aneurysms. Analogous size and orientation of the inflow jet, regions of
flow impaction, major intraaneurysmal vortices and regions of outflow were observed in both the conventional and
virtual angiograms. In conclusion, patient-specific image-based computational models of intracranial aneurysms can
realistically reproduce the major intraaneurysmal flow structures observed with conventional angiography.
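The contrast-transport step can be sketched in one dimension (the paper solves the full advection-diffusion equation with an implicit finite element method on unstructured grids; the grid sizes, velocity, and diffusivity below are illustrative assumptions): once the velocity field is known, the dye concentration c obeys dc/dt + u·dc/dx = D·d²c/dx², advanced here with a simple explicit upwind/central scheme on a periodic domain.

```python
import numpy as np

nx, dx, dt = 200, 0.5e-3, 1e-4          # grid size, spacing (m), time step (s), assumed
u, D = 0.2, 1e-6                        # velocity (m/s) and diffusivity (m^2/s), assumed
c = np.zeros(nx)
c[20:30] = 1.0                          # injected contrast bolus

for _ in range(200):
    adv = -u * (c - np.roll(c, 1)) / dx                            # upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2     # central diffusion
    c = c + dt * (adv + dif)
# After 200 steps the bolus has advected u*t/dx = 8 cells downstream,
# with total dye mass conserved by the periodic scheme.
```

Rendering such a simulated concentration field over the 3D aneurysm geometry, frame by frame, is what produces the virtual angiogram.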
Combined clinical and computational information in complex cerebral aneurysms: application to mirror cerebral aneurysms
Show abstract
Although the incidence of ruptured cerebral aneurysms is relatively small, when rupture occurs, morbidity and mortality
are exceptionally high. The understanding of the pathological and physiological forces driving aneurysmal pathogenesis
and progression is crucial. In this paper we analyze the occurrence of mirror cerebral aneurysms in 8 patients and
speculate on the effect of haemodynamics on the localization and course of the disease. By mirror cerebral aneurysms
we indicate two aneurysms in the same patient and at the same location in the cerebral vasculature but symmetrically
with respect to a sagittal plane. In particular we focus on cases of mirror cerebral aneurysms where only one of the two
aneurysms presented subarachnoid hemorrhage (SAH). Anatomical information is extracted from 3D rotational
angiography (3DRA) images and haemodynamic information is obtained through blood flow simulation in patient-specific anatomical models. The distribution of Wall Shear Stress (WSS) and the flow patterns through the vessels and
inside the aneurysms are reported. By combining clinical observations on asymmetry of the cerebral vasculature and
aneurysmal shape and size with computed information on blood flow patterns we explore the causes behind a specific
localization and a different outcome of disease progression.
Semi-automatic aortic aneurysm analysis
Osman Bodur,
Leo Grady,
Arthur Stillman,
et al.
Show abstract
Aortic aneurysms are the 13th leading cause of death in the United States. In
standard clinical practice, assessing the progression of disease in the aorta, as well as
the risk of aneurysm rupture, is based on measurements of aortic diameter. We
propose an accurate and fast method for automatically segmenting the aortic vessel
border on CTA acquisitions, enabling the calculation of aortic diameters and
leaving clinicians more time for their evaluations. While segmentation of the aortic
lumen is straightforward in CTA, segmentation of the outer vessel wall (epithelial
layer) in a diseased aorta is difficult; furthermore, no clinical tool currently exists to
perform this task. The difficulties are due to the similarities in intensity of
surrounding tissue (and thrombus due to lack of contrast agent uptake), as well as the
complications from bright calcium deposits.
Our overall method makes use of a centerline for the purpose of resampling
the image volume into slices orthogonal to the vessel path. This centerline is
computed semi-automatically via a distance transform. The difficult task of
automatically segmenting the aortic border on the orthogonal slices is performed via
a novel variation of the isoperimetric algorithm which incorporates circular
constraints (priors). Our method is embodied in a prototype which allows the loading
and registration of two datasets simultaneously, facilitating longitudinal
comparisons. Both the centerline and border segmentation algorithms were evaluated
on four patients, each with two volumes acquired 6 months to 1.5 years apart, for a
total of eight datasets. Results showed good agreement with clinicians' findings.
3D visualization of strain in abdominal aortic aneurysms based on navigated ultrasound imaging
Show abstract
The criterion for recommending treatment of an abdominal aortic aneurysm is that the diameter exceeds 50-55 mm or
shows a rapid increase. Our hypothesis is that a more accurate prediction of aneurysm rupture is obtained by estimating
arterial wall strain from patient specific measurements. Measuring strain in specific parts of the aneurysm reveals
differences in load or tissue properties. We have previously presented a method for in vivo estimation of circumferential
strain by ultrasound. In the present work, a position sensor attached to the ultrasound probe was used for combining
several 2D ultrasound sectors into a 3D model. The ultrasound was registered to a computed-tomography scan (CT), and
the strain values were mapped onto a model segmented from these CT data. This gave an intuitive coupling between
anatomy and strain, which may benefit both data acquisition and the interpretation of strain. In addition to potentially
providing information relevant for assessing the rupture risk of the aneurysm itself, this model could be used for
validating simulations of fluid-structure interactions. Further, the measurements could be integrated with the simulations
in order to increase the amount of patient specific information, thus producing a more reliable and accurate model of the
biomechanics of the individual aneurysm. This approach makes it possible to extract several parameters potentially
relevant for predicting rupture risk, and may therefore extend the basis for clinical decision making.
Cardiac and Aortic Imaging
Automatic segmentation and co-registration of gated CT angiography datasets: measuring abdominal aortic pulsatility
Show abstract
Purpose: To develop robust, novel segmentation and co-registration software to analyze
temporally overlapping CT angiography datasets, with an aim to permit automated measurement
of regional aortic pulsatility in patients with abdominal aortic aneurysms.
Methods: We perform retrospective gated CT angiography in patients with abdominal aortic
aneurysms. Multiple, temporally overlapping, time-resolved CT angiography datasets are
reconstructed over the cardiac cycle, with aortic segmentation performed using a priori anatomic
assumptions for the aorta and heart. Visual quality assessment is performed following automatic
segmentation with manual editing. Following subsequent centerline generation, centerlines are
cross-registered across phases, with internal validation of co-registration performed by
examining registration at the regions of greatest diameter change (i.e. when the second derivative
is maximal).
Results: We have performed gated CT angiography in 60 patients. Automatic seed placement is
successful in 79% of datasets, requiring either no editing (70%) or minimal editing (less than 1
minute; 12%). Causes of error include segmentation into adjacent, high-attenuating, nonvascular
tissues; small segmentation errors associated with calcified plaque; and segmentation of
non-renal, small paralumbar arteries. Internal validation of cross-registration demonstrates
appropriate registration in our patient population. In general, we observed that aortic pulsatility
can vary along the course of the abdominal aorta. Pulsation can also vary within an aneurysm as
well as between aneurysms, but the clinical significance of these findings remains unknown.
Conclusions: Visualization of large vessel pulsatility is possible using ECG-gated CT
angiography, partial scan reconstruction, automatic segmentation, centerline generation, and co-registration of temporally resolved datasets.
Patient specific coronary territory maps
Show abstract
It is standard practice for physicians to rely on empirical, population based models to define the relationship
between regions of left ventricular (LV) myocardium and the coronary arteries which supply them with
blood. Physicians use these models to infer the presence and location of disease within the coronary arteries
based on the condition of the myocardium within their distribution (which can be established non-invasively
using imaging techniques such as ultrasound or magnetic resonance imaging). However, coronary artery
anatomy often varies from the assumed model distribution in the individual patient; thus, a non-invasive
method to determine the correspondence between coronary artery anatomy and LV myocardium would have
immediate clinical impact. This paper introduces an image-based rendering technique for visualizing maps of
coronary distribution in a patient-specific approach. From an image volume derived from computed
tomography (CT) images, a segmentation of the LV epicardial surface, as well as the paths of the coronary
arteries, is obtained. These paths form seed points for a competitive region growing algorithm applied to the
surface of the LV. A ray casting procedure in spherical coordinates from the center of the LV is then
performed. The cast rays are mapped to a two-dimensional circular based surface forming our coronary
distribution map. We applied our technique to a patient with known coronary artery disease and a qualitative
evaluation by an expert in coronary cardiac anatomy showed promising results.
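The competitive region growing step can be sketched as a multi-source shortest-path labeling: every node of the LV surface mesh is claimed by whichever coronary seed reaches it first. The graph representation, function name, and toy chain below are our own illustrative assumptions, not the paper's implementation.

```python
import heapq

def competitive_growing(neighbors, seeds):
    """Multi-source Dijkstra labeling.

    neighbors: node -> list of (neighbor, edge_length)
    seeds:     node -> territory label (e.g. the artery supplying it)
    Returns a dict mapping every reachable node to the label of its nearest seed.
    """
    label = dict(seeds)
    dist = {node: 0.0 for node in seeds}
    heap = [(0.0, node, lab) for node, lab in seeds.items()]
    heapq.heapify(heap)
    while heap:
        d, node, lab = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, w in neighbors[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                label[nbr] = lab          # this seed wins the competition here
                heapq.heappush(heap, (nd, nbr, lab))
    return label

# Toy stand-in "surface": a chain of 10 nodes with hypothetical seeds at both ends.
chain = {i: [(j, 1.0) for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}
territories = competitive_growing(chain, {0: "LAD", 9: "RCA"})
```

On the real anatomy, the neighbor lists come from the triangulated epicardial surface and the seeds from the segmented coronary artery paths; the resulting labels are then flattened into the 2-D circular map by the ray-casting step.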
A statistical shape model of the heart and its application to model-based segmentation
Show abstract
In the present paper we describe the automatic construction of a statistical shape model of the whole heart built
from a training set of 100 Multi-Slice Computed Tomography (MSCT) studies of pathologic and asymptomatic
patients, including 15 (temporal) cardiac phases each. With these data sets we were able to build a compact
and representative shape model of both inter-subject and temporal variability. A practical limitation in building
statistical shape models, and in particular point distribution models (PDM), is the manual delineation of the
training set. A key advantage of the proposed method is that it overcomes this limitation by not requiring
manual delineations. Another is the use of MSCT images, which, thanks to their excellent anatomical depiction, have allowed
for a realistic heart representation, including the four chambers and connected vasculature. The generalization
ability of the shape model permits its deformation to unseen anatomies with an acceptable accuracy. Moreover,
its compactness allows for having a reduced set of parameters to describe the modeled population. By varying
these parameters, the statistical model can generate a set of valid examples. This is especially useful for the
generation of synthetic populations of cardiac shapes, that may correspond e.g. to healthy or diseased cases.
Finally, an illustrative example of the use of the constructed shape model for cardiac segmentation is provided.
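The core of a point distribution model is PCA on vectorized landmark coordinates: a mean shape plus a small number of variation modes, whose cumulative explained variance measures the model's compactness. The sketch below uses synthetic data with two hidden modes as a stand-in for the 100 MSCT-derived heart shapes; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 100, 50
base = rng.normal(size=3 * n_points)               # "mean" anatomy (x,y,z per landmark)
modes = rng.normal(size=(2, 3 * n_points))         # two hidden modes of variation
coeffs = rng.normal(size=(n_shapes, 2))
shapes = base + coeffs @ modes + 0.01 * rng.normal(size=(n_shapes, 3 * n_points))

# PCA via SVD of the centered training matrix
mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
var = s**2 / (n_shapes - 1)                        # variance captured per mode
compactness = var.cumsum() / var.sum()             # fraction explained by first k modes

# Generate a valid synthetic example: mean + sum_i b_i * phi_i, |b_i| <= 3*sqrt(var_i)
b = 2.0 * np.sqrt(var[:2])
synthetic = mean + b @ Vt[:2]
```

With the two planted modes, the first two principal components explain essentially all of the variance, which is the compactness property the abstract refers to; sampling the mode weights within a few standard deviations generates plausible new shapes, e.g. synthetic healthy or diseased populations.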
Structure and function relationship of human heart from DENSE MRI
Show abstract
The study here suggests a macroscopic structure for the Left Ventricle (LV), based on the heart kinematics obtained
through imaging. The measurement of heart muscle deformation using Displacement ENcoding with
Stimulated Echoes (DENSE) MRI, which describes the heart kinematics in the Lagrangian framework, is used to
determine high-resolution patterns of true myocardial strain. Subsequently, the tangential Shortening Index (SI) and
the thickening of the LV wall are calculated for each data point. Considering the heart as a positive-displacement pump,
the contribution of each segment of the LV to heart function can be determined by the SI and the thickening of the wall in
the same portion. Hence the SI isosurfaces show the extent and spatial distribution of the heart activity and reveal its
macrostructure. The structure and function of the heart are therefore related, which in turn results in a macroscopic
model for the LV. In particular, it was observed that the heart functionality is not uniformly distributed in the LV, and
the regions with greater effect on the pumping process form a band which wraps around the heart. These results, which
are supported by established histological evidence, may be considered a landmark in connecting the structure and
function of the heart through imaging. Furthermore, the compatibility of this model with microscopic observations
about the fiber direction is investigated. This method may be used for planning as well as post-procedure evaluation of ventriculoplasty.
Four-dimensional functional analysis of left and right ventricles using MR images and active appearance models
Show abstract
Conventional analysis of cardiac ventricular function from magnetic resonance images typically relies on
short axis image information only. Usually, two phases of the cardiac cycle are analyzed: end-diastole
and end-systole. Unfortunately, the short axis ventricular coverage is incomplete and inconsistent due to
the lack of image information about the ventricular apex and base. In routine clinical images, this information is
only available in long axis image planes. Additionally, the standard ventricular function indices such as ejection
fraction are based on only limited temporal information and therefore do not fully describe the four-dimensional
(4D, 3D+time) nature of the heart's motion. We report a novel approach in which the long and short axis image
data are fused to correct for respiratory motion and form a spatio-temporal 4D data sequence with cubic voxels.
To automatically segment left and right cardiac ventricles, a 4D active appearance model was built. Applying
the method to cardiac segmentation of tetralogy of Fallot (TOF) and normal hearts, our method achieved mostly
subvoxel signed surface positioning errors of 0.2±1.1 voxels for normal left ventricle, 0.6±1.5 voxels for normal
right ventricle, 0.5±2.1 voxels for TOF left ventricle, and 1.3±2.6 voxels for TOF right ventricle. Using the
computer segmentation results, the cardiac shape and motion indices and volume-time curves were derived as
novel indices describing the ventricular function in 4D.
Novel methods for parameter-based analysis of myocardial tissue in MR images
Show abstract
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify
the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a
different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination
of parameters related to the 17-segment model proposed by the American Heart Association (AHA). As this
simplification entails a considerable loss of information, our purpose is to provide methods for a more
accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented
registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement
information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images
containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are
combined with the late enhancement information and form the basis for the tissue examination. For the exploration of
data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas
automatically segmented using the late enhancement information, the inspection of regions segmented in parameter
space by user defined threshold intervals and the topological comparison of regions segmented with different settings.
Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
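The per-voxel semi-quantitative parameters can be sketched as follows; the specific parameter set and names below (baseline, peak enhancement, time to peak, maximum upslope, area under the curve) are common first-pass choices and are our assumption, not necessarily the exact set used in the paper.

```python
import numpy as np

def curve_parameters(curve, dt=1.0):
    """Semi-quantitative parameters of one voxel's enhancement curve.

    Assumes the first 3 frames are pre-contrast baseline (an illustrative choice).
    """
    baseline = curve[:3].mean()
    enh = curve - baseline                       # enhancement above baseline
    peak = float(enh.max())                      # peak enhancement
    ttp = float(enh.argmax()) * dt               # time to peak
    upslope = float(np.diff(enh).max()) / dt     # maximum upslope
    auc = float(enh.sum()) * dt                  # area under the curve
    return peak, ttp, upslope, auc

# Synthetic voxel curve: baseline 100, bolus peaking ~50 units above it at t = 12
t = np.arange(40.0)
curve = 100 + 50 * np.exp(-0.5 * ((t - 12) / 4.0) ** 2)
peak, ttp, upslope, auc = curve_parameters(curve)
```

Stacking these values over the motion-corrected perfusion sequence yields the vector images described above, one parameter vector per voxel, which can then be combined with the co-registered late enhancement information.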
Poster Session
Virtual hybrid bronchoscopy using PET/CT data sets
Show abstract
The aim of this study was to demonstrate the possibilities, advantages and limitations of virtual bronchoscopy using data sets from positron emission tomography (PET) and computed tomography (CT). Eight consecutive patients with lung cancer underwent PET/CT. PET was performed with F-18-labelled 2-[fluorine-18]-fluoro-2-deoxy-D-glucose ((18)F-FDG). The tracheobronchial system was segmented with a volume-growing algorithm, using the CT data sets, and visualized with a shaded-surface rendering method. The primary tumours and the lymph node metastases were segmented for virtual CT-bronchoscopy using the CT data set and for virtual PET/CT-bronchoscopy using the PET/CT data set. Virtual CT-bronchoscopy using the low-dose or diagnostic CT facilitates the detection of anatomical/morphological structure changes of the tracheobronchial system. Virtual PET/CT-bronchoscopy was superior to virtual CT-bronchoscopy in the detection of lymph node metastases (P=0.001), because it uses the CT information and the molecular/metabolic information from PET. Virtual PET/CT-bronchoscopy with a transparent colour-coded shaded-surface rendering model is expected to improve the diagnostic accuracy of identification and characterization of malignancies, assessment of tumour staging, differentiation of viable tumour tissue from atelectases and scars, verification of infections, evaluation of therapeutic response and detection of an early stage of recurrence that is not detectable or is misjudged in comparison with virtual CT-bronchoscopy.
A comparison of lung motion measured using implanted electromagnetic transponders and motion algorithmically predicted using external surrogates as an alternative to respiratory correlated CT imaging
Show abstract
Three-dimensional volumetric imaging correlated with respiration (4DCT) typically utilizes external breathing
surrogates and phase-based models to determine lung tissue motion. However, 4DCT requires time-consuming post-processing
and the relationship between external breathing surrogates and lung tissue motion is not clearly defined. This
study compares algorithms using external respiratory motion surrogates as predictors of internal lung motion tracked in
real-time by electromagnetic transponders (Calypso® Medical Technologies) implanted in a canine model.
Simultaneous spirometry, bellows, and transponder position measurements were acquired during free breathing and
variable ventilation respiratory patterns. Functions of phase, amplitude, tidal volume, and airflow were examined by
least-squares regression analysis to determine which algorithm provided the best estimate of internal motion. The cosine
phase model performed the worst of all models analyzed (R2 = 31.6%, free breathing, and R2 = 14.9%, variable
ventilation). All algorithms performed better during free breathing than during variable ventilation measurements. The
5D model of tidal volume and airflow predicted transponder location better than amplitude or either of the two phase-based models analyzed, with correlation coefficients of 66.1% and 64.4% for free breathing and variable ventilation
respectively. Real-time implanted transponder based measurements provide a direct method for determining lung tissue
location. Current phase-based or amplitude-based respiratory motion algorithms cannot as accurately predict lung tissue
motion in an irregularly breathing subject as a model including tidal volume and airflow. Further work is necessary to
quantify the long term stability of prediction capabilities using amplitude and phase based algorithms for multiple lung
tumor positions over time.
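The tidal-volume/airflow model can be fit by ordinary least squares: each transponder coordinate is regressed on tidal volume v(t) and airflow f(t), x(t) = x0 + a·v(t) + b·f(t). The sketch below uses synthetic breathing data with made-up coefficients; it illustrates the regression analysis described above, not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 1200)                       # 60 s of samples, assumed
v = 0.5 * (1 - np.cos(2 * np.pi * t / 4))          # tidal volume (L), 4 s breathing period
f = np.gradient(v, t)                              # airflow (L/s)
x = 2.0 + 8.0 * v + 1.5 * f + 0.1 * rng.normal(size=t.size)   # transponder position (mm)

# Least-squares fit of x(t) = x0 + a*v(t) + b*f(t)
A = np.column_stack([np.ones_like(t), v, f])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
pred = A @ coef
r2 = 1 - ((x - pred) ** 2).sum() / ((x - x.mean()) ** 2).sum()
```

Because airflow is 90 degrees out of phase with tidal volume, the two-regressor model captures hysteresis between inhalation and exhalation that a pure amplitude model cannot, which is consistent with the higher correlation coefficients reported for the 5D model.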
Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data
Show abstract
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis
of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512
x 400. It is too subjective and labor-intensive for a radiologist to analyze each slice and quantify regional
abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques
which classify the various lung tissue types. Second and higher order statistics, which relate the spatial variation of
the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans
range between [-1024, 1024]. Calculation of second order statistics over this full range is too computationally intensive,
so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray
level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use
a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic
programming is computationally efficient and improves the discriminatory power of the second and higher order
statistics for more accurate quantification of diffuse lung disease.
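One concrete form of such a dynamic program is optimal contiguous binning of the sorted gray levels into k bins minimizing total within-bin squared error (the Jenks/1-D k-means formulation); the cost function and implementation below are our illustrative choice and may differ from the paper's exact criterion.

```python
import numpy as np

def dp_bin(values, k):
    """Optimal k-bin partition of sorted values minimizing within-bin SSE."""
    x = np.sort(np.asarray(values, float))
    n = len(x)
    c1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    c2 = np.concatenate([[0.0], np.cumsum(x * x)])    # prefix sums of squares

    def sse(i, j):                                    # within-bin SSE of x[i:j], O(1)
        s, s2, m = c1[j] - c1[i], c2[j] - c2[i], j - i
        return s2 - s * s / m

    cost = np.full((k + 1, n + 1), np.inf)
    cut = np.zeros((k + 1, n + 1), int)
    cost[0, 0] = 0.0
    for b in range(1, k + 1):                         # number of bins used so far
        for j in range(b, n + 1):                     # first j values partitioned
            for i in range(b - 1, j):                 # start of the last bin
                c = cost[b - 1, i] + sse(i, j)
                if c < cost[b, j]:
                    cost[b, j], cut[b, j] = c, i
    bounds, j = [], n                                 # backtrack the split points
    for b in range(k, 0, -1):
        bounds.append(cut[b, j])
        j = cut[b, j]
    return sorted(int(b) for b in bounds[:-1])

# Three well-separated intensity clusters: the DP recovers the exact split indices.
data = np.concatenate([np.full(50, -900.0), np.full(30, -100.0), np.full(20, 400.0)])
splits = dp_bin(data, 3)
```

The O(k·n²) dynamic program is exact, and because the prefix sums make each bin cost O(1), it remains fast even for the full HU range; in practice the same recursion can be run on the intensity histogram rather than the raw voxel values.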
Quantification of airway morphometry: the effect of CT acquisition and reconstruction parameters
Show abstract
This study measured the accuracy of our airway quantification scheme using airway phantoms under different CT
protocols. Airway remodeling is associated with several thoracic diseases (e.g., chronic bronchitis, asthma, and
bronchiectasis), and, therefore, quantification of airway remodeling may have wide clinical application. Our scheme
assigns pixels partial membership in the airway wall and lumen based on the pixel's HU value, which is intended to
account for partial volume averaging inherent in CT image reconstruction. Twenty-four phantom airways with an
outer diameter from 2.6 to 14.0 mm and wall thicknesses from 0.5 to 2.0 mm were analyzed. The absolute
differences between the measurements supplied by the manufacturer and those computed from CT images acquired at
40 mAs and reconstructed at 1.25 mm thickness using GE's "soft" and "lung" reconstruction kernels ranged from
1.4% to 49.3% and 0.4% to 33.0%, respectively, for lumen area, and from 0.3% to 118.0% and 2.1% to 92.9%,
respectively, for wall area. Accuracy typically improved as the kernel's spatial frequency increased. Airways whose
wall thickness was close to the pixel dimensions were challenging to quantify. The partial membership assignment of
our airway quantification scheme accurately computed airway morphometry across a range of phantom airway sizes.
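The partial membership idea can be sketched as a linear interpolation between reference attenuations: a pixel whose HU value lies between pure lumen (air) and pure wall tissue receives a fractional wall membership, approximating the partial volume averaging of CT reconstruction. The reference HU values and pixel size below are assumptions for illustration, not the paper's calibration.

```python
# Assumed reference attenuations: pure lumen (air) and pure wall tissue.
HU_LUMEN, HU_WALL = -1000.0, 50.0

def wall_fraction(hu):
    """Fractional membership of a pixel in the airway wall, clipped to [0, 1]."""
    frac = (hu - HU_LUMEN) / (HU_WALL - HU_LUMEN)
    return min(1.0, max(0.0, frac))

# A boundary pixel half filled with wall tissue averages to about -475 HU and
# contributes half a pixel of wall area; summing fractions over a region gives
# a subpixel-accurate area estimate.
px_area = 0.25                                  # mm^2 per pixel, assumed
region = [-1000.0, -475.0, 50.0, 200.0]         # hypothetical pixel values
wall_area = sum(wall_fraction(v) for v in region) * px_area
```

Compared with hard thresholding, which must assign each boundary pixel wholly to wall or lumen, the fractional scheme is what lets walls thinner than a pixel or two still be measured, the regime the abstract identifies as most challenging.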
A phase unwrapping method for large-motion phase data in MR elastography
Show abstract
We have developed a one-dimensional route-finding phase unwrapping method to handle Magnetic Resonance
Elastography (MRE) phase data from very large induced motion. The method is able to unwrap data where the adjacent
phase differences are within the range [-2π, 2π), as opposed to the [-π, π) requirement of most phase unwrapping methods.
With more unwrapping paths, the range of phase difference can be easily expanded to [-2Nπ, 2Nπ) (N = 1,2,3,...). The
possible unwrapping paths when using different numbers of relative phase offsets can be found by Monte Carlo
simulation. Two phantom studies were performed to test the new unwrapping method. One study compared the new
method with the classical Itoh's one-dimensional method and the other study combined the new method with a two-dimensional phase unwrapping method and unwrapped three-dimensional MRE phase data with a sequential three-dimensional unwrapping approach. The results from using the different phase unwrapping methods were then compared and analyzed.
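For context, the classical Itoh baseline the new method is compared against can be sketched in a few lines: wrap each adjacent difference back into [-π, π) and integrate. The synthetic ramps below are our own illustration of where it works and where it breaks.

```python
import numpy as np

def itoh_unwrap(phase):
    """Classical Itoh 1-D unwrapping: requires true adjacent differences in [-pi, pi)."""
    d = np.diff(phase)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi    # wrap differences into [-pi, pi)
    return phase[0] + np.concatenate([[0.0], np.cumsum(d_wrapped)])

# Slowly varying phase: adjacent differences stay below pi, so Itoh recovers it.
true_phase = np.linspace(0, 6 * np.pi, 200)
wrapped = (true_phase + np.pi) % (2 * np.pi) - np.pi
recovered = itoh_unwrap(wrapped)

# Large-motion case: adjacent true differences exceed pi (here 4 rad), the regime
# targeted by the route-finding method, and classical Itoh unwrapping fails.
steep = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
steep_wrapped = (steep + np.pi) % (2 * np.pi) - np.pi
failed = itoh_unwrap(steep_wrapped)
```

Extending the admissible difference range to [-2π, 2π) amounts to allowing more than one candidate 2π correction per step, which is why the new method must search among multiple unwrapping paths rather than integrate a single wrapped difference.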
Reproducibility of MRE shear modulus estimates
Show abstract
A significant effort has been expended to measure the accuracy of shear modulus estimates, but comparatively little has been done to establish the reproducibility of the
method in a clinical context. Previously we established the reproducibility in phantoms to be
3% for repeated measurements without moving the phantom and 5% when the phantom was moved; however, the clinical reproducibility has not been demonstrated. The reproducibility of the method was estimated by repeatedly scanning subjects' heels on a GE 1.5T scanner using previously described methods. Three subjects were scanned three times on different days (termed non-consecutive) and three subjects were scanned three times in the same session without changing the position of the foot (termed consecutive). The average difference between mean values within the field of view was 7.75% ± 3.76% for the non-consecutive group and 5.30% ± 4.16% for the consecutive group. These values represent remarkably good reproducibility considering the 20% variation in shear modulus observed within individual heels and the several-hundred-percent changes observed between normal and pathologic tissues. The variation in repeated examinations was caused by four factors: positioning error between examinations accounted for 4.8%; computational noise, 3.0%; and the combination of MR noise and patient motion during the examination, 5.3%. Each of these sources of variation can be reduced in relatively straightforward ways if necessary, but the current level of reproducibility is sufficient for most current applications.
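As a consistency check, the three error components quoted above (4.8%, 3.0%, and 5.3%) combine in quadrature, under the assumption that the sources are uncorrelated, to reproduce the observed 7.75% day-to-day variation:

```python
import math

# Root-sum-square of the independent error sources reported in the abstract
positioning, computation, noise_and_motion = 4.8, 3.0, 5.3
total = math.sqrt(positioning**2 + computation**2 + noise_and_motion**2)
# total is approximately 7.75, matching the non-consecutive reproducibility
```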
Boundary element methods in elastography: a first explorative study
Show abstract
Next to Magnetic Resonance Elastography and Ultrasound Elastography, Digital Image Elasto-Tomography
(DIET) is a new imaging technique that uses only motion data available on the boundary to reconstruct mechanical
material parameters, i.e. the interior stiffness of a domain, in order to diagnose tissue-related disease
such as breast cancer. Where classically Finite Element Methods have been employed to solve this inverse
problem, this paper explores a new approach to the reconstruction of mechanical material properties of tissue
and tissue defects by the use of Boundary Element Methods (BEM). Using the Boundary Integral Equations
for Linear Elasticity in two dimensions within a Conjugate Gradients based inverse solver, material properties
of healthy and malignant tissue could be determined from displacement data on the boundary. First simulation
results are presented.
Damping models in elastography
Show abstract
Current optimization based Elastography reconstruction algorithms encounter difficulties when the motion approaches
resonant conditions, where the model does a poor job of approximating the real behavior of the material.
Model accuracy can be improved through the addition of damping effects. These effects occur in-vivo due to the
complex interaction between microstructural elements of the tissue; however reconstruction models are typically
formulated at larger scales where the structure can be treated as a continuum. Attenuation behavior in an
elastic continuum can be described as a mixture of inertial and viscoelastic damping effects. In order to develop
a continuum damping model appropriate for human tissue, the behavior of each aspect of this proportional, or
Rayleigh, damping needs to be characterized.
In this paper we investigate the nature of these various damping representations with a goal of best describing
in-vivo behavior of actual tissue in order to improve the accuracy and performance of optimization based elastographic
reconstruction. Inertial damping effects are modelled using a complex density, where the imaginary part
is equivalent to a damping coefficient, and the effects of viscoelasticity are modelled through the use of complex
shear moduli, where the real and imaginary parts represent the storage and loss moduli respectively.
The investigation is carried out through a combination of theoretical analysis, numerical experiment, investigation
of gelatine phantoms and comparison with other continua such as porous media models.
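The two damping representations described above can be sketched in a one-dimensional time-harmonic shear-wave model, where inertial damping enters as the imaginary part of a complex density and viscoelasticity as the loss part of a complex shear modulus. All numerical values below are illustrative, and the signs assume an e^{+iωt} time-harmonic convention:

```python
import numpy as np

# One-dimensional time-harmonic shear wave u(x) = exp(-i k x) with complex
# wavenumber k = omega * sqrt(rho / mu); e^{+i omega t} convention assumed.
omega = 2 * np.pi * 100.0        # 100 Hz harmonic excitation
rho = 1000.0 - 150.0j            # complex density: imaginary part = inertial damping
mu = 3000.0 + 600.0j             # complex shear modulus: storage + i * loss

k = omega * np.sqrt(rho / mu)    # complex wavenumber (Im(k) < 0 here)
x = np.linspace(0.0, 0.2, 200)   # 20 cm propagation path
u = np.exp(-1j * k * x)          # displacement field, decaying with x

attenuation = abs(u[-1])         # amplitude remaining after 20 cm (< 1 if damped)
```

With either damping mechanism active the wavenumber acquires an imaginary part, so the amplitude decays exponentially with propagation distance.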
An evaluation of 3D modality independent elastography robustness to boundary condition noise
Show abstract
This work explores an inverse problem technique of extracting soft tissue elasticity information via nonrigid model-based
image registration. The algorithm uses the elastic properties of the tissue in a biomechanical model to achieve
maximal similarity between image data acquired under different states of loading. A framework capable of handling
fully three-dimensional models and image data has been recently developed utilizing parallel computing and iterative
sparse matrix solvers. For this preliminary investigation, a series of simulation experiments with clinical image data of
human breast are used to test the robustness of the algorithm to expected mis-estimation of displacement boundary
conditions encountered in real-world situations. Three methods of automated point correspondence are also examined as
means of generating boundary conditions for the algorithm.
Molecular and structural analysis of viscoelastic properties
Show abstract
Elasticity imaging is emerging as an important tool for breast
cancer detection and monitoring of treatment. Viscoelastic image
contrast in breast lesions is generated by disease specific
processes that modify the molecular structure of connective
tissues. We showed previously that gelatin hydrogels exhibit
mechanical behavior similar to native collagen found in breast
tissue and therefore are suitable as phantoms for elasticity
imaging. This paper summarizes our study of the viscoelastic
properties of hydrogels designed to discover molecular-scale
sources of elasticity image contrast.
Bolus tracking by cone-beam reconstruction and reprojection
Show abstract
Contrast agent bolus is used in angiography for vascular imaging. The bolus flow through a local region in the bolus
wash-in phase can be captured by cone-beam scanning, producing a time series of projection images. During the
bolus/blood equilibrium phase, circular cone-beam volume scanning produces a dataset that can be used for vessel (or
bolus) volume reconstruction. From a bolus (or vessel) volume, we can depict the vessel anatomy and extract the 3D
bolus passageways. For bolus velocity measurements, we need to calculate the 3D bolus pathlength and to determine
the time interval indicated by the frame time of the dynamic bolus wash-in images. The cone-beam volume
reconstruction allows 3D vessel depiction with isotropic grid resolution, thus facilitating the measurement of 3D vessel
lumen and centerline. The timing information of the dynamic bolus flow corresponds to the frame time of the bolus
wash-in images. In order to add time divisions to a 3D bolus passageway, we suggest a cone-beam reprojection scheme,
which consists of vessel-centerline extraction, cone-beam reprojection, and image registration between reprojected
images and wash-in projection images. With the accurate measurements of 3D pathlength and time interval, we can
calculate the local blood flow in terms of velocity and flux. Simulations of a bolus traveling along a sinuous vessel inside
a cylinder are provided.
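The velocity measurement described above reduces to dividing the measured 3D pathlength by the time interval given by the wash-in frame timing. A toy calculation with hypothetical numbers:

```python
# Bolus velocity from a 3D pathlength and wash-in frame timing.
# All numbers are illustrative, not from the paper.
path_length_mm = 84.0      # centreline distance travelled by the bolus front
frames = 12                # wash-in frames spanning that travel
frame_time_s = 0.033       # ~30 frames/s acquisition

velocity_mm_s = path_length_mm / (frames * frame_time_s)

# Given a lumen cross-sectional area, flux follows as velocity * area.
area_mm2 = 7.0
flux_mm3_s = velocity_mm_s * area_mm2
```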
Spatio-temporal patterns of ERP based on combined ICA-LORETA analysis
Show abstract
In contrast to the fMRI methods widely used up to now, this method aims to understand more profoundly how brain
systems work during a sentence-processing task by accurately mapping the spatiotemporal patterns of activity of large
neuronal populations in the human brain from the analysis of ERP data recorded on the scalp. In this study, an
event-related brain potential (ERP) paradigm recording the on-line responses to the processing of sentences is chosen as
an example. In order to both exploit the millisecond temporal resolution of ERPs and overcome their insensitivity to
the cerebral location of their sources, we separate these sources in space and time using a combined
method of independent component analysis (ICA) and low-resolution tomography (LORETA). ICA blindly
separates the input ERP data into a sum of temporally independent and spatially fixed components arising from distinct
or overlapping brain or extra-brain sources. The spatial map associated with each ICA component is then
analyzed with LORETA to uniquely locate its cerebral sources throughout the full brain, under the
assumption that neighboring neurons are simultaneously and synchronously activated. Our results show that the cerebral
computation mechanism underlying content-word reading is mediated by the orchestrated activity of several spatially
distributed brain sources located in the temporal, frontal, and parietal areas, which activate at distinct time intervals and are
grouped into different statistically independent components. Thus, ICA-LORETA analysis provides an encouraging and
effective method for studying brain dynamics from ERPs.
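The ICA decomposition step can be illustrated on synthetic data. The sketch below implements a minimal symmetric FastICA with a tanh contrast (one common variant; the abstract does not specify which ICA algorithm was used) and recovers two toy sources from their linear mixture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic sources -- a sine and a square wave -- stand in for
# temporally independent ERP source activations.
t = np.linspace(0.0, 1.0, 1000)
S = np.vstack([np.sin(2 * np.pi * 7 * t),
               np.sign(np.sin(2 * np.pi * 3 * t))])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])           # hypothetical "scalp" mixing matrix
X = A @ S                            # observed multichannel data

# Whitening: decorrelate and scale the observations to unit variance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Xw = np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA iterations with a tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Xw)
    W_new = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                       # symmetric decorrelation

recovered = W @ Xw
# Correlate recovered components with the true sources (order/sign free).
corr = np.corrcoef(np.vstack([recovered, S]))[:2, 2:]
```

Each recovered component correlates almost perfectly (up to sign and order) with one of the original sources, which is the property the spatial maps passed to LORETA rely on.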
Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location
Show abstract
The present study combined the Independent Component Analysis (ICA) and low-resolution brain electromagnetic
tomography (LORETA) algorithms to identify the spatial distribution and time course of differences in single-trial EEG
records between neural responses to emotional and neutral stimuli. Single-trial multichannel
(129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional
(pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For
each subject, the single-trial EEG records for each emotional condition were concatenated with the neutral, and a
three-step analysis was applied to each concatenation in the same way. First, ICA was performed to decompose
each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely
independent components (ICs). The ICs associated with artifacts were isolated. Second, clustering analysis
classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric
permutation test for the Global Field Power (GFP) of the IC projection scalp maps identified significantly different
temporal segments for each emotional condition vs. neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization
test identified significantly different brain regions between each emotional condition and the neutral. Compared
to the neutral, both types of emotional pictures elicited activation in the visual, temporal, ventromedial and dorsomedial
prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal
cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex,
posterior cingulate gyrus and somatosensory region. Our results are consistent with other functional
imaging studies, while also revealing the temporal dynamics of emotional processing in specific brain structures with high
temporal resolution.
The functional connectivity of semantic task changes in the recovery from stroke aphasia
Show abstract
Little is known about the difference in the functional connectivity of a semantic task between recovered aphasic patients
and normal subjects. In this paper, an fMRI experiment was performed on a patient with aphasia following a left-sided
ischemic lesion and on a normal subject. Picture naming was used as the semantic activation task in this study. We compared the
preliminary functional connectivity results of the recovering aphasic patient with those of the normal subject. The fMRI data were
separated by independent component analysis (ICA) into 90 components. Based on our experience and the literature,
we chose a semantic region of interest (ROI) (x=-57, y=15, z=8, r=11 mm). From the 90 components, we chose one
component as the functional connectivity map of the semantic ROI according to a single criterion: the mean value
of the voxels in the ROI. The component with the highest mean value in the ROI is thus taken as the functional connectivity of the
ROI. Voxels with values higher than 2.4 were considered activated (p<0.05), and the functional connectivity
networks of the normal subjects were t-tested as a group network. The results show that the semantic functional
connectivity of the stroke aphasic patient and the normal subjects differs. The activated areas of the left inferior frontal
gyrus and inferior/middle temporal gyrus are larger than those of the normal subject, while the activated area of the right inferior
frontal gyrus is smaller. The functional connectivity of the stroke aphasic patient under the semantic
condition thus differs from the normal one, indicating that the lesion of the stroke aphasic patient can affect functional connectivity.
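The component-selection criterion described above (highest mean value within the ROI, followed by a 2.4 threshold) can be sketched on toy data; the component maps and ROI below are synthetic stand-ins, not fMRI data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 5 synthetic spatial component maps over 100 voxels; voxels 10..19 form a
# toy "semantic ROI".  Component 3 is constructed to load on the ROI.
components = rng.standard_normal((5, 100))
components[3, 10:20] += 3.0

roi = slice(10, 20)

# Criterion from the text: the component with the highest mean value inside
# the ROI is taken as the functional connectivity map of that ROI.
roi_means = components[:, roi].mean(axis=1)
selected = int(np.argmax(roi_means))

# Voxels above 2.4 are considered activated (p < 0.05 in the text).
activated = components[selected] > 2.4
```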
Adaptive selection of fMRI spatial data in canonical correlation method
Show abstract
Although simple averaging and Gaussian spatial smoothing of neighboring time series can suppress the noise of
fMRI, they may degrade the activated areas. As an alternative approach, canonical correlation analysis
(CCA) performs a weighted averaging of time series data such that the resulting time series has maximum
correlation with the bases of a signal subspace. In this paper, we select only the most similar neighbors of each
voxel for further adaptive averaging via CCA. Thus, for an inactive central voxel, the surrounding active voxels
are eliminated from the weighted averaging. This intelligent selection prevents the false spreading of activated
areas. After spatial filtering, we used the results of CCA (maximum cross correlation) for activation detection.
We applied our method to simulated and experimental fMRI data and compared it with conventional CCA
(without intelligent selection) and a matched (spatial) filter. The ROC curve obtained from the simulated data shows the
superior performance of our proposed method.
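The idea of keeping only the most similar neighbors before averaging can be illustrated with a simple correlation-based selection. This is a simplification of the paper's CCA weighting, on fully synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

n_t = 120
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, n_t))   # toy activation course

# A 3x3 neighbourhood of voxel time series: the centre voxel (index 4) is
# inactive noise, but three neighbours carry the activation signal.
neighborhood = rng.standard_normal((9, n_t))
neighborhood[[0, 1, 2]] += 2.0 * signal
centre = neighborhood[4]

# Keep only neighbours whose time series resembles the centre voxel's.
corrs = np.array([np.corrcoef(centre, ts)[0, 1] for ts in neighborhood])
similar = corrs > 0.5                  # the centre itself always passes

selected_mean = neighborhood[similar].mean(axis=0)
naive_mean = neighborhood.mean(axis=0)

# Signal leakage into the (inactive) centre voxel's averaged time course:
# naive averaging smears the active neighbours' signal into it, while the
# similarity-based selection does not.
leak_naive = abs(np.corrcoef(naive_mean, signal)[0, 1])
leak_selected = abs(np.corrcoef(selected_mean, signal)[0, 1])
```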
Retrospective processing of DTI tractography to compensate for partial volume effects
Show abstract
Partial volume effects are one of the most common sources of error in diffusion tensor imaging (DTI) tractography. For
example, in data from older subjects or probable Alzheimer's disease subjects, the situation is especially exacerbated
around the dilated ventricles, which cause erroneous merging of tracts. Rescanning the subject at higher resolution is the
best solution, but often unattainable. We offer a retrospective filtering algorithm, which is purely subtractive,
based on a region of interest (ROI) filtering methodology that filters tracts by their shape and seed points. The ROIs are
defined using both anatomic images and fractional anisotropy (FA) maps in normalized space, allowing for consistency
across all subjects. Our algorithm helps correct partial volume effects by reducing the overestimation of tract length,
giving a more accurate regional tract count. The objective of our retrospective algorithm is the reclamation of data sets
affected by partial volume effects.
A comparison between EEG source localization and fMRI during the processing of emotional visual stimuli
Show abstract
The purpose of this paper is to compare EEG source localization and fMRI during emotional processing. 108
pictures for EEG (categorized as positive, negative and neutral) and 72 pictures for fMRI were presented to 24 healthy,
right-handed subjects. The fMRI data were analyzed using statistical parametric mapping with SPM2. LORETA was
applied to grand-averaged ERP data to localize intracranial sources. Statistical analysis was implemented to compare the
spatiotemporal activation of fMRI and EEG. The fMRI results are in accordance with the EEG source localization to some
extent, although some mismatch in localization between the two methods was also observed. In the future, we intend to
apply simultaneous recording of EEG and fMRI to our study.
Investigation of effective connectivity in the motor cortex of fMRI data using Granger causality model
Show abstract
Effective connectivity of brain regions based on brain data (e.g. EEG, fMRI, etc.) is currently an active research focus,
and many researchers have investigated it using different methods. The Granger causality model (GCM) is increasingly used to
investigate the effective connectivity of brain regions. It can explore causal relationships between time series:
if a time series y causes x, then knowledge of y should help predict future values of x. In the present work, a
time-invariant GCM was applied to fMRI data, considering the slow variation of the blood oxygenation level dependent
(BOLD) signal. The time-invariant GCM requires determining the model order, estimating the model parameters, and performing
significance tests. In particular, we extended the significance test method to make the results more reasonable. The fMRI data were acquired
from a finger movement experiment with two right-handed subjects. We first obtained the activation maps of the two subjects using
SPM2 software. We then chose the left SMA and left SMC as regions of interest (ROIs) with different radii, and
calculated the causality from left SMA to left SMC using the mean time courses of the two ROIs. The results from both
subjects showed that the left SMA influenced the left SMC. Hence, GCM is suggested to be an effective approach for
investigating effective connectivity based on fMRI data.
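The prediction-based definition quoted above can be sketched directly: compare the residual variance of an autoregressive model of x with and without y's past. The data below are synthetic with a known y→x influence of order 1; a real analysis would also select the model order and test significance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic pair of "ROI time courses": y drives x with a one-sample lag,
# mimicking an SMA -> SMC influence (toy data, model order 1).
n = 500
y = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.5 * x[i - 1] + 0.8 * y[i - 1] + 0.1 * rng.standard_normal()

def residual_var(target, regressors):
    """Least-squares residual variance of target on the given regressors."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return (target - regressors @ beta).var()

target = x[1:]
X_restricted = np.c_[np.ones(n - 1), x[:-1]]          # x's own past only
X_full = np.c_[np.ones(n - 1), x[:-1], y[:-1]]        # plus y's past

var_restricted = residual_var(target, X_restricted)
var_full = residual_var(target, X_full)

# Granger causality index: positive when y's past improves prediction of x.
gc_y_to_x = np.log(var_restricted / var_full)
```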
Optimization of a single-shot EPI sequence for diffusion imaging of the human spinal cord
Show abstract
Diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI) are established techniques of magnetic
resonance widely used for the characterization of the cerebral tissue. Despite the successful application in the brain,
diffusion-weighted single-shot echo-planar-imaging (EPI) of the spinal cord is hindered by the need for highly-resolved
spatial encoding in an area of strong magnetic field inhomogeneities, and the shortness of transverse relaxation time.
The major aim of this study was the optimization of a reliable single-shot EPI sequence for DTI of the spinal cord at
1.5T.
Ten healthy volunteers participated in the study (mean age=28.4±3.1). A single-shot EPI sequence with double spin-echo
diffusion preparation and nominal in-plane resolution of 0.9x0.9mm2 was optimized with regard to cerebrospinal
fluid artifacts, and contrast-to-noise ratio between gray matter (GM) and white matter (WM). The effective sequence
resolution was evaluated on a phantom.
A cardiac-pulse gated sequence with optimal matrix size (read x phase=64x32) and b-value (700s/mm2) allowed for the
acquisition of highly-resolved images of the spinal cord (effective in-plane resolution=1.1mm). Preliminary results on
two healthy volunteers showed that the butterfly-shaped GM is clearly recognizable in the reconstructed fractional
anisotropy (FA) maps. Measured WM FA values were 0.698±0.076 and 0.756±0.046. No significant differences were
found in the mean diffusivity computed in the WM as compared to the GM areas.
Optimized spinal cord diffusion imaging provided promising preliminary results on healthy volunteers. The application
of the proposed protocol in the assessment of neurological disorders may allow for improved characterization of healthy
and impaired WM and GM.
Analysis of intracranial aneurysm wall motion and its effects on hemodynamic patterns
Show abstract
Hemodynamics, and in particular Wall Shear Stress (WSS), is thought to play a critical role in the progression
and rupture of intracranial aneurysms. Wall motion is related to local biomechanical properties of the aneurysm,
which in turn are associated with the amount of damage undergone by the tissue. The underlying hypothesis
in this work is that injured regions show differential motion with respect to normal ones, allowing a connection
between local wall biomechanics and a potential mechanism of wall injury such as elevated WSS. In a previous
work, a novel method was presented combining wall motion estimation using image registration techniques with
Computational Fluid Dynamics (CFD) simulations in order to provide realistic intra-aneurysmal flow patterns.
It was shown that, when compared to compliant vessels, rigid models tend to overestimate WSS and produce
smaller areas of elevated WSS and force concentration, with the observed differences related to the magnitude
of the displacements. This work aims to further study the relationships between wall motion, flow patterns and
risk of rupture in aneurysms. To this end, four studies containing both 3DRA and DSA studies were analyzed,
and an improved version of the method developed previously was applied to cases showing wall motion. A
quantification and analysis of the displacement fields and their relationships to flow patterns are presented. This
relationship may play an important role in understanding interaction mechanisms between hemodynamics, wall
biomechanics, and the effect on aneurysm evolution mechanisms.
Hemodynamic patterns of anterior communicating artery aneurysms: a possible association with rupture
Show abstract
The aim of this study is to characterize the different flows present at anterior communicating artery (AcoA)
aneurysms and investigate possible associations with rupture. For that purpose, patient-specific
computational models of 26 AcoA aneurysms were constructed from 3D rotational angiography images.
Bilateral images were acquired in 15 patients who had both A1 segments of the anterior cerebral arteries
and models were created by fusing the reconstructed left and right arterial trees. Computational fluid
dynamics simulations were performed under pulsatile flow conditions. Visualizations of the flow velocity
pattern were created to classify the aneurysms into the following flow types: A) inflow from both A1
segments, B) flow jet in the parent artery splits into three secondary jets, one enters the aneurysm and the
other two are directed to the A2 segments, C) the parent artery jet splits into two secondary jets, one is
directed to one of the A2 segments and the other enters the aneurysm before being directed to the other A2
segment, and D) the parent artery jet enters the aneurysm before being directed towards the A2 segments.
The maximum wall shear stress in the aneurysm at the systolic peak (MWSS) was calculated. Most
aneurysms in group A were unruptured and had the lowest MWSS. Group B had the same number of
unruptured and ruptured aneurysms, and a low MWSS. Groups C and D had high rupture ratios, with the
average MWSS being significantly higher in group C. Finally, it was found that the MWSS was higher for
ruptured aneurysms across all flow types.
Hemodynamics before and after bleb formation in cerebral aneurysms
Show abstract
We investigate whether blebs in cerebral aneurysms form in regions of low or high wall shear stress (WSS), and how
the intraaneurysmal hemodynamic pattern changes after bleb formation. Seven intracranial aneurysms harboring well
defined blebs were selected from our database and subject-specific computational models were constructed from 3D
rotational angiography. For each patient, a second anatomical model representing the aneurysm before bleb formation
was constructed by smoothing out the bleb. Computational fluid dynamics simulations were performed under pulsatile
flow conditions for both models of each aneurysm. In six of the seven aneurysms, the blebs formed in a region of
elevated WSS associated with the inflow jet impaction zone. In one, the bleb formed in a region of low WSS associated with
the outflow zone. In this case, the inflow jet maintained a fairly concentrated structure all the way to the outflow zone,
while in the other six aneurysms it dispersed after impacting the aneurysm wall. In all aneurysms, once the blebs
formed, new flow recirculation regions were formed inside the blebs and the blebs progressed to a state of low WSS.
Assuming that blebs form due to a focally damaged arterial wall, these results seem to indicate that the localized injury
of the vessel wall may be caused by elevated WSS associated with the inflow jet. However, the final shape of the
aneurysm is probably also influenced by the peri-aneurysmal environment that can provide extra structural support via
contact with structures such as bone or dura mater.
Patient-specific modeling of intracranial aneurysmal stenting
Sunil Appanaboyina,
Fernando Mut,
Rainald Löhner,
et al.
Show abstract
Simulating blood flow around stents in intracranial aneurysms is important for designing better stents and for
personalizing and optimizing endovascular stenting procedures in the treatment of these aneurysms. However, the
main difficulty lies in the generation of acceptable computational grids inside the blood vessels and around the
stents. In this paper, a hybrid method that combines body-fitted grid for the vessel walls and adaptive embedded
grids for the stent is presented. Also an algorithm to map a particular stent to the parent vessel is described.
These approaches tremendously simplify the simulation of blood flow past these devices. The methodology is
evaluated with an idealized stented aneurysm under steady flow conditions and demonstrated in various patient-specific
cases under physiologic pulsatile flow conditions. These examples show that the methodology can be
used with ease in modeling any patient-specific anatomy and using different stent designs. This paves the way
for using these techniques during the planning phase of endovascular stenting interventions, particularly for
aneurysms that are difficult to treat with coils or by surgical clipping.
Linear programming approach to optimize 3D data obtained from multiple view angiograms
Show abstract
Three-dimensional (3D) vessel data from CTA or MRA are not always available prior to or during endovascular
interventional procedures, whereas multiple 2D projection angiograms often are. Unfortunately, patient movement,
table movement, and gantry sag during angiographic procedures can lead to large errors in gantry-based imaging
geometries and thereby incorrect 3D data. Therefore, we are developing methods for combining vessel data from
multiple 2D angiographic views obtained during interventional procedures to provide 3D vessel data during these
procedures. Multiple 2D projection views of carotid vessels are obtained, and the vessel centerlines are indicated.
For each pair of views, endpoints of the 3D centerlines are reconstructed using triangulation based on the provided
gantry geometry. Previous investigations indicated that translation errors were the primary source of error in the
reconstructed 3D. Therefore, the errors in the translations relating the imaging systems are corrected by minimizing
the L1 distance between the reconstructed endpoints, after which the 3D centerlines are reconstructed using epipolar
constraints for every pair of views. Evaluations were performed using simulations, phantom data, and clinical cases.
In simulation and phantom studies, the RMS error decreased from 6.0 mm obtained with biplane approaches to 0.5
mm with our technique. Centerlines in clinical cases are smoother and more consistent than those calculated from
individual biplane pairs. The 3D centerlines are calculated in about 2 seconds. These results indicate that reliable
3D vessel data can be generated for treatment planning or revision during interventional procedures.
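For the L1 minimization step, the translation that minimizes the summed L1 distance between corresponding point sets is the component-wise median of their differences, which makes the correction robust to outliers. A sketch on synthetic endpoints (the data and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Corresponding 3D endpoints reconstructed from two views; the second set is
# offset by an unknown translation error plus noise (illustrative data).
pts_a = rng.uniform(0.0, 100.0, size=(20, 3))
true_shift = np.array([4.0, -2.5, 1.0])
pts_b = pts_a + true_shift + 0.3 * rng.standard_normal((20, 3))

# The translation minimizing the summed L1 distance between the point sets
# is the component-wise median of their differences.
shift_est = np.median(pts_b - pts_a, axis=0)
pts_b_corrected = pts_b - shift_est

# Residual RMS error after the translation correction.
rms = np.sqrt(((pts_b_corrected - pts_a) ** 2).sum(axis=1).mean())
```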
Comparative study of diverse model building strategies for 3D-ASM segmentation of dynamic gated SPECT data
Show abstract
Over the course of the last two decades, myocardial perfusion with Single Photon Emission Computed Tomography
(SPECT) has emerged as an established and well-validated method for assessing myocardial ischemia,
viability, and function. Gated-SPECT imaging integrates traditional perfusion information along with global
left ventricular function. Despite these advantages, inherent limitations of SPECT imaging yield a challenging
segmentation problem, since an error of only one voxel along the chamber surface may generate a huge difference
in the volume calculation. In previous work we implemented a 3-D statistical model-based algorithm for Left Ventricle
(LV) segmentation in dynamic perfusion SPECT studies. The present work evaluates the relevance of
training a different Active Shape Model (ASM) for each frame of the gated SPECT imaging acquisition in terms
of the subsequent segmentation accuracy. The models are then employed to segment the LV cavity in gated
SPECT studies of a virtual population. The evaluation is accomplished by comparing point-to-surface (P2S)
and volume errors, both against a proper gold standard. The dataset comprised 40 voxel phantoms (NCAT,
Johns Hopkins, University of North Carolina). Monte-Carlo simulations were generated with SIMIND (Lund
University) and reconstructed to tomographic slices with ASPIRE (University of Michigan).
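The point-to-surface (P2S) error used for evaluation can be sketched as a nearest-point distance between a segmented surface sample and a densely sampled gold standard. This vertex-to-vertex simplification is an assumption; a real implementation would use point-to-triangle distances on the mesh:

```python
import numpy as np

def p2s_error(segmented, gold):
    """Distance from each segmented point to its nearest gold-standard point."""
    diffs = segmented[:, None, :] - gold[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

# Toy example: a unit sphere sampled two ways; the gold standard is dense,
# the "segmentation" is sparse and inflated by 2% to simulate a surface error.
rng = np.random.default_rng(5)
gold = rng.standard_normal((2000, 3))
gold /= np.linalg.norm(gold, axis=1, keepdims=True)
seg = rng.standard_normal((100, 3))
seg /= np.linalg.norm(seg, axis=1, keepdims=True)
seg *= 1.02

errors = p2s_error(seg, gold)
mean_p2s = errors.mean()
```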
Estimation of 3D myocardial motion from tagged MRI using LDDMM
Show abstract
Non-invasive estimation of regional cardiac function is important for assessment of myocardial contractility.
The use of MR tagging technique enables acquisition of intra-myocardial tissue motion by placing a spatially
modulated pattern of magnetization whose deformation with the myocardium over the cardiac cycle can be
imaged. Quantitative computation of parameters such as wall thickening, shearing, rotation, torsion and strain
within the myocardium is traditionally achieved by processing the tag-marked MR image frames to 1) segment
the tag lines and 2) detect the correspondence between points across the time-indexed frames. In this paper,
we describe our approach to solving this problem using the Large Deformation Diffeomorphic Metric Mapping
(LDDMM) algorithm in which tag-line segmentation and motion reconstruction occur simultaneously. Our
method differs from earlier proposed nonrigid registration based cardiac motion estimation methods in that
our matching cost incorporates image intensity overlap via the L2 norm and the estimated transformations are
diffeomorphic. We also present a novel method of generating synthetic tag line images with known ground truth
and motion characteristics that closely follow those in the original data; these can be used for validation of
motion estimation algorithms. Initial validation shows that our method is able to accurately segment tag-lines
and estimate a dense 3D motion field describing the motion of the myocardium in both the left and the right
ventricle.
Analysis of kernel method for surface curvature estimation
Show abstract
Surface curvature estimation is a common component of CT colonography computer-aided polyp detection algorithms.
A commonly used method to compute such curvatures employs convolution kernels. We have observed situations where
the kernel method produces inaccurate results that could lead to undesirable false negative and false positive polyp
diagnoses. In this paper, we numerically examine this method of curvature estimation. We propose optimal choices for
smoothing parameters intrinsic to the method. The proposed smoothing parameters achieve more accurate and reliable
curvatures compared to those reported in the literature. Our results include responses of the system with respect to
Gaussian smoothing and Gaussian noise, results on the accuracy of the curvature estimation as a function of the distance
from the true surface, and examples of specific topologies of the colonic surface for which the kernel method yields
inaccurate responses.
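The level-set curvature that such methods estimate can be written in terms of image derivatives. The 2D analogue below uses plain central differences on a clean distance-map phantom, where the true level-set curvature at radius r is 1/r; a real CAD pipeline would instead convolve noisy 3D CT data with Gaussian-derivative kernels, which is where the smoothing parameters discussed above enter:

```python
import numpy as np

# Distance-map phantom: level sets are circles of curvature 1/r.
n = 201
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
img = np.sqrt(x ** 2 + y ** 2)
h = 2.0 / (n - 1)

# First and second image derivatives via central differences.
Iy, Ix = np.gradient(img, h)
Iyy, Iyx = np.gradient(Iy, h)
Ixy, Ixx = np.gradient(Ix, h)

# Level-set (isophote) curvature of a 2D image.
denom = (Ix ** 2 + Iy ** 2) ** 1.5 + 1e-12
kappa = (Ix ** 2 * Iyy - 2.0 * Ix * Iy * Ixy + Iy ** 2 * Ixx) / denom

# Sample the estimate at (x, y) = (0.5, 0), where the true curvature is 2.
i = n // 2
j = n // 2 + (n - 1) // 4
kappa_at_half = kappa[i, j]
```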
A new electronic colon cleansing method for virtual colonoscopy
Show abstract
Virtual colonoscopy has been developed as a non-invasive, safe, and low-cost method to evaluate colon polyps.
Implementation and efficiency of virtual colonoscopy require rigorous cleansing of the colon prior to the examination.
Electronic colon cleansing is a new technology that virtually cleans stool residues tagged with contrast agents from the
obtained computed tomography (CT) images. From our previous studies on electronic colon cleansing, we found that
residual stool and fluid are often problematic for optimal viewing of the colon. In this paper, we focus on developing a
model-based approach to correct both the non-uniformity and partial volume effects appearing in regions of bone and tagged
stool residues. A statistical method based on maximum a posteriori probability (MAP) was developed to identify and virtually
clean the tagged stool residues. In calculating the solution, the well-known expectation maximization (EM) algorithm is
employed. Experimental results of electronic colon cleansing are promising.
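The EM step can be illustrated by fitting a two-class Gaussian mixture to a toy intensity histogram and treating the high-intensity class as tagged material to be virtually removed. This is a simplified stand-in for the paper's MAP formulation, and the intensity values are illustrative, not a calibrated CT model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy CT-like intensities: soft tissue around 40, tagged stool around 300.
tissue = rng.normal(40.0, 15.0, 1500)
tagged = rng.normal(300.0, 30.0, 500)
data = np.concatenate([tissue, tagged])

# EM for a two-component Gaussian mixture.
mu = np.array([0.0, 200.0])
sigma = np.array([50.0, 50.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: class responsibilities for each voxel intensity.
    pdf = (pi / (sigma * np.sqrt(2.0 * np.pi))
           * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters.
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(data)

# Voxels assigned to the high-intensity class would be "virtually cleansed".
labels = resp.argmax(axis=1)
```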
Magnetic resonance microwave absorption imaging: initial experimental results in phantoms
Show abstract
We have used phase contrast magnetic resonance gradients to image small displacements induced by a pulsed 434
MHz microwave field. Thermoelastic expansions, which are related to the tissues' local microwave absorption
properties and the applied microwave field distribution are encoded into the phase of MR images. The imaging
principles are applicable to other irradiation sources. Initial efforts to develop the signal generation and control
necessary for synchronous pulsed microwave power deposition and MR image acquisition have been successful.
Preliminary results suggest that MR phase accumulation associated with microwave power deposition in a
localized absorber has been observed.
Optical tomography for breast cancer imaging using a two-layer tissue model to include chest-wall effects
Show abstract
In this paper we have combined the solutions of diffusion equations for two-layer tissue structure with the
linear perturbation method for imaging and shown the advantages of this method over the use of semi-infinite
tissue model for imaging two-layer tissue structures. Analytical solutions have been derived for the
diffusion equations of light propagation in a two-layer tissue structure, and several groups have used them
to fit for the optical properties of the two layers. Using these solutions for imaging tumors embedded in
two-layer tissue structures is shown to yield better images due to more accurate evaluation of the weight
matrix that takes into account the light propagation effects in both the tissue layers, as compared to using
the solutions of semi-infinite tissue model which is a good approximation for the problem when the upper
layer of tissue is at least 2 cm thick. Although this method can be used for imaging any layered tissue, in
this paper we have shown examples of breast imaging and account for the effect of underlying chest-wall.
It is shown that considering a semi-infinite tissue model leads to higher errors in breast imaging when the
patient has a breast thickness smaller than about 2 cm. We have shown improved imaging contrast
especially for cases with smaller breast using this new method. The method was optimized using data
obtained from Finite-element method (FEM) for target embedded in two-layer tissue structures and
phantom experiments. Clinical results are also presented for breast imaging using this new method.
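For context, the semi-infinite model that the two-layer solution improves upon is commonly written with the method of images about an extrapolated boundary. A minimal sketch of that continuous-wave fluence expression follows, with illustrative optical properties (not the paper's values) and an index-matched boundary assumed.

```python
import math

# Illustrative optical properties for breast tissue (assumed values).
mu_a = 0.05      # absorption coefficient, 1/cm
mu_s_p = 10.0    # reduced scattering coefficient, 1/cm

D = 1.0 / (3.0 * (mu_a + mu_s_p))   # diffusion coefficient, cm
mu_eff = math.sqrt(mu_a / D)        # effective attenuation, 1/cm
z0 = 1.0 / (mu_a + mu_s_p)          # depth of equivalent isotropic source, cm
zb = 2.0 * D                        # extrapolated boundary distance (index-matched)

def fluence_semi_infinite(rho, z):
    """CW fluence (per unit source power) at radial distance rho and
    depth z in a semi-infinite medium, via a real source at z0 and a
    negative image source mirrored about the extrapolated boundary."""
    r1 = math.hypot(rho, z - z0)               # distance to real source
    r2 = math.hypot(rho, z + z0 + 2.0 * zb)    # distance to image source
    return (math.exp(-mu_eff * r1) / r1
            - math.exp(-mu_eff * r2) / r2) / (4.0 * math.pi * D)

print(fluence_semi_infinite(2.0, 0.5))
```

The paper's point is that when a second layer (the chest wall) lies within about 2 cm of the surface, this single-layer Green's function misestimates the weight matrix, and the two-layer solution should be used instead.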
Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT
The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this
study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo
retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement
results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value
from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images,
(2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, we used an eyeball phantom with a circular dent to model the ONH: the shape of the depression reconstructed from the stereo image pair was compared with physical measurements, and the results were approximately consistent. The depth of the ONH obtained using the stereo retinal images was also in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH in the diagnosis of glaucoma.
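Step (4), the depth calculation, reduces to standard stereo triangulation, Z = f·B/d, once disparity d has been detected. The focal length, baseline, and disparity values below are hypothetical camera parameters, not the paper's calibration.

```python
# Stereo triangulation: depth is focal length times baseline over disparity.
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px   # depth in mm

# A nearer point (cup rim) has larger disparity than a farther point
# (cup floor); the depth difference estimates the relative cup depth.
rim = depth_from_disparity(focal_px=1200.0, baseline_mm=3.0, disparity_px=18.0)
floor = depth_from_disparity(focal_px=1200.0, baseline_mm=3.0, disparity_px=17.5)
print(round(floor - rim, 3))  # -> 5.714 (mm, for these assumed parameters)
```

Only relative depth across the ONH is needed for cup-shape assessment, which is why sub-pixel disparity accuracy matters more than absolute calibration.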
Accuracy limits for efficiently determining shape and size of low-density lipoprotein macromolecules from cryogenic transmission electron microscope images
Previous research has shown that the size (diameter) of LDL particles can have an effect on cardiovascular health
and that LDL macromolecules may be non-spherical in shape. Some of these studies, however, used methods that are not
conducive to automatic determination of the shape and size parameters of the particles. In particular, these prior methods
used either centrifugal separations leading to mass/volume ratios or manual determination of parameters from scanned
micrographs. This paper describes the investigation of methods of efficiently determining the geometric shape and size
of LDL macromolecules from scanned micrographs. Variants of direct correlation of computer-generated geometric models to the orthonormal-projection CTEM micrographs were investigated, to determine whether the pertinent geometric parameters of the expected discoid shape of the LDL could be recovered. Analysis software was developed to analyze
artificially generated discoid objects to determine the limits of the method. The results of this research show that the
described method can be used to determine the shape and size of LDL particles to within a few pixels in both radius and
height of the purported discoid shapes. By allowing for efficient generation of a histogram of LDL parameters in samples
of blood, it is hoped that the pertinent parameters can then be correlated to observed cardiovascular state in order to
assist in the determination of pertinent relationships between LDL geometry and overall cardiovascular health.
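The direct-correlation idea can be sketched as scoring computer-generated discoid templates against an observed projection with normalized cross-correlation and keeping the best-matching parameters. The image size, radii, and binary disc model below are illustrative stand-ins, not the paper's analysis software.

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation between two same-sized arrays."""
    a = image - image.mean()
    b = template - template.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def disc(size, radius):
    """Binary disc template: a face-on projection of a discoid particle."""
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    return ((xx - c) ** 2 + (yy - c) ** 2 <= radius ** 2).astype(float)

observed = disc(33, 10.0)                  # stand-in for a segmented particle
radii = np.arange(6.0, 15.0, 1.0)          # candidate model radii, pixels
scores = [ncc(observed, disc(33, r)) for r in radii]
best = radii[int(np.argmax(scores))]
print(best)  # -> 10.0 (radius of the best-matching model, pixels)
```

Repeating this scan over many segmented particles yields the histogram of size parameters that the abstract proposes correlating with cardiovascular state.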
Determination of the chemical composition of human renal stones with MDCT: influence of the surrounding media
The selection of the optimal treatment method for urinary stone disease depends on the chemical composition of the stone and its corresponding fragility. MDCT has become the most widely used modality for the rapid and accurate detection of stones in the evaluation of urinary lithiasis. For this reason, several studies have attempted to determine the chemical composition of stones from their X-ray attenuation, both in vitro and in vivo. However, the in-vitro studies did not reproduce the normal abdominal wall and fat, which makes the standardization of the obtained values uncertain.
The aim of this study is to obtain the X-ray attenuation values (in Hounsfield units) of the six most frequent types of human renal stones (n=217) and to analyze the influence of the surrounding medium on these values. The stones were first placed in a jelly whose X-ray attenuation is similar to that of the human kidney (30 HU at 120 kV). They were then stuck on a grid, scanned in a water tank, and finally scanned in air.
Significant differences in CT attenuation values were obtained with the three surrounding media (jelly, water, air). The surrounding medium therefore influences, and introduces discrepancies into, the determination of the chemical composition of renal stones.
Consequently, CT attenuation values obtained in in-vitro studies cannot be considered a reference for the determination of chemical composition unless an anthropomorphic phantom is used.
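The sensitivity to the surrounding medium follows directly from the definition of the Hounsfield scale, which references the measured linear attenuation coefficient of a voxel to that of water; beam hardening from surrounding material shifts the effective μ and hence the HU reading. The μ values below are illustrative, not measurements from the study.

```python
# Hounsfield unit conversion: HU = 1000 * (mu - mu_water) / mu_water.
def hounsfield(mu, mu_water):
    return 1000.0 * (mu - mu_water) / mu_water

# Illustrative: a small shift in effective attenuation (e.g. from the
# surrounding medium hardening the beam) changes the HU value noticeably.
print(round(hounsfield(0.20, 0.19), 1))   # -> 52.6 HU
print(round(hounsfield(0.21, 0.19), 1))   # -> 105.3 HU
```

Because stone-composition thresholds are expressed in HU, any medium-dependent shift of this size can move a stone across a classification boundary, which is the study's central caution.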