The OCT penlight: in-situ image guidance for microsurgery
Author(s):
John Galeotti;
Areej Sajjad;
Bo Wang;
Larry Kagemann;
Gaurav Shukla;
Mel Siegel;
Bing Wu;
Roberta Klatzky;
Gadi Wollstein;
Joel S. Schuman;
George Stetten
We have developed a new image-based guidance system for microsurgery using optical coherence tomography
(OCT), which presents a virtual image in its correct location inside the scanned tissue. Applications include surgery of
the cornea, skin, and other surfaces below which shallow targets may advantageously be displayed for the naked eye or
low-power magnification by a surgical microscope or loupes (magnifying eyewear). OCT provides real-time high-resolution
(3 micron) images at video rates within an axial range of two or more millimeters in soft tissue, and is therefore
suitable for guidance to various shallow targets such as Schlemm's canal in the eye (for treating glaucoma) or skin
tumors. A series of prototypes of the "OCT penlight" have produced virtual images with sufficient resolution and
intensity to be useful under magnification, while the geometrical arrangement between the OCT scanner and display
optics (including a half-silvered mirror) permits sufficient surgical access. The two prototypes constructed thus far have
used, respectively, a miniature organic light emitting diode (OLED) display and a reflective liquid crystal on silicon
(LCoS) display. The OLED has the advantage of relative simplicity, satisfactory resolution (15 micron), and color
capability, whereas the LCoS can produce an image with much higher intensity and superior resolution (12 micron),
although it is monochromatic and more complicated optically. Intensity is a crucial limiting factor, since light flux is
greatly diminished with increasing magnification, thus favoring the LCoS as the more practical system.
Fusion of intraoperative cone-beam CT and endoscopic video for image-guided procedures
Author(s):
M. J. Daly;
H. Chan;
E. Prisman;
A. Vescan;
S. Nithiananthan;
J. Qiu;
R. Weersink;
J. C. Irish;
J. H. Siewerdsen
Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been
developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and
tissue excision. The system is based on a prototype mobile C-Arm for intraoperative CBCT that provides low-dose 3D
image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for
real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical
plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT
images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and
electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal
length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera
are calibrated from images of a planar calibration checkerboard (2.5×2.5 mm² squares) obtained at different
perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the
endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT
[surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical
structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical
sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in
CBCT-guided head and neck surgery.
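A minimal sketch of the checkerboard-based intrinsic calibration step described above, assuming OpenCV is available; the file names, the 9×6 inner-corner board size, and all settings are illustrative assumptions, not the authors' setup.

```python
# Sketch: intrinsic calibration of an endoscopic camera from checkerboard
# images (2.5 mm squares as in the abstract; board size and file names assumed).
import glob
import cv2
import numpy as np

pattern = (9, 6)                      # inner corners per row/column (assumed)
square = 2.5                          # mm
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in glob.glob("checkerboard_*.png"):        # hypothetical image files
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds focal length and principal point; dist holds non-linear distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
```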
Integrating the visualization concept of the medical imaging interaction toolkit (MITK) into the XIP-Builder visual programming environment
Author(s):
Ivo Wolf;
Marco Nolden;
Tobias Schwarz;
Hans-Peter Meinzer
The Medical Imaging Interaction Toolkit (MITK) and the eXtensible Imaging Platform (XIP) both aim at
facilitating the development of medical imaging applications, but provide support on different levels. MITK
offers support from the toolkit level, whereas XIP comes with a visual programming environment.
XIP is strongly based on Open Inventor. Open Inventor with its scene graph-based rendering paradigm was
not specifically designed for medical imaging, but focuses on creating dedicated visualizations. MITK has a
visualization concept with a model-view-controller like design that assists in implementing multiple, consistent
views on the same data, which is typically required in medical imaging. In addition, MITK defines a unified means
of describing position, orientation, bounds, and (if required) local deformation of data and views, supporting
e.g. images acquired with gantry tilt and curved reformations. The actual rendering is largely delegated to the
Visualization Toolkit (VTK).
This paper presents an approach for integrating the visualization concept of MITK with XIP, especially
into the XIP-Builder. This is a first step toward combining the advantages of both platforms. It enables experimenting
with algorithms in the XIP visual programming environment without requiring a detailed understanding of Open
Inventor. Using MITK-based add-ons to XIP, any number of data objects (images, surfaces, etc.) produced by
algorithms can simply be added to an MITK DataStorage object and rendered into any number of slice-based
(2D) or 3D views. Both MITK and XIP are open-source C++ platforms. The extensions presented in this paper
will be available from www.mitk.org.
High-accuracy registration of intraoperative CT imaging
Author(s):
A. Oentoro;
R. E. Ellis
Image-guided interventions using intraoperative 3D imaging can be less cumbersome than systems dependent on preoperative
images, because they require neither a potentially invasive image-to-patient registration nor a lengthy process of
segmenting and generating a 3D surface model. In this study, a method for computer-assisted surgery using direct navigation
on intraoperative imaging is presented. In this system the registration step of a navigated procedure was divided into
two stages: preoperative calibration of images to a ceiling-mounted optical tracking system, and intraoperative tracking
during acquisition of the 3D medical image volume. The preoperative stage used a custom-made multi-modal calibrator
that could be optically tracked and also contained fiducial spheres for radiological detection; a robust registration algorithm
was used to compensate for the very high false-detection rate that was due to the high physical density of the optical
light-emitting diodes. Intraoperatively, a tracking device was attached to plastic bone models that were also instrumented
with radio-opaque spheres; a calibrated pointer was used to contact the latter spheres to validate the registration.
Experiments showed that the fiducial registration error of the preoperative calibration stage was approximately 0.1 mm.
The target registration error in the validation stage was approximately 1.2 mm. This study suggests that direct registration,
coupled with procedure-specific graphical rendering, is potentially a highly accurate means of performing image-guided
interventions in a fast, simple manner.
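For reference, a minimal sketch of least-squares rigid point registration (the classic Arun/Horn SVD solution) together with the fiducial (FRE) and target (TRE) registration errors reported above; it is a generic illustration with synthetic points, not the robust outlier-tolerant algorithm used in the paper.

```python
# Sketch: rigid point registration via SVD, with FRE and TRE on synthetic data.
import numpy as np

def rigid_register(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 10.0])
src = np.random.rand(6, 3) * 100          # fiducials in image space (mm)
dst = src @ R_true.T + t_true + np.random.normal(0, 0.1, src.shape)  # FLE ~0.1 mm

R, t = rigid_register(src, dst)
fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
target = np.array([[20.0, 30.0, 40.0]])   # a validation target point
tre = np.linalg.norm(target @ R.T + t - (target @ R_true.T + t_true))
print(f"FRE = {fre:.3f} mm, TRE = {tre:.3f} mm")
```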
Intraoperative positioning of mobile C-arms using artificial fluoroscopy
Author(s):
Philipp Dressel;
Lejing Wang;
Oliver Kutter;
Joerg Traub;
Sandro-Michael Heining;
Nassir Navab
In trauma and orthopedic surgery, imaging through X-ray fluoroscopy with C-arms is ubiquitous. This leads to
an increase in ionizing radiation applied to the patient and clinical staff. Placing these devices in the desired position
to visualize a region of interest is a challenging task, requiring both skill of the operator and numerous X-rays
for guidance. We propose an extension to C-arms for which position data is available that provides the surgeon
with so-called artificial fluoroscopy. This is achieved by computing digitally reconstructed radiographs (DRRs)
from pre- or intraoperative CT data. The approach is based on C-arm motion estimation, for which we employ
a Camera Augmented Mobile C-arm (CAMC) system, and a rigid registration of the patient to the CT data.
Using this information we are able to generate DRRs and simulate fluoroscopic images. For positioning tasks,
this system appears almost exactly like conventional fluoroscopy, but the images are simulated from the CT
data in real time as the C-arm is moved, without the application of ionizing radiation. Furthermore, preoperative
planning can be done on the CT data and then visualized during positioning, e.g. defining drilling axes for
pedicle approach techniques. Since our method does not require external tracking it is suitable for deployment
in clinical environments and day-to-day routine. An experiment with six drillings into a lumbar spine phantom
showed reproducible accuracy in positioning the C-arm, ranging from 1.1 mm to 4.1 mm deviation of marker
points on the phantom compared in real and virtual images.
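To illustrate what a DRR is, here is a deliberately simplified sketch that integrates CT attenuation along parallel rays; the actual system renders DRRs with the C-arm's projective geometry estimated by the CAMC system, which is not reproduced here.

```python
# Sketch: a minimal parallel-ray DRR from a CT volume (placeholder data).
import numpy as np

def parallel_drr(ct_hu, spacing_mm, axis=1, mu_water=0.02):
    """Line integral of linear attenuation along one volume axis."""
    mu = mu_water * (1.0 + ct_hu / 1000.0)        # HU -> attenuation (1/mm)
    mu = np.clip(mu, 0, None)
    line_integral = mu.sum(axis=axis) * spacing_mm[axis]
    return np.exp(-line_integral)                 # simulated transmitted intensity

ct = np.random.uniform(-1000, 1000, (64, 64, 64))  # placeholder CT volume (HU)
drr = parallel_drr(ct, spacing_mm=(1.0, 1.0, 1.0))
print(drr.shape)                                    # (64, 64) fluoroscopy-like image
```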
3D model-based catheter tracking for motion compensation in EP procedures
Author(s):
Alexander Brost;
Rui Liao;
Joachim Hornegger;
Norbert Strobel
Atrial fibrillation is the most common sustained heart arrhythmia and a leading cause
of stroke. Its treatment by radio-frequency catheter ablation, performed using fluoroscopic image
guidance, is becoming increasingly important. Two-dimensional fluoroscopic navigation
can take advantage of overlay images derived from pre-operative 3-D data to add anatomical
details otherwise not visible under X-ray. Unfortunately, respiratory motion may impair
the utility of these static overlay images for catheter navigation. We developed an approach for
image-based 3-D motion compensation as a solution to this problem. A bi-plane C-arm system
is used to take X-ray images of a special circumferential mapping catheter from two directions.
In the first step of the method, a 3-D model of the device is reconstructed. Three-dimensional
respiratory motion at the site of ablation is then estimated by tracking the reconstructed catheter
model in 3-D. This step involves bi-plane fluoroscopy and 2-D/3-D registration. Phantom data
and clinical data were used to assess our model-based catheter tracking method. Experiments
involving a moving heart phantom yielded an average 2-D tracking error of 1.4 mm and an average
3-D tracking error of 1.1 mm. Our evaluation of clinical data sets comprised 469 bi-plane
fluoroscopy frames (938 monoplane fluoroscopy frames). We observed an average 2-D tracking
error of 1.0 mm ± 0.4 mm and an average 3-D tracking error of 0.8 mm ± 0.5 mm. These results
demonstrate that model-based motion-compensation based on 2-D/3-D registration is both
feasible and accurate.
Respiratory motion compensated overlay of surface models from cardiac MR on interventional x-ray fluoroscopy for guidance of cardiac resynchronization therapy procedures
Author(s):
R. Manzke;
A. Bornstedt;
A. Lutz;
M. Schenderlein;
V. Hombach;
L. Binner;
V. Rasche
Various multi-center trials have shown that cardiac resynchronization therapy (CRT) is an effective procedure
for patients with end-stage, drug-refractory heart failure (HF). Despite the encouraging results of CRT, at least
30% of patients do not respond to the treatment. Detailed knowledge of the cardiac anatomy (coronary
venous tree, left ventricle) and functional parameters (i.e., ventricular synchronicity) is expected to improve
CRT patient selection and interventional lead placement, thereby reducing the number of non-responders.
As a pre-interventional imaging modality, cardiac magnetic resonance (CMR) imaging has the potential
to provide all relevant information. With functional information from CMR, optimal implantation target
sites may be better identified. Pre-operative CMR could also help to determine whether useful vein target
segments are available for lead placement. Fused with X-ray, the mainstay interventional modality, CMR could
further improve interventional guidance for lead placement and thus the procedure outcome.
In this contribution, we present novel and practicable methods for a) pre-operative functional and anatomical
imaging of cardiac structures relevant to CRT using CMR, b) 2D-3D registration of CMR anatomical and
functional meshes with X-ray vein angiograms and c) real-time capable breathing motion compensation for
improved fluoroscopy mesh overlay during the intervention based on right ventricular pacer lead tracking.
With these methods, enhanced interventional guidance for left ventricular lead placement is provided.
Estimating heart shift and morphological changes during minimally invasive cardiac interventions
Author(s):
Cristian A. Linte;
Mathew Carias;
Daniel S. Cho;
Danielle F. Pace;
John Moore;
Chris Wedlake;
Daniel Bainbridge;
Bob Kiaii;
Terry M. Peters
Image-guided interventions rely on the common assumption that pre-operative information can depict intraoperative
morphology with sufficient accuracy. Nevertheless, in the context of minimally invasive cardiac therapy
delivery, this assumption loses ground; the heart is a soft-tissue organ prone to changes induced during access to
the heart and especially intracardiac targets. In addition to its clinical value for cardiac interventional guidance
and assistance with the image- and model-to-patient registration, here we show how ultrasound imaging may be
used to estimate changes in the heart position and morphology of structures of interest at different stages in the
procedure. Using a magnetically tracked 2D transesophageal echocardiography transducer, we acquired in vivo
images of the heart at different stages during the procedural workflow of common minimally invasive cardiac
procedures, including robot-assisted coronary artery bypass grafting, mitral valve replacement/repair, or model-enhanced
US-guided intracardiac interventions, all in the coordinate system of the tracking system. Anatomical
features of interest (mitral and aortic valves) used to register the pre-operative anatomical models to the intraoperative
coordinate frame were identified from each dataset. This information allowed us to identify the global
position of the heart and also characterize the valvular structures at various peri-operative stages, in terms of
their orientation, size, and geometry. Based on these results, we can estimate the differences between the pre- and
intra-operative anatomical features, their effect on the model-to-subject registration, and also identify the
need to update or optimize any pre-operative surgical plan to better suit the intra-operative procedure workflow.
Artifact reduction method for improved visualization of 3D coronary artery reconstructions from rotational angiography acquisitions
Author(s):
Anne M. Neubauer;
Eberhard Hansis;
John D. Carroll;
Michael Grass
High quality and high resolution three dimensional reconstruction of the coronary arteries from clinically obtained
rotational X-ray images during contrast injection has recently been attained through the use of advanced image
processing techniques, including gating, optimal heart phase selection, motion compensation, and iterative
reconstruction. While these strategies have produced excellent results despite severe angular under-sampling, the
volumes that result from these techniques contain artifact/background signal features which impede both qualitative
and quantitative analysis. This paper details a method for artifact removal from reconstructed 3D coronary
angiograms that uses a priori image content information to maximize the background removal while minimizing
influence on the reconstructed vessels. A variety of parameters are explored, and results indicate that this method can
greatly improve visualization for use in the catheterization laboratory as well as reduce the impact of the visualization
grey scale (window/level) on qualitative evaluation of the data.
Semi-automatic segmentation of major aorto-pulmonary collateral arteries (MAPCAs) for image guided procedures
Author(s):
David Rivest-Hénault;
Luc Duong;
Chantale Lapierre;
Sylvain Deschênes;
Mohamed Cheriet
Manual segmentation of pre-operative volumetric datasets is generally time-consuming and results are subject
to large inter-user variabilities. Level-set methods have been proposed to improve segmentation consistency by
interactively finding the segmentation boundaries with respect to some priors. However, in thin and elongated
structures, such as major aorto-pulmonary collateral arteries (MAPCAs), edge-based level set methods might be
subject to flooding whereas region-based level set methods may not be selective enough. The main contribution
of this work is to propose a novel expert-guided technique for the segmentation of the aorta and of the attached
MAPCAs that is resilient to flooding while keeping the localization properties of an edge-based level set method.
In practice, a two-stage approach is used. First, the aorta is delineated by using manually inserted seed points
at key locations and an automatic segmentation algorithm. The latter includes an intensity likelihood term that
prevents leakage of the contour in regions of weak image gradients. Second, the origins of the MAPCAs are
identified by using another set of seed points, then the MAPCAs' segmentation boundaries are evolved while
being constrained by the aorta segmentation. This prevents the aorta from interfering with the segmentation of the
MAPCAs. Our preliminary results are promising and indicate that an accurate segmentation of
the aorta and MAPCAs can be obtained with a reasonable amount of effort.
A system for visualization and automatic placement of the endoclamp balloon catheter
Author(s):
Hugo Furtado;
Thomas Stüdeli;
Mauro Sette;
Eigil Samset;
Borut Gersak
The European research network "Augmented Reality in Surgery" (ARIS*ER) developed a system that supports
minimally invasive cardiac surgery based on augmented reality (AR) technology. The system supports the surgical team
during aortic endoclamping where a balloon catheter has to be positioned and kept in place within the aorta. The
presented system addresses the two biggest difficulties of the task: lack of visualization and difficulty in maneuvering the
catheter.
The system was developed using a user-centered design methodology, with medical doctors, engineers, and human factors
specialists equally involved in all development steps. The system was implemented using the AR framework
"Studierstube" developed at TU Graz and can be used to visualize in real-time the position of the balloon catheter inside
the aorta. The spatial position of the catheter is measured by a magnetic tracking system and superimposed on a 3D
model of the patient's thorax. The alignment is made with a rigid registration algorithm. Together with a user defined
target, the spatial position data drives an actuator which adjusts the position of the catheter in the initial placement and
corrects migrations during the surgery.
Two user studies with a silicone phantom show promising results regarding the usefulness of the system: users perform
the placement tasks faster and more accurately than with the current, restricted visual support. Animal studies also
provided a first indication that the system brings additional value in the real clinical setting. This work represents a major
step towards safer and simpler minimally invasive cardiac surgery.
Automatic generation of boundary conditions using Demons non-rigid image registration for use in 3D modality-independent elastography
Author(s):
Thomas S. Pheiffer;
Jao J. Ou;
Michael I. Miga
Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue
using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical
input to the algorithm, and are often determined by time-consuming point correspondence methods requiring manual
user input. Unfortunately, generation of accurate boundary conditions for the biomechanical model is often difficult due
to the challenge of accurately matching points between the source and target surfaces and consequently necessitates the
use of large numbers of fiducial markers. This study presents a novel method of automatically generating boundary
conditions by non-rigidly registering two image sets with a Demons diffusion-based registration algorithm. The use of
this method was successfully performed in silico using magnetic resonance and X-ray computed tomography image data
with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to
80% compared to the known conditions. Finally, these boundary conditions were utilized within a 3D MIE
reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Preliminary results show a
reasonable characterization of the material properties on this first attempt and a significant improvement in the
automation level and viability of the method.
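One plausible way to derive mesh boundary displacements from a Demons diffusion-based registration, sketched with SimpleITK's DemonsRegistrationFilter; the file names, parameters, and surface-node coordinates are placeholders, and both image states are assumed to live on a common grid.

```python
# Sketch: boundary-condition displacements for a biomechanical mesh from a
# Demons registration of the undeformed (source) and deformed (target) states.
import SimpleITK as sitk
import numpy as np

fixed  = sitk.ReadImage("undeformed.nii.gz", sitk.sitkFloat32)   # hypothetical files
moving = sitk.ReadImage("deformed.nii.gz", sitk.sitkFloat32)
moving = sitk.Resample(moving, fixed)          # put moving onto the fixed grid

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(2.0)              # Gaussian regularization (voxels)
disp = demons.Execute(fixed, moving)           # dense displacement field (mm)
disp_arr = sitk.GetArrayFromImage(disp)        # shape (z, y, x, 3): (dx, dy, dz)

def displacement_at(point_mm):
    """Nearest-voxel lookup of the displacement at a mesh surface node."""
    idx = fixed.TransformPhysicalPointToIndex(point_mm)   # (x, y, z)
    return disp_arr[idx[2], idx[1], idx[0]]

boundary_nodes = [(25.0, 40.0, 12.0), (30.0, 41.5, 13.0)]  # placeholder nodes (mm)
bcs = np.array([displacement_at(p) for p in boundary_nodes])
print(bcs)   # per-node boundary displacements for the FE model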
Bladder wall flattening with conformal mapping for MR cystography
Author(s):
Ruirui Jiang;
Hongbin Zhu;
Wei Zeng;
Xiaokang Yu;
Yi Fan;
Xianfeng Gu;
Zhengrong Liang
Magnetic resonance visual cystoscopy or MR cystography (MRC) is an emerging tool for bladder tumor detection,
where three-dimensional (3D) endoscopic views on the inner bladder surface are being investigated by researchers. In
this paper, we further investigate an innovative strategy of visualizing the inner surface by flattening the 3D surface into
a 2D display, where conformal mapping, a mathematically proven, shape-preserving transformation, is used. The
original morphological, textural and even geometric information can be visualized in the flattened 2D image. Therefore,
radiologists do not have to manually control the viewpoint and angle to locate possible abnormalities as they
do in the 3D endoscopic views. Once an abnormality is detected on the 2D flattened image, its locations in the original
MR slice images and in the 3D endoscopic views can be retrieved since the conformal mapping is an invertible
transformation. In such a manner, the reading time needed by a radiologist can be expected to be reduced. In addition to
the surface information, the bladder wall thickness can be visualized with encoded colors on the flattened image. Both
normal volunteer and patient studies were performed to test the reconstruction of 3D surface, the conformal flattening,
and the visualization of the color-coded flattened image. A 3 cm bladder tumor is so conspicuous on the 2D flattened
image that it can be perceived at first sight. The patient dataset also shows a noticeable difference in wall-thickness
distribution compared with the volunteer dataset.
General approach to error prediction in point registration
Author(s):
Andrei Danilchenko;
J. Michael Fitzpatrick
A method for the first-order analysis of the point registration problem is presented and validated. The method is a unified
approach to the problem that allows for inhomogeneous and anisotropic fiducial localization error (FLE) and arbitrary
weighting in the registration algorithm. Cross-covariance matrices are derived both for target registration error (TRE)
and for weighted fiducial registration error (FRE). Furthermore, it is shown that for ideal weighting, in which the
weighting matrix for each fiducial equals the inverse of the square root of the cross covariance of the two-space FLE for
that fiducial, fluctuations of FRE and TRE are independent. These results are validated by comparison with previously
published expressions for special cases and by simulation, and are shown to be correct. Furthermore, simulations for
randomly generated fiducial positions and FLEs are presented that show that correlation is negligible (correlation
coefficient < 0.1) for uniform weighting (i.e., no weighting) as well. From these results we conclude that measures of the
goodness of fit of the fiducials, e.g., FRE, are unreliable estimators of registration accuracy, i.e., TRE, and should be
avoided.
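For orientation, one of the previously published special cases that generalized results of this kind are typically checked against is the classic first-order expression for the expected squared TRE under homogeneous, isotropic FLE (Fitzpatrick et al.); it is quoted here only as a reference point, not as the paper's general anisotropic result.

```latex
% Expected squared TRE at target r for N fiducials with isotropic FLE:
\left\langle \mathrm{TRE}^2(\mathbf{r}) \right\rangle \;\approx\;
\frac{\left\langle \mathrm{FLE}^2 \right\rangle}{N}
\left( 1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2} \right),
```

where d_k is the distance of the target from principal axis k of the fiducial configuration and f_k is the RMS distance of the fiducials from that axis.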
Correlation of hemodynamic forces and atherosclerotic plaque components
Author(s):
Gádór Canton;
Bernard Chiu;
Chun Yuan;
William S. Kerwin
Local hemodynamic forces in atherosclerotic carotid arteries are thought to trigger cellular and molecular mechanisms
that determine plaque vulnerability. Magnetic Resonance Imaging (MRI) has emerged as a powerful tool to characterize
human carotid atherosclerotic plaque composition and morphology, and to identify plaque features shown to be key
determinants of plaque vulnerability. Image-based computational fluid dynamics (CFD) has allowed researchers to
obtain time-resolved wall shear stress (WSS) information for atherosclerotic carotid arteries. A deeper understanding of
the mechanisms of initiation and progression of atherosclerosis can be obtained through the comparison of WSS and
plaque composition. The aim of this study was to explore the hypothesis that intra-plaque hemorrhage, a feature
associated with adverse outcomes and plaque progression, is more likely to occur in plaques with elevated WSS levels.
We compared 2D representations of the WSS distribution and the amount of intra-plaque hemorrhage to determine
relationships between WSS patterns and plaque vulnerability. We extracted WSS data to compare patterns between cases
with and without hemorrhage. We found elevated values of WSS at regions where intra-plaque hemorrhage was
detected, suggesting that WSS might be used as a marker for the risk of intra-plaque hemorrhage and subsequent
complications.
Aortic valve and ascending aortic root modeling from 3D and 3D+t CT
Author(s):
Saša Grbic;
Razvan Ioan Ionasec;
Dominik Zäuner;
Yefeng Zheng;
Bogdan Georgescu;
Dorin Comaniciu
Aortic valve disorders are the most frequent form of valvular heart disorders (VHD), affecting nearly 3% of
the global population. A large fraction of these are aortic root diseases, such as aortic root aneurysm,
often requiring surgical (valve-sparing) procedures as treatment. Non-invasive visual assessment techniques
could assist in selecting suitable patients, planning procedures, and evaluating them afterward.
However, state-of-the-art approaches model only a rather short part of the aortic root, which is insufficient to assist
the physician during intervention planning. In this paper we propose a novel approach for morphological and
functional quantification of both the aortic valve and the ascending aortic root. A novel physiological shape
model is introduced, consisting of the aortic valve root, leaflets and the ascending aortic root. The model
parameters are hierarchically estimated using robust and fast learning-based methods. Experiments performed
on 63 CT sequences (630 volumes) and 20 single-phase CT volumes demonstrated an accuracy of 1.45 mm and
a runtime of 30 seconds (3D+t) for this approach. To the best of our knowledge this is the first time a
complete model of the aortic valve (including leaflets) and the ascending aortic root, estimated from CT, has
been proposed.
Trajectory planning method for reduced patient risk in image-guided neurosurgery: concept and preliminary results
Author(s):
Reuben R. Shamir;
Leo Joskowicz;
Luca Antiga;
Roberto I. Foroni;
Yigal Shoshan
We present a new preoperative planning method to quantify and help reduce the risk associated with needle and tool
insertion trajectories in image-guided keyhole neurosurgery. The goal is to quantify the risk of a proposed straight
trajectory and/or to find the trajectory with the lowest risk to nearby brain structures based on pre-operative CT/MRI
images. The method automatically computes the risk associated with a given trajectory, or finds the lowest-risk
trajectory, based on preoperative image segmentation and on a risk volume map. The
surgeon can revise the suggested trajectory, add a new one using interactive 3D visualization, and obtain a quantitative
risk measure. The trajectory risk is evaluated based on the tool placement uncertainty, on the proximity of critical brain
structures, and on a predefined table of quantitative geometric risk measures. Our preliminary results on a clinical
dataset with eight targets show a significant reduction in trajectory risk and a shortening of the preoperative planning
time as compared to the conventional method.
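A small sketch of one plausible ingredient of such a risk-volume evaluation: scoring a sampled straight trajectory by its minimum distance to segmented critical structures via a Euclidean distance map. The mask, spacing, and endpoints are placeholders; the paper's risk table and placement-uncertainty model are not reproduced here.

```python
# Sketch: minimum distance from a straight trajectory to critical structures.
import numpy as np
from scipy.ndimage import distance_transform_edt

critical = np.zeros((128, 128, 128), bool)        # placeholder segmentation mask
critical[60:70, 60:70, 60:70] = True
dist_mm = distance_transform_edt(~critical, sampling=(1.0, 1.0, 1.0))

def trajectory_min_distance(entry, target, n=200):
    """Minimum distance (mm) from a sampled straight path to critical tissue."""
    pts = np.linspace(entry, target, n)
    idx = np.round(pts).astype(int)
    return dist_mm[idx[:, 0], idx[:, 1], idx[:, 2]].min()

print(trajectory_min_distance(np.array([5, 5, 5]), np.array([100, 100, 100])))
```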
Enhancement of subsurface brain shift model accuracy: a preliminary study
Author(s):
Ishita Garg;
Siyi Ding;
Aaron M. Coffey;
Prashanth Dumpuri;
Reid C. Thompson;
Benoit M. Dawant;
Michael I. Miga
Biomechanical models that describe soft-tissue deformations provide a relatively inexpensive way to correct registration
errors in image guided neurosurgical systems caused by non-rigid brain shifts. Quantifying the factors that cause this
deformation to sufficient precision is a challenging task. To circumvent this difficulty, atlas-based methods have been
developed recently that allow for uncertainty yet still capture the first-order effects associated with brain deformations.
More specifically, the technique involves building an atlas of solutions to account for the statistical uncertainty in factors
that control the direction and magnitude of brain shift. The inverse solution is driven by a sparse intraoperative surface
measurement. Since this subset of data only provides surface information, it could bias the reconstruction and affect the
subsurface accuracy of the model prediction. Studies in intraoperative MR have shown that the deformation in the
midline, tentorium, and contralateral hemisphere is relatively small. The falx cerebri and tentorium cerebelli, two of the
important dural septa, act as rigid membranes supporting the brain parenchyma and compartmentalizing the brain.
Accounting for these structures in models may be an important key to improving subsurface shift accuracy. The goals of
this paper are to describe a novel method developed to segment the tentorium cerebelli, develop the procedure for
modeling the dural septa and study the effect of those membranes on subsurface brain shift.
Evaluating the feasibility of C-arm CT for brain perfusion imaging: an in vitro study
Author(s):
A. Ganguly;
A. Fieselmann;
J. Boese;
C. Rohkohl;
J. Hornegger;
R. Fahrig
C-arm cone-beam CT (CBCT) is increasingly being used to supplement 2D real-time data with 3D information.
Temporal resolution is currently limited by the mechanical rotation speed of the C-arm which presents challenges
for applications such as imaging of contrast flow in brain perfusion CT (PCT). We present a novel scheme where
multiple scans are obtained at different start times with respect to the contrast injection. The data is interleaved
temporally and interpolated during 3D reconstruction. For evaluation we developed a phantom to generate the range
of temporal frequencies relevant for PCT. The highest requirements are for imaging the arterial input function (AIF)
modeled as a gamma-variate function. Fourier transform analysis of the AIF showed that 90% of the spectral energy
is contained at frequencies lower than 0.08 Hz. We built an acrylic cylinder phantom of 1.9 cm diameter, with 25
sections of 1 cm length each. The iodine concentration in each compartment was varied to produce a half-cycle sinusoidal
variation in HU in version 1, and 2.5 cycles in version 2 of the phantom. The phantom was moved linearly at speeds
from 0.5 cm/s to 4 cm/s (temporal frequencies of 0.02 Hz to 0.09 Hz) and imaged using a C-arm system. Phantom CT
numbers in a slice reconstructed at isocenter were measured and sinusoidal fits to the data were obtained. The fitted
sinusoids had frequencies that were within 3±2% of the actual temporal frequencies of the sinusoid. This suggests
that the imaging and reconstruction scheme is adequate for PCT imaging.
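A sketch of the kind of spectral-energy analysis described for the gamma-variate arterial input function; the curve parameters below are arbitrary placeholders, so the resulting 90%-energy frequency is illustrative rather than the 0.08 Hz value reported above.

```python
# Sketch: cumulative spectral energy of a gamma-variate AIF (placeholder parameters).
import numpy as np

dt = 0.1                                    # s
t = np.arange(0, 60, dt)
t0, alpha, beta = 5.0, 3.0, 1.5             # assumed gamma-variate parameters
tt = np.clip(t - t0, 0, None)
aif = tt ** alpha * np.exp(-tt / beta)      # contrast concentration curve

spec = np.abs(np.fft.rfft(aif)) ** 2
freqs = np.fft.rfftfreq(len(aif), d=dt)
cum = np.cumsum(spec) / spec.sum()
f90 = freqs[np.searchsorted(cum, 0.9)]
print(f"90% of spectral energy below {f90:.3f} Hz")
```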
Demons deformable registration for cone-beam CT guidance: registration of pre- and intra-operative images
Author(s):
S. Nithiananthan;
K. K. Brock;
M. J. Daly;
H. Chan;
J. C. Irish;
J. H. Siewerdsen
High-quality intraoperative 3D imaging systems such as cone-beam CT (CBCT) hold considerable promise for image-guided
surgical procedures in the head and neck. With a large amount of preoperative imaging and planning information
available in addition to the intraoperative images, it becomes desirable to be able to integrate all sources of imaging
information within the same anatomical frame of reference using deformable image registration. Fast intensity-based
algorithms are available which can perform deformable image registration within a period of time short enough for
intraoperative use. However, CBCT images often contain voxel intensity inaccuracy which can hinder registration
accuracy - for example, due to x-ray scatter, truncation, and/or erroneous scaling normalization within the 3D
reconstruction algorithm. In this work, we present a method of integrating an iterative intensity matching step within the
operation of a multi-scale Demons registration algorithm. Registration accuracy was evaluated in a cadaver model and
showed that a conventional Demons implementation (with either no intensity match or a single histogram match)
introduced anatomical distortion and degradation in target registration error (TRE). The iterative intensity matching
procedure, on the other hand, provided robust registration across a broad range of intensity inaccuracies.
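For context, a SimpleITK sketch of the conventional baseline mentioned above: a single global histogram match of CT to CBCT followed by Demons registration. The paper's contribution, an iterative intensity-matching step inside the multi-scale Demons loop, is not reproduced here; file names and parameters are placeholders.

```python
# Sketch: single histogram match + Demons baseline (SimpleITK assumed).
import SimpleITK as sitk

ct   = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)     # hypothetical files
cbct = sitk.ReadImage("intraop_cbct.nii.gz", sitk.sitkFloat32)
ct   = sitk.Resample(ct, cbct)                 # bring CT onto the CBCT grid

matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(256)
matcher.SetNumberOfMatchPoints(15)
matcher.ThresholdAtMeanIntensityOn()
ct_matched = matcher.Execute(ct, cbct)         # map CT intensities toward CBCT

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(2.0)              # Gaussian regularization (voxels)
disp = demons.Execute(cbct, ct_matched)        # dense CT-to-CBCT displacement field

warped_ct = sitk.Resample(ct_matched, cbct,
                          sitk.DisplacementFieldTransform(disp),
                          sitk.sitkLinear, 0.0, sitk.sitkFloat32)
```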
Biomechanical based image registration for head and neck radiation treatment
Author(s):
Adil Al-Mayah;
Joanne Moseley;
Shannon Hunter;
Mike Velec;
Lily Chau;
Stephen Breen;
Kristy Brock
Deformable image registration of four head and neck cancer patients was conducted using a biomechanics-based model.
Patient specific 3D finite element models have been developed using CT and cone beam CT image data of the planning
and a radiation treatment session. The model consists of seven vertebrae (C1 to C7), mandible, larynx, left and right
parotid glands, tumor and body. Different combinations of boundary conditions are applied in the model in order to find
the configuration with a minimum registration error. Each vertebra in the planning session is individually aligned with
its correspondence in the treatment session. Rigid alignment is used for each individual vertebra and for the mandible,
since deformation is not expected in the bones. In addition, the effect of morphological differences in the external body
between the two image sessions is investigated. The accuracy of the registration is evaluated using the tumor, and left
and right parotid glands by comparing the calculated Dice similarity index of these structures following deformation in
relation to their true surface defined in the image of the second session. The registration improves when the vertebrae
and mandible are aligned in the two sessions with the highest Dice index of 0.86±0.08, 0.84±0.11, and 0.89±0.04 for the
tumor, left and right parotid glands, respectively. The accuracy of the center-of-mass location of the tumor and parotid
glands is also improved by deformable image registration: the errors decrease from 4.0±1.1, 3.4±1.5, and 3.8±0.9 mm
with rigid registration to 2.3±1.0, 2.5±0.8, and 2.0±0.9 mm with deformable registration when alignment of the vertebrae
and mandible is conducted in addition to the surface projection of the body.
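The Dice similarity index used for this evaluation is simple to state; a minimal sketch on placeholder binary masks (assumed to share one voxel grid) is given below.

```python
# Sketch: Dice similarity index between deformed and true structure masks.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

deformed_tumor = np.zeros((50, 50, 50), bool); deformed_tumor[10:30, 10:30, 10:30] = True
true_tumor     = np.zeros((50, 50, 50), bool); true_tumor[12:32, 10:30, 10:30] = True
print(f"Dice = {dice(deformed_tumor, true_tumor):.2f}")
```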
Real-time fiber selection using the Wii remote
Author(s):
Jan Klein;
Mike Scholl;
Alexander Köhn;
Horst K. Hahn
In the last few years, fiber tracking tools have become popular in clinical contexts, e.g., for pre- and intraoperative
neurosurgical planning. The efficient, intuitive, and reproducible selection of fiber bundles still constitutes one
of the main issues. In this paper, we present a framework for a real-time selection of axonal fiber bundles
using a Wii remote control, a wireless controller for Nintendo's gaming console. It enables the user to select
fiber bundles without any other input devices. To achieve a smooth interaction, we propose a novel space-partitioning
data structure for efficient 3D range queries in a data set consisting of precomputed fibers. The data
structure which is adapted to the special geometry of fiber tracts allows for queries that are many times faster
compared with previous state-of-the-art approaches. In order to reliably extract fibers for further processing,
e.g., for quantification purposes or comparisons with preoperatively tracked fibers, we developed an expectation-maximization
clustering algorithm that can refine the range queries. Our initial experiments have shown that
white matter fiber bundles can be reliably selected within a few seconds by the Wii, which has been placed in a
sterile plastic bag to simulate usage under surgical conditions.
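As a rough illustration of the 3D range query at the heart of this interaction, here is a sketch that uses a standard KD-tree as a stand-in for the paper's fiber-adapted space partition; the fiber polylines are random placeholders.

```python
# Sketch: select fibers with at least one point inside a query sphere.
import numpy as np
from scipy.spatial import cKDTree

fibers = [np.cumsum(np.random.randn(200, 3) * 0.5, axis=0) for _ in range(500)]
points = np.vstack(fibers)
fiber_id = np.concatenate([np.full(len(f), i) for i, f in enumerate(fibers)])
tree = cKDTree(points)

def select_fibers(center, radius_mm):
    """Return ids of fibers intersecting the sphere around the pointer tip."""
    idx = tree.query_ball_point(center, r=radius_mm)
    return np.unique(fiber_id[idx])

print(select_fibers(np.array([0.0, 0.0, 0.0]), radius_mm=5.0))
```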
Multi-slice to volume registration of ultrasound data to a statistical atlas of human pelvis
Author(s):
Sahar Ghanavati;
Parvin Mousavi;
Gabor Fichtinger;
Pezhman Foroughi;
Purang Abolmaesumi
Identifying the proper orientation of the pelvis is a critical step in accurate placement of the femur prosthesis in the
acetabulum in Total Hip Replacement (THR) surgeries. The general approach to localize the orientation of the pelvis
coordinate system is to use X-ray fluoroscopy to guide the procedure. An alternative is to employ intra-operative
ultrasound (US) imaging together with pre-operative CT or fluoroscopic imaging. In this paper, we propose to eliminate
the need for pre-operative imaging by using a statistical shape model of the pelvis, constructed from several CT images. We
then propose an automatic deformable intensity-based registration of the anatomical atlas to a sparse set of 2D
ultrasound images of the pelvis in order to localize its anatomical coordinate system. In this registration technique, we
first extract a set of 2D slices from a single instance of the pelvic atlas. Each individual 2D slice is generated based on
the location of a corresponding 2D ultrasound image. Next, we create simulated ultrasound images out of the 2D atlas
slices and calculate a similarity metric between the simulated images and the actual ultrasound images. The similarity
metric guides an optimizer to generate an instance of the atlas that best matches the ultrasound data. We demonstrated
the feasibility of our proposed approach on data from two male human cadavers. The registration was able to localize a
patient-specific pelvic coordinate system with origin translation error of 2 mm and 3.45 mm, and average axes rotation
error of 3.5 degrees and 3.9 degrees for the two cadavers, respectively.
An image-guided femoroplasty system: development and initial cadaver studies
Author(s):
Yoshito Otake;
Mehran Armand;
Ofri Sadowsky;
Robert S. Armiger;
Michael D. Kutzer;
Simon C. Mears M.D.;
Peter Kazanzides;
Russell H. Taylor
This paper describes the development and initial cadaver studies using a prototype image-guided surgery system for
femoroplasty, which is a potential alternative treatment for reducing fracture risk in patients with severe osteoporosis.
Our goal is to develop an integrated surgical guidance system that will allow surgeons to augment the femur using
patient-specific biomechanical planning and intraoperative analysis tools. This paper focuses on the intraoperative
module, which provides real-time navigation of an injection device and estimates the distribution of the injected material
relative to the preoperative plan. Patient registration is performed using intensity-based 2D/3D registration of X-ray
images and preoperative CT data. To co-register intraoperative X-ray images and optical tracker coordinates, we
integrated a custom optically-tracked fluoroscope fiducial allowing real-time visualization of the injection device with
respect to the patient's femur. During the procedure, X-ray images were acquired to estimate the 3D distribution of the
injected augmentation material (e.g. bone cement). Based on the injection progress, the injection plan could be adjusted
if needed to achieve optimal distribution. In phantom experiments, the average target registration error at the center of
the femoral head was 1.4 mm and the rotational error was 0.8 degrees when two images were used. Three cadaveric
studies demonstrated efficacy of the navigation system. Our preliminary simulation study of the 3D shape reconstruction
algorithm demonstrated that the 3D distribution of the augmentation material could be estimated within 12% error from
six X-ray images.
Active self-calibration of thoracoscopic images for assisted minimally invasive spinal surgery
Author(s):
Fantin Girard;
Fouzi Benboujja;
Stefan Parent;
Farida Cheriet
Registration of thoracoscopic images to a preoperative 3D model of the spine is a prerequisite for minimally invasive
surgical guidance. We propose an active self-calibration method of thoracoscopic image sequences acquired by an
angled monocular endoscope with varying focal length during minimally invasive surgery of the spine. The extrinsic
parameters are updated in real time by a motion tracking system while the intrinsic parameters are determined from a set
of geometrical primitives extracted from the image of the surgical instrument tracked throughout the thoracoscopic
sequence. A particle filter was used to track the instrument in the image sequence, which was preprocessed to
detect and correct reflections due to the light source. The proposed method requires undertaking a pure rotation of the
endoscope to update the focal length and exploits the inherent temporal rigid motion of the instrument through
consecutive frames. A pure rotation is achievable by undertaking a rotation of the scope cylinder with respect to the head
of the camera. Therefore, the surgeon may take full advantage of an angled endoscope by adjusting focus and zoom
during surgery. Simulation experiments have assessed the accuracy of the obtained parameters and the optimal number
of geometrical primitives required for an active self-calibration of the angled monocular endoscope. Finally, an in vitro
experiment demonstrated that 3D reconstruction of rigid structures tracked throughout a monocular thoracoscopic image
sequence is feasible and its accuracy is adequate for the registration of thoracoscopic images to a preoperative MRI 3D
model of the spine.
Group-wise feature-based registration of CT and ultrasound images of spine
Author(s):
Abtin Rasoulian;
Parvin Mousavi;
Mehdi Hedjazi Moghari;
Pezhman Foroughi;
Purang Abolmaesumi
Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in
spinal needle injection, a common procedure for pain management. Patients are always in a supine
position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference
in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be
used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and
intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based
registration technique based on the unscented Kalman filter, taking as input segmented vertebral surfaces
in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming
approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the
spine is different between the pre-operative and the intra-operative data, the registration approach is designed to
simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A
biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to
ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms
generated from CT data of patients, is 2.47 mm with standard deviation of 1.14 mm.
Plan to procedure: combining 3D templating with rapid prototyping to enhance pedicle screw placement
Author(s):
Kurt E. Augustine;
Anthony A. Stans;
Jonathan M. Morris;
Paul M. Huddleston;
Jane M. Matsumoto;
David R. Holmes III;
Richard A. Robb
Spinal fusion procedures involving the implantation of pedicle screws have steadily increased over the past decade
because of demonstrated improvement in biomechanical stability of the spine. However, current methods of spinal
fusion carry a risk of serious vascular, visceral, and neurological injury caused by inaccurate placement or
inappropriately sized instrumentation, which may lead to patient paralysis or even death. 3D spine templating software
developed by the Biomedical Imaging Resource (BIR) at Mayo Clinic allows the surgeon to virtually place pedicle
screws using pre-operative 3D CT image data. With the template plan incorporated, a patient-specific 3D anatomic
model is produced using a commercial rapid prototyping system. The pre-surgical plan and the patient-specific model
then are used in the procedure room to provide real-time visualization and quantitative guidance for accurate placement
of each pedicle screw, significantly reducing risk of injury. A pilot study was conducted at Mayo Clinic by the
Department of Radiology, the Department of Orthopedics, and the BIR, involving seven complicated pediatric spine
cases. In each case, pre-operative 3D templating was carried out and patient-specific models were generated. The plans
and the models were used intra-operatively, providing precise pedicle screw starting points and trajectories. Postoperative
assessment by the surgeon confirmed all seven operations were successful. Results from the study suggest that
patient-specific, 3D anatomic models successfully acquired from 3D templating tools are valuable for planning and
conducting pedicle screw insertion procedures.
Validation platform for ultrasound-based monitoring of thermal ablation
Author(s):
Alexandra M. Pompeu-Robinson;
James Gray;
Joshua Marble;
Hamed Peikari;
Jena Hall;
Paweena U-Thainual;
Mohammad Aboofazeli;
Andras Lasso;
Gabor Fichtinger
PURPOSE: A ground-truth validation platform was developed to provide spatial correlation between ultrasound
(US), temperature measurements and histopathology images to validate US based thermal ablation monitoring
methods. METHOD: The test-bed apparatus consists of a container box with integrated fiducial lines. Tissue
samples are suspended within the box using agar gel as the fixation medium. Following US imaging, the gel block
is sliced and pathology images are acquired. Interactive software segments the fiducials as well as structures
of interest in the pathology and US images. The software reconstructs the regions in 3D space and performs
analysis and comparison of the features identified from both imaging modalities. RESULTS: The apparatus and
software were constructed to meet technical requirements. Tissue samples were contoured, reconstructed and
registered in the common coordinate system of the fiducials. There was agreement between the sample shapes, but a
systematic shift of several millimeters was found between histopathology and US. This indicates that, during
pathology slicing, shear forces tend to dislocate the fiducial lines. Softer fiducial lines and a harder gel material
can eliminate this problem. CONCLUSION: The viability of the concept was demonstrated. Despite our straightforward
approach, further experimental work is required to optimize all materials and customize the software.
Calibration of temperature measurements with CT for ablation of liver tissue
Author(s):
G. D. Pandeya;
M. J. W. Greuter;
B. Schmidt;
T. G. Flohr;
M. Oudkerk
The purpose of this study was to determine the relationship between CT value and temperature over the range relevant to
ablation therapy. Bovine liver was slowly heated and image acquisition was performed with a clinical CT scanner. The
real-time temperature was measured and stored using calibrated thermal sensors. Images were analyzed at a CT workstation.
It was feasible to follow the spatial and temporal temperature increase during heating by means of the declining CT values
in the acquired images. The thermal sensitivity for liver tissue was -0.54±0.10 HU/°C. It is concluded that CT can be
calibrated to predict the temperature distribution during heating.
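The reported thermal sensitivity is simply the slope of a linear fit of CT number against measured temperature; a minimal sketch with placeholder readings (not the study's data) is shown below.

```python
# Sketch: thermal sensitivity as the slope of a CT-number vs. temperature fit.
import numpy as np

temperature_c = np.array([37, 45, 55, 65, 75, 85], float)   # sensor readings (degC)
ct_value_hu   = np.array([60, 56, 50, 45, 39, 34], float)   # mean ROI CT numbers (HU)

slope, intercept = np.polyfit(temperature_c, ct_value_hu, 1)
print(f"thermal sensitivity ~ {slope:.2f} HU/degC")          # study reported -0.54
```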
Particle filtering for respiratory motion compensation during navigated bronchoscopy
Author(s):
Ingmar Gergel;
Thiago R. dos Santos;
Ralf Tetzlaff;
Lena Maier-Hein;
Hans-Peter Meinzer;
Ingmar Wegner
Although the field of navigated bronchoscopy is gaining increasing attention in the literature, robust guidance in the presence of respiratory motion and electromagnetic noise remains challenging.
The robustness of a previously introduced motion compensation approach was increased by taking into account the trajectory already traveled by the instrument within the lung. To evaluate the performance of the method, a virtual environment that accounts for respiratory motion and electromagnetic noise was used. The simulation is based on a deformation field computed from human computed tomography data. According to the results, the proposed method outperforms the original method and is suitable for lung motion compensation during electromagnetically guided interventions.
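For readers unfamiliar with the filtering machinery involved, here is a generic bootstrap particle filter on a 1D toy state (sensor position along the airway path); it is purely illustrative and does not include the paper's trajectory-conditioned state model or the bronchial-tree constraints.

```python
# Sketch: generic predict / weight / resample particle filter for a noisy 1D position.
import numpy as np

rng = np.random.default_rng(0)
n_particles, sigma_motion, sigma_meas = 500, 1.0, 2.0
particles = rng.normal(0.0, 5.0, n_particles)      # initial position guesses (mm)
weights = np.full(n_particles, 1.0 / n_particles)

def pf_step(measurement):
    global particles, weights
    particles = particles + rng.normal(0.0, sigma_motion, n_particles)   # predict
    weights = np.exp(-0.5 * ((measurement - particles) / sigma_meas) ** 2)
    weights /= weights.sum()                                             # weight
    idx = rng.choice(n_particles, n_particles, p=weights)                # resample
    particles = particles[idx]
    return particles.mean()                         # filtered position estimate

for z in [0.5, 1.2, 2.9, 3.1, 4.8]:                 # noisy EM measurements (mm)
    print(round(pf_step(z), 2))
```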
Structured light 3D tracking system for measuring motions in PET brain imaging
Author(s):
Oline V. Olesen;
Morten R. Jørgensen;
Rasmus R. Paulsen;
Liselotte Højgaard;
Bjarne Roed;
Rasmus Larsen
Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal
for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A
prototype tracking system based on structured light with a DLP projector and a CCD camera is set up on a model of the
High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on
phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo
vision procedure where the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the
non-linear projector output prior to image capture. The results are convincing and represent a first step toward a fully
automated tracking system for measuring head motion in PET imaging.
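A small sketch of the wrapped-phase recovery behind phase-shifting structured light, using the standard four-step formula on synthetic fringe images; the real system additionally needs phase unwrapping and the projector-camera calibration described above.

```python
# Sketch: wrapped phase from four pi/2-shifted fringe images (synthetic data).
import numpy as np

h, w = 480, 640
x = np.linspace(0, 8 * np.pi, w)
true_phase = np.tile(x, (h, 1))                      # synthetic phase ramp
shots = [100 + 50 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

I0, I1, I2, I3 = shots
wrapped_phase = np.arctan2(I3 - I1, I0 - I2)         # wrapped to (-pi, pi]
print(wrapped_phase.shape, wrapped_phase.min(), wrapped_phase.max())
```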
Model-based lasso catheter tracking in monoplane fluoroscopy for 3D breathing motion compensation during EP procedures
Author(s):
Rui Liao
Radio-frequency catheter ablation (RFCA) of the pulmonary veins (PVs) attached to the left atrium (LA) is usually
carried out under fluoroscopy guidance. Overlay of detailed anatomical structures via 3-D CT and/or MR volumes onto
the fluoroscopy helps visualization and navigation in electrophysiology procedures (EP). Unfortunately, respiratory
motion may impair the utility of static overlay of the volume with fluoroscopy for catheter navigation. In this paper, we
propose a B-spline based method for tracking the circumferential catheter (lasso catheter) in monoplane fluoroscopy.
The tracked motion can be used for the estimation of the 3-D trajectory of breathing motion and for subsequent motion
compensation. A lasso catheter is typically used during EP procedures and is pushed against the ostia of the PVs to be
ablated. Hence this method does not require additional instruments, and achieves motion estimation right at the site of
ablation. The performance of the proposed tracking algorithm was evaluated on 340 monoplane frames with an average
error of 0.68 ± 0.36 mm. Our contributions in this work are twofold. First and foremost, we show how to design an
effective, practical, and workflow-friendly 3-D motion compensation scheme for EP procedures in a monoplane setup.
In addition, we develop an efficient and accurate method for model-based tracking of the circumferential lasso catheter
in the low-dose EP fluoroscopy.
4D MR imaging of respiratory organ motion using an intersection profile method
Author(s):
Yoshitada Masuda;
Hideaki Haneishi
We propose an intersection profile method for reconstructing 4D-MRI of respiratory organ motion from time-sequential
2D-MRI images. In the proposed method, first, time-sequential MR images in many coronal planes set to widely cover
the lung region are acquired as the data slices. Second, time-sequential MR images in a proper sagittal plane are
acquired as the navigator slice. The 4D-MRI is reconstructed by extracting from each data slice the respiratory pattern
that, along the intersection between the navigator slice and that data slice, is most similar to an adequately selected
respiratory pattern in the navigator slice, and by combining these patterns. Successful visualization of the respiratory
organ motion is demonstrated and a validation of the reconstruction is also presented. Such 4D-MRI has great potential for many
medical applications. In this paper, we further propose to construct a diaphragmatic function map from a 4D-MRI
reconstructed by the intersection profile method to evaluate diaphragmatic motion quantitatively. Experimental results
using three healthy volunteers and three patients are shown.
Toward image-based global registration for bronchoscopy guidance
Author(s):
Rahul Khare;
William E. Higgins
Virtual image-based bronchoscopy guidance systems have been found to be useful for carrying out accurate and skill-independent
bronchoscopies. A crucial step to the success of these systems during a live procedure is the local registration
of the current real bronchoscope position to the virtual bronchoscope of the guidance system. The synchronization between
the live and the virtual bronchoscope is generally lost during adverse events such as patient coughing, and guidance is
then often severely disrupted. Manual intervention by an assisting technician often helps in recovering from such a disruption,
but this results in extra procedure time and some potential uncertainty in the locally registered position. To rectify this
difficulty, we present for the first time a global registration algorithm that identifies the bronchoscope position without
the need for significant bronchoscope maneuvers or technician intervention. The method involves a fast local registration
search over all the branches in a global airway-bifurcation search space, with the weighted normalized sum of squares
distance metric used for finding the best match. We have achieved a global registration accuracy near 90% in tests over
a set of three different virtual bronchoscopic cases and with live guidance in an airway phantom. The method shows
considerable potential for enabling global technician-independent guidance of bronchoscopy, without the need for any
external device such as an electromagnetic sensor.
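A sketch of a weighted, normalized sum-of-squared-differences similarity between a real and a virtual bronchoscopic frame; the particular weighting and normalization below are assumptions for illustration and are not necessarily the paper's exact metric definition.

```python
# Sketch: weighted normalized SSD between real and rendered virtual frames.
import numpy as np

def weighted_nssd(real, virtual, weights):
    r = (real - real.mean()) / (real.std() + 1e-8)        # intensity-normalize
    v = (virtual - virtual.mean()) / (virtual.std() + 1e-8)
    return np.sum(weights * (r - v) ** 2) / np.sum(weights)

real    = np.random.rand(240, 320)          # placeholder video frame
virtual = np.random.rand(240, 320)          # placeholder rendered endoluminal view
weights = np.ones_like(real)                # e.g., down-weight specular regions
print(weighted_nssd(real, virtual, weights))
```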
Rapid block matching based nonlinear registration on GPU for image guided radiation therapy
Author(s):
An Wang;
Brandon Disher;
Greg Carnes;
Terry M. Peters
To compensate for non-uniform deformation due to patient motion within and between fractions in image guided
radiation therapy, a block matching technique was adapted and implemented on a standard graphics processing unit
(GPU) to determine the displacement vector field that maps the nonlinear transformation between successive CT images.
Normalized cross correlation (NCC) was chosen as the similarity metric for the matching step, with regularization of the
displacement vector field being performed by Gaussian smoothing. A multi-resolution framework was adopted to further
improve the performance of the algorithm. The nonlinear registration algorithm was first applied to estimate the intra-fractional
motion from 4D lung CT images. It was also used to calculate the inter-fractional organ deformation between
planning CT (PCT) and Daily Cone Beam CT (CBCT) images of thorax. For both experiments, manual landmark-based
evaluation was performed to quantify the registration performance. In 4D CT registration, the mean TRE of 5 cases was
1.75 mm. In PCT-CBCT registration, the TRE of one case was 2.26 mm. Compared to the CPU-based AtamaiWarp
program, our GPU-based implementation achieves comparable registration accuracy and is ~25 times faster. The results
highlight the potential utility of our algorithm for online adaptive radiation treatment.
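One block-matching step with the NCC similarity metric can be summarized as below. This is a plain-CPU NumPy sketch of the idea, not the authors' GPU kernel; block size, search range, and boundary handling are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_block(fixed, moving, center, half=8, search=4):
    """Displacement of one block found by exhaustive NCC search
    (image borders are ignored for brevity)."""
    cy, cx = center
    ref = fixed[cy - half:cy + half, cx - half:cx + half]
    best_score, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[cy + dy - half:cy + dy + half,
                          cx + dx - half:cx + dx + half]
            score = ncc(ref, cand)
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d
```

The sparse displacement vectors obtained this way would then be regularized by Gaussian smoothing and refined within the multi-resolution framework described above.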
Towards real-time 2D/3D registration for organ motion monitoring in image-guided radiation therapy
Author(s):
C. Gendrin;
J. Spoerk;
C. Bloch;
S. A. Pawiro;
C. Weber;
M. Figl;
P. Markelj;
F. Pernus;
D. Georg;
H. Bergmann;
W. Birkfellner
Show Abstract
Nowadays, radiation therapy systems incorporate kV imaging units which allow for the real-time acquisition
of intra-fractional X-ray images of the patient with high details and contrast. An application of this technology
is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are used, which requires an intervention. 2D/3D intensity-based registration is an alternative, non-invasive method, but the procedure must be accelerated to the update rate of the imaging device, which lies in the range of 5
Hz. In this paper we investigate fast CT to a single kV X-ray 2D/3D image registration using a new porcine
reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy
of the registrations are investigated. First, four intensity based merit functions, namely Cross-Correlation,
Rank Correlation, Mutual Information, and Correlation Ratio, are compared. Second, wobbled splatting and
ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the
performance of 2D/3D registration is evaluated. Rendering times for a single DRR of 20 ms were achieved.
Different thresholds of the CT volume were also examined for rendering to find the setting that achieves the best
possible correspondence with the X-ray images. Fast registrations below 4 s became possible, with an in-plane accuracy down to 0.8 mm.
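Two of the intensity-based merit functions compared above, cross-correlation and mutual information, are compact enough to sketch; the fragment below is a generic NumPy illustration (bin count and names are assumptions), not the GPU implementation used in the study.

```python
import numpy as np

def cross_correlation(drr, xray):
    """Pearson correlation between a rendered DRR and the kV X-ray image."""
    a = drr.ravel().astype(float) - drr.mean()
    b = xray.ravel().astype(float) - xray.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mutual_information(drr, xray, bins=64):
    """Histogram-based mutual information between the two images."""
    joint, _, _ = np.histogram2d(drr.ravel(), xray.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```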
User-driven 3D mesh region targeting
Author(s):
Peter Karasev;
James Malcolm;
Marc Niethammer;
Ron Kikinis;
Allen Tannenbaum
Show Abstract
We present a method for the fast selection of a region on a 3D mesh using geometric information.
This is done using a weighted arc length minimization with a conformal factor based on the mean
curvature of the 3D surface. A careful analysis of the geometric estimation process enables our
geometric curve shortening to use a reliable smooth estimate of curvature and its gradient. The result
is a robust way for a user to easily interact with particular regions of a 3D mesh constructed from
medical imaging.
In this study, we focus on building a robust and semi-automatic method for extracting selected
folds on the cortical surface, specifically for isolating gyri by drawing a curve along the surrounding
sulci. It is desirable to make this process semi-automatic because manually drawing a curve through
the complex 3D mesh is extremely tedious, while automatic methods cannot realistically be expected
to select the exact closed contour a user desires for a given dataset. In the technique described here, a
user places a handful of seed points surrounding the gyri of interest; an initial curve is made from these
points which then evolves to capture the region. We refer to this user-driven procedure as targeting
or selection interchangeably. To illustrate the applicability of these methods to other medical data,
we also give an example of bone fracture CT surface parcellation.
Segmenting TRUS video sequences using local shape statistics
Author(s):
Pingkun Yan;
Sheng Xu;
Baris Turkbey;
Jochen Kruecker
Show Abstract
Automatic segmentation of the prostate in transrectal ultrasound (TRUS) may improve the fusion of TRUS
with magnetic resonance imaging (MRI) for TRUS/MRI-guided prostate biopsy and local therapy. It is very
challenging to segment the prostate in TRUS images, especially for the base and apex of the prostate due to
the large shape variation and low signal-to-noise ratio. To successfully segment the whole prostate from 2D
TRUS video sequences, this paper presents a new model based algorithm using both global population-based
and adaptive local shape statistics to guide segmentation. By adaptively learning shape statistics in a local
neighborhood during the segmentation process, the algorithm can effectively capture the patient-specific shape
statistics and the large shape variations in the base and apex areas. After incorporating the learned shape
statistics into a deformable model, the proposed method can accurately segment the entire gland of the prostate
with significantly improved performance in the base and apex. The proposed method segments TRUS video in
a fully automatic fashion. In our experiments, 19 video sequences with 3064 frames in total grabbed from 19
different patients for prostate cancer biopsy were used for validation. It took about 200 ms to segment one frame on a 1.86 GHz Core 2 PC. The average mean absolute distance (MAD) error was 1.65 ± 0.47 mm for the proposed method, compared to 2.50 ± 0.81 mm and 2.01 ± 0.63 mm for independent frame segmentation and frame
segmentation result propagation, respectively. Furthermore, the proposed method reduced the MAD errors by
49.4% and 18.9% in the base and by 55.6% and 17.7% in the apex, respectively.
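The mean absolute distance (MAD) used as the error measure above is conventionally computed as the average distance from each point of the automatic contour to the closest point of the manual reference; the small NumPy sketch below shows this one-sided form (whether the authors symmetrize it is not stated).

```python
import numpy as np

def mean_absolute_distance(auto_contour, reference_contour):
    """One-sided MAD: mean distance from each automatic contour point to the
    closest reference point. Inputs are (N, 2) and (M, 2) point arrays in mm;
    the pairwise-distance matrix makes this O(N*M) in memory."""
    d = np.linalg.norm(auto_contour[:, None, :] - reference_contour[None, :, :],
                       axis=2)
    return float(d.min(axis=1).mean())
```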
Development and validation of a real-time reduced field of view imaging driven by automated needle detection for MRI-guided interventions
Author(s):
Roland A. Görlitz;
Junichi Tokuda;
Scott W. Hoge;
Renxin Chu;
Lawrence P. Panych;
Clare Tempany;
Nobuhiko Hata
Show Abstract
Automatic tracking and scan plane control in MRI-guided therapy is an active area of research. However, there has been
little research on tracking needles without the use of external markers. Current methods also do not account for possible
needle bending, because the tip does not get tracked explicitly. In this paper, we present a preliminary method to track a
biopsy needle in real-time MR images based on its visible susceptibility artifact and automatically adjust the next scan
plane in a closed loop to keep the needle's tip in the field of view. The images were acquired with a Single Shot Fast Spin
Echo (SSFSE) sequence combined with a reduced field of view (rFOV) technique using 2D RF pulses, which allows a
reduction in scan time without compromising spatial resolution. The needle tracking software was implemented as a plug-in module for the open-source medical image visualization software 3D Slicer to display the current scan plane with the
highlighted needle. Tests using a gel phantom and an ex vivo tissue sample are reported and evaluated with respect to performance and accuracy. The results show that the method allows an image update rate of one frame per second with a root mean squared error within 4 mm. The proposed method may therefore be feasible in MRI-guided targeted therapy,
such as prostate biopsies.
Assessment of registration accuracy in three-dimensional transrectal ultrasound images of prostates
Author(s):
V. Karnik;
A. Fenster;
J. Bax;
D. Cool;
L. Gardi;
I. Gyacskov;
C. Romagnoli;
A. D. Ward
Show Abstract
In order to obtain a definitive diagnosis of prostate cancer, over one million men undergo prostate biopsies every year.
Currently, biopsies are performed under two-dimensional (2D) transrectal ultrasound (TRUS) guidance with manual
stabilization of a hand-held end- or side-firing transducer probe. With this method, it is challenging to precisely guide a
needle to its target due to a potentially unstable ultrasound probe and limited anatomic information, and it is impossible
to obtain a 3D record of biopsy locations. We have developed a mechanically stabilized, three-dimensional (3D) TRUS-guided prostate biopsy system, which provides additional anatomic information and permits a 3D record of biopsies. A
critical step in this system's performance is the registration of 3D-TRUS images obtained during the procedure, which
compensates for intra-session motion and deformation of the prostate. We evaluated the accuracy and variability of
surface-based 3D-TRUS to 3D-TRUS rigid and non-rigid registration by measuring the target registration error (TRE) as
the post-registration misalignment of manually marked, corresponding, intrinsic fiducials. We also measured the fiducial localization error (FLE) to assess its contribution to the TRE. Our results yielded mean TRE values of 2.13 mm and
2.09 mm for rigid and non-rigid techniques, respectively. Our FLE of 0.21 mm did not dominate the overall TRE. These
results compare favorably with a clinical need for a TRE of less than 2.5 mm.
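The TRE reported above is the residual distance between corresponding intrinsic fiducials after the surface-based registration has been applied; a minimal sketch of that computation, assuming the registration result is available as a 4x4 homogeneous transform, is shown below.

```python
import numpy as np

def target_registration_error(T, fiducials_moving, fiducials_fixed):
    """Mean TRE: fiducials_moving (N, 3) are mapped through the estimated
    4x4 transform T and compared with the corresponding fiducials_fixed."""
    homo = np.c_[fiducials_moving, np.ones(len(fiducials_moving))]
    mapped = (homo @ T.T)[:, :3]
    return float(np.linalg.norm(mapped - fiducials_fixed, axis=1).mean())
```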
Accuracy validation for MRI-guided robotic prostate biopsy
Author(s):
Helen Xu;
Andras Lasso;
Siddharth Vikal;
Peter Guion;
Axel Krieger;
Aradhana Kaushal;
Louis L. Whitcomb;
Gabor Fichtinger
Show Abstract
We report a quantitative evaluation of the clinical accuracy of an MRI-guided robotic prostate biopsy system that has been in use for over five years at the U.S. National Cancer Institute. A two-step rigid volume registration using mutual information between the pre- and post-needle-insertion images was performed. Contour overlays of the prostate before and after registration were used to validate the registration. A total of 20 biopsies from 5 patients were evaluated. The maximum registration error was 2 mm. The mean biopsy target displacement, needle placement error, and biopsy error were 5.4 mm, 2.2 mm, and 5.1 mm, respectively. The results show that the pre-planned biopsy target dislocated during the procedure, thereby causing biopsy errors.
Registration of ultrasound to CT angiography of kidneys: a porcine phantom study
Author(s):
Jing Xiang;
Sean Gill;
Christopher Nguan;
Purang Abolmaesumi;
Robert N. Rohling
Show Abstract
3D ultrasound (US) to computed tomography (CT) registration is a topic of significant interest because it
can potentially improve many minimally invasive procedures such as laparoscopic partial nephrectomy. Partial
nephrectomy patients often receive preoperative CT angiography, which helps define the important structures
of the kidney such as the vasculature. Intraoperatively, dynamic real-time imaging information can be captured
using ultrasound and compared with the preoperative data. Providing accurate registration between the two
modalities would enhance navigation and guidance for the surgeon. However, one of the major problems of
developing and evaluating registration techniques is obtaining sufficiently accurate and realistic phantom data
especially for soft tissue. We present a detailed procedure for constructing tissue phantoms using porcine kidneys,
which incorporates contrast agent into the tissue such that the kidneys appear representative of in vivo human CT
angiography. These phantoms are also imaged with US and resemble US images from human patients. We then
perform registration on corresponding CT and US datasets using a simulation-based algorithm. The method
simulates an US image from the CT, generating an intermediate modality that resembles ultrasound. This
simulated US is then registered to the original US dataset. Embedded fiducial markers provide a gold standard
for registration. Being able to test our registration method on realistic datasets facilitates the development of
novel CT to US registration techniques such that we can generate an effective method for human studies.
Localization of brachytherapy seeds in ultrasound by registration to fluoroscopy
Author(s):
P. Fallavollita;
Z. KarimAghaloo;
E. C. Burdette;
D. Y. Song;
P. Abolmaesumi;
G. Fichtinger
Show Abstract
Motivation: In prostate brachytherapy, transrectal ultrasound (TRUS) is used to visualize the anatomy, while implanted
seeds can be seen in C-arm fluoroscopy or CT. Intra-operative dosimetry optimization requires localization of the
implants in TRUS relative to the anatomy. This could be achieved by registration of TRUS images and the implants
reconstructed from fluoroscopy or CT. Methods: TRUS images are filtered, compounded, and registered on the
reconstructed implants by using an intensity-based metric based on a 3D point-to-volume registration scheme. A
phantom was implanted with 48 seeds, imaged with TRUS and CT/X-ray. Ground-truth registration was established
between the two. Seeds were reconstructed from CT/X-ray. Seven TRUS filtering techniques and two image similarity
metrics were analyzed as well. Results: For point-to-volume registration, noise reduction combined with a beam profile filter and the mean squares metric yielded the best result: an average seed localization error of 0.38 ± 0.19 mm relative to
the ground-truth. In human patient data C-arm fluoroscopy images showed 81 radioactive seeds implanted inside the
prostate. A qualitative analysis showed clinically correct agreement between the seeds visible in TRUS and
reconstructed from intra-operative fluoroscopy imaging. The measured registration error compared to the manually
selected seed locations by the clinician was 2.86 ± 1.26 mm. Conclusion: Fully automated seed localization in TRUS performed excellently on the ground-truth phantom and adequately on clinical data, and was time-efficient, with an average runtime of 90 seconds.
Design of a predictive targeting error simulator for MRI-guided prostate biopsy
Author(s):
Shachar Avni;
Siddharth Vikal;
Gabor Fichtinger
Show Abstract
Multi-parametric MRI is a new imaging modality superior in quality to ultrasound (US), which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes
is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors
inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial
purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the
resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ
deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the
parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural
cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and
preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and
four Intel Core 2 Quad CPUs, each with a speed of 2.40 GHz. Preliminary results of our simulation suggest that the maximum tolerable segmentation error in the presence of a small tumor 5.0 mm wide is between 4 and 5 mm. We intend to validate these results via clinical trials as part of our ongoing work.
Towards hybrid bronchoscope tracking under respiratory motion: evaluation on a dynamic motion phantom
Author(s):
Xiongbiao Luo;
Marco Feuerstein;
Takamasa Sugiura;
Takayuki Kitasaka;
Kazuyoshi Imaizumi;
Yoshinori Hasegawa;
Kensaku Mori
Show Abstract
This paper presents a hybrid camera tracking method that uses electromagnetic (EM) tracking and intensity-based image registration, and its evaluation on a dynamic motion phantom. As respiratory motion can significantly
affect rigid registration of the EM tracking and CT coordinate systems, a standard tracking approach
that initializes intensity-based image registration with absolute pose data acquired by EM tracking will fail
when the initial camera pose is too far from the actual pose. We here propose two new schemes to address this
problem. Both of these schemes intelligently combine absolute pose data from EM tracking with relative motion data derived from EM tracking and intensity-based image registration. These schemes significantly improve
the overall camera tracking performance. We constructed a dynamic phantom simulating the respiratory motion
of the airways to evaluate these schemes. Our experimental results demonstrate that these schemes can
track a bronchoscope more accurately and robustly than our previously proposed method even when maximum
simulated respiratory motion reaches 24 mm.
Representing flexible endoscope shapes with hermite splines
Author(s):
Elvis C. S. Chen;
Sharyle A. Fowler;
Lawrence C. Hookey;
Randy E. Ellis
Show Abstract
Navigation of a flexible endoscope is a challenging surgical task: the shape of the end effector of the endoscope, interacting with surrounding tissues, determines the surgical path along which the endoscope is pushed. We present a navigational system that visualizes the shape of the flexible endoscope tube to assist gastrointestinal surgeons in performing Natural Orifice Translumenal Endoscopic Surgery (NOTES). The system used an electromagnetic positional tracker, a catheter embedded with multiple electromagnetic sensors, and a graphical user interface for visualization. Hermite splines were used to interpolate the position and direction outputs of the endoscope sensors. We conducted NOTES experiments on live swine
involving 6 gastrointestinal and 6 general surgeons. Participants who used the device first were 14.2% faster than when not
using the device. Participants who used the device second were 33.6% faster than the first session. The trend suggests that
spline-based visualization is a promising adjunct during NOTES procedures.
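A cubic Hermite segment between two consecutive sensor readings follows directly from the standard Hermite basis functions; the NumPy sketch below interpolates one segment from two sensor positions and their direction (tangent) vectors. It illustrates the general technique only; the segment parameterization and tangent scaling used in the actual system are not specified here.

```python
import numpy as np

def hermite_segment(p0, m0, p1, m1, n=20):
    """Points on the cubic Hermite curve defined by end positions p0, p1 and
    end tangents m0, m1 (all 3-vectors as NumPy arrays); returns (n, 3)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1      # weights the start position
    h10 = t**3 - 2 * t**2 + t          # weights the start tangent
    h01 = -2 * t**3 + 3 * t**2         # weights the end position
    h11 = t**3 - t**2                  # weights the end tangent
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

Chaining one such segment per adjacent pair of EM sensors yields a smooth curve that honours both the position and the direction output of every sensor along the endoscope.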
Airway shape assessment with visual feed-back in asthma and obstructive diseases
Author(s):
Catalin Fetita;
Margarete Ortner;
Pierre-Yves Brillet;
Yahya Ould Hmeidi;
Françoise Prêteux
Show Abstract
Airway remodeling in asthma patients has been studied in vivo by means of endobronchial biopsies, allowing assessment of structural and inflammatory changes. However, this technique remains relatively invasive and difficult to
use in longitudinal trials. The development of alternative non-invasive tests, namely exploiting high-resolution
imaging modalities such as MSCT, is gaining interest in the medical community. This paper develops a fully automated airway shape assessment approach based on the 3D segmentation of the airway lumen from MSCT
data. The objective is to easily notify the radiologist on bronchus shape variations (stenoses, bronchiectasis)
along the airway tree during a simple visual investigation. The visual feedback is provided by means of a volume-rendered color coding of the airway calibers, which are robustly defined and computed based on a specific 3D discrete distance function able to deal with small-size structures. The color volume rendering (CVR) information is further reinforced by the definition and computation of a shape variation index along the airway medial axis, enabling the detection of specific configurations of stenoses. Such cases often occur near bifurcations (bronchial spurs), and they are either missed in the CVR or difficult to spot due to occlusions by other segments. Consequently, all
detected shape variations (stenoses, dilations and thickened spurs) can be additionally displayed on the medial
axis and investigated together with the CVR information. The proposed approach was evaluated on a MSCT
database including twelve patients with severe or moderate persistent asthma, or severe COPD, by analyzing
segmental and subsegmental bronchi of the right lung. The CVR information alone, provided for a limited number of views, allowed the detection of 78% of the stenoses and bronchial spurs in these patients, whereas the inclusion of the shape variation index supplied the missing information.
Anatomical modeling of the bronchial tree
Author(s):
Gerrit Hentschel;
Tobias Klinder;
Thomas Blaffert;
Thomas Bülow;
Rafael Wiemker;
Cristian Lorenz
Show Abstract
The bronchial tree is of direct clinical importance in the context of respective diseases, such as chronic obstructive
pulmonary disease (COPD). It furthermore constitutes a reference structure for object localization in the lungs and it
finally provides access to lung tissue in, e.g., bronchoscope based procedures for diagnosis and therapy. This paper
presents a comprehensive anatomical model for the bronchial tree, including statistics of position, relative and absolute
orientation, length, and radius of 34 bronchial segments, going beyond previously published results. The model has been
built from 16 manually annotated CT scans, covering several branching variants. The model is represented as a centerline/tree structure but can also be converted into a surface representation. Possible model applications are either to
anatomically label extracted bronchial trees or to improve the tree extraction itself by identifying missing segments or
sub-trees, e.g., if located beyond a bronchial stenosis. Bronchial tree labeling is achieved using a naïve Bayesian
classifier based on the segment properties contained in the model in combination with tree matching. The tree matching
step makes use of branching variations covered by the model. An evaluation of the model has been performed in a leave-one-out manner. In total, 87% of the branches resulting from the preceding airway tree segmentation could be correctly labeled. The individualized model enables the detection of missing branches, allowing a targeted search, e.g., a local re-run of the tree segmentation.
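The naïve Bayesian scoring of a candidate branch against the per-segment statistics stored in the model can be sketched as below; the Gaussian feature model, feature names, and example numbers are illustrative assumptions, not the exact classifier of the paper.

```python
import numpy as np

def log_likelihood(features, segment_stats):
    """Naive-Bayes log-likelihood of measured branch features (dict of
    name -> value) under one model segment (dict of name -> (mean, std))."""
    ll = 0.0
    for name, x in features.items():
        mu, sigma = segment_stats[name]
        ll += -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    return ll

def label_branch(features, model):
    """Assign the anatomical label whose statistics explain the branch best;
    model maps label -> feature statistics."""
    return max(model, key=lambda label: log_likelihood(features, model[label]))

# Hypothetical example: a branch with length 18 mm and radius 2.1 mm
# scored against two candidate segments.
model = {"RB1": {"length": (20.0, 5.0), "radius": (2.0, 0.4)},
         "RB2": {"length": (12.0, 4.0), "radius": (1.5, 0.3)}}
print(label_branch({"length": 18.0, "radius": 2.1}, model))  # -> "RB1"
```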
The MITK image guided therapy toolkit and its application for augmented reality in laparoscopic prostate surgery
Author(s):
Matthias Baumhauer;
Jochen Neuhaus;
Klaus Fritzsche;
Hans-Peter Meinzer
Show Abstract
Image Guided Therapy (IGT) confronts researchers with high demands and efforts in system design, prototype implementation, and evaluation. The lack of standardized software tools, such as algorithm implementations, tracking device and tool setups, and data processing methods, escalates the labor of system development and sustainable system evaluation. In this paper, a new toolkit component of the Medical Imaging and Interaction Toolkit
(MITK), the MITK-IGT, and its exemplary application for computer-assisted prostate surgery are presented.
MITK-IGT aims at integrating software tools, algorithms and tracking device interfaces into the MITK toolkit
to provide a comprehensive software framework for computer aided diagnosis support, therapy planning, treatment
support, and radiological follow-up. An exemplary application of the MITK-IGT framework is introduced
with a surgical navigation system for laparoscopic prostate surgery. It illustrates the broad range of application
possibilities provided by the framework, as well as its simple extensibility with custom algorithms and other
software modules.
Model-updated image-guided liver surgery: preliminary results using intra-operative surface characterization
Author(s):
Prashanth Dumpuri;
Logan W. Clements;
Benoit M. Dawant;
Michael I. Miga
Show Abstract
The current protocol for image guidance in liver surgery involves a rigid registration algorithm. Systematic studies have shown that the liver can deform by up to 2 cm during surgery, thereby compromising the accuracy of surgical navigation systems. Compensating for intraoperative deformations using computational models has shown promising
results. In this work, we follow up the initial rigid registration with a computational approach. The proposed
computational approach relies on the closest point distances between the undeformed pre-operative surface and the
rigidly registered deformed intra-operative surface. We also introduce a spatial smoothing filter to generate a
realistic deformation field using the closest point distances. The proposed approach was validated in both phantom
experiments and clinical cases. Preliminary results are encouraging and suggest that computational models can be
used to improve the accuracy of image-guided liver surgeries.
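The driving measurements of the computational correction, the closest point distances between the pre-operative surface and the rigidly registered intra-operative surface, can be sketched compactly; the fragment below is a generic illustration (SciPy KD-tree, hypothetical variable names), with the subsequent spatial smoothing and model solve left as described in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_point_displacements(preop_surface, intraop_surface):
    """For every pre-operative surface point (N, 3), return the vector to its
    closest point on the rigidly registered intra-operative surface (M, 3).
    These sparse vectors would then be regularized with a spatial smoothing
    filter before driving the deformation model."""
    tree = cKDTree(intraop_surface)
    _, idx = tree.query(preop_surface)
    return intraop_surface[idx] - preop_surface
```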
A 3D-elastography-guided system for laparoscopic partial nephrectomies
Author(s):
Philipp J. Stolka;
Matthias Keil;
Georgios Sakas;
Elliot McVeigh;
Mohamad E. Allaf;
Russell H. Taylor;
Emad M. Boctor
Show Abstract
We present an image-guided intervention system based on tracked 3D elasticity imaging (EI) to provide a novel
interventional modality for registration with pre-operative CT. The system can be integrated into both laparoscopic and robotic partial nephrectomy scenarios, where this new use of EI makes exact intra-operative execution of pre-operative
planning possible. Quick acquisition and registration of 3D-B-Mode and 3D-EI volume data allows intra-operative
registration with CT and thus with pre-defined target and critical regions (e.g. tumors and vasculature). Their real-time
location information is then overlaid onto a tracked endoscopic video stream to help the surgeon avoid vessel damage
and still completely resect tumors including safety boundaries.
The presented system promises to increase the success rate for partial nephrectomies and potentially for a wide range of
other laparoscopic and robotic soft tissue interventions. This is enabled by the three components of robust real-time
elastography, fast 3D-EI/CT registration, and intra-operative tracking. With high quality, robust strain imaging (through
a combination of parallelized 2D-EI, optimal frame pair selection, and optimized palpation motions), kidney tumors that
were previously unregistrable or sometimes even considered isoechoic with conventional B-mode ultrasound can now be
imaged reliably in interventional settings. Furthermore, this allows the transformation of planning CT data of kidney
ROIs to the intra-operative setting with a markerless mutual-information-based registration, using EM sensors for intraoperative
motion tracking.
Overall, we present a complete procedure and its development, including new phantom models (both ex vivo and synthetic) to validate image-guided technology and training, tracked elasticity imaging, real-time EI frame selection,
registration of CT with EI, and finally a real-time, distributed software architecture. Together, the system allows the
surgeon to concentrate on intervention completion with less time pressure.
Fast automatic path proposal computation for hepatic needle placement
Author(s):
Christian Schumann;
Jennifer Bieberstein;
Christoph Trumm;
Diethard Schmidt;
Philipp Bruners;
Matthias Niethammer;
Ralf T. Hoffmann;
Andreas H. Mahnken;
Philippe L. Pereira;
Heinz-Otto Peitgen
Show Abstract
Percutaneous image-guided interventions, such as radiofrequency ablation (RFA), biopsy, seed implantation, and
several types of drainage, employ needle shaped instruments which have to be inserted into the patient's body.
Precise planning of needle placement is a key to a successful intervention. The planning of the access path has
to be carried out with respect to a variety of criteria for all possible trajectories to the selected target. Since
the planning is performed in 2D slices, it demands considerable experience and constitutes a significant mental
task. To support the process of finding a suitable path for hepatic interventions, we propose a fast automatic
method that computes a list of path proposals for a given target point inside the liver with respect to multiple
criteria that affect safety and practicability. Prerequisites include segmentation masks of the liver, of all relevant
risk structures and, depending on the kind of procedure, of the tumor. The path proposals are computed
based on a weighted combination of cylindrical projections. Each projection represents one path criterion and
is generated using the graphics hardware of the workstation. The list of path proposals is generated in less
than one second. Hence, updates of the proposals upon changes of the target point and other relevant input
parameters can be carried out interactively. The results of a preliminary evaluation indicate that the proposed
paths are comparable to those chosen by experienced radiologists and therefore are suited to support planning
in the clinical environment. Our implementation focuses on RFA and biopsy in the liver but may be adapted to
other types of interventions.
Application of collision detection to assess implant insertion in elbow replacement surgery
Author(s):
O. Remus Tutunea-Fatan;
Joshua H. Bernick;
Emily Lalone;
Graham J. W. King;
James A. Johnson
Show Abstract
An important aspect of implant replacement of the human joint is the fit achieved between the implant and bone canal.
As the implant is inserted into the medullary canal, its position and orientation are subject to a variety of constraints
introduced either by the external forces and moments applied by the surgeon or by the interaction of the implant with the
cortical wall of the medullary canal. This study evaluated the implant-bone interaction of a humeral stem in elbow
replacement surgery as an example, but the principles can also be applied to other joints. After converting CT scan data
of the humerus to the parametric NURBS-based representation, a collision detection procedure based on existing
Computer-Aided Engineering techniques was employed to control the instantaneous kinematics and dynamics of the
insertion of a humeral implant in an attempt to determine its final posture within the canal. By measuring the
misalignment between the native flexion-extension (FE) axis of the distal humerus and the prosthesis, a prediction was
made regarding the fit between the canal and the implant. This technique was shown to be effective in predicting the
final misalignment of the implant axis with respect to the native FE axis of the distal humerus using a cadaver specimen
for in-vitro validation.
A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery
Author(s):
David M. Kwartowitz;
Maryam E. Rettmann;
David R. Holmes III;
Richard A. Robb
Show Abstract
With the increased use and development of image-guided surgical applications, there is a need for methods of
analysis of the accuracy and precision of the components which compose these systems. One primary component
of an image-guided surgery system is the position tracking system which allows for the localization of a tool within
the surgical field and provides information which is translated back to the images. Previously much work has been
done in characterizing these systems for spatial accuracy and precision. Much of this previous work examines
single tracking systems or modalities. We have devised a method which allows for the characterization of a novel
tracking system independent of modality and location. We describe the development of a phantom system which
allows for rapid design and creation of surfaces with different geometries. We have also demonstrated a method of
analysis of the data generated by this phantom system, and used it to compare Biosense Webster's CartoXP™ and Northern Digital's Aurora™ magnetic trackers. We have determined that the accuracy and precision of the CartoXP were best, followed closely by the Aurora's dome volume, then the Aurora's cube volume. The mean accuracy for all systems was better than 3 mm and degrades with distance from the field generator.
3D visualization of medical imaging using static volumetric display: CSpace
Author(s):
Hakki H. Refai;
Basel Salahieh;
James J. Sluss Jr.
Show Abstract
Advances in medical imaging technologies are assisting radiologists in more accurate diagnoses. This paper details an
autostereoscopic static volumetric display, called CSpace®, capable of projecting three-dimensional (3D) medical
imaging data in 3D world coordinates. Using this innovative technology, the displayed 3D data set can be viewed in the
optical medium from any perspective angle without the use of any viewing aid. The design of CSpace® allows a volume
rendering of the surface and the interior of any organ of the human body. As a result, adjacent tissues can be better
monitored, and disease diagnoses can be more accurate. In conjunction with CSpace hardware, we have developed a
software architecture that can read Digital Imaging and Communications in Medicine (DICOM) files, whether captured by ultrasound devices, magnetic resonance imaging (MRI), or computed tomography (CT) scanners. The software acquires the imaging parameters from the file headers and then applies them to the rendered 3D object to display it in the exact form in which it was captured.
Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling
Author(s):
Dusty Sargent;
Chao-I Chen;
Yuan-Fang Wang
Show Abstract
The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic
tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance,
structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly
colonoscopic videos, for cancer research and clinical practice. The proposed cross-modality calibration procedure operates as follows: before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working
channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers
the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few
seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between
the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the
readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation)
of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement
over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While
there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they
are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed
method using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient
alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment, apart from a camera calibration pattern (a checkerboard) that can be printed on any laser or inkjet printer.
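At run time, once the fixed tracker-to-camera offset has been estimated from the checkerboard views (a hand-eye style calibration, not reproduced here), every EM reading gives the camera pose by a single composition of rigid transforms; the sketch below uses hypothetical 4x4 homogeneous matrices and is only an illustration of that bookkeeping.

```python
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid-body transform (rotation R, translation t)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def camera_pose(T_field_tracker, X_tracker_camera):
    """Pose of the scope camera in EM-field coordinates for one tracker
    reading, given the fixed tracker-to-camera offset X from calibration."""
    return T_field_tracker @ X_tracker_camera

def relative_camera_motion(T_cam_t0, T_cam_t1):
    """Camera motion between two time steps, the quantity needed to infer
    3D structure from the video frames."""
    return invert_rigid(T_cam_t0) @ T_cam_t1
```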
Reconstruction and visualization of model-based volume representations
Author(s):
Ziyi Zheng;
Klaus Mueller
Show Abstract
In modern medical CT, the primary source of data is a set of X-ray projections acquired around the object, which are
then used to reconstruct a discrete regular grid of sample points. Conventional volume rendering methods use this
reconstructed regular grid to estimate unknown off-grid values via interpolation. However, these interpolated values may
not match the values that would have been generated had they been reconstructed directly with CT. The consequence can
be simple blurring, but also the omission of fine object detail which usually contains precious information. To avoid
these problems, in the method we propose, instead of reconstructing a lattice of volume sample points, we derive a high-fidelity object model directly from the reconstruction process, fitting a localized object model to the acquired raw data
within tight tolerances. This model can then be easily evaluated both for slice-based viewing as well as in GPU 3D
volume rendering, offering excellent detail preservation in zooming operations. Furthermore, the model-driven
representation also supports high-precision analytical ray casting.
Automatic feature detection for 3D surface reconstruction from HDTV endoscopic videos
Author(s):
Anja Groch;
Matthias Baumhauer;
Hans-Peter Meinzer;
Lena Maier-Hein
Show Abstract
A growing number of applications in the field of computer-assisted laparoscopic interventions depend on accurate and fast 3D surface acquisition. The most commonly applied methods for 3D reconstruction of organ surfaces from 2D endoscopic images involve establishment of correspondences in image pairs to allow for computation of 3D point coordinates via triangulation. The popular feature-based approach for correspondence search applies a feature descriptor to compute high-dimensional feature vectors describing the characteristics of selected image points. Correspondences are established between image points with similar feature vectors. In a previous study, the performance of a large set of state-of-the-art descriptors for use in minimally invasive surgery was assessed. However, standard Phase Alternating Line (PAL) endoscopic images were utilized for this purpose. In this paper, we apply some of the best-performing feature descriptors to in-vivo PAL endoscopic images as well as to High Definition Television (HDTV) endoscopic images of the same scene and show that the quality of the correspondences can be increased significantly when using high resolution images.
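A common way to establish the correspondences mentioned above is nearest-neighbour matching of the descriptor vectors with a ratio test; the NumPy sketch below shows this generic criterion (the threshold and the brute-force search are illustrative choices, not the specific matching strategy of the study).

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching of feature vectors (rows of
    desc_a and desc_b) with a ratio test: keep a match only if the best
    distance is clearly smaller than the second best."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d[rows, best] < ratio * d[rows, second]
    return list(zip(rows[keep], best[keep]))
```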
Parameter space visualizer: an interactive parameter selection interface for iterative CT reconstruction algorithms
Author(s):
Wei Xu;
Klaus Mueller
Show Abstract
Previous work indicated that using ordered subsets (OS-SIRT) for iterative CT can optimize the reconstruction
performance once optimal settings for parameters such as number of subsets and relaxation factor have been identified.
However, recent work also indicated that the optimal settings depend on the quality of the projection data (such as the SNR level), which is hard to obtain a priori. In addition, users may also have preferences in trading off between dependent quantities, such as reconstruction speed and quality, which makes these (independent) parameters even more difficult to determine in an automated manner. Therefore, we devise an effective
parameter space navigation interface allowing users to interactively assist parameter selection for iterative CT
reconstruction algorithms (here for OS-SIRT). It is based on a 2D scatter plot with six display modes to show different
features of the reconstruction results based on the user preferences. It also enables a dynamic visualization by gradual
parameter alteration to illustrate the rate of impact of a given parameter constellation. Finally, we note the generality of our approach, which could be applied to assist parameter selection in other systems.
Real-time simulation of dynamic fluoroscopy of ERCP
Author(s):
Hoeryong Jung;
Doo Yong Lee
Show Abstract
This paper discusses methods for real-time rendering of time-varying dynamic fluoroscopy images, including fluid flow, for ERCP (Endoscopic Retrograde Cholangiopancreatography) simulation. A volume rendering technique is used to generate virtual fluoroscopy images. This paper develops an image-overlaying method which overlays the time-varying images onto the constant background image. The full-size fluoroscopy image is computed from the initial
volume data set during the pre-processing stage, which is then saved as the background image. Only the time-varying
images are computed from the time-varying volume data set during the actual simulation. This involves relatively small
computation compared with the background image. The time-varying images are then overlaid onto the background
image to obtain the complete images. This method reduces computational overhead by removing redundant
computations. A simplified particle dynamics model is employed for fast simulation of the fluid flow. The fluid model, a collection of particles, interacts only with the ducts, based on the principles of a perfectly elastic collision. Hence, the velocity of the particles when they collide with the duct can be computed using simple algebraic equations. The methods are implemented for fast simulation of ERCP.
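For a perfectly elastic collision with a rigid duct wall, the particle velocity update reduces to mirroring the velocity about the wall: the component along the wall normal is reversed while the tangential component is preserved. The short NumPy illustration of that algebra below is a generic sketch, not the simulator's code.

```python
import numpy as np

def reflect_velocity(v, wall_normal):
    """Particle velocity after a perfectly elastic collision with a duct wall;
    wall_normal is the (not necessarily unit) inward surface normal."""
    n = wall_normal / np.linalg.norm(wall_normal)
    return v - 2.0 * np.dot(v, n) * n
```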
PIRATE: pediatric imaging response assessment and targeting environment
Author(s):
Russell Glenn;
Yong Zhang;
Matthew Krasin;
Chiaho Hua
Show Abstract
By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve
tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment
in cancer management. To address the urgent needs for efficient tools to analyze large-scale clinical trial data, we have
developed an integrated multimodality, functional and anatomical imaging analysis software package for target
definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative
tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI)
analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software,
histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine
patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In
addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the
heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden.
Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly
speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.
3D automatic anatomy recognition based on iterative graph-cut-ASM
Author(s):
Xinjian Chen;
Jayaram K. Udupa;
Ulas Bagci;
Abass Alavi;
Drew A. Torigian
Show Abstract
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in
medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR).
The AAR system we are developing includes five main parts: model building, object recognition, object delineation,
pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling
part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid
strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM)
method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images, which attempted to synergistically combine ASM and GC. Here, we extend this method
to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information
embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost
function, which effectively integrates the specific image information with the ASM shape model information. The
proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to
explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method
improves on ASM and GC and can provide practical operational time on clinical images.
Catheter tracking in asynchronous biplane fluoroscopy images by 3D B-snakes
Author(s):
Marcel Schenderlein;
Susanne Stierlin;
Robert Manzke;
Volker Rasche;
Klaus Dietmayer
Show Abstract
Minimally invasive catheter ablation procedures are guided by biplane fluoroscopy images visualising the interventional
scene from two different orientations. However, these images do not provide direct access to their
inherent spatial information. A three-dimensional reconstruction and visualisation of the catheters from such
projections has the potential to support quick and precise catheter navigation. It enhances the perception of
the interventional situation and provides means of three-dimensional catheter pose documentation. In this contribution
we develop an algorithm for tracking the three-dimensional pose of electro-physiological catheters in
biplane fluoroscopy images. It is based on the B-Snake algorithm which had to be adapted to the biplane and
in particular the asynchronous image acquisition situation. A three-dimensional B-spline curve is transformed so that its projections are consistent with feature images that enhance the catheter path, while the information from the missing image caused by the asynchronous acquisition is interpolated from its sequence neighbours. In order to analyse the three-dimensional precision, virtual images were created from patient data sets and three-dimensional ground-truth catheter paths. The evaluation of the three-dimensional catheter pose reconstruction by means of our algorithm on 33 such virtual image sets indicated a mean catheter pose error of 1.26 mm and a mean tip deviation of 3.28 mm. The tracking capability of the algorithm was evaluated on 10 patient data sets. In 94% of all images, our algorithm followed the catheter projections.
A new gold-standard dataset for 2D/3D image registration evaluation
Author(s):
Supriyanto Pawiro;
Primoz Markelj;
Christelle Gendrin;
Michael Figl;
Markus Stock;
Christoph Bloch;
Christoph Weber;
Ewald Unger;
Iris Nöbauer;
Franz Kainberger;
Helga Bergmeister;
Dietmar Georg;
Helmar Bergmann;
Wolfgang Birkfellner
Show Abstract
In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for
image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We
used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2, and proton density (PD) sequences, and cone beam
CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging
techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over
existing data sets in the level of anatomical detail and image data quality. The markers in the three-dimensional (3D) and two-dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc.) and in-house software. The
projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found
to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging
technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.
A comment on the rank correlation merit function for 2D/3D registration
Author(s):
Michael Figl;
Christoph Bloch;
Wolfgang Birkfellner
Show Abstract
Many procedures in computer-assisted interventions register pre-interventionally generated 3D data sets to the intraoperative situation using quickly and simply generated 2D images, e.g., from a C-arm or B-mode ultrasound. Registration is typically done by generating a 2D image from the 3D data set, comparing it to the original 2D image using a planar similarity measure, and optimising the result. As these two images can be very different, many different comparison functions are in use.
In a recent article, Stochastic Rank Correlation, a merit function based on Spearman's rank correlation coefficient, was presented. By comparing randomly chosen subsets of the images, the authors aimed to avoid the computational expense of sorting all the pixels in the image.
In the current paper we show that, because of the limited grey-level range in medical images, full-image rank correlation can be computed almost as fast as Pearson's correlation coefficient.
A run-time estimation is illustrated with numerical results using a 2D Shepp-Logan phantom at different sizes and a sample data set of a pig.
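The key observation, that a limited grey-level range lets the ranks be obtained from a histogram rather than a full sort, can be illustrated with a few lines of NumPy; the bin count and tie handling below are generic choices, not necessarily those of the paper.

```python
import numpy as np

def ranks_by_counting(img, levels=256):
    """Average (tie-corrected) rank of every pixel, computed from a histogram
    in O(N + levels) time instead of an O(N log N) sort; img must contain
    non-negative integers below `levels` (e.g. 8-bit grey values)."""
    flat = np.asarray(img).ravel().astype(np.intp)
    hist = np.bincount(flat, minlength=levels)
    csum = np.cumsum(hist)
    avg_rank = csum - (hist - 1) / 2.0   # midpoint of each grey value's run
    return avg_rank[flat]

def spearman_rho(img_a, img_b, levels=256):
    """Spearman's rank correlation = Pearson correlation of the rank images."""
    ra = ranks_by_counting(img_a, levels)
    rb = ranks_by_counting(img_b, levels)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))
```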
The influence of intensity standardization on medical image registration
Author(s):
Ulas Bagci;
Jayaram K. Udupa;
Li Bai
Show Abstract
Acquisition-to-acquisition signal intensity variations (non-standardness) are inherent in MR images. Standardization is a post-processing method for correcting inter-subject intensity variations by transforming all images from their given gray scale into a standard gray scale in which similar intensities take on similar tissue meanings. The lack of a standard image intensity scale in MRI leads to many difficulties in tissue characterizability,
image display, and analysis, including image segmentation. This phenomenon has been documented
well; however, effects of standardization on medical image registration have not been studied yet. In this paper,
we investigate the influence of intensity standardization in registration tasks with systematic and analytic evaluations
involving clinical MR images. We conducted nearly 20,000 clinical MR image registration experiments
and evaluated the quality of the registrations both quantitatively and qualitatively. The evaluations show that intensity variations between images degrade registration accuracy. The results imply that the
accuracy of image registration not only depends on spatial and geometric similarity but also on the similarity of
the intensity values for the same tissues in different images.
Non-rigid registration for quantification of intestinal peristalsis on dynamic MRI data
Author(s):
Daniel Stein;
Tobias Heye;
Hans-Ulrich Kauczor;
Hans-Peter Meinzer
Show Abstract
Diseases of the intestinal tract often begin with changes altering the bowel tissue elasticity. Therefore, quantification
of bowel motion would be desirable for diagnosis, treatment monitoring and follow-up. Dynamic MRI can
capture such changes, but quantification requires non-rigid registration.
Towards computer-assisted quantification for bowel diseases, two innovative methods for the detection of bowel motility restrictions have been developed and evaluated. To this end, a coronal 2D+t image is extracted from a dynamic 3D MRI dataset and registered non-rigidly over multiple time steps. The first method generates a new image from the resulting motion maps by adding the absolute value of the vector at each pixel to the corresponding values in the following time steps. The second method calculates the absolute values only from the lateral part of the vectors, skipping the coronal part, and thus removes large distortions due to movements caused by breathing. In this preliminary evaluation, both methods are compared on 5 healthy subjects (volunteers) and 5 patients with proven restrictions in bowel motility.
It was shown that, for the first method, which retains respiratory motion, a classification of volunteers and patients is only partly possible. However, the second method turns out to be capable of classifying normal and restricted bowel peristalsis. With the second method, the mean motion in the patients' motion maps is about 34.4% lower than in the volunteers' motion maps. Therefore, for the first time, such a classification is possible.
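Both motion maps amount to accumulating displacement magnitudes over time, once per pixel; the NumPy sketch below shows the two variants under the assumption that the registration output is stored as a (time, row, column, component) array with the lateral component last, which is an illustrative convention rather than the authors' data layout.

```python
import numpy as np

def accumulate_motion(displacements, lateral_only=False):
    """displacements: array of shape (T, H, W, 2) with per-pixel 2D vectors
    for each registered time step. Returns one motion value per pixel:
    method 1 sums full vector magnitudes; method 2 (lateral_only=True) sums
    only the lateral component to suppress breathing-induced motion."""
    if lateral_only:
        magnitude = np.abs(displacements[..., 1])   # assumed lateral axis
    else:
        magnitude = np.linalg.norm(displacements, axis=-1)
    return magnitude.sum(axis=0)
```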
Pre-tuned resonant marker for iMRI using aerosol deposition on polymer catheters
Author(s):
Karl Will;
Stefan Schimpf;
Andreas Brose;
Frank Fischbach;
Jens Ricke;
Bertram Schmidt;
Georg Rose
Show Abstract
New advances in MRI technology enable fast acquisition of high-resolution images. In combination with new open architectures, these scanners are entering the surgical suite, where they are used as an intra-operative imaging modality for minimally invasive interventions. However, for large-scale use, the major issue of the availability of appropriate surgical tools is still unsolved. Such instruments, i.e., needles and catheters, have to be MR-safe and MR-compatible, yet still have to be visible in the MRI image. This is usually solved by integrating markers onto non-magnetic devices.
For reasons of MR safety, workflow, and cost effectiveness, semi-active markers without any connection to the outside are preferable. The challenge in the development and integration of such resonant markers is to precisely meet the MRI frequency while keeping the geometrical dimensions of the interventional tool constant. This paper focuses on the reliable integration and easy fabrication of such resonant markers on the tip of an interventional instrument. Starting with a
theoretical background for resonant labels, a self-sufficient pre-tuned marker consisting of a standard capacitor and a thin-film inductor is presented. A prototype is built using aerosol deposition for the inductor on a 6-F polymer catheter and by integrating an off-the-shelf capacitor into the lumen of the catheter. Because the dielectric materials of some capacitors lead to artifacts in the MRI image, different capacitor technologies are investigated. The prototypes are scanned with an interventional MRI device, demonstrating the proper functionality of the tools.
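Pre-tuning the marker amounts to choosing L and C so that the circuit's resonance, f = 1/(2π√(LC)), coincides with the scanner's proton Larmor frequency f = γ·B0 (about 42.58 MHz/T). The short calculation below illustrates this with hypothetical component values; it is not taken from the paper.

```python
import math

GAMMA_MHZ_PER_TESLA = 42.58  # proton gyromagnetic ratio / (2*pi)

def resonant_frequency_mhz(L_henry, C_farad):
    """Resonance of the marker's LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad)) / 1e6

def required_capacitance_pf(L_henry, b0_tesla):
    """Capacitance that tunes a given thin-film inductance to the proton
    Larmor frequency of a scanner with field strength b0_tesla."""
    f_hz = GAMMA_MHZ_PER_TESLA * b0_tesla * 1e6
    return 1.0 / ((2.0 * math.pi * f_hz) ** 2 * L_henry) * 1e12

# Hypothetical example: a 100 nH inductor on a 1.5 T scanner (~63.9 MHz)
# would need roughly 62 pF.
print(required_capacitance_pf(100e-9, 1.5))
```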
Robotically assisted small animal MRI-guided mouse biopsy
Author(s):
Emmanuel Wilson;
Chris Chiodo;
Kenneth H. Wong;
Stanley Fricke;
Mira Jung;
Kevin Cleary
Show Abstract
Small mammals, namely mice and rats, play an important role in biomedical research. Imaging, in
conjunction with accurate therapeutic agent delivery, has tremendous value in small animal research since it
enables serial, non-destructive testing of animals and facilitates the study of biomarkers of disease
progression. The small size of organs in mice lends some difficulty to accurate biopsies and therapeutic agent
delivery. Image guidance with the use of robotic devices should enable more accurate and repeatable targeting
for biopsies and delivery of therapeutic agents, as well as the ability to acquire tissue from a pre-specified
location based on image anatomy. This paper presents our work in integrating a robotic needle guide device,
specialized stereotaxic mouse holder, and magnetic resonance imaging, with a long-term goal of performing
accurate and repeatable targeting in anesthetized mice studies.
A rapid method for compensating registration error between tracker and endoscope in flexible neuroendoscopic surgery navigation system
Author(s):
Zhengang Jiang;
Yukitaka Nimura;
Takayuki Kitasaka;
Yuichiro Hayashi;
Eiji Ito;
Masazumi Fujii;
Tetsuya Nagatani;
Yasukazu Kajita;
Toshiko Wakabayashi;
Kensaku Mori
Show Abstract
This paper proposes a rapid method for compensating registration error between the tracker and the endoscope in
a flexible neuroendoscopic surgery navigation system, as well as evaluates the accuracy of the proposed method.
Recently, flexible neuroendoscopic surgery navigation systems have been developed utilizing an electromagnetic
tracker (EMT). In such systems, an electromagnetic tracker sensor is fixed at the tip of a flexible endoscope
to get the position and the orientation of the endoscope camera by using the relationship between the camera
and the sensor. Usually, the relationship is estimated by a registration method using a calibration chart. Then,
virtual images corresponding to real endoscopic views are generated by using the position and orientation of
the camera. However, in clinical application, the sensor has to be re-fixed before or during surgery due to disinfection or breakage. Although the sensor can be re-fixed at the same position as the registered position, it is difficult to ensure the same roll angle because the sensor is cylindrical. Furthermore, the sensor can also be rotated by the operation of tools during surgery. As a result, the virtual images become rotated and differ greatly from the real endoscopic views. In this case, the relationship between camera and sensor has to be re-estimated by a registration method or manually, which makes operation of the endoscope complicated and infeasible. To overcome this problem, we propose a rapid method for compensating the rotational error between the real and virtual cameras using epipolar geometry. In this study, various experiments are performed in order to evaluate and improve the method's accuracy. Experimental results suggest that estimation accuracy can be improved by reducing the relative error of the EMT outputs, and that it is necessary to ensure the quality of the images used in the estimation.
A system for advanced real-time visualization and monitoring of MR-guided thermal ablations
Author(s):
Eva Rothgang;
Wesley D. Gilson;
Joachim Hornegger;
Christine H. Lorenz
Show Abstract
In modern oncology, thermal ablations are increasingly used as a regional treatment option to supplement systemic
treatment strategies such as chemotherapy and immunotherapy. The goal of all thermal ablation procedures
is to cause cell death of disease tissue while sparing adjacent healthy tissue. Real-time assessment of thermal
damage is the key to therapeutic efficiency and safety of such procedures. Magnetic resonance thermometry is
capable of monitoring the spatial distribution and temporal evolution of temperature changes during thermal
ablations. In this work, we present an advanced monitoring system for MR-guided thermal ablations that includes
multiplanar visualization, specialized overlay visualization methods, and additional methods for correcting
errors resulting from magnetic field shifts and motion. To ensure the reliability of the displayed thermal data,
systematic quality control of thermal maps is carried out on-line. The primary purpose of this work is to provide
clinicians with an intuitive tool for accurately visualizing the progress of thermal treatment at the time of the
procedure. Importantly, the system is designed to be independent of the heating source. The presented system
is expected to be of great value not only to guide thermal procedures but also to further explore the relationship
between temperature-time exposure and tissue damage. The software application was implemented within the
eXtensible Imaging Platform (XIP) and has been validated with clinical data.
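For context, the temperature maps such a system monitors are commonly derived from proton resonance frequency (PRF) shift thermometry. The sketch below shows the basic conversion from a phase difference to a temperature change; it assumes baseline and current phase images that are already motion- and drift-corrected, and it is not the authors' validated XIP implementation.

```python
# Minimal sketch of PRF-shift MR thermometry (standard physics, not the
# authors' pipeline). Inputs are phase images in radians.
import numpy as np

GAMMA = 2 * np.pi * 42.58e6   # gyromagnetic ratio of 1H [rad / (s*T)]
ALPHA = -0.01e-6              # PRF thermal coefficient [1/degC]

def prf_temperature_change(phase_baseline, phase_current, b0_tesla, te_sec):
    """Temperature change map [degC] from baseline and current phase images."""
    # Complex subtraction avoids explicit unwrapping of the phase difference.
    dphi = np.angle(np.exp(1j * (phase_current - phase_baseline)))
    return dphi / (GAMMA * ALPHA * b0_tesla * te_sec)

# Example: 1.5 T scanner, TE = 10 ms.
# dT = prf_temperature_change(phase_ref, phase_now, b0_tesla=1.5, te_sec=0.010)
```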
Evaluation of nonholonomic needle steering using a robotic needle driver
Author(s):
Emmanuel Wilson;
Jeinan Ding;
Craig Carignan;
Karthik Krishnan;
Rick Avila;
Wes Turner;
Dan Stoianovici;
David Yankelevitz;
Filip Banovac;
Kevin Cleary
Show Abstract
Accurate needle placement is a common need in the medical environment. While the use
of small diameter needles for clinical applications such as biopsy, anesthesia and
cholangiography is preferred over the use of larger diameter needles, precision placement
can often be challenging, particularly for needles with a bevel tip. This is due to
deflection of the needle shaft caused by asymmetry of the needle tip. Factors such as the
needle shaft material, bevel design, and properties of the tissue penetrated determine the
nature and extent to which a needle bends. In recent years, several models have been
developed to characterize needle bending, providing a method of
determining the trajectory of the needle through tissue. This paper explores the use of a
nonholonomic model to characterize needle bending while providing added capabilities
of path planning, obstacle avoidance, and path correction for lung biopsy procedures. We
used a ballistic gel media phantom and a robotic needle placement device to
experimentally assess the accuracy of simulated needle paths based on the nonholonomic
model. Two sets of experiments were conducted, one for a single bend profile of the
needle and the second set of tests for double bending of the needle. The tests provided an
average error between the simulated path and the actual path of 0.8 mm for the single
bend profile and 0.9 mm for the double bend profile tests over a 110 mm long insertion
distance. The maximum error was 7.4 mm and 6.9 mm for the single and double bend
profile tests respectively. The nonholonomic model is therefore shown to provide a
reasonable prediction of needle bending.
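A minimal sketch of the kind of nonholonomic (unicycle-type) kinematic model commonly used for bevel-tip needles is given below; the curvature value and the planar simplification are illustrative assumptions, not the parameters identified in the paper.

```python
# Minimal sketch of a planar nonholonomic bevel-tip needle model: the tip
# travels along arcs of constant curvature, and a 180-degree shaft rotation
# flips the bending plane (double-bend profile). Parameters are illustrative.
import numpy as np

def simulate_needle_path(kappa, insertion_mm, flip_at_mm=None, step=0.5):
    """Integrate a constant-curvature tip trajectory.

    kappa        -- natural curvature of the bevel tip [1/mm]
    insertion_mm -- total insertion depth [mm]
    flip_at_mm   -- depth at which the needle is rotated 180 deg (double bend)
    """
    x, y, theta = 0.0, 0.0, 0.0      # tip position [mm] and heading [rad]
    sign = 1.0
    path = [(x, y)]
    for s in np.arange(step, insertion_mm + step, step):
        if flip_at_mm is not None and s >= flip_at_mm:
            sign = -1.0              # bevel now bends the other way
        theta += sign * kappa * step
        x += np.cos(theta) * step
        y += np.sin(theta) * step
        path.append((x, y))
    return np.array(path)

# Single-bend and double-bend profiles over a 110 mm insertion:
single = simulate_needle_path(kappa=0.002, insertion_mm=110)
double = simulate_needle_path(kappa=0.002, insertion_mm=110, flip_at_mm=55)
```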
Absolute vs. relative error characterization of electromagnetic tracking accuracy
Author(s):
Mohammad Matinfar;
Ganesh Narayanasamy;
Luis Gutierrez;
Raymond Chan;
Ameet Jain
Show Abstract
Electromagnetic (EM) tracking systems are often used for real time navigation of medical tools in an Image
Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking
within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM
tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from
changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or
other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data
unusable. We present a mapping method for the operating region over which EM tracking sensors are used,
allowing for characterization of measurement errors, in turn providing physicians with visual feedback about
measurement confidence or reliability of localization estimates.
In this instance, we employ a calibration phantom to assess distortion within the operating field of the
EM tracker and to display in real time the distribution of measurement errors, as well as the location and
extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive
measurements. Error is computed for a reference point and consecutive measurement errors are displayed relative
to the reference in order to characterize the accuracy in near-real-time. In an initial set-up phase, the phantom
geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean")
EM environment. The registration results in the locations of sensors with respect to each other and defines
the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from
all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement
and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of
localization errors are clustered and dynamically displayed as separate confidence zones within the operating
region of the EM tracker space.
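The measurement-phase comparison against the calibrated phantom geometry might look like the following sketch, which computes inter-sensor distance errors and bins them into confidence zones; the thresholds and the simple worst-pair rule are illustrative assumptions, not the authors' clustering scheme.

```python
# Minimal sketch (assumptions: calibrated inter-sensor distances from the
# "clean" set-up phase are known, and each frame provides all sensor
# positions). Illustrates relative-error computation and confidence binning.
import numpy as np

def relative_errors(positions, reference_distances):
    """Distance error for each calibrated sensor pair (same units as inputs)."""
    errors = {}
    for (i, j), d_ref in reference_distances.items():
        d_meas = np.linalg.norm(np.asarray(positions[i]) - np.asarray(positions[j]))
        errors[(i, j)] = abs(d_meas - d_ref)
    return errors

def confidence_zone(errors, thresholds=(0.5, 1.5)):
    """Classify the current operating region from the worst pair error [mm]."""
    worst = max(errors.values())
    if worst < thresholds[0]:
        return "high confidence"
    if worst < thresholds[1]:
        return "medium confidence"
    return "low confidence / likely distorted"

# Geometry from the "clean" calibration phase, e.g. distances in mm:
# reference = {(0, 1): 30.0, (0, 2): 42.4, (1, 2): 30.0}
# zone = confidence_zone(relative_errors(measured_positions, reference))
```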
Reducing depth uncertainty in large surgical workspaces, with applications to veterinary medicine
Author(s):
Michel A. Audette;
Ahmad Kolahi;
Andinet Enquobahrie;
Claudio Gatti;
Kevin Cleary
Show Abstract
This paper presents on-going research that addresses uncertainty along the Z-axis in image-guided surgery, for
applications to large surgical workspaces, including those found in veterinary medicine. Veterinary medicine lags human
medicine in using image guidance, despite the availability of MR and CT scanning of animals. The positional uncertainty of a surgical
tracking device can be modeled as an octahedron with one long axis coinciding with the depth axis of the sensor, where
the short axes are determined by pixel resolution and workspace dimensions. The further a 3D point is from this device,
the more elongated is this long axis, and the greater the uncertainty along Z of this point's position, in relation to its
components along X and Y. Moreover, for a triangulation-based tracker, its position error degrades with the square of
distance. Our approach is to use two or more Micron Trackers to communicate with each other, and combine this feature
with flexible positioning. Prior knowledge of the type of surgical procedure, and if applicable, the species of animal that
determines the scale of the workspace, would allow the surgeon to pre-operatively configure the trackers in the OR for
optimal accuracy. Our research also leverages the open-source Image-guided Surgery Toolkit (IGSTK).
Exploring the clinical validity of predicted TRE in navigation
Author(s):
M. Bickel;
Ö. Güler;
F. Kral;
F. Schwarm;
W. Freysinger
Show Abstract
In a detailed laboratory investigation we performed a series of experiments in order to assess the validity of the widely
used TRE concept to predict the application accuracy. Based on 1 mm CT scans, a plastic skull, a cadaver head and a
volunteer were registered to an in-house navigation system. We stored the position data of an optical camera (NDI
Polaris) for registration with pre-defined CT coordinates. For every specimen we chose 3, 5, 7 and 9 registration and 10
evaluation points, respectively, performing 10 registrations. The data were evaluated both with the Arun and the Horn
approaches. The vectorial difference between actual and predefined position in the CT data set was stored and evaluated
for FRE and TRE. Evaluation and visualization was implemented in Matlab. The data were analyzed, specifically for
normal distribution, with MS Excel and SPSS Version 15.0.
For the plastic skull and the anatomic specimen submillimetric application accuracy was found experimentally and
confirmed by the calculated TRE. Since no titanium screws were implanted in the volunteer, anatomic landmarks had to
be used for registration and evaluation; an application accuracy in the low millimeter regime was found in all
approaches. However, the detailed statistical analysis of the data revealed that the model predictions and the actual
measurements do not exhibit a strong statistical correlation (p < 0.05). These data suggest that the TRE predictions are
too optimistic and should be used with caution intraoperatively.
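For reference, the "predicted TRE" discussed above is usually computed with the formula of Fitzpatrick et al., which depends on the fiducial configuration and the fiducial localization error (FLE). A direct implementation of that textbook expression is sketched below; it is not the authors' evaluation code.

```python
# Minimal sketch of the widely used TRE prediction formula (Fitzpatrick et al.):
# TRE^2 ~ (FLE^2 / N) * (1 + (1/3) * sum_k d_k^2 / f_k^2)
import numpy as np

def predicted_tre(fiducials, target, fle_rms):
    """Expected RMS target registration error for a point target.

    fiducials -- (N, 3) fiducial positions
    target    -- (3,) target position
    fle_rms   -- RMS fiducial localization error
    """
    fiducials = np.asarray(fiducials, float)
    target = np.asarray(target, float)
    centroid = fiducials.mean(axis=0)
    centered = fiducials - centroid
    # Principal axes of the fiducial configuration.
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    ratio_sum = 0.0
    for k in range(3):
        # d_k: distance of the target from principal axis k.
        d_k = np.linalg.norm(np.cross(axes[k], target - centroid))
        # f_k: RMS distance of the fiducials from principal axis k.
        f_k = np.sqrt(np.mean(np.linalg.norm(np.cross(centered, axes[k]), axis=1) ** 2))
        ratio_sum += (d_k ** 2) / (f_k ** 2)
    return np.sqrt((fle_rms ** 2 / len(fiducials)) * (1.0 + ratio_sum / 3.0))
```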
Full automatic fiducial marker detection on coil arrays for accurate instrumentation placement during MRI guided breast interventions
Author(s):
Konstantinos Filippatos;
Tobias Boehler;
Benjamin Geisler;
Harald Zachmann;
Thorsten Twellmann
Show Abstract
With its high sensitivity, dynamic contrast-enhanced MR imaging (DCE-MRI) of the breast is today one of the first-line
tools for early detection and diagnosis of breast cancer, particularly in the dense breast of young women. However, many
relevant findings are very small or occult on targeted ultrasound images or mammography, so that MRI guided biopsy is
the only option for a precise histological work-up [1]. State-of-the-art software tools for computer-aided diagnosis of
breast cancer in DCE-MRI data offer also means for image-based planning of biopsy interventions. One step in the MRI
guided biopsy workflow is the alignment of the patient position with the preoperative MR images. In these images, the
location and orientation of the coil localization unit can be inferred from a number of fiducial markers, which for this
purpose have to be manually or semi-automatically detected by the user.
In this study, we propose a method for precise, fully automatic localization of fiducial markers, based on which a virtual
localization unit can be subsequently placed in the image volume for the purpose of determining the parameters for
needle navigation. The method is based on adaptive thresholding for separating breast tissue from background followed
by rigid registration of marker templates. In an evaluation of 25 clinical cases comprising 4 different commercial coil
array models and 3 different MR imaging protocols, the method yielded a sensitivity of 0.96 at a false positive rate of
0.44 markers per case. The mean distance deviation between the detected fiducial centers and ground truth annotated
by a radiologist was 0.94 mm.
Risk maps for navigation in liver surgery
Author(s):
C. Hansen;
S. Zidowitz;
A. Schenk;
K.-J. Oldhafer;
H. Lang;
H.-O. Peitgen
Show Abstract
The optimal transfer of preoperative planning data and risk evaluations to the operative site is challenging. A common
practice is to use preoperative 3D planning models as a printout or as a presentation on a display. One important aspect is
that these models were not developed to provide information in complex workspaces like the operating room.
Our aim is to reduce the visual complexity of 3D planning models by mapping surgically relevant information onto a
risk map. Therefore, we present methods for the identification and classification of critical anatomical structures in the
proximity of a preoperatively planned resection surface. Shadow-like distance indicators are introduced to encode the
distance from the resection surface to these critical structures on the risk map. In addition, contour lines are used to
accentuate shape and spatial depth.
The resulting visualization is clear and intuitive, allowing for a fast mental mapping of the current resection surface to
the risk map. Preliminary evaluations by liver surgeons indicate that damage to risk structures may be prevented and
patient safety may be enhanced using the proposed methods.
Real time planning, guidance and validation of surgical acts using 3D segmentations, augmented reality projections and surgical tools video tracking
Author(s):
Angel Osorio;
Juan-Antonio Galan;
Julien Nauroy;
Patricia Donars
Show Abstract
When performing laparoscopies and punctures, precise anatomic localization is required. Current techniques very
often rely on the mapping between the real situation and preoperative images. The PC based software we present realizes
3D segmentations of regions of interest from CT or MR slices. It allows the planning of punctures or trocars insertion
trajectories, taking anatomical constraints into account. Geometrical transformations allow the projection over the
patient's body of the organs and lesions shapes, realistically reconstructed, using a standard video projector in the
operating room. We developed specific image processing software which automatically segments and registers images of
a webcam used in the operating room to give feedback to the user.
Treatment planning and delivery of shell dose distribution for precision irradiation
Author(s):
Mohammad Matinfar;
Santosh Iyer;
Eric Ford;
John Wong;
Peter Kazanzides
Show Abstract
The motivation for shell dose irradiation is to deliver a high therapeutic dose to the surrounding supplying blood-vessels
of a lesion. Our approach's main utility is in enabling laboratory experiments to test the much disputed hypothesis about
tumor vascular damage. That is, at high doses, tumor control is driven by damage to the tumor vascular supply and not
the damage to the tumor cells themselves. There is new evidence that bone marrow derived cells can reconstitute tumor
blood vessels in mice after irradiation. Shell dosimetry is also of interest to study the effect of radiation on neurogenic
stem cells that reside in a small niche on the surface of the mouse ventricles, a generalized form of shell. The type of surface that
we are considering as a shell is a sphere created by the intersection of cylinders. The results are then extended to
create the contours of different organ shapes. Specifically, we present a routine to identify the 3-D structure of a mouse
brain, project it into 2-D contours and convert the contours into trajectories that can be executed by our platform. We use
the Small Animal Radiation Research Platform (SARRP) to demonstrate the dose delivery procedure. The SARRP is a
portable system for precision irradiation with beam sizes down to 0.5 mm and optimally planned radiation with on-board
cone-beam CT guidance.
Correction of prostate misalignment in radiation therapy using US-CT registration
Author(s):
Rainer Hoffmann;
Michael Figl;
Marcus Kaar;
Amon Bhatia;
Amar Bhatia;
Wolfgang Birkfellner;
Johann Hummel
Show Abstract
Latest developments in radiation therapy such as IGRT (image guided radiation therapy) and IMRT
(intensity modulated radiation therapy) promise to spare organs at risk by applying better dose
distribution on the tumor. For any effective application of these methods, the exact positioning of the
patient and the localization of the exposed organ are crucial. Depending on the filling of the rectum
and bladder, the prostate can move by several millimeters up to centimeters. This implies the need for
daily determination and correction of the position of the prostate before irradiation. We calibrated
the scan head of a B-mode US machine (Ultramark 9, Advanced Technology Laboratories, USA) by
means of an optical tracking system (Polaris, NDI, Canada). 2D/3D registration was accomplished
by minimizing an adapted mutual information function. Before the registration procedure was
started, the CT and US images were preprocessed by applying various filters and masks. For
registration between the tracking system and the coordinate system of the Linac (defined by the
positioning laser system) a block with three drilled holes was used. Since the coordinates of the
holes are known in both coordinate systems a simple point-to-point registration fulfills the task. The
complete system setup was evaluated by means of a water-filled balloon embedded in a gelatine tank.
For additional evaluation of the 2D/3D registration we used real patient data. For evaluation
of the 2D/3D registration on patient data, the prostate was outlined by a physician on the US
image and on the reformatted CT slice. Then, the Hausdorff distance between the two structures was
calculated. The target registration error (TRE) for the balloon experiments amounted
to 2.1 mm ± 1.2 mm for 10 targets. The US calibration was accomplished with an error of 0.8 mm ±
0.2 mm (five calibrations). With respect to the 2D/3D registration we found a Hausdorff distance
of 2.6mm. The results imply that the method is sufficiently accurate and robust. It can be easily
applied to older linear accelerators at low costs.
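The point-to-point registration between the tracker and the Linac coordinate system can be written compactly as an SVD-based least-squares fit (Arun-style). The sketch below is a generic formulation under that assumption, not the authors' code; three non-collinear points, such as the drilled holes, suffice.

```python
# Minimal sketch of SVD-based rigid point-to-point registration.
import numpy as np

def rigid_point_registration(src, dst):
    """Return rotation R and translation t minimizing ||R*src + t - dst||."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# holes_tracker = np.array([[...], [...], [...]])  # hole centers in tracker space
# holes_linac   = np.array([[...], [...], [...]])  # same holes in Linac space
# R, t = rigid_point_registration(holes_tracker, holes_linac)
```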
Computer-assisted targeted therapy (CATT) for prostate radiotherapy planning by fusion of CT and MRI
Author(s):
Jonathan Chappelow;
Stefan Both;
Satish Viswanath;
Stephen Hahn;
Michael Feldman;
Mark Rosen;
John Tomaszewski;
Neha Vapiwala;
Pratik Patel;
Anant Madabhushi
Show Abstract
In this paper, we present a comprehensive, quantitative imaging framework for improved treatment of prostate
cancer via computer-assisted targeted therapy (CATT) to facilitate radiotherapy dose escalation to regions with
a high likelihood of disease presence. The framework involves identification of high likelihood prostate cancer
regions using a computer-aided detection (CAD) classifier on diagnostic MRI, followed by mapping of these regions
from MRI onto planning computerized tomography (CT) via image registration. Treatment of prostate cancer
by targeted radiotherapy requires CT to formulate a dose plan. While accurate delineation of the prostate and
cancer can provide reduced exposure of benign tissue to radiation, as well as a higher dose to the cancer, CT is
ineffective in localizing intraprostatic lesions and poor for highlighting the prostate boundary. MR imagery on the
other hand allows for greatly improved visualization of the prostate. Further, several studies have demonstrated
the utility of CAD for identifying the location of tumors on in vivo multi-functional prostate MRI. Consequently,
our objective is to improve the accuracy of radiotherapy dose plans via multimodal fusion of MR and CT. To
achieve this objective, the CATT framework presented in this paper comprises the following components: (1) an
unsupervised pixel-wise classifier to identify suspicious regions within the prostate on diagnostic MRI, (2) elastic
image registration to align corresponding diagnostic MRI, planning MRI, and CT of the prostate, (3) mapping
of the suspect regions from diagnostic MRI onto CT, and (4) calculation of a modified radiotherapy plan with
escalated dose for cancer. Qualitative comparison of the dose plans (with and without CAD) over a total of 79
2D slices obtained from 10 MR-CT patient studies, suggest that our CATT framework could help in improved
targeted treatment of prostate cancer.
Shape-correlated deformation statistics for respiratory motion prediction in 4D lung
Author(s):
Xiaoxiao Liu;
Ipek Oguz;
Stephen M. Pizer;
Gig S. Mageras
Show Abstract
4D image-guided radiation therapy (IGRT) for free-breathing lungs is challenging due to the complicated respiratory
dynamics. Effective modeling of respiratory motion is crucial to account for the effects of motion on the
dose to tumors. We propose a shape-correlated statistical model on dense image deformations for patient-specific
respiratory motion estimation in 4D lung IGRT. Using the shape deformations of the high-contrast lungs as the
surrogate, the statistical model trained from the planning CTs can be used to predict the image deformation
during delivery verification time, with the assumption that the respiratory motion at both times is similar for
the same patient. Dense image deformation fields obtained by diffeomorphic image registrations characterize the
respiratory motion within one breathing cycle. A point-based particle optimization algorithm is used to obtain
the shape models of lungs with group-wise surface correspondences. Canonical correlation analysis (CCA) is
adopted in training to maximize the linear correlation between the shape variations of the lungs and the corresponding
dense image deformations. Both intra- and inter-session CT studies are carried out on a small group
of lung cancer patients and evaluated in terms of the tumor location accuracies. The results suggest potential
applications using the proposed method.
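The training step, correlating lung-shape scores with dense-deformation scores, could be sketched as follows. The PCA pre-reduction and the direct CCA-based prediction shown here are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch: CCA between lung-shape variations and deformation-field
# variations, with PCA pre-reduction (an assumption for illustration).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

def train_shape_deformation_model(shapes, deformations, n_modes=3):
    """shapes: (n_phases, n_shape_dims); deformations: (n_phases, n_def_dims)."""
    pca_s = PCA(n_components=n_modes).fit(shapes)
    pca_d = PCA(n_components=n_modes).fit(deformations)
    cca = CCA(n_components=n_modes)
    cca.fit(pca_s.transform(shapes), pca_d.transform(deformations))
    return pca_s, pca_d, cca

def predict_deformation(model, new_shape):
    """Predict a dense deformation from a newly observed lung shape."""
    pca_s, pca_d, cca = model
    d_scores = cca.predict(pca_s.transform(np.asarray(new_shape).reshape(1, -1)))
    return pca_d.inverse_transform(d_scores)[0]
```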
Modeling tumor/polyp/lesion structure in 3D for computer-aided diagnosis in colonoscopy
Author(s):
Chao-I Chen;
Dusty Sargent;
Yuan-Fang Wang
Show Abstract
We describe a software system for building three-dimensional (3D) models from colonoscopic videos. The system
is end-to-end in the sense that it takes as input raw image frames, shot during a colon exam, and produces the
3D structure of objects of interest (OOI), such as tumors, polyps, and lesions. We use the structure-from-motion
(SfM) approach in computer vision, which analyzes an image sequence in which the camera's position and aim vary
relative to the OOI. The varying pose of the camera relative to the OOI induces the motion-parallax effect which
allows 3D depth of the OOI to be inferred. Unlike the traditional SfM system pipeline, our software system
contains many check-and-balance mechanisms to ensure robustness, and the analysis from earlier stages of the
pipeline is used to guide the later processing stages to better handle challenging medical data. The constructed
3D models allow the pathology (growth and change in both structure and appearance) to be monitored over
time.
Generation of smooth and accurate surface models for surgical planning and simulation
Author(s):
Tobias Moench;
Mathias Neugebauer;
Peter Hahn;
Bernhard Preim
Show Abstract
Surface models from medical image data (intensity, binary) are used for evaluating spatial relationships for
intervention or radiation treatment planning. Furthermore, surface models are employed for generating volume
meshes for simulating e.g. tissue deformation or blood flow. In such applications, smoothness and accuracy
of the models are essential. These aspects may be influenced by image preprocessing, the mesh generation
algorithm and mesh postprocessing (smoothing, simplification). Thus, we evaluated the influences of different
image preprocessing methods (Gaussian smoothing, morphological operators, shape-based interpolation), model
generation (Marching Cubes, Constrained Elastic Surface Nets, MPU Implicits) and mesh postprocessing on
intensity and binary data with respect to their application within surgical planning and simulation. The resulting
surface meshes are evaluated regarding their smoothness, accuracy and mesh quality. We consider the local
curvature, equi-angle skewness, (Hausdorff) distances between two meshes (before and after processing), and
volume preservation as measures. We discuss these results concerning their suitability for different applications
in the field of surgical planning as well as finite element simulations and make recommendations on how to obtain
smooth and accurate surface meshes for exemplary cases.
Multi-contact model for FEM-based surgical simulation
Author(s):
Hyun Young Choi;
Woojin Ahn;
Doo Yong Lee
Show Abstract
This paper presents a novel method to simulate multiple contacts of deformable objects modeled by the finite-element
method. There have been two main approaches for contact models in the literature. The penalty method fails to guarantee
the non-penetration condition. In the constraint method, imposition of multiple position constraints on arbitrary points of
the surface, caused by the multiple contacts, can be non-deterministic depending on the contact configurations. An infinite
number of solutions exists in most cases. The proposed method uses a deformable membrane of mass and spring to
determine the initial position constraints to obtain a unified solution regardless of the contact configurations. The
membrane is locally generated at the contact region, and is identical to the local triangular surface meshes of the finite-element
model. The membrane is then deformed by the contacts caused by rigid objects such as surgical tools.
Displacements of the mass points of the deformed membrane at the equilibrium state are then applied to the finite-element
model as the position constraints. Simulation results show satisfactory realism in the real-time simulation of the
deformation. The proposed method prevents penetration of the rigid object into the deformable object. The method can
be applied to interactions between tools and organs of arbitrary shapes.
3D TEE registration with MR for cardiac interventional applications
Author(s):
Jonghye Woo;
Vijay Parthasarathy;
Dalal Sandeep;
Ameet Jain
Show Abstract
Live three dimensional (3D) transesophageal echocardiography (TEE) provides real-time imaging of cardiac
structure and function, and has been shown to be useful in interventional cardiac procedures. Its application in
catheter based cardiac procedures is, however, limited by its limited field of view (FOV). In order to mitigate
this limitation, we register pre-operative magnetic resonance (MR) images to live 3D TEE images. Conventional
multimodal image registration techniques that use mutual information (MI) as the similarity measure use
statistics from the entire image. In these cases, correct registration, however, may not coincide with the global
maximum of the MI metric. In order to address this problem, we present an automated registration algorithm that
balances a combination of global and local edge-based statistics. The weighted sum of global and local statistics is
computed as the similarity measure, where the weights are decided based on the strength of the local statistics.
Phantom validation experiments show improved capture ranges when compared with conventional MI-based
methods. The proposed method provided robust results with accuracy better than 3 mm (5°) in the range of
-10 to 12 mm (-6 to 3°), -14 to 12 mm (-6 to 6°) and -16 to 6 mm (-6 to 3°) in x-, y-, and z- axes respectively.
We believe that the proposed registration method has the potential for real time intra-operative image fusion
during percutaneous cardiac interventions.
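A simple way to picture the proposed similarity measure is as a weighted sum of a global mutual-information term and a local edge-based term, with the weight driven by the strength of the local statistics. The sketch below uses a histogram MI and a gradient-correlation local term as stand-ins; the paper's exact formulation may differ.

```python
# Minimal sketch of a blended global/local similarity measure (illustrative
# choices, not the paper's formulation).
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two images of equal size."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])))

def local_edge_similarity(a, b):
    """Correlation of gradient magnitudes inside the (small) TEE field of view."""
    ga = np.hypot(*np.gradient(a.astype(float)))
    gb = np.hypot(*np.gradient(b.astype(float)))
    ga, gb = ga - ga.mean(), gb - gb.mean()
    denom = np.sqrt((ga ** 2).sum() * (gb ** 2).sum()) + 1e-12
    return float((ga * gb).sum() / denom)

def combined_similarity(mr_slice, tee_image):
    s_local = local_edge_similarity(mr_slice, tee_image)
    w = np.clip(abs(s_local), 0.0, 1.0)   # stronger local edges -> more weight
    return w * s_local + (1.0 - w) * mutual_information(mr_slice, tee_image)
```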
Realistic colon simulation in CT colonography using mesh skinning
Author(s):
Jianhua Yao;
Ananda S. Chowdhury;
Ronald M. Summers M.D.
Show Abstract
Realistic colon simulations do not exist but would be valuable for CT colonography (CTC) CAD development and
validation of new colon image processing algorithms. The human colon is a convoluted tubular structure and very hard
to model physically and electronically. In this investigation, we propose a novel approach to generate realistic colon
simulation using mesh skinning. The method proceeds as follows. First, a digital phantom of a cylindrical tube is
modeled to simulate a straightened colon. Second, haustral folds and teniae coli are added to the cylindrical tube. Third,
a centerline equipped with rotation-minimizing frames (RMF) and distention values is computed. Fourth, mesh skinning
is applied to warp the tube around the centerline and generate realistic colon simulation. Lastly, colonic polyps in the
shape of ellipsoids are also modeled. Results show that the simulated colon highly resembles the real colon. This is the
first colon simulation that incorporates most colon characteristics in one model, including curved centerline, variable
distention, haustral folds, teniae coli and colonic polyps.
Ground truth and CT image model simulation for pathophysiological human airway system
Author(s):
Margarete Ortner;
Catalin Fetita;
Pierre-Yves Brillet;
Françoise Prêteux;
Philippe Grenier
Show Abstract
A recurrent problem in medical image segmentation and analysis is that establishing a ground truth for assessment purposes
is often difficult. Facing this problem, the scientific community orients its efforts towards the development
of objective methods for evaluation, namely by building up or simulating the missing ground truth for analysis.
This paper focuses on the case of human pulmonary airways and develops a method 1) to simulate the ground
truth for different pathophysiological configurations of the bronchial tree as a mesh model, and 2) to generate
synthetic 3D CT images of airways associated with the simulated ground truth. The airway model is here built
up based on the information provided by a medial axis (describing bronchus shape, subdivision geometry and
local radii), which is computed from real CT data to ensure realism and matching with a patient-specific morphology.
The model parameters can be further adjusted to simulate various pathophysiological conditions
of the same patient (longitudinal studies). Based on the airway mesh model, a 3D image model is synthesized
by simulating the CT acquisition process. The image realism is achieved by including textural features of the
surrounding pulmonary tissue which are obtained by segmentation from the same original CT data providing
the airway axis. By varying the scanning simulation parameters, several 3D image models can be generated
for the same airway mesh ground truth. Simulation results for physiological and pathological configurations
are presented and discussed, illustrating the interest of such a modeling process for designing computer-aided
diagnosis systems or for assessing their sensitivity, mainly for follow-up studies in asthma and COPD.
Endoscope-magnetic tracker calibration via trust region optimization
Author(s):
Dusty Sargent
Show Abstract
Minimally invasive surgical techniques and advanced imaging systems are gaining prevalence in modern clinical
practice. Using miniaturized magnetic trackers in combination with these procedures can help physicians with the
orientation and guidance of instruments in graphical displays, navigation during surgery, 3D reconstruction of anatomy,
and other applications. Magnetic trackers are often used in conjunction with other sensors or instruments such as
endoscopes and optical trackers. In such applications, complex calibration procedures are required to align the
coordinate systems of the different devices in order to produce accurate results. Unfortunately, current calibration
procedures developed for augmented reality are cumbersome and unsuitable for repeated use in a clinical setting.
This paper presents an efficient automated endoscope-tracker calibration algorithm for clinical applications. The
algorithm is based on a state-of-the-art trust region optimization method and requires minimal intervention from the
endoscope operator. The only required input is a short video of a calibration grid taken with the endoscope and attached
magnetic tracker prior to the procedure. The three stage calibration process uses a traditional camera calibration to
determine the intrinsic and extrinsic parameters of the endoscope, and then the endoscope is registered in the tracker's
reference frame using a novel linear estimation method and a trust region optimization algorithm. This innovative
method eliminates the need for complicated calibration procedures and facilitates the use of magnetic tracking devices in
clinical settings.
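The final refinement stage can be illustrated with SciPy's trust-region ("trf") least-squares solver minimizing the reprojection error of the calibration-grid corners over the sensor-to-camera transform. The parameterization (Rodrigues vector plus translation) and the inputs below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: trust-region refinement of the sensor-to-camera transform
# by minimizing reprojection error over all calibration frames.
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(params, grid_pts, image_pts, tracker_poses, K, dist):
    """Stacked reprojection residuals.

    grid_pts      -- (N, 3) calibration-grid corner coordinates (world frame)
    image_pts     -- list of (N, 2) detected corners, one per frame
    tracker_poses -- list of (R, t) mapping world coordinates into the sensor frame
    """
    rvec_sc, t_sc = params[:3], params[3:6]          # sensor -> camera
    R_sc, _ = cv2.Rodrigues(rvec_sc)
    res = []
    for (R_ws, t_ws), img in zip(tracker_poses, image_pts):
        R_wc = R_sc @ R_ws                            # world -> camera
        t_wc = R_sc @ t_ws + t_sc
        proj, _ = cv2.projectPoints(grid_pts, cv2.Rodrigues(R_wc)[0], t_wc, K, dist)
        res.append((proj.reshape(-1, 2) - img).ravel())
    return np.concatenate(res)

def refine_calibration(x0, grid_pts, image_pts, tracker_poses, K, dist):
    sol = least_squares(residuals, x0, method='trf',
                        args=(grid_pts, image_pts, tracker_poses, K, dist))
    return sol.x
```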
A GPU based high-definition ultrasound digital scan conversion algorithm
Author(s):
Mingchang Zhao;
Shanjue Mo
Show Abstract
The digital scan conversion algorithm is the most computationally intensive part of B-mode ultrasound imaging. Traditionally,
in order to meet the requirements of real-time imaging, digital scan conversion algorithms often traded off image quality
for speed, for example through the use of simple image interpolation algorithms, or of look-up tables to carry out the polar coordinate
transform and logarithmic compression. This paper presents a GPU-based high-definition real-time ultrasound digital
scan conversion algorithm implementation. By rendering appropriate proxy geometry, we can implement a high-precision
digital scan conversion pipeline, including the polar coordinate transform, bi-cubic image interpolation, high
dynamic range tone reduction, line average and frame persistence FIR filtering, and 2D post filtering, fully in the fragment
shader of the GPU at real-time speed. The proposed method shows the possibility of upgrading existing FPGA- or ASIC-based
digital scan conversion implementations to a low-cost, GPU-based high-definition digital scan conversion implementation.
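As a CPU reference for what the fragment-shader pipeline computes, the core polar-to-Cartesian resampling with cubic interpolation can be sketched as follows; the probe geometry parameters are illustrative, and the GPU-specific stages (tone reduction, FIR filtering, post filtering) are omitted.

```python
# CPU reference sketch of polar-to-Cartesian scan conversion with cubic
# interpolation (illustrative geometry, not the shader implementation).
import numpy as np
from scipy.ndimage import map_coordinates

def scan_convert(polar_img, sector_deg=75.0, r0=10.0, depth=150.0,
                 out_shape=(512, 512)):
    """polar_img: (n_samples, n_lines) envelope data, rows = depth samples."""
    n_samples, n_lines = polar_img.shape
    h, w = out_shape
    # Cartesian grid in mm; the (virtual) probe apex sits at (0, -r0).
    half_width = depth * np.sin(np.radians(sector_deg / 2))
    x = np.linspace(-half_width, half_width, w)
    z = np.linspace(0.0, depth, h)
    X, Z = np.meshgrid(x, z)
    R = np.hypot(X, Z + r0) - r0                  # radial distance along the beam
    TH = np.degrees(np.arctan2(X, Z + r0))        # steering angle
    # Map (R, TH) to fractional indices into the polar image.
    ri = R / depth * (n_samples - 1)
    ti = (TH + sector_deg / 2) / sector_deg * (n_lines - 1)
    cart = map_coordinates(polar_img.astype(float), [ri, ti], order=3, cval=0.0)
    outside = (ti < 0) | (ti > n_lines - 1) | (ri < 0) | (ri > n_samples - 1)
    cart[outside] = 0.0
    return cart
```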
Precisely shaped acoustic ablation of tumors utilizing steerable needle and 3D ultrasound image guidance
Author(s):
Emad M. Boctor;
Philipp Stolka;
Hyun-Jae Kang;
Clyde Clarke;
Caleb Rucker;
Jordon Croom;
E. Clif Burdette;
Robert J. Webster III
Show Abstract
Many recent studies have demonstrated the efficacy of interstitial ablative approaches for the treatment of hepatic tumors. Despite these promising results, current systems remain highly dependent on operator skill, and cannot treat many tumors because there is little control of the size and shape of the zone of necrosis, and no control over ablator trajectory within tissue once insertion has taken place. Additionally, tissue deformation and target motion make it extremely difficult to place the ablator device precisely into the target. Irregularly shaped target volumes typically require multiple insertions and several overlapping (thermal) lesions, which are even more challenging to accomplish in a precise, predictable, and timely manner without causing excessive damage to surrounding normal tissues.
In answer to these problems, we have developed a steerable acoustic ablator called the ACUSITT with the ability of directional energy delivery to precisely shape the applied thermal dose. In this paper, we address image guidance for this device, proposing an innovative method for accurate tracking and tool registration with spatially-registered intra-operative three-dimensional US volumes, without relying on an external tracking device. This method is applied to guidance of the flexible, snake-like, lightweight, and inexpensive ACUSITT to facilitate precise placement of its ablator tip within the liver, with ablation monitoring via strain imaging. Recent advancements in interstitial high-power ultrasound applicators enable controllable and penetrating heating patterns which can be dynamically altered. This paper summarizes the design and development of the first synergistic system that integrates a novel steerable interstitial acoustic ablation device with a novel trackerless 3DUS guidance strategy.
A probabilistic framework for ultrasound image decomposition
Author(s):
Igor V. Solovey;
Oleg V. Michailovich;
Robert S. Xu
Show Abstract
Image segmentation and tissue characterization are fundamental tasks of computer-aided diagnosis (CAD) in
medical ultrasound imaging. As an initial step, such algorithms are usually based on extraction of pertinent
features from the acquired ultrasound data. Typically, these features are computed directly from localized
image segments, thereby representing local statistical properties of the image. However, the process of image
formation of medical ultrasound suggests that such an approach could result in a variety of unwanted artifacts
(such as excessively smooth segmentation boundaries or misclassification) at subsequent stages of the algorithm.
In this work, we propose to first decompose the observed images into a number of their statistically distinct
components. The decomposition is based on the maximum-a-posteriori (MAP) statistical framework which has
been derived based on the signal and noise models appropriate for the ultrasound setting. Subsequently, each
resulting component is used separately to extract a set of its corresponding features. When retrieved in this way
(rather than directly from the observed image), the combined set of resulting features is shown to be capable of
better discriminating between different tissue types. Examples of in silico simulations and in vivo experiments
are provided to illustrate the practical usefulness of this technique for improving the results of ultrasound image
segmentation.
Dynamic tracking of tendon elongation in ultrasound imaging
Author(s):
Mahta Karimpoor;
Hazel Screen;
Dylan Morrissey
Show Abstract
The aim of this study was to evaluate the elongation of the Achilles tendon by tracking the changing position of
the myotendinous junction (MTJ) using ultrasound during isometric contraction on an isometric dynamometer.
A sequence of ultrasound images in the form of a movie, obtained from a unit operating at a frequency of 12 MHz
during isometric contraction, was analyzed offline using MATLAB to track the MTJ. This investigation has
implemented important techniques for in vivo feature extraction of the Achilles tendon. Prior to feature extraction,
the images were filtered by anisotropic diffusion method and morphological enhancements. The cross correlation
search algorithm with an adaptive mask was utilized to track MTJ by comparing adjacent segmented frames.
The present method was studied on seventeen subjects, where it was able to measure the MTJ movement
accurately.
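The cross-correlation tracking step could look like the sketch below, which matches the previous-frame MTJ template inside a search window of the current frame using normalized cross-correlation (OpenCV). The preprocessing (anisotropic diffusion, morphological enhancement) and the adaptive mask are omitted; window sizes are illustrative.

```python
# Minimal sketch of template tracking of the MTJ by normalized cross-correlation.
# Frames are assumed to be grayscale uint8 arrays with the MTJ away from borders.
import cv2

def track_mtj(frames, init_pos, tpl_size=(40, 80), search=(30, 60)):
    """Track the MTJ through a list of frames; returns (row, col) per frame."""
    r, c = init_pos
    th, tw = tpl_size
    positions = [(r, c)]
    tpl = frames[0][r - th // 2:r + th // 2, c - tw // 2:c + tw // 2]
    for frame in frames[1:]:
        r0, c0 = max(r - search[0], 0), max(c - search[1], 0)
        roi = frame[r0:r0 + th + 2 * search[0], c0:c0 + tw + 2 * search[1]]
        score = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)       # max_loc is (col, row)
        r, c = r0 + max_loc[1] + th // 2, c0 + max_loc[0] + tw // 2
        positions.append((r, c))
        # Update the template from the newly found location.
        tpl = frame[r - th // 2:r + th // 2, c - tw // 2:c + tw // 2]
    return positions
```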
Mechanically assisted 3D prostate ultrasound imaging and biopsy needle-guidance system
Author(s):
Jeffrey Bax;
Jackie Williams;
Derek Cool;
Lori Gardi;
Jacques Montreuil;
Vaishali Karnik;
Shi Sherebrin;
Cesare Romagnoli;
Aaron Fenster
Show Abstract
Prostate biopsy procedures are currently limited to using 2D transrectal ultrasound (TRUS) imaging to guide the biopsy
needle. Being limited to 2D causes ambiguity in needle guidance and provides an insufficient record to allow guidance
to the same suspicious locations or to avoid regions that were negative during previous biopsy sessions. We have developed
a mechanically assisted 3D ultrasound imaging and needle tracking system, which supports a commercially available
TRUS probe and integrated needle guide for prostate biopsy. The mechanical device is fixed to a cart and the
mechanical tracking linkage allows its joints to be manually manipulated while fully supporting the weight of the
ultrasound probe. A computer interface is provided to track the needle trajectory and display its path on a
corresponding 3D TRUS image, allowing the physician to aim the needle-guide at predefined targets within the prostate.
The system has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis
of the probe in order to generate a 3D image for 3D navigation. Using the system, 3D TRUS prostate images can be
generated in approximately 10 seconds. The system reduces most of the user variability from conventional hand-held
probes, which makes them unsuitable for precision biopsy, while preserving some of the user familiarity and procedural
workflow. In this paper, we describe the 3D TRUS guided biopsy system and report on the initial clinical use of this
system for prostate biopsy.
Multi-parametric MRI-pathologic correlation of prostate cancer using tracked biopsies
Author(s):
Sheng Xu;
Baris Turkbey;
Jochen Kruecker;
Pingkun Yan;
Julia Locklin;
Peter Pinto;
Peter Choyke;
Bradford Wood
Show Abstract
MRI is currently the most promising imaging modality for prostate cancer diagnosis due to its high resolution and multiparametric
nature. However, currently there is no standard for integration of diagnostic information from different MRI
sequences. We propose a method to increase the diagnostic accuracy of MRI by correlating biopsy specimens with four
MRI sequences including T2 weighted MRI, Diffusion Weight Imaging, Dynamic Contrast Enhanced MRI and MRI
spectroscopy. This method uses device tracking and image fusion to determine the specimen's position on MRI images.
The proposed method is unbiased and cost effective. It does not substantially interfere with the standard biopsy
workflow, allowing it to be easily accepted by physicians. A study of 41 patients was carried out to validate the
approach. The performance of all four MRI sequences in various combinations is reported. Guidelines are given for
multi-parametric imaging and tracked biopsy of prostate cancer.
A multi-threaded mosaicking algorithm for fast image composition of fluorescence bladder images
Author(s):
Alexander Behrens;
Michael Bommes;
Thomas Stehle;
Sebastian Gross;
Steffen Leonhardt;
Til Aach
Show Abstract
The treatment of urinary bladder cancer is usually carried out using fluorescence endoscopy. A narrow-band
bluish illumination activates a tumor marker resulting in a red fluorescence. Because of low illumination
power, the distance between the endoscope and the bladder wall is kept small during the whole bladder scan, which is
carried out before treatment. Thus, only a small field of view (FOV) of the operation field is provided, which
impedes navigation and relocating of multi-focal tumors. Although off-line calculated panorama images can
assist surgery planning, the immediate display of successively growing overview images composed from single
video frames in real-time during the bladder scan, is well suited to ease navigation and reduce the risk of
missing tumors. Therefore, we developed an image mosaicking algorithm for fluorescence endoscopy. Due to
fast computation requirements, a flexible multi-threaded software architecture based on our RealTimeFrame
platform is developed. Different algorithm tasks, like image feature extraction, matching and stitching are
separated and executed by independent processing threads. Thus, different implementations of single tasks can
be easily evaluated. In an optimization step we evaluate the trade-off between feature repeatability and total
processing time, consider the thread synchronization, and achieve a constant workload of each thread. Thus,
a fast computation of panoramic images is performed on a standard hardware platform, preserving full input
image resolution (780x576) at the same time. Displayed on a second clinical monitor, the extended FOV of
the image composition promises high potential for surgery assistance.
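Structurally, the thread-per-stage pipeline can be sketched as below: feature extraction, matching, and stitching run as independent workers connected by bounded queues, so individual stages can be swapped and profiled. The worker bodies are placeholders, not the RealTimeFrame implementation.

```python
# Structural sketch of a queue-driven multi-threaded mosaicking pipeline.
import queue
import threading

def make_stage(work_fn, in_q, out_q):
    """Wrap a processing function into a queue-driven worker thread."""
    def run():
        while True:
            item = in_q.get()
            if item is None:            # poison pill -> shut down the stage
                if out_q is not None:
                    out_q.put(None)
                break
            result = work_fn(item)
            if out_q is not None:
                out_q.put(result)
    return threading.Thread(target=run, daemon=True)

# Bounded queues keep the per-stage workload roughly constant.
frames_q, features_q, matches_q = queue.Queue(4), queue.Queue(4), queue.Queue(4)

extract_features = lambda frame: ("features", frame)     # placeholder stages
match_features   = lambda feats: ("matches", feats)
stitch_into_map  = lambda match: ("panorama updated", match)

stages = [make_stage(extract_features, frames_q, features_q),
          make_stage(match_features, features_q, matches_q),
          make_stage(stitch_into_map, matches_q, None)]
for s in stages:
    s.start()
# Endoscope frames are pushed into frames_q; a None sentinel stops the pipeline.
```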
Automatic segmentation of seeds and fluoroscope tracking (FTRAC) fiducial in prostate brachytherapy x-ray images
Author(s):
Nathanael Kuo;
Junghoon Lee;
Anton Deguet;
Danny Song;
E. Clif Burdette;
Jerry Prince
Show Abstract
C-arm X-ray fluoroscopy-based radioactive seed localization for intraoperative dosimetry of prostate brachytherapy is an
active area of research. The fluoroscopy tracking (FTRAC) fiducial is an image-based tracking device composed of
radio-opaque BBs, lines, and ellipses that provides an effective means for pose estimation so that three-dimensional
reconstruction of the implanted seeds from multiple X-ray images can be related to the ultrasound-computed prostate
volume. Both the FTRAC features and the brachytherapy seeds must be segmented quickly and accurately during the
surgery, but current segmentation algorithms are impractical in the operating room (OR). The first reason is that current
algorithms require operators to manually select a region of interest (ROI), preventing automatic pipelining from image
acquisition to seed reconstruction. Secondly, these algorithms often fail, requiring operators to manually correct the
errors. We propose a fast and effective ROI-free automatic FTRAC and seed segmentation algorithm to minimize such
human intervention. The proposed algorithm exploits recent image processing tools to make seed reconstruction as easy
and convenient as possible. Preliminary results on 162 patient images show this algorithm to be fast, effective, and
accurate for all features to be segmented. With near perfect success rates and subpixel differences to manual
segmentation, our automatic FTRAC and seed segmentation algorithm shows promising results to save crucial time in
the OR while reducing errors.
Trans-rectal interventional MRI: initial prostate biopsy experience
Author(s):
Bernadette M. Greenwood;
Meliha R. Behluli;
John F. Feller;
Stuart T. May;
Robert Princenthal;
Alex Winkel;
David B. Kaminsky
Show Abstract
Dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) of the prostate gland when evaluated along with
T2-weighted images, diffusion-weighted images (DWI) and their corresponding apparent diffusion coefficient (ADC)
maps can yield valuable information in patients with rising or elevated serum prostate-specific antigen (PSA) levels [1]. In
some cases, patients present with multiple negative trans-rectal ultrasound (TRUS) biopsies, often placing the patient
into a cycle of active surveillance. Recently, more patients are undergoing trans-rectal interventional MRI (TRIM) for targeted biopsy of suspicious
findings, with a cancer yield of ~59% compared to 15% for a second TRUS biopsy [2], to solve this diagnostic dilemma and
plan treatment. Patients were imaged in two separate sessions on a 1.5T magnet using a cardiac phased array parallel
imaging coil. Automated CAD software was used to identify areas of wash-out. If a suspicious finding was identified on
all sequences it was followed by a second imaging session. Under MRI-guidance, cores were acquired from each target
region [3]. In one case the microscopic diagnosis was prostatic intraepithelial neoplasia (PIN), in the other it was invasive
adenocarcinoma. Patient 1 had two negative TRUS biopsies and a PSA level of 9ng/mL. Patient 2 had a PSA of
7.2ng/mL. He underwent TRUS biopsy which was negative for malignancy. He was able to go on to treatment for his
prostate carcinoma (PCa) [4]. MRI may have an important role in a subset of patients with multiple negative TRUS
biopsies and elevated or rising PSA.
MRI-GUIDED prostate motion tracking by means of multislice-to-volume registration
Author(s):
Hadi Tadayyon;
Siddharth Vikal;
Sean Gill;
Andras Lasso;
Gabor Fichtinger
Show Abstract
We developed an algorithm for tracking prostate motion during MRI-guided prostatic needle placement, with the
primary application in prostate biopsy. Our algorithm has been tested on simulated patient and phantom data. The
algorithm features a robust automatic restart and a 12-core biopsy error validation scheme. Simulation tests were
performed on four patient MRI pre-operative volumes. Three orthogonal slices were extracted from the pre-operative
volume to simulate the intra-operative volume and a volume of interest was defined to isolate the prostate. Phantom tests
used six datasets, each representing the phantom at a known perturbed position. These volumes were registered to their
corresponding reference volume (the phantom at its home position). Convergence tests on the phantom data showed that
the algorithm demonstrated accurate results at 100% confidence level for initial misalignments of less than 5mm and at
73% confidence level for initial misalignments less than 10mm. Our algorithm converged in 95% of the cases for the
simulated patient data with 0.66mm error and the six phantom registration tests resulted in 1.64mm error.
Planning of vessel grafts for reconstructive surgery in congenital heart diseases
Author(s):
U. Rietdorf;
E. Riesenkampff;
T. Schwarz;
T. Kuehne;
H.-P. Meinzer;
I. Wolf
Show Abstract
The Fontan operation is a surgical treatment for patients with severe congenital heart diseases, where a biventricular correction of the heart cannot be achieved. In these cases, a uni-ventricular system is established. During the last step of surgery, a tunnel segment is placed to connect the inferior caval vein directly with the pulmonary artery, bypassing the right atrium and ventricle. Thus, the existing ventricle works for the body circulation, while the venous blood is passively directed to the pulmonary arteries. Fontan tunnels can be placed intra- and extracardially. The location, length and shape of the tunnel must be planned accurately. Furthermore, if the tunnel is placed extracardially, it must be positioned between other anatomical structures without constraining them. We developed a software system to support planning of the tunnel location, shape, and size, making pre-operative preparation of the tunnel material possible. The system allows for interactive placement and adjustment of the tunnel, affords a three-dimensional visualization of the virtual Fontan tunnel inside the thorax, and provides a quantification of the length, circumferences and diameters of the tunnel segments. The visualization and quantification can be used to plan and prepare the tunnel material for surgery in order to reduce the intra-operative time and to improve the fit of the tunnel patch.
A robotic assistant system for cardiac interventions under MRI guidance
Author(s):
Ming Li;
Dumitru Mazilu;
Bradford J. Wood;
Keith A. Horvath;
Ankur Kapoor
Show Abstract
In this paper we present a surgical assistant system for implanting prosthetic aortic valve transapically under MRI
guidance, in a beating heart. The system integrates an MR imaging system, a robotic system, as well as user interfaces
for a surgeon to plan the procedure and manipulate the robot. A compact robotic delivery module mounted on a robotic
arm is used for delivering both balloon-expandable and self-expanding prosthesis. The system provides different user
interfaces at different stages of the procedure. A compact fiducial pattern close to the volume of interest is proposed for
robot registration. The image processing and the transformation recovery methods using this fiducial in MRI are
presented. The registration accuracy obtained by using this compact fiducial is comparable to that of the larger multi-spherical
marker registration method. The registration accuracy using these two methods is less than 0.62±0.50 deg (mean ± std.
dev.) and 0.63±0.72 deg (mean ± std. dev.), respectively. We evaluated each of the components and show that they can
work together to form a complete system for transapical aortic valve replacement.
Integration of trans-esophageal echocardiography with magnetic tracking technology for cardiac interventions
Author(s):
John T. Moore;
Andrew D. Wiles;
Chris Wedlake;
Daniel Bainbridge;
Bob Kiaii;
Ana Luisa Trejos;
Rajni Patel;
Terry M. Peters
Show Abstract
Trans-esophageal echocardiography (TEE) is a standard component of patient monitoring during most cardiac
surgeries. In recent years magnetic tracking systems (MTS) have become sufficiently robust to function effectively
in appropriately structured operating room environments. The ability to track a conventional multiplanar 2D
TEE transducer in 3D space offers considerable potential by greatly expanding the cumulative field of view of cardiac
anatomy beyond the limited field of view provided by 2D and 3D TEE technology. However, there is currently
no TEE probe manufactured with MTS technology embedded in the transducer, which means sensors must be
attached to the outer surface of the TEE. This leads to potential safety issues for patients, as well as potential
damage to the sensor during procedures. This paper presents a standard 2D TEE probe fully integrated with
MTS technology. The system is evaluated in an environment free of magnetic and electromagnetic disturbances,
as well as a clinical operating room in the presence of a da Vinci robotic system. Our first integrated TEE
device is currently being used in animal studies for virtual reality-enhanced ultrasound guidance of intracardiac
surgeries, while the "second generation" TEE is in use in a clinical operating room as part of a project to
measure perioperative heart shift and optimal port placement for robotic cardiac surgery. We demonstrate
excellent system accuracy for both applications.
2D/3D registration using only single-view fluoroscopy to guide cardiac ablation procedures: a feasibility study
Author(s):
Pascal Fallavollita
Show Abstract
The CARTO XP is an electroanatomical cardiac mapping system that provides 3D color-coded maps of the
electrical activity of the heart; however, it is expensive and can only use a single costly magnetic catheter for each
patient intervention. Aim: To develop an affordable fluoroscopic navigation system that could shorten the duration of RF
ablation procedures and increase their efficacy. Methodology: A 4-step filtering technique was implemented to
extract the tip electrode of an ablation catheter visible in single-view C-arm images and to calculate its width. The
width is directly proportional to the depth of the catheter. Results: For phantom experimentation, when displacing a 7-
French catheter at 1 cm intervals away from an X-ray source, the recovered depth error using a single image was 2.05 ± 1.47
mm, whereas depth errors improved to 1.55 ± 1.30 mm when using an 8-French catheter. In clinical experimentation,
twenty posterior and left lateral images of a catheter inside the left ventricle of a mongrel dog were acquired. The
standard error of estimate for the recovered depth of the tip-electrode of the mapping catheter was 13.1 mm and 10.1 mm
respectively for the posterior and lateral views. Conclusions: A filtering implementation using single-view C-arm images
showed that it was possible to recover depth in a phantom study and proved adequate in clinical experimentation based on
isochronal map fusion results.
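The underlying width-to-depth relation is that of a pinhole-like projection model: the projected width of the tip electrode scales inversely with its distance from the X-ray source. A minimal sketch with illustrative numbers is shown below; the paper's 4-step filtering for measuring the width is not reproduced.

```python
# Minimal sketch of recovering depth from the measured electrode width under
# a pinhole-like C-arm model (illustrative values, not the paper's pipeline).
def depth_from_width(width_px, true_width_mm, focal_px):
    """Estimate the source-to-electrode distance [mm] from the projected width."""
    return focal_px * true_width_mm / width_px

# A 7-French catheter has an outer diameter of about 2.33 mm:
# z = depth_from_width(width_px=14.0, true_width_mm=2.33, focal_px=6000.0)
```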
Segmentation of carotid arteries by graph-cuts using centerline models
Author(s):
Mehmet A. Gülsün;
Hüseyin Tek
Show Abstract
This document presents a semi-automatic method for segmenting carotid arteries in contrast enhanced (CE)-
CT angiography (CTA) scans. The segmentation algorithm extracts the lumen of carotid arteries between user
specified locations. Specifically, the algorithm first detects the centerline representations between the user placed
seed points. This centerline extraction algorithm is based on a minimal path detection method which operates
on a medialness map. The lumen of carotid arteries is then extracted by graph-cuts optimization technique
using the detected centerlines as input. The distance from the centerline representation is used to normalize the
gradient based edge weights of the graph. It is shown that this algorithm can successfully segment the carotid
arteries without including calcified and non-calcified plaques in the segmentation results.
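The centerline step can be illustrated by a minimal-cost path between the two user seeds through a cost map derived from the medialness image (high medialness, low cost). The sketch below uses scikit-image's route_through_array as a stand-in for the paper's minimal path detection; the graph-cuts lumen extraction is not shown.

```python
# Minimal sketch: minimal-cost centerline between two seeds on a medialness map.
from skimage.graph import route_through_array

def extract_centerline(medialness, seed_start, seed_end, eps=1e-6):
    """medialness: 2D/3D array; seeds: index tuples. Returns a list of voxel indices."""
    cost = 1.0 / (medialness + eps)        # favor voxels near the vessel axis
    path, _ = route_through_array(cost, seed_start, seed_end, fully_connected=True)
    return path
```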
An evaluative tool for preoperative planning of brain tumor resection
Author(s):
Aaron M. Coffey;
Ishita Garg;
Michael I. Miga;
Reid C. Thompson
Show Abstract
A patient specific finite element biphasic brain model has been utilized to codify a surgeon's experience by establishing
quantifiable biomechanical measures to score orientations for optimal planning of brain tumor resection. When faced
with evaluating several potential approaches to tumor removal during preoperative planning, the goal of this work is to
facilitate the surgeon's selection of a patient head orientation such that tumor presentation and resection is assisted via
favorable brain shift conditions rather than trying to allay confounding ones. Displacement-based measures consisting of
area classification of the brain surface shifting in the craniotomy region and lateral displacement of the tumor center
relative to an approach vector defined by the surgeon were calculated over a range of orientations and used to form an
objective function. The objective function was used in conjunction with Levenberg-Marquardt optimization to find the
ideal patient orientation. For a frontal lobe tumor presentation the model predicts an ideal orientation that indicates the
patient should be placed in a lateral decubitus position on the side contralateral to the tumor in order to minimize
unfavorable brain shift.
Computer-aided planning for endovascular treatment of intracranial aneurysms (CAPETA)
Author(s):
Ashraf Mohamed;
Eleni Sgouritsa;
Hesham Morsi;
Hashem Shaltoni;
Michel E. Mawad;
Ioannis A. Kakadiaris
Show Abstract
Endovascular treatment planning of intracranial aneurysms requires accurate quantification of their geometric
parameters, including the neck length, dome height and maximum diameter. Today, the geometry of intracranial
aneurysms is typically quantified manually based on three-dimensional (3D) Digital Subtraction Angiography
(DSA) images. Since the repeatability of manual measurements is not guaranteed and their accuracy is dependent
on the experience of the treating physician, we propose a semi-automated approach for computer-aided
measurement of these parameters. In particular, a tubular deformable model, initialized based on user-provided
points, is first fit to the surface of the parent artery. An initial estimate of the aneurysmal segment is obtained
based on differences between the two surfaces. A 3D deformable contour model is then used to localize the
aneurysmal neck and to separate its dome surface from the parent artery. Finally, approaches for estimation of
the clinically relevant geometric parameters are applied based on the aneurysmal neck and dome surface. Results
on 19 3D DSA datasets of saccular aneurysms indicate that, for the maximum diameter, the standard deviation
of the difference between the proposed approach and two independent manual sets of measurements obtained by
expert readers is similar to the inter-rater standard deviation. For the neck length and dome height, the results
improve considerably if we exclude datasets with high deviation from the manual measurements.
A novel contrast for DTI visualization for thalamus delineation
Author(s):
Xian Fan;
Meredith Thompson;
John A. Bogovic;
Pierre-Louis Bazin;
Jerry L. Prince
Show Abstract
It has been recently shown that thalamic nuclei can be automatically segmented using diffusion tensor images (DTI)
under the assumption that principal fiber orientation is similar within a given nucleus and distinct between adjacent
nuclei. Validation of these methods, however, is challenging because manual delineation is hard to carry out due to the
lack of images showing contrast between the nuclei. In this paper, we present a novel gray-scale contrast for DTI
visualization that accentuates voxels in which the orientations of the principal eigenvectors are changing, thus providing
an edge map for the delineation of some thalamic nuclei. The method uses the principal fiber orientation computed from
the diffusion tensor at each voxel. The three-dimensional orientations of the principal eigenvectors are
represented as five-dimensional vectors, and the spatial gradient (matrix) of these vectors provides information about
spatial changes in tensor orientation. In particular, an edge map is created by computing the Frobenius norm of this
gradient matrix. We show that this process reveals distinct edges between large nuclei in the thalamus, thereby making
manual delineation of the thalamic nuclei possible. We briefly describe a protocol for the manual delineation of thalamic
nuclei based on this edge map used in conjunction with a registered T1-weighted MR image, and present a preliminary
multi-rater evaluation of the volumes of thalamic nuclei in several subjects.
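A minimal sketch of the edge-map computation, assuming the Knutsson mapping as the five-dimensional, sign-invariant representation of the principal eigenvector (the abstract does not name the specific mapping, so this choice is an assumption).

import numpy as np

def knutsson_map(v):
    # Map unit orientation vectors of shape (..., 3) to 5-D vectors that are
    # invariant to sign flips of the eigenvector (Knutsson mapping, assumed here).
    x, y, z = v[..., 0], v[..., 1], v[..., 2]
    return np.stack([x * x - y * y, 2 * x * y, 2 * x * z, 2 * y * z,
                     (2 * z * z - x * x - y * y) / np.sqrt(3.0)], axis=-1)

def orientation_edge_map(principal_dirs):
    # principal_dirs: (X, Y, Z, 3) principal eigenvectors of the diffusion tensors.
    m = knutsson_map(principal_dirs)            # (X, Y, Z, 5)
    grads = np.gradient(m, axis=(0, 1, 2))      # spatial gradient of each component
    return np.sqrt(sum((g ** 2).sum(axis=-1) for g in grads))  # Frobenius norm per voxel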
Evaluating a visualization of uncertainty in probabilistic tractography
Author(s):
Anette von Kapri;
Tobias Rick;
Svenja Caspers;
Simon B. Eickhoff;
Karl Zilles;
Torsten Kuhlen
Show Abstract
In this paper we evaluate a visualization approach for representing uncertainty information in probabilistic fiber
pathways in the human brain. We employ a semi-transparent volume rendering method where probabilities of
fiber tracts are conveyed by colors and opacities (cf. Figure 1). Anatomical orientation is provided by placing
anatomical landmarks in the form of cortically or functionally defined brain areas. In order to quantify the effectiveness
of our approach we have conducted a formal user study concerning preferred anatomic context information and
coloring of fiber tracts.
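As a simple illustration of mapping probability to color and opacity for direct volume rendering, consider the transfer function below; the actual color scheme and opacity mapping used in the study are not specified here, so this choice is only an assumption.

import numpy as np

def probability_transfer_function(prob, gamma=0.7):
    # prob: array of connection probabilities in [0, 1].
    p = np.clip(prob, 0.0, 1.0) ** gamma
    rgba = np.empty(p.shape + (4,))
    rgba[..., 0] = p          # red increases with probability
    rgba[..., 1] = 0.2        # fixed green component
    rgba[..., 2] = 1.0 - p    # blue marks uncertain regions
    rgba[..., 3] = p          # opacity proportional to probability, so uncertain voxels fade out
    return rgba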
Graphical user interfaces for simulation of brain deformation in image-guided neurosurgery
Author(s):
Xiaoyao Fan;
Songbai Ji;
Pablo Valdes;
David W. Roberts;
Alex Hartov;
Keith D. Paulsen
Show Abstract
In image-guided neurosurgery, preoperative images are typically used for surgical planning and intraoperative guidance.
The accuracy of preoperative images can be significantly compromised by intraoperative brain deformation. To
compensate for brain shift, biomechanical finite element models have been used to assimilate intraoperative data to
simulate brain deformation. The clinical feasibility of the approach strongly depends on its accuracy and efficiency. In
order to facilitate and streamline data flow, we have developed graphical user interfaces (GUIs) to provide efficient
image updates in the operating room (OR). The GUIs are organized in a top-down hierarchy with a main control panel
that invokes and monitors a series of sub-GUIs dedicated to performing tasks involved in various aspects of the computation of
whole-brain deformation. These GUIs are used to segment the brain, generate case-specific brain meshes, and assign and
visualize case-specific boundary conditions (BC). Registration between intraoperative ultrasound (iUS) images acquired
pre- and post-durotomy is also facilitated by a dedicated GUI to extract sparse displacement data used to drive a
biomechanical model. Computed whole-brain deformation is then used to morph preoperative MR images (pMR) to
generate a model-updated image set (i.e., uMR) for intraoperative guidance (accuracy of 1-2 mm). These task-driven
GUIs have been designed to be fault-tolerant, user-friendly, and sufficiently automated. In this paper, we present the
modular components of the GUIs and demonstrate the typical workflow through a clinical patient case.
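The image-update step at the end of the pipeline, morphing pMR with the computed whole-brain deformation to obtain uMR, can be sketched as follows; the displacement-field convention and interpolation order are assumptions, not the authors' exact implementation.

import numpy as np
from scipy.ndimage import map_coordinates

def update_mr(pmr, displacement):
    # pmr: preoperative MR volume (X, Y, Z); displacement: dense field (3, X, Y, Z)
    # in voxel units, mapping each output (uMR) voxel back into pMR coordinates.
    grid = np.indices(pmr.shape).astype(float)     # identity sampling grid
    coords = grid + displacement
    return map_coordinates(pmr, coords, order=1, mode="nearest")   # trilinear resampling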
An integrated model-based neurosurgical guidance system
Author(s):
Songbai Ji;
Xiaoyao Fan;
Kathryn Fontaine;
Alex Hartov;
David Roberts;
Keith Paulsen
Show Abstract
Maximal tumor resection without damaging healthy tissue in open cranial surgeries is critical to the prognosis for
patients with brain cancers. Preoperative images (e.g., preoperative magnetic resonance images (pMR)) are typically
used for surgical planning as well as for intraoperative image-guidance. However, brain shift even at the start of surgery
significantly compromises the accuracy of neuronavigation, if the deformation is not compensated for. Compensating for
brain shift during surgical operation is, therefore, critical for improving the accuracy of image-guidance and ultimately,
the accuracy of surgery. To this end, we have developed an integrated neurosurgical guidance system that incorporates
intraoperative three-dimensional (3D) tracking, acquisition of volumetric true 3D ultrasound (iUS), stereovision (iSV)
and computational modeling to efficiently generate model-updated MR image volumes for neurosurgical guidance. The
system is implemented with real-time LabVIEW to provide high efficiency in data acquisition, as well as with MATLAB to
offer computational convenience in data processing and in the development of graphical user interfaces related to
computational modeling. In a typical patient case, the patient in the operating room (OR) is first registered to the pMR
image volume. Sparse displacement data extracted from coregistered intraoperative US and/or stereovision images are
employed to guide a computational model that is based on consolidation theory. Computed whole-brain deformation is
then used to generate a model-updated MR image volume for subsequent surgical guidance. In this paper, we present the
key modular components of our integrated, model-based neurosurgical guidance system.
Augmented reality guidance system for peripheral nerve blocks
Author(s):
Chris Wedlake;
John Moore;
Maxim Rachinsky;
Daniel Bainbridge;
Andrew D. Wiles;
Terry M. Peters
Show Abstract
Peripheral nerve block treatments are ubiquitous in hospitals and pain clinics worldwide. State-of-the-art
techniques use ultrasound (US) guidance and/or electrical stimulation to verify needle tip location. However,
problems such as needle-US beam alignment, poor echogenicity of block needles and US beam thickness can
make it difficult for the anesthetist to know the exact needle tip location. Inaccurate therapy delivery raises
obvious safety and efficacy issues. We have developed and evaluated a needle guidance system that makes use
of a magnetic tracking system (MTS) to provide an augmented reality (AR) guidance platform to accurately
localize the needle tip as well as its projected trajectory. Five anesthetists and five novices performed simulated
nerve block deliveries in a polyvinyl alcohol phantom to compare needle guidance under US alone to US placed in
our AR environment. Our phantom study demonstrated a decrease in targeting attempts, a decrease in contact with
critical structures, and an improvement in accuracy to 0.68 mm RMS, compared with 1.34 mm RMS under US guidance alone.
Currently, the MTS uses 18- and 21-gauge hypodermic needles with a 5-degree-of-freedom sensor located at the
needle tip. These needles can only be sterilized using an ethylene oxide process. In the interest of providing
clinicians with a simple and efficient guidance system, we also evaluated attaching the sensor at the needle hub as
a simple clip-on device. To do this, we simultaneously performed a needle bending study to assess the reliability
of a hub-based sensor.
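A small geometric sketch of how a tracked needle's projected trajectory can be intersected with the US image plane for the AR overlay; the function and variable names are illustrative and not part of the described system.

import numpy as np

def needle_plane_intersection(tip, direction, plane_point, plane_normal):
    # tip, direction: tracked needle tip position and axis (world coordinates).
    # plane_point, plane_normal: a point on the US image plane and its normal.
    direction = direction / np.linalg.norm(direction)
    denom = float(direction @ plane_normal)
    if abs(denom) < 1e-9:
        return None                                   # needle parallel to the plane
    t = float((plane_point - tip) @ plane_normal) / denom
    return tip + t * direction if t >= 0 else None    # only project forward of the tip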
Ultrasound guided spine needle insertion
Author(s):
Elvis C. S. Chen;
Parvin Mousavi;
Sean Gill;
Gabor Fichtinger;
Purang Abolmaesumi
Show Abstract
An ultrasound (US)-guided, CT-augmented spine needle insertion navigation system is introduced. The
system consists of an electromagnetic (EM) sensor, a US machine, and a preoperative CT volume of the patient
anatomy. A three-dimensional (3D) US volume is reconstructed intraoperatively from a set of two-dimensional
(2D) freehand US slices and is coregistered with the preoperative CT. This allows the preoperative CT volume to
be used in the intraoperative clinical coordinate frame. The spatial relationships between the patient anatomy, the surgical
tools, and the US transducer are tracked using the EM sensor and displayed with respect to the CT volume.
The pose of the US transducer is used to interpolate the CT volume, providing the physician with a 2D "x-ray
vision" to guide the needle insertion. Many of the system software components are GPU-accelerated, allowing
real-time performance of the guidance system in a clinical setting.
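The "2D x-ray vision" step, interpolating the registered CT along the tracked US image plane, can be sketched as below; the pose convention and pixel spacing are assumptions.

import numpy as np
from scipy.ndimage import map_coordinates

def reslice_ct(ct, pose, size=(256, 256), spacing=1.0):
    # ct: CT volume in voxel coordinates; pose: 4x4 transform mapping US image
    # coordinates (u, v, 0, 1) into CT voxel coordinates (assumed convention).
    u, v = np.meshgrid(np.arange(size[0]) * spacing,
                       np.arange(size[1]) * spacing, indexing="ij")
    pts = np.stack([u, v, np.zeros_like(u), np.ones_like(u)], axis=0)  # (4, H, W)
    ct_coords = (pose @ pts.reshape(4, -1))[:3]                        # (3, H*W)
    return map_coordinates(ct, ct_coords, order=1, mode="nearest").reshape(size)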
Statistical atlas based extrapolation of CT data
Author(s):
Gouthami Chintalapani;
Ryan Murphy;
Robert S. Armiger;
Jyri Lepisto;
Yoshito Otake;
Nobuhiko Sugano;
Russell H. Taylor;
Mehran Armand
Show Abstract
We present a framework to estimate the missing anatomical details from a partial CT scan with the help
of statistical shape models. The motivating application is periacetabular osteotomy (PAO), a technique for
treating developmental hip dysplasia, an abnormal condition of the hip socket that, if untreated, may lead to
osteoarthritis. The common goals of PAO are to reduce pain and joint subluxation and to improve the contact pressure
distribution by increasing the coverage of the femoral head by the hip socket. While current diagnosis and
planning are based on radiological measurements, because of the significant structural variations in dysplastic hips,
a computer-assisted geometrical and biomechanical planning based on CT data is desirable to help the surgeon
achieve optimal joint realignments. Most of the patients undergoing PAO are young females, hence it is usually
desirable to minimize the radiation dose by scanning only the joint portion of the hip anatomy. These partial
scans, however, do not provide enough information for biomechanical analysis due to the missing iliac region. A
statistical shape model of full pelvis anatomy is constructed from a database of CT scans. The partial volume is
first aligned with the statistical atlas using an iterative affine registration, followed by a deformable registration
step, and the missing information is inferred from the atlas. The atlas inferences are further enhanced by the
use of X-ray images of the patient, which are very common in an osteotomy procedure. The proposed method is
validated with a leave-one-out analysis. Osteotomy cuts are simulated, and the effect of the atlas-predicted
models on the actual procedure is evaluated.
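A minimal sketch of inferring the missing region from the statistical shape model once the partial scan is registered to the atlas: fit the leading PCA modes to the observed vertices in a least-squares sense and reconstruct the full shape. The data layout and mode count below are assumptions.

import numpy as np

def infer_full_shape(mean_shape, modes, partial_pts, observed_mask, n_modes=10):
    # mean_shape: (3N,) mean pelvis vertices flattened as [x0, y0, z0, x1, ...];
    # modes: (3N, K) PCA modes; partial_pts: (n_obs, 3) observed vertices in atlas order;
    # observed_mask: boolean (N,) marking which atlas vertices are observed.
    P = modes[:, :n_modes]
    obs = np.repeat(observed_mask, 3)                 # expand vertex mask to x, y, z rows
    b, *_ = np.linalg.lstsq(P[obs], partial_pts.ravel() - mean_shape[obs], rcond=None)
    return (mean_shape + P @ b).reshape(-1, 3)        # full estimated shape, incl. missing region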
A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone
Author(s):
Harley Chan;
Ralph W. Gilbert;
Nitin A. Pagedar;
Michael J. Daly;
Jonathan C. Irish;
Jeffrey H. Siewerdsen
Show Abstract
Esthetic appearance is one of the most important factors in reconstructive surgery. The current practice of maxillary
reconstruction uses radial forearm, fibula, or iliac crest osteocutaneous flaps to recreate the complex three-dimensional
structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less
satisfactory esthetic outcome. Considering shape similarity and vasculature advantages, reconstructive surgeons have recently
explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that
quantitatively evaluates the morphological similarity of the scapula tip bone and palate based on a diagnostic volumetric
computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the
surface of a three-dimensional computer model. For surgical planning, this color map could potentially assist
the surgeon in choosing the orientation of the bone flap that best fits the reconstruction site. With approval from the
Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images
obtained from 10 patients. Each patient had CT scans covering the maxilla and the chest acquired on the same day. Based on this
image set, we simulated total, subtotal, and hemi-palate reconstructions. The simulation procedure included volume
segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation
of the minimum geometric distances and curvature between the STL models. Across the 10 patients' data, we found that the overall
root-mean-square (RMS) conformance was 3.71 ± 0.16 mm.
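The distance part of the conformance computation can be sketched with vertex clouds and a k-d tree; curvature comparison and the registration step are omitted, and the names below are illustrative.

import numpy as np
from scipy.spatial import cKDTree

def rms_conformance(scapula_pts, palate_pts):
    # Minimum distance from every (registered) scapular-tip vertex to the palate
    # model; the per-vertex distances can be rendered as the color map, and their
    # RMS summarizes the overall conformance.
    d, _ = cKDTree(palate_pts).query(scapula_pts)
    return np.sqrt(np.mean(d ** 2)), d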
Measurement of complex joint trajectories using slice-to-volume 2D/3D registration and cine MR
Author(s):
C. Bloch;
M. Figl;
C. Gendrin;
C. Weber;
E. Unger;
S. Aldrian;
W. Birkfellner
Show Abstract
A method for studying the in vivo kinematics of complex joints is presented.
It is based on automatic fusion of single-slice cine MR images, which capture the dynamics, with a static MR volume.
With the joint at rest, the 3D scan is taken. In these data the anatomical compartments are identified and segmented, resulting in a 3D volume of each individual part. In each of the cine MR images the joint parts are segmented, and their pose and position are derived using a 2D/3D slice-to-volume registration to these volumes.
The method is tested on the carpal joint because of its complexity and the small but complex motion of its compartments.
For a first study a human cadaver hand was scanned and the method was evaluated with
artificially generated slice images. Starting from random initial positions of about 5 mm translational and 12°
rotational deviation, 70 to 90 % of the registrations converged successfully to a deviation better than 0.5 mm
and 5°.
First evaluations using real data from a cine MR were promising.
The feasibility of the method was demonstrated. However, we experienced difficulties with the segmentation
of the cine MR images.
We therefore plan to examine different parameters for the image acquisition in future
studies.
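A compact sketch of the slice-to-volume step: resample an oblique slice from the static volume under a rigid pose and optimize the pose against the cine MR slice. The similarity measure (normalized cross-correlation), optimizer, and parameterization below are assumptions.

import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def resample_slice(volume, params, slice_shape, center):
    # params: 3 Euler angles (degrees) + 3 translations (voxels); center: slice origin in the volume.
    rot = Rotation.from_euler("xyz", params[:3], degrees=True).as_matrix()
    u, v = np.meshgrid(np.arange(slice_shape[0]) - slice_shape[0] / 2.0,
                       np.arange(slice_shape[1]) - slice_shape[1] / 2.0, indexing="ij")
    pts = np.stack([u, v, np.zeros_like(u)], axis=0).reshape(3, -1)
    coords = rot @ pts + (np.asarray(center, dtype=float) + params[3:])[:, None]
    return map_coordinates(volume, coords, order=1, mode="nearest").reshape(slice_shape)

def register_slice(cine_slice, volume, init, center):
    # Minimize negative normalized cross-correlation over the rigid pose.
    def cost(p):
        sim = resample_slice(volume, p, cine_slice.shape, center)
        a, b = cine_slice - cine_slice.mean(), sim - sim.mean()
        return -float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return minimize(cost, np.asarray(init, dtype=float), method="Powell").x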
Computer assisted 3D pre-operative planning tool for femur fracture orthopedic surgery
Author(s):
Pavan Gamage;
Sheng Quan Xie;
Patrice Delmas;
Wei Liang Xu
Show Abstract
Femur shaft fractures are caused by high-impact injuries and can affect gait functionality if not treated correctly. Until
recently, the pre-operative planning for femur fractures has relied on two-dimensional (2D) radiographs, light boxes,
tracing paper, and transparent bone templates. The recent availability of digital radiographic equipment has to some
extent improved the workflow for preoperative planning. Nevertheless, imaging is still limited to 2D X-rays, and
planning/simulation tools to support fragment manipulation and implant selection are still not available. Direct three-dimensional
(3D) imaging modalities such as Computed Tomography (CT) are also still restricted to a minority of
complex orthopedic procedures.
This paper proposes a software tool which allows orthopedic surgeons to visualize, diagnose, plan and simulate femur
shaft fracture reduction procedures in 3D. The tool utilizes frontal and lateral 2D radiographs to model the fracture
surface, separate a generic bone into the two fractured fragments, identify the pose of each fragment, and automatically
customize the shape of the bone. The use of 3D imaging allows full spatial inspection of the fracture providing different
views through the manipulation of the interactively reconstructed 3D model, and ultimately better pre-operative
planning.
Splint deformation measurement: a contribution to quality control in computer assisted surgery
Author(s):
Christoph Weber;
Michael Figl;
Kurt Schicho
Show Abstract
Setting up a reliable and accurate reference coordinate system is a crucial part in computer assisted navigated
surgery. As the use of splints is a well established technique for this purpose and any change in its geometry
directly influences the accuracy of the navigation, a regular monitoring of such deformations should occur as a
means of quality control.
This work presents a method to quantify such deformations based on computed tomography images of a splint
equipped with fiducial markers. Point-to-point registration is used to match the two data sets and some markers
near the navigation field are used to estimate the registration error. The Hausdorff distance, i.e., the
maximum of all minimal distances between two point sets, is applied to the surfaces of the models
as a measure of the overall change in geometry.
Finally, this quantification method is demonstrated using a computed tomography data set of such a splint
together with an artificially modified one, as an initial step toward a study examining the influence of the Sterrad
sterilisation system on acrylic splints.
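For illustration, the Hausdorff distance between two registered surface point sets can be computed as below (a symmetric variant over vertex samples; the authors' exact surface sampling is not specified).

import numpy as np
from scipy.spatial import cKDTree

def hausdorff_distance(a, b):
    # a, b: (N, 3) and (M, 3) vertex samples of the two registered splint surfaces.
    d_ab, _ = cKDTree(b).query(a)     # nearest b-point for every a-point
    d_ba, _ = cKDTree(a).query(b)     # nearest a-point for every b-point
    return max(d_ab.max(), d_ba.max())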
A three-dimensional finite element analysis of the osseointegration progression in the human mandible
Author(s):
Enas Esmail;
Noha Hassan;
Yasser Kadah
Show Abstract
In this study, three-dimensional (3D) finite element analysis was used to model the effect of the peri-implant
bone geometry and thickness on the biomechanical behavior of a dental implant/supporting bone system. The 3D finite
element model of the jaw bone, cancellous and cortical, was developed based on computerized tomography (CT) scan
technology while the dental implant model was created based on a commercially available implant design. Two models,
cylindrical and threaded, representing the peri-implant bone region were simulated. In addition, various thicknesses (0.1
mm, 0.3 mm, 0.5 mm) of the peri-implant bone region were modeled to account for the misalignment during the
drilling process. Different biomechanical properties of the peri-implant bone region were used to simulate the
progression of the osseointegration process with time. Four stages of osseointegration were modeled to mimic different
phases of tissue healing of the peri-implant region, starting with soft connective tissue and ending with complete bone
maturation. For the realistic threaded model of the peri-implant bone region, the maximum von Mises stress and
displacement in the dental implant and jaw bone were higher than those computed for the simple cylindrical peri-implant
bone region model. The average von Mises stress and displacement in the dental implant and the jaw bone decreased as
the osseointegration progressed with time for all thicknesses of the peri-implant bone region. On the other hand, the
maximum absolute vertical displacement of the dental implant increased as the drilled thickness of the peri-implant bone
region increased.
Visualization of 3D elbow kinematics using reconstructed bony surfaces
Author(s):
Emily A. Lalone;
Colin P. McDonald;
Louis M. Ferreira;
Terry M. Peters;
Graham J. W. King;
James A. Johnson
Show Abstract
An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has
been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper arm simulator.
Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact-based
registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the
simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring
elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow
and following radial head excision and replacement. Visualization of the registered humerus/ulna indicated an increase
in valgus angulation of the ulna with respect to the humerus after radial head excision. This increase in valgus angulation
was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings were
consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The
current technique was able to visualize a change in ulnar position in a single degree of freedom (DoF). Using this approach, the coupled
motion of the ulna in all six DoF can also be visualized.
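The fiducial-based positioning step amounts to a rigid point-based registration, which can be sketched with the standard SVD solution; this is a generic sketch, not the authors' specific contact-based implementation.

import numpy as np

def rigid_fit(src, dst):
    # Least-squares rotation R and translation t such that dst ~ R @ src + t,
    # given corresponding fiducial coordinates src, dst of shape (N, 3).
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against a reflection solution
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, dc - r @ sc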