- Front Matter: Volume 8316
- Visualization, Segmentation, and Registration
- Tracking and Radiation Therapy
- Keynote and Robotics
- Simulation and Modeling
- 2D/3D and Fluoroscopy
- Keynote and Ultrasound
- Optical, Laparoscopic, and Needle Techniques
- Prostate
- Cardiac and Vascular
- Neuro and Head
- Lung and Liver
- Poster Session: Visualization, Segmentation, and Registration
- Poster Session: Tracking and Radiation Therapy
- Poster Session: Robotics
- Poster Session: Simulation and Modeling
- Poster Session: 2D/3D and Fluoroscopy
- Poster Session: Acquisition Technologies
- Poster Session: Technology Evaluation
- Poster Session: Prostate
- Poster Session: Cardiac and Vascular
- Poster Session: Neuro and Head
- Poster Session: Lung and Abdomen
Front Matter: Volume 8316
This PDF file contains the front matter associated with SPIE Proceedings Volume 8316, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Visualization, Segmentation, and Registration
Deformable registration of the inflated and deflated lung for cone-beam CT-guided thoracic surgery
Intraoperative cone-beam CT (CBCT) could offer an important advance to thoracic surgeons in directly localizing
subpalpable nodules during surgery. An image-guidance system is under development using mobile C-arm CBCT to
directly localize tumors in the OR, potentially reducing the cost and logistical burden of conventional preoperative
localization and facilitating safer surgery by visualizing critical structures surrounding the surgical target (e.g.,
pulmonary artery, airways, etc.). To utilize the wealth of preoperative image/planning data and to guide targeting under
conditions in which the tumor may not be directly visualized, a deformable registration approach has been developed that
geometrically resolves images of the inflated (i.e., inhale or exhale) and deflated states of the lung. This novel technique
employs a coarse model-driven approach using lung surface and bronchial airways for fast registration, followed by an
image-driven registration using a variant of the Demons algorithm to improve target localization to within ~1 mm. Two
approaches to model-driven registration are presented and compared - the first involving point correspondences on the
surface of the deflated and inflated lung and the second a mesh evolution approach. Intensity variations (i.e., higher
image intensity in the deflated lung) due to expulsion of air from the lungs are accounted for using an a priori lung
density modification, and the resulting improvement in the performance of the intensity-driven Demons algorithm is
demonstrated. Preliminary results of the combined model-driven and intensity-driven registration process demonstrate
accuracy consistent with requirements in minimally invasive thoracic surgery in both target localization and critical
structure avoidance.
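The image-driven stage above uses a variant of the Demons algorithm. For illustration only, here is a minimal 1D NumPy sketch of the classic (Thirion-style) Demons update - not the authors' specific variant, and without the a priori lung density modification; all sizes and parameters are made up:

```python
import numpy as np

def gaussian_bump(x, center, width=6.0):
    return np.exp(-((x - center) / width) ** 2)

def demons_1d(fixed, moving, iters=200, eps=1e-9):
    """Classic Demons in 1D: iteratively refine a displacement field u
    so that moving(x + u(x)) matches fixed(x)."""
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    grad_f = np.gradient(fixed)
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)      # resample moving at x + u
        diff = warped - fixed
        u -= diff * grad_f / (grad_f ** 2 + diff ** 2 + eps)  # Demons force
        u = np.convolve(u, np.ones(5) / 5.0, mode="same")     # regularize u
    return u

x = np.arange(100, dtype=float)
fixed = gaussian_bump(x, 50.0)
moving = gaussian_bump(x, 54.0)                   # same bump, shifted 4 px
u = demons_1d(fixed, moving)
warped = np.interp(x + u, x, moving)
ssd_before = float(np.sum((moving - fixed) ** 2))
ssd_after = float(np.sum((warped - fixed) ** 2))
```

Each iteration pushes the displacement field along the fixed-image gradient wherever the warped moving image disagrees with the fixed image; the box smoothing stands in for the Gaussian regularization usually used.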
Incorporation of prior knowledge for region of change imaging from sparse scan data in image-guided surgery
This paper proposes to utilize a patient-specific prior to augment intraoperative sparse-scan data in order to accurately reconstruct the aspects of the region changed by a surgical procedure in image-guided surgery. When anatomical changes are introduced by a surgical procedure, only a sparse set of x-ray images is acquired, and the prior volume is registered
to these data. Since all the information of the patient anatomy except for the surgical change is already known from the
prior volume, we highlight only the change by creating difference images between the new scan and digitally
reconstructed radiographs (DRR) computed from the registered prior volume. The region of change (RoC) is
reconstructed from these sparse difference images by a penalized likelihood (PL) reconstruction method regularized by a
compressed sensing penalty. When the surgical changes are local and relatively small, the RoC reconstruction involves
only a small volume size and a small number of projections, allowing much faster computation and lower radiation dose
than is needed to reconstruct the entire surgical volume. The reconstructed RoC merges with the prior volume to
visualize an updated surgical field. We apply this novel approach to sacroplasty phantom data obtained from a cone-beam CT (CBCT) test bench and vertebroplasty data with a fresh cadaver acquired from a C-arm CBCT system with a flat-panel detector (FPD).
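The core of the approach - subtracting DRRs of the registered prior from the new sparse projections so that only the surgical change remains - can be sketched with a toy parallel-ray projector (shapes and values are purely illustrative, not the paper's forward model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior volume and "patient" volume with a local surgical change.
prior = rng.random((32, 32, 32))
patient = prior.copy()
patient[12:18, 12:18, 12:18] += 0.5      # region of change (e.g., injected cement)

def drr(volume, axis):
    """Toy DRR: parallel-ray line integral along one axis."""
    return volume.sum(axis=axis)

# Sparse set of views: here only two orthogonal projections.
views = [0, 1]
diffs = [drr(patient, a) - drr(prior, a) for a in views]
```

The difference images are zero except where the change projects, so the subsequent penalized-likelihood reconstruction only has to recover the small region of change from very few views.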
GPU-based iterative relative fuzzy connectedness image segmentation
This paper presents a parallel algorithm for the most advanced member of the fuzzy connectedness algorithm family, namely the iterative relative fuzzy connectedness (IRFC) segmentation method. The algorithm of IRFC, realized
via image foresting transform (IFT), is implemented by using NVIDIA's compute unified device architecture
(CUDA) platform for segmenting large medical image data sets. In the IRFC algorithm, there are two major
computational tasks: (i) computing the fuzzy affinity relations, and (ii) computing the fuzzy connectedness
relations and tracking labels for objects of interest. Both tasks are implemented as CUDA kernels, and a
substantial improvement in speed for both tasks is achieved. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 2.4x, 17.0x, and 42.7x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the CPU implementation of the algorithm.
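The first of the two computational tasks is computing fuzzy affinity relations between neighboring voxels. One common affinity choice is intensity homogeneity, shown here as a plain NumPy sketch (not a CUDA kernel, and not necessarily the exact affinity used in the paper):

```python
import numpy as np

def homogeneity_affinity(img, sigma=10.0):
    """Fuzzy affinity between each voxel and its +x / +y neighbors,
    based on intensity homogeneity: affinity is near 1 for similar
    intensities and falls off for large intensity differences."""
    img = img.astype(float)
    aff_x = np.exp(-((img[:, 1:] - img[:, :-1]) ** 2) / (2 * sigma ** 2))
    aff_y = np.exp(-((img[1:, :] - img[:-1, :]) ** 2) / (2 * sigma ** 2))
    return aff_x, aff_y

img = np.zeros((4, 4))
img[:, 2:] = 100.0                  # sharp edge between two flat regions
aff_x, aff_y = homogeneity_affinity(img)
```

Because every neighbor-pair affinity is independent, this step maps naturally onto one GPU thread per voxel pair, which is what makes the CUDA kernel formulation effective.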
Automatic anatomy recognition via fuzzy object models
To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition
(AAR) during radiological image reading becomes essential. As part of this larger goal, last year at this conference we
presented a fuzzy strategy for building body-wide group-wise anatomic models. In the present paper, we describe the
further advances made in fuzzy modeling and the algorithms and results achieved for AAR by using the fuzzy models.
The proposed AAR approach consists of three distinct steps: (a) building fuzzy object models (FOMs) for each population group G; (b) using the FOMs to recognize the individual objects in any given patient image I under group G; and (c) delineating the recognized objects in I. This paper focuses mostly on (b).
FOMs are built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. The hierarchical
pose relationships from the parent to offspring are codified in the FOMs. Several approaches are being explored
currently, grouped under two strategies, both being hierarchical: (ra1) those using search strategies; (ra2) those
strategizing a one-shot approach by which the model pose is directly estimated without searching. Based on 32 patient
CT data sets each from the thorax and abdomen and 25 objects modeled, our analysis indicates that objects do not all
scale uniformly with patient size. Even the simplest of the (ra2) strategies - recognizing the root object and then placing all other descendants as per the learned parent-to-offspring pose relationships - brings the models on average to within about 18 mm of the true locations.
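The one-shot placement just described can be pictured with a heavily simplified sketch: once the root object is recognized, each descendant is dropped at the root's position plus a learned mean offset. The offsets, object names, and translation-only pose below are all hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical learned mean parent-to-offspring offsets, in mm.
learned_offsets = {
    "liver": np.array([60.0, -20.0, 0.0]),
    "spleen": np.array([-80.0, -10.0, 5.0]),
}

def place_offspring(root_position, offsets):
    """One-shot placement: put each descendant at the recognized root's
    position plus its learned mean offset (no per-object search)."""
    return {name: root_position + off for name, off in offsets.items()}

root = np.array([250.0, 200.0, 100.0])   # recognized root object center
placed = place_offspring(root, learned_offsets)
```

A real FOM encodes full pose (and its variability), but the key point stands: descendants are positioned directly from learned relationships rather than searched for.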
Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology
Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for
intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of
interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs
but leaving out rib-cage and spine. The problem is addressed in a model based approach. The procedure has been
evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying
image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and
motion artifacts.
Tracking and Radiation Therapy
Error prediction for probes guided by means of fixtures
Probe guides are surgical fixtures that are rigidly attached to bone anchors in order to place a probe at a target with high
accuracy (RMS error < 1 mm). Applications include needle biopsy, the placement of electrodes for deep-brain
stimulation (DBS), spine surgery, and cochlear implant surgery. Targeting is based on pre-operative images, but
targeting errors can arise from three sources: (1) anchor localization error, (2) guide fabrication error, and (3) external
forces and torques. A well-established theory exists for the statistical prediction of target registration error (TRE) when
targeting is accomplished by means of tracked probes, but no such TRE theory is available for fixtured probe guides.
This paper provides that theory and shows that all three error sources can be accommodated in a remarkably simple
extension of existing theory. Both the guide and the bone with attached anchors are modeled as objects with rigid
sections and elastic sections, the latter of which are described by stiffness matrices. By relating minimization of elastic
energy for guide attachment to minimization of fiducial registration error for point registration, it is shown that the
expression for targeting error for the guide is identical to that for weighted rigid point registration if the weighting
matrices are properly derived from stiffness matrices and the covariance matrices for fiducial localization are augmented
with offsets in the anchor positions. An example of the application of the theory is provided for ear surgery.
A novel fully automatic system for the evaluation of electromagnetic tracker
Electromagnetic tracking (EMT) systems are gaining increased attention in various fields of image-guided surgery. One of the main problems related to EMT systems is their vulnerability to distortion caused by metallic objects. Several methods have been introduced to evaluate electromagnetic trackers; yet the data acquisition has to be performed manually in a time-consuming procedure, which often leads to sparse volume coverage. The aim of this work is to present a fully automatic calibration system. It consists of a novel parallel robotic arm and has the potential to collect a very large number of tracking data while scanning the entire tracking volume of a field generator. To prove the feasibility of our system, we evaluated two electromagnetic field generators (NDI Planar and Tabletop) in an ideal metal-free environment and in a clinical setup. Our proposed calibration robot performed successfully throughout the experiments and examined 1,000 positions in the tracking volume of each field generator (FG). According to the results, both FGs are highly accurate in an ideal environment. However, in the examined clinical setup, the Planar FG is strongly distorted by metallic objects, whereas the Tabletop FG provided very robust and accurate tracking even when metallic objects were lying directly underneath the FG.
Tracker-on-C for cone-beam CT-guided surgery: evaluation of geometric accuracy and clinical applications
Conventional surgical tracking configurations carry a variety of limitations in line-of-sight, geometric accuracy, and
mismatch with the surgeon's perspective (for video augmentation). With increasing utilization of mobile C-arms,
particularly those allowing cone-beam CT (CBCT), there is opportunity to better integrate surgical trackers at bedside to
address such limitations. This paper describes a tracker configuration in which the tracker is mounted directly on the C-arm. To maintain registration within a dynamic coordinate system, a reference marker visible across the full C-arm
rotation is implemented, and the "Tracker-on-C" configuration is shown to provide improved target registration error
(TRE) over a conventional in-room setup - (0.9±0.4) mm vs (1.9±0.7) mm, respectively. The system also can generate
digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool ("x-ray flashlight"), the tracker, or the
C-arm ("virtual fluoroscopy"), with geometric accuracy in virtual fluoroscopy of (0.4±0.2) mm. Using a video-based
tracker, planning data and DRRs can be superimposed on the video scene from a natural perspective over the surgical
field, with geometric accuracy (0.8±0.3) pixels for planning data overlay and (0.6±0.4) pixels for DRR overlay across all
C-arm angles. The field-of-view of fluoroscopy or CBCT can also be overlaid on real-time video ("Virtual Field Light")
to assist C-arm positioning. The fixed transformation between the x-ray image and tracker facilitated quick, accurate
intraoperative registration. The workflow and precision associated with a variety of realistic surgical tasks were
significantly improved using the Tracker-on-C - for example, nearly a factor of 2 reduction in time required for C-arm
positioning, reduction or elimination of dose in "hunting" for a specific fluoroscopic view, and confident placement of
the x-ray FOV on the surgical target. The proposed configuration streamlines the integration of C-arm CBCT with real-time tracking and demonstrated utility in a spectrum of image-guided interventions (e.g., spine surgery) benefiting from
improved accuracy, enhanced visualization, and reduced radiation exposure.
Application of 3D surface imaging in breast cancer radiotherapy
Purpose: Accurate dose delivery in deep-inspiration breath-hold (DIBH) radiotherapy for patients with breast cancer
relies on precise treatment setup and monitoring of the depth of the breath hold. This study entailed performance
evaluation of a 3D surface imaging system for image guidance in DIBH radiotherapy by comparison with cone-beam
computed tomography (CBCT).
Materials and Methods: Fifteen patients, treated with DIBH radiotherapy after breast-conserving surgery, were included.
The performance of surface imaging was compared to the use of CBCT for setup verification. Retrospectively, breast
surface registrations were performed for CBCT to planning CT as well as for a 3D surface, captured concurrently with
CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. For the differences
between setup errors, group mean, systematic and random errors were calculated. Furthermore, a residual error after
registration (RRE) was assessed for both systems by investigating the root-mean-square distance between the planning
CT surface and registered CBCT/captured surface.
Results: Good correlation between setup errors was found: R2 = 0.82, 0.86, and 0.82 in the left-right, cranio-caudal, and anterior-posterior directions, respectively. Systematic and random errors were ≤0.16 cm and ≤0.13 cm in all directions, respectively. RRE values for surface imaging and CBCT were on average 0.18 versus 0.19 cm, with standard deviations of 0.10 and 0.09 cm, respectively. Wilcoxon signed-rank testing showed that CBCT registrations resulted in higher RRE values than surface imaging registrations (p=0.003).
Conclusion: This performance evaluation study shows very promising results.
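For context, the group mean, systematic, and random errors reported above are commonly computed from per-patient setup-error distributions as follows; this is a standard convention (systematic error as the SD of patient means, random error as the RMS of per-patient SDs), and the paper's exact definitions may differ:

```python
import numpy as np

def population_stats(errors_per_patient):
    """errors_per_patient: list of 1D arrays, one array of setup-error
    values (one direction, in cm) per patient. Returns (group mean M,
    systematic error Sigma, random error sigma)."""
    means = np.array([np.mean(e) for e in errors_per_patient])
    sds = np.array([np.std(e, ddof=1) for e in errors_per_patient])
    M = float(np.mean(means))             # group mean
    Sigma = float(np.std(means, ddof=1))  # SD of patient means
    sigma = float(np.sqrt(np.mean(sds ** 2)))  # RMS of per-patient SDs
    return M, Sigma, sigma

# Toy data: 3 patients, each with several fractions (values in cm).
data = [np.array([0.1, 0.2, 0.15]),
        np.array([-0.05, 0.0, 0.05]),
        np.array([0.3, 0.25, 0.35])]
M, Sigma, sigma = population_stats(data)
```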
Improvement of tracking accuracy and stability by recursive image processing in real-time tumor-tracking radiotherapy system
In the real-time tumor-tracking radiotherapy (RTRT) system, fiducial markers are inserted in or near the target tumor in order to monitor the respiratory-induced motion of tumors. During radiation treatment, the markers
are detected by continuous fluoroscopy operated at 30 frames/sec. The marker position is determined by means
of a template pattern matching technique based on normalized cross correlation. With high tube voltage, large current, and long exposure, the fiducial marker is recognized accurately; however, the radiation dose due to X-ray fluoroscopy increases. On the other hand, by decreasing the fluoroscopy parameter settings,
the fiducial marker could be lost because the effect of statistical noise is increased. In the respiratory-gated
radiotherapy, the error of the image guidance will induce the reduction of the irradiation efficiency and accuracy.
In order to track the marker stably and accurately in low dose fluoroscopy, we propose the application of a
recursive filter. The effectiveness of the image processing is investigated by tracking the static marker and the
dynamic marker. The results suggest that the stability and the accuracy of the marker tracking can be improved
by applying the recursive image filter in low dose imaging.
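The proposed recursive filter suppresses statistical noise in the low-dose frames before template matching. As a minimal illustration, here is a first-order recursive (IIR) filter over a frame sequence - the simplest member of the family, not necessarily the paper's exact filter design:

```python
import numpy as np

def recursive_filter(frames, alpha=0.25):
    """First-order recursive (IIR) filter over a fluoroscopic frame
    sequence: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].
    Averages out quantum noise while still following slow motion."""
    out = np.empty_like(frames, dtype=float)
    out[0] = frames[0]
    for n in range(1, len(frames)):
        out[n] = alpha * frames[n] + (1 - alpha) * out[n - 1]
    return out

rng = np.random.default_rng(0)
clean = np.full((30, 16, 16), 100.0)          # static-marker scene
noisy = clean + rng.normal(0.0, 20.0, clean.shape)
filtered = recursive_filter(noisy)
noise_in = float(np.std(noisy[-1] - clean[-1]))
noise_out = float(np.std(filtered[-1] - clean[-1]))
```

Smaller `alpha` gives stronger noise suppression but more temporal lag, which is exactly the stability-versus-accuracy trade-off the abstract evaluates on static and dynamic markers.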
Model-based risk assessment for motion effects in 3D radiotherapy of lung tumors
Although 4D CT imaging becomes available in an increasing number of radiotherapy facilities, 3D imaging and
planning is still standard in current clinical practice. In particular for lung tumors, respiratory motion is a
known source of uncertainty and should be accounted for during radiotherapy planning - which is difficult by
using only a 3D planning CT. In this contribution, we propose applying a statistical lung motion model to
predict patients' motion patterns and to estimate dosimetric motion effects in lung tumor radiotherapy if only
3D images are available. Being generated based on 4D CT images of patients with unimpaired lung motion, the
model tends to overestimate lung tumor motion. It therefore promises conservative risk assessment regarding
tumor dose coverage. This is exemplarily evaluated using treatment plans of lung tumor patients with different
tumor motion patterns and for two treatment modalities (conventional 3D conformal radiotherapy and step-and-shoot intensity-modulated radiotherapy). For the test cases, 4D CT images are available; thus, a standard registration-based 4D dose calculation is also performed, which serves as a reference to judge the plausibility of the model-based 4D dose calculation. It will be shown that, if combined with an additional simple patient-specific breathing
surrogate measurement (here: spirometry), the model-based dose calculation provides reasonable risk assessment
of respiratory motion effects.
Keynote and Robotics
Medical robotics and computer-integrated interventional medicine
Russell H. Taylor
Computer-Integrated Interventional Medicine (CIIM) promises to have a profound impact on health care in the next 20 years, much as, and for many of the same reasons that, the marriage of computers and information processing methods with other technology has had on manufacturing, transportation, and other sectors of our society. Our basic premise is
that the steps of creating patient-specific computational models, using these models for planning, registering the models
and plans with the actual patient in the operating room, and using this information with appropriate technology to assist
in carrying out and monitoring the intervention are best viewed as part of a complete patient-specific intervention
process that occurs over many time scales. Further, the information generated in computer-integrated interventions can
be captured and analyzed statistically to improve treatment processes. This paper will explore these themes briefly,
using examples drawn from our work at the Engineering Research Center for Computer-Integrated Surgical Systems and
Technology (CISST ERC).
Does a robotic scrub nurse improve economy of movements?
Objective: Robotic assistance during surgery has been shown to be a useful resource, both to augment the surgical skills of the surgeon through tele-operation and to hand the surgical instruments to the surgeon, similar to a surgical tech. We evaluated the performance and effect of a gesture-driven surgical robotic nurse in the context of economy of movements during an abdominal incision and closure exercise with a simulator.
Methods: A longitudinal midline incision (100 mm) was performed on the simulated abdominal wall to enter the
peritoneal cavity without damaging the internal organs. The wound was then closed using a blunt needle ensuring that no
tissue is caught up by the suture material. All the instruments required to complete this task were delivered by a robotic
surgical manipulator directly to the surgeon. The instruments were requested through voice and gesture recognition. The
robotic system used a low end range sensor camera to extract the hand poses and for recognizing the gestures. The
instruments were delivered to the vicinity of the patient, at chest height and at a reachable distance to the surgeon. Task
performance measures for each of three abdominal incision and closure exercises were measured and compared to a
human scrub nurse instrument delivery action. Picking instrument position variance, completion time and trajectory of
the hand were recorded for further analysis.
Results: The variance of the position of the robot tip when delivering a surgical instrument was compared to that of the position when a human delivers the instrument. The variance was found to be 88.86% smaller in the robotic delivery than in the human delivery group. The mean time to complete the surgical exercise was 162.7 ± 10.1 s for the human assistant and 191.6 ± 3.3 s (P < .01) for the robotic assistant (standard display group).
Conclusion: Multimodal robotic scrub nurse assistant improves the surgical procedure by reducing the number of
movements (lower variance in the picking position). The variance of the picking point is closely related to the concept of
economy of movements in the operating room. Improving the effectiveness of the operating room can potentially
enhance the safety of surgical interventions without affecting the performance time.
The role of three-dimensional visualization in robotics-assisted cardiac surgery
Objectives: The purpose of this study was to determine the effect of three-dimensional (3D) versus two-dimensional
(2D) visualization on the amount of force applied to mitral valve tissue during robotics-assisted mitral valve
annuloplasty, and the time to perform the procedure in an ex vivo animal model. In addition, we examined whether these
effects are consistent between novices and experts in robotics-assisted cardiac surgery.
Methods: A cardiac surgery test-bed was constructed to measure forces applied by the da Vinci surgical system
(Intuitive Surgical, Sunnyvale, CA) during mitral valve annuloplasty. Both experts and novices completed roboticsassisted
mitral valve annuloplasty with 2D and 3D visualization.
Results: The mean time for both experts and novices to suture the mitral valve annulus and to tie sutures using 3D
visualization was significantly less than that required to suture the mitral valve annulus and to tie sutures using 2D vision
(p < 0.01). However, there was no significant difference in the maximum force applied by novices to the mitral valve
during suturing (p = 0.3) and suture tying (p = 0.6) using either 2D or 3D visualization.
Conclusion: This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in
robotics-assisted cardiac surgery.
Keywords: Robotics-assisted surgery, visualization, cardiac surgery
Simulation and Modeling
Evaluation of deformation accuracy of a virtual pneumoperitoneum method based on clinical trials for patient-specific laparoscopic surgery simulator
This paper evaluates deformation accuracy of a virtual pneumoperitoneum method by utilizing measurement data
of real deformations of patient bodies. Laparoscopic surgery is a less invasive alternative to traditional open surgery. In laparoscopic surgery, the pneumoperitoneum process is performed to create a viewing and working space. Although a virtual pneumoperitoneum method
based on 3D CT image deformation has been proposed for patient-specific laparoscopy simulators, quantitative
evaluation based on measurements obtained in real surgery has not been performed. In this paper, we evaluate
deformation accuracy of the virtual pneumoperitoneum method based on real deformation data of the abdominal
wall measured in operating rooms (ORs). The evaluation results are used to find optimal deformation parameters
of the virtual pneumoperitoneum method. We measure landmark positions on the abdominal wall on a 3D CT
image taken before performing a pneumoperitoneum process. The landmark positions are defined based on
anatomical structure of a patient body. We also measure the landmark positions on a 3D CT image deformed
by the virtual pneumoperitoneum method. To measure real deformations of the abdominal wall, we measure
the landmark positions on the abdominal wall of a patient before and after the pneumoperitoneum process
in the OR. We transform the landmark positions measured in the OR from the tracker coordinate system to
the CT coordinate system. A positional error of the virtual pneumoperitoneum method is calculated based
on positional differences between the landmark positions on the 3D CT image and the transformed landmark
positions. Experimental results based on eight surgical cases showed that the minimal positional error was 13.8 mm. The positional error can be decreased relative to the previous method by calculating optimal deformation parameters of the virtual pneumoperitoneum method from the experimental results.
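The error computation described above - transforming OR landmarks from tracker coordinates into CT coordinates and taking Euclidean distances to the predicted (deformed-CT) landmarks - can be sketched as follows. The rigid transform is assumed known from a prior tracker-to-CT registration, and all coordinate values below are illustrative:

```python
import numpy as np

def to_ct(points_tracker, R, t):
    """Map tracker-space points into CT space with a rigid transform
    (rotation R, translation t)."""
    return points_tracker @ R.T + t

def positional_errors(predicted_ct, measured_ct):
    """Per-landmark Euclidean distance (e.g., in mm) between predicted
    landmark positions on the deformed CT and the transformed
    OR-measured positions."""
    return np.linalg.norm(predicted_ct - measured_ct, axis=1)

# Toy example: identity rotation, known translation offset.
R = np.eye(3)
t = np.array([10.0, 0.0, -5.0])
tracker_pts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])
ct_pts = to_ct(tracker_pts, R, t)
pred = ct_pts + np.array([[3.0, 4.0, 0.0], [0.0, 0.0, 0.0]])
errs = positional_errors(pred, ct_pts)
```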
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due
to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its
requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear
elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems,
and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing
work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear
finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing
the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element
operations. We employ a virtual coupling method for separating deformable body simulation and collision
detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation.
The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with
haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the
material property of the tissue and the speed of colliding objects. Hence, additional efforts including dynamic
relaxation are required to improve the stability of the system.
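The TLED formulation advances nodal displacements with explicit central-difference time integration. A toy 1D analogue (a single damped spring node, not an actual FEM element; stiffness, damping, and time-step values are made up) illustrates the update rule:

```python
import numpy as np

def central_difference_step(u, u_prev, f_int, f_ext, m, dt):
    """Explicit central-difference update of a nodal displacement, the
    time integration used in TLED-style explicit dynamics:
    u_next = dt^2 * (f_ext - f_int) / m + 2*u - u_prev."""
    return dt * dt * (f_ext - f_int) / m + 2.0 * u - u_prev

# One node on a linear spring with damping, pulled by a constant force;
# the analytic equilibrium displacement is f_ext / k.
k, m, dt = 100.0, 1.0, 0.01          # stiffness, mass, time step
c = 2.0 * np.sqrt(k * m)             # critical damping for fast settling
f_ext = 1.0
u_prev = u = 0.0
for _ in range(5000):
    v = (u - u_prev) / dt            # backward-difference velocity
    f_int = k * u + c * v            # internal (spring + damping) force
    u, u_prev = central_difference_step(u, u_prev, f_int, f_ext, m, dt), u
equilibrium = f_ext / k
```

Explicit schemes like this need no global system solve (hence the GPU-friendly per-node updates), but they are only conditionally stable - which matches the stability sensitivity to material stiffness and collision speed reported above.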
Lung tumor motion prediction during lung brachytherapy using finite element model
A biomechanical model is proposed to predict deflated lung tumor motion caused by diaphragm respiratory motion. This
model can be very useful for targeting the tumor in tumor ablative procedures such as lung brachytherapy. To minimize
motion within the target lung, these procedures are performed while the lung is deflated. However, a significant amount of tissue deformation still occurs during respiration due to the diaphragm contact forces. In the absence of effective real-time image guidance, biomechanical models can be used to estimate tumor motion as a function of the diaphragm's position. To develop this model, the Finite Element Method (FEM) was employed. To demonstrate the concept, we conducted an
animal study of an ex-vivo porcine deflated lung with a tumor phantom. The lung was deformed by compressing a
diaphragm-mimicking cylinder against it. Before compression, a 3D CT image of this lung was acquired, segmented, and turned into a FE mesh. The lung tissue was modeled as a hyperelastic material with contact loading to
calculate the lung deformation and tumor motion during respiration. To validate the results from FE model, the motion
of a small area on the surface close to the tumor was tracked while the lung was being loaded by the cylinder. Good
agreement was demonstrated between the experimental results and the simulation results. Furthermore, the impact of uncertainties in the tissue's hyperelastic parameters in the FE model was investigated. For this purpose, we performed in-silico simulations with different hyperelastic parameters. This study demonstrated that the FEM was accurate and robust for
tumor motion prediction.
A method for constructing real-time FEM-based simulator of stomach behavior with large-scale deformation by neural networks
Ken'ichi Morooka,
Tomoyuki Taguchi,
Xian Chen,
et al.
This paper presents a method for simulating the behavior of the stomach with large-scale deformation. The simulator is generated by real-time FEM-based analysis using a neural network. Hollow organs exhibit various deformation patterns as both their shape and volume change; in this case, one network cannot learn the stomach deformation over such a huge number of deformation patterns. To overcome this problem, we propose a method of constructing the simulator from multiple neural networks by 1) partitioning a training dataset into several subsets, and 2) selecting the data included in each subset. From our experimental results, we conclude that our method can speed up the training
process of a neural network while keeping acceptable accuracy.
Pectus excavatum postsurgical outcome based on preoperative soft body dynamics simulation
Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which an abnormal
formation of the rib cage gives the chest a caved-in or sunken appearance. Today, the surgical correction of this
deformity is carried out in children and adults through the Nuss technique, which consists in the placement of a prosthetic bar
under the sternum and over the ribs. Although this technique has been shown to be safe and reliable, not all patients have
achieved adequate cosmetic outcome. This often leads to psychological problems and social stress, before and after the
surgical correction. This paper targets this particular problem by presenting a method to predict the patient surgical
outcome based on pre-surgical imaging information and dynamic modeling of the chest skin. The proposed approach uses
the patient pre-surgical thoracic CT scan and anatomical-surgical references to perform a 3D segmentation of the left
ribs, right ribs, sternum, and skin. The technique encompasses three steps: a) approximation of the cartilages between the ribs and the sternum through B-spline interpolation; b) a volumetric mass-spring model that connects two layers - an inner
skin layer based on the outer pleura contour and the outer surface skin; and c) displacement of the sternum according to
the prosthetic bar position.
A dynamic model of the skin around the chest wall region was generated, capable of simulating the effect of the
movement of the prosthetic bar along the sternum. The results were compared and validated against the patient's post-surgical skin surface acquired with the Polhemus FastSCAN system.
Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling
Minimally invasive surgery is medically complex and can heavily benefit from computer assistance. One way to help the
surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a
customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention
due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the
preoperative model. Haptic information, which could complement the visual sensor data, is not yet established. In
addition, biomechanical modeling of the surgical site can help in reflecting the changes which cannot be captured by
intraoperative sensors.
We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone
liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a
finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking
system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this
setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One
emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also
present our solution and first results concerning the integration of the force sensor, as well as the accuracy of the fusion
of force measurements, surface reconstruction and biomechanical modeling.
2D/3D and Fluoroscopy
Robust pigtail catheter tip detection in fluoroscopy
Show abstract
The pigtail catheter is a type of catheter inserted into the human body during interventional surgeries such
as the transcatheter aortic valve implantation (TAVI). The catheter is characterized by a tightly curled end in
order to remain attached to a valve pocket during the intervention, and it is used to inject contrast agent for the
visualization of the vessel in fluoroscopy. Image-based detection of this catheter is used during TAVI, in order to
overlay a model of the aorta and enhance visibility during the surgery. Due to the different possible projection
angles in fluoroscopy, the pigtail tip can appear in a variety of shapes, ranging from purely circular to elliptical or
even a straight line. Furthermore, the appearance of the catheter tip is radically altered when the contrast
agent is injected during the intervention or when it is occluded by other devices. All these factors make the
robust real-time detection and tracking of the pigtail catheter a challenging task. To address these challenges,
this paper proposes a new tree-structured, hierarchical detection scheme, based on a shape categorization of the
pigtail catheter tip, and a combination of novel Haar features. The proposed framework demonstrates improved
detection performance, through a validation on a data set consisting of 272 sequences with more than 20,000
images. The detection framework presented in this paper is not limited to pigtail catheter detection, but it can
also be applied successfully to any other shape-varying object with similar characteristics.
Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration
Show abstract
Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying
on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy
(e.g., the sacrum). This approach entails an undesirable amount of radiation and time, and is prone to counting errors due to
the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This
paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between
preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an
intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally
reconstructed radiographs (DRRs) and a robust similarity metric are computed on GPU to accelerate the process. Evaluation
in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in the
setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm)
and 99.8% success for the LAT view (projection error: 0.37 mm). An initial GPU implementation provided automatic target
localization within about 3 sec, with further improvement underway via a multi-GPU implementation. The ability to automatically label
vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries,
especially in large patients for whom manual methods are time consuming and error prone.
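The gradient-based similarity metric at the heart of this registration can be sketched as gradient correlation between the fluoroscopy image and the DRR. Gradient correlation is a common choice for this task; the paper's exact metric may differ, and the images here are synthetic:

```python
import numpy as np

def gradient_correlation(fixed, moving):
    """Gradient correlation between a fluoroscopy image and a DRR.

    Averages the normalized cross-correlation of the x- and y-gradient
    images, which emphasizes bone edges over soft-tissue intensity.
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0
    gy_f, gx_f = np.gradient(fixed)
    gy_m, gx_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))

img = np.random.default_rng(0).random((64, 64))
print(gradient_correlation(img, img))        # 1.0 for identical images
print(gradient_correlation(img, img.T) < 1)  # lower for misaligned images
```

Within a CMA-ES loop, this score would be evaluated for each candidate 6-DoF pose, with a DRR rendered per candidate.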
2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy
Show abstract
Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious
locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the
beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment
and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to
acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure
causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We
present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to
biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was
validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to
be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual
information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom
transformation space, with the ground truth plane obtained from registration as the starting point for the parameter
exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this
registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the
execution of a non-real-time registration, when needed during the procedure.
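The parameter-sweep analysis described above can be illustrated with a toy NCC profile over one translation axis; a smooth, near-convex profile peaking at zero shift is what supports using the metric for misalignment detection. The image and smoothing below are synthetic stand-ins:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(1)
fixed = rng.random((50, 80))
for _ in range(5):                       # box-filter smoothing along x
    fixed = (np.roll(fixed, 1, 1) + fixed + np.roll(fixed, -1, 1)) / 3

# Sweep a horizontal shift around the ground-truth alignment
profile = [ncc(fixed[:, 10:-10], np.roll(fixed, s, axis=1)[:, 10:-10])
           for s in range(-5, 6)]
print(profile.index(max(profile)))       # 5, i.e. the zero-shift entry
```

A full sweep would vary all six rigid degrees of freedom around the registered pose, as done for both NCC and mutual information in the paper.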
Error analysis of the x-ray projection geometry of camera-augmented mobile C-arm
Show abstract
The Camera-Augmented Mobile C-arm (CamC) augments X-ray by optical camera images and is used as an
advanced visualization and guidance tool in trauma and orthopedic surgery. However, in its current form the
calibration is suboptimal. We investigated and compared calibration and distortion correction between: (i) the
existing CamC calibration framework (ii) Zhang's calibration for video images, and (iii) the traditional C-arm
fluoroscopy calibration technique. Accuracy of the distortion correction for each of the three methods is
compared by analyzing the error based on a synthetic model and the linearity and cross-ratio properties. Also,
the accuracy of calibrated X-ray projection geometry is evaluated by performing C-arm pose estimation using
a planar pattern with known geometry. The RMS errors based on a synthetic model and pose estimation
show that the traditional C-arm method (μ=0.39 pixels) outperforms both Zhang's (μ=0.68 pixels) and the original
CamC (μ=1.07 pixels) methods. The relative pose estimation comparison shows that the translation error of
the traditional method (μ=0.25 mm) is lower than that of Zhang's (μ=0.41 mm) and the CamC (μ=1.13 mm) methods. In
conclusion, we demonstrated that the traditional X-ray calibration procedure outperforms the existing CamC
solution and Zhang's method for the calibration of C-arm X-ray projection geometry.
Automatic pose initialization for accurate 2D/3D registration applied to abdominal aortic aneurysm endovascular repair
Show abstract
Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative
3-D model of the abdominal aorta onto the intra-operative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D
space makes the 2-D/3-D overlay robust to changes of C-arm angulation. So far, 2-D/3-D registration methods
based on simulated X-ray projection images using multiple image planes have been shown to be able to provide
satisfactory 3-D registration accuracy. However, one drawback of the intensity-based 2-D/3-D registration methods is
that the similarity measure is usually highly non-convex and hence the optimizer can easily be trapped into local minima.
User interaction therefore is often needed in the initialization of the position of the 3-D model in order to get a successful
2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed, as an extension of our
previously proposed bi-plane 2-D/3-D registration method for AAA intervention [4]. The proposed method detects
vessel bifurcation points and spine centerline in both 2-D and 3-D images, and utilizes landmark information to bring the
3-D volume into a 15 mm capture range. The proposed landmark detection method was validated on a real dataset, and is
shown to be able to provide a good initialization for 2-D/3-D registration in [4], thus making the workflow fully
automatic.
Keynote and Ultrasound
Tracked 3D ultrasound targeting with an active cannula
Show abstract
The objective of our work is a system that enables both mechanically and electronically shapable thermal energy
deposition in soft tissue ablation. The overall goal is a system that can percutaneously (and through a single
organ surface puncture) treat tumors that are large, multiple, geometrically complex, or located too close to
vital structures for traditional resection. This paper focuses on mechanical steering and image guidance aspects
of the project. Mechanical steering is accomplished using an active cannula that enables repositioning of the
ablator tip without complete retraction. We describe experiments designed to evaluate targeting accuracy of the
active cannula (also known as a concentric tube robot) in soft tissues under tracked 3D ultrasound guidance.
Intraoperative ultrasound to stereocamera registration using interventional photoacoustic imaging
Show abstract
There are approximately 6000 hospitals in the United States, of which approximately 5400 employ minimally
invasive surgical robots for a variety of procedures. Furthermore, 95% of these robots require extensive
registration before they can be fitted into the operating room. These "registrations" are performed by surgical
navigation systems, which allow the surgical tools, the robot, and the surgeon to be synchronized, hence
operating in concert. The most common surgical navigation modalities include electromagnetic (EM) tracking
and optical tracking. Currently, these navigation systems are large, intrusive, come with a steep learning curve,
require sacrifices on the part of the attending medical staff, and are quite expensive (since they require several
components). Recently, photoacoustic (PA) imaging has become a practical and promising new medical imaging
technology. PA imaging only requires the minimal equipment standard with most modern ultrasound (US) imaging
systems as well as a common laser source. In this paper, we demonstrate that given a PA imaging system, as
well as a stereocamera (SC), the registration between the US image of a particular anatomy and the SC image
of the same anatomy can be obtained with reliable accuracy. In our experiments, we collected data for N = 80
trials of sample 3D US and SC coordinates. We then computed the registration between the SC and the US
coordinates. Upon validation, the mean error and standard deviation between the predicted sample coordinates
and the corresponding ground truth coordinates were found to be 3.33 mm and 2.20 mm respectively.
Optical, Laparoscopic, and Needle Techniques
Registration of partially overlapping surfaces for range image based augmented reality on mobile devices
Show abstract
Visualization of anatomical data for disease diagnosis, surgical planning, or orientation during interventional
therapy is an integral part of modern health care. However, as anatomical information is typically shown on
monitors provided by a radiological work station, the physician has to mentally transfer internal structures
shown on the screen to the patient. To address this issue, we recently presented a new approach to on-patient
visualization of 3D medical images, which combines the concept of augmented reality (AR) with an intuitive
interaction scheme. Our method requires mounting a range imaging device, such as a Time-of-Flight (ToF)
camera, to a portable display (e.g. a tablet PC). During the visualization process, the pose of the camera and
thus the viewing direction of the user is continuously determined with a surface matching algorithm. By moving
the device along the body of the patient, the physician is given the impression of looking directly into the human
body. In this paper, we present and evaluate a new method for camera pose estimation based on an anisotropic
trimmed variant of the well-known iterative closest point (ICP) algorithm. According to in-silico and in-vivo
experiments performed with computed tomography (CT) and ToF data of human faces, knees and abdomens,
our new method is better suited for surface registration with ToF data than the established trimmed variant of
the ICP, reducing the target registration error (TRE) by more than 60%. The TRE obtained (approx. 4-5 mm)
is promising for AR visualization, but clinical applications require maximization of robustness and run-time.
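The baseline the authors improve upon, the trimmed ICP, can be sketched as follows. This is a minimal isotropic version (brute-force matching, no anisotropic weighting), with synthetic point clouds and invented parameters:

```python
import numpy as np

def trimmed_icp_step(src, dst, keep=0.8):
    """One trimmed-ICP iteration: match, trim worst residuals, align rigidly."""
    # Brute-force nearest-neighbour correspondences
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    resid = d2[np.arange(len(src)), nn]
    # Trimming: keep only the best fraction of matches (robust to partial overlap)
    order = np.argsort(resid)[: int(keep * len(src))]
    p, q = src[order], dst[nn[order]]
    # Closed-form rigid alignment of the trimmed pairs (Kabsch / SVD)
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return src @ R.T + t, R, t

rng = np.random.default_rng(0)
dst = rng.random((200, 3))                     # reference surface points
ang = 0.05                                     # small initial misalignment
R0 = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ R0.T + np.array([0.03, 0.0, 0.0])  # misaligned copy
for _ in range(20):
    src, _, _ = trimmed_icp_step(src, dst)
print(np.abs(src - dst).max())                 # small residual after convergence
```

The anisotropic variant evaluated in the paper additionally weights residuals by the direction-dependent noise of the ToF camera, which is what yields the reported TRE reduction.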
The Kinect as an interventional tracking system
Show abstract
This work explores the suitability of low-cost sensors for "serious" medical applications, such as tracking of
interventional tools in the OR, for simulation, and for education. Although such tracking, i.e. the acquisition
of pose data for ultrasound probes, tissue manipulation tools, needles, but also tissue, bone, etc., is well
established, it relies mostly on external devices such as optical or electromagnetic trackers, both of which
mandate the use of special markers or sensors attached to each single entity whose pose is to be recorded, and
also require their calibration to the tracked entity, i.e. the determination of the geometric relationship between
the marker's and the object's intrinsic coordinate frames. The Microsoft Kinect sensor is a recently introduced
device for full-body tracking in the gaming market, but, owing to its wide range of tightly integrated sensors
(RGB camera, IR depth and greyscale camera, microphones, accelerometers, and basic actuation), it was quickly
hacked and used beyond this area. As its field of view and its accuracy are within reasonable usability
limits, we describe a medical needle-tracking system for interventional applications based on the Kinect
sensor, standard biopsy needles, and no necessary attachments, thus saving both cost and time. Its twin
cameras are used as a stereo pair to detect needle-shaped objects, reconstruct their pose in four degrees of
freedom, and provide information about the most likely candidate.
Feasibility of optical detection of soft tissue deformation during needle insertion
Show abstract
Needles provide an effective way to reach lesions in soft tissue and are frequently used for diagnosis
and treatment. Examples include biopsies, tumor ablation, and brachytherapy. Yet, precise
localization of the needle with respect to the target is complicated by motion and deformation of
the tissue during insertion.
We have developed a prototype needle with an embedded optical fiber that allows us to obtain
optical coherence tomography images of the tissue in front of the needle tip. Using the data and
particularly the Doppler information it is possible to estimate the motion of the needle tip with
respect to the surrounding soft tissue. We studied whether it is feasible to approximate the depth
in tissue by integrating over the relative velocity.
To validate the approach, the needle was driven into tissue phantoms using an articulated robotic
arm. The time when the needle entered and left the phantom was observed with optical cameras,
and the total motion of the robot was compared with the values computed from the Doppler OCT measurements.
Our preliminary results indicate that the Doppler data can provide additional information on
the needle position inside soft tissue. It could be used in addition to other image data to improve
precise needle navigation, particularly when other image modalities are subject to artifacts caused
by the needles.
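The depth-from-velocity idea, integrating the Doppler-derived relative tip-tissue velocity over time, reduces to a running sum. The sampling rate and velocity profile below are invented for illustration:

```python
import numpy as np

dt = 0.001                                # 1 kHz sampling (illustrative)
t = np.arange(0.0, 2.0, dt)
velocity = np.where(t < 1.0, 5.0, 0.0)    # mm/s: advance for 1 s, then hold
depth = np.cumsum(velocity) * dt          # running depth estimate in mm
print(depth[-1])                          # ~5.0 mm total insertion depth
```

In practice the velocity samples would come from the Doppler phase shift between successive OCT A-scans, and integration drift would accumulate, which is why the abstract positions this as complementary to other image data.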
Surgical motion characterization in simulated needle insertion procedures
Matthew S. Holden,
Tamas Ungi,
Derek Sargent,
et al.
Show abstract
PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both
promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to
determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its
five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated
with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree
of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to
determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle
crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by
identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For
amplitudes of translational noise greater than 0.01mm, the Markov model-based algorithm was significantly more
accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0mm).
For amplitudes less than 0.01 mm, the results of the two algorithms did not differ significantly. CONCLUSION: Task
segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as
opposed to a threshold-based algorithm for procedures involving translational noise.
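Identifying "the sequence of Markov models most likely to have produced the series of observations" is essentially Viterbi decoding. The two-task sketch below (velocity-based observations, Gaussian likelihoods, sticky transitions) uses invented parameters and is not the authors' five-task model:

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_prior):
    """Most likely hidden task sequence given per-frame log-likelihoods.

    obs_loglik: (T, K) log p(observation_t | task k)
    log_trans: (K, K) log transition probabilities; log_prior: (K,)
    """
    T, K = obs_loglik.shape
    score = log_prior + obs_loglik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # (from-task, to-task)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + obs_loglik[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two tasks distinguished by needle speed: "insert" (fast) vs "hold" (still)
rng = np.random.default_rng(0)
speed = np.concatenate([rng.normal(5, 1, 30), rng.normal(0, 1, 30)])
means = np.array([5.0, 0.0])
loglik = -0.5 * (speed[:, None] - means[None, :]) ** 2
trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))
labels = viterbi(loglik, trans, np.log([0.5, 0.5]))
print(labels[:5], labels[-5:])   # mostly task 0 first, then task 1
```

The sticky transition matrix is what lets the Markov approach ride out noisy frames that would trip a simple position/velocity threshold.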
Measurement of distances between anatomical structures using a translating stage with mounted endoscope
Show abstract
During endoscopic procedures it is often desirable to determine the distance between anatomical features. One such
clinical application is percutaneous cochlear implantation (PCI), which is a minimally invasive approach to the cochlea
via a single, straight drill path and can be achieved accurately using bone-implanted markers and a customized
microstereotactic frame. During clinical studies to validate PCI, traditional open-field cochlear implant surgery was
performed and prior to completion of the surgery, a customized microstereotactic frame designed to achieve the desired
PCI trajectory was attached to the bone-implanted markers. To determine whether this trajectory would have safely
achieved the target, a sham drill bit is passed through the frame to ensure that the drill bit would reach the cochlea
without damaging vital structures. Because of limited access within the facial recess, the distances from the bit to
anatomical features could not be measured with calipers. We hypothesized that an endoscope mounted on a sliding stage
that translates only along the trajectory, would provide sufficient triangulation to accurately measure these distances. In
this paper, the design, fabrication, and testing of such a system is described. The endoscope is mounted so that its optical
axis is approximately aligned with the trajectory. Several images are acquired as the stage is moved, and
three-dimensional reconstruction of selected points allows determination of distances. This concept also has applicability in a
large variety of rigid endoscopic interventions including bronchoscopy, laparoscopy, and sinus endoscopy.
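For a pinhole camera, translating the endoscope along its optical axis yields depth from the change in a point's image radius, which is the triangulation the stage provides. A hedged sketch with invented camera parameters:

```python
import numpy as np

def depth_from_axial_motion(r0, r1, delta):
    """Depth of a point from its image radius before (r0) and after (r1)
    translating the camera forward by delta along its optical axis.

    Pinhole model: r = f * R / Z, hence Z = delta * r1 / (r1 - r0).
    """
    return delta * r1 / (r1 - r0)

f = 500.0                       # focal length in pixels (illustrative)
Z, R = 30.0, 4.0                # true depth (mm) and off-axis radius (mm)
delta = 5.0                     # stage translation (mm)
r0 = f * R / Z                  # projected radius before the motion
r1 = f * R / (Z - delta)        # projected radius after moving 5 mm closer
print(depth_from_axial_motion(r0, r1, delta))  # recovers Z = 30.0
```

With depths recovered for several selected points, Euclidean distances between anatomical features follow directly.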
Keyframe selection for robust pose estimation in laparoscopic videos
Show abstract
Motion estimation based on point correspondences in two views is a classic problem in computer vision. In the
field of laparoscopic video sequences, even with state-of-the-art algorithms, stable motion estimation cannot
generally be guaranteed. Typically, a video from a laparoscopic surgery contains sequences where the surgeon
barely moves the endoscope. Such restricted movement causes a small ratio between baseline and distance
leading to unstable estimation results. Exploiting the fact that the entire sequence is known a priori, we propose
an algorithm for keyframe selection in a sequence of images. The key idea can be expressed as follows: if all
combinations of frames in a sequence are scored, the optimal selection can be described as a weighted directed
graph problem. We adapt the widely known Dijkstra's algorithm to find the best selection of frames. The
framework for keyframe selection can be used universally to find the best combination of frames for any reliable
scoring function. For instance, forward motion ensures the most accurate camera position estimation, whereas
sideward motion is preferred for reconstruction accuracy. Based on the distribution and the disparity of
point correspondences, we propose a scoring function which is capable of detecting poorly conditioned pairs of
frames. We demonstrate the potential of the algorithm focusing on accurate camera positions. A robot system
provides ground truth data. The laparoscopic environment is emulated with an industrial endoscope and a phantom.
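The graph formulation can be sketched as a shortest-path search over frame pairs: nodes are frames, forward edges carry the pair cost, and the minimum-cost path from first to last frame is the keyframe selection. The scoring function below is a toy stand-in for the disparity-based score in the paper:

```python
import heapq

def select_keyframes(n_frames, cost):
    """Minimum-cost path from the first to the last frame.

    cost(i, j) scores frame pair (i, j); lower means better conditioned.
    Edges only go forward in time, so Dijkstra applies directly.
    """
    dist = {0: 0.0}
    prev = {}
    heap = [(0.0, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == n_frames - 1:
            break
        if d > dist.get(i, float("inf")):
            continue                       # stale heap entry
        for j in range(i + 1, n_frames):
            nd = d + cost(i, j)
            if nd < dist.get(j, float("inf")):
                dist[j], prev[j] = nd, i
                heapq.heappush(heap, (nd, j))
    path, node = [n_frames - 1], n_frames - 1
    while node != 0:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy score: penalize pairs whose baseline (frame gap) is too small or too large
cost = lambda i, j: abs((j - i) - 3) + 1.0
print(select_keyframes(10, cost))          # prefers steps of about 3 frames
```

Any reliable pairwise scoring function, e.g. one built from correspondence distribution and disparity, can be plugged in as `cost` without changing the search.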
Improving interaction in navigated surgery by combining a pan-tilt mounted laser and a pointer with triggering
Show abstract
User interaction during navigated surgery is often a critical issue in the overall procedure, as several complex aspects
must be considered, such as sterility, workflow, field of view, and cognitive load. This work introduces a new approach
for intraoperative interaction that seamlessly fits the high surgical requirements. A navigation system, typically
consisting of a tracking system and a monitor for 3D virtual models, is augmented with a tracked pointer with triggering
functionality and a pan-tilt mounted laser. The pointer, which is sterile and can be applied for landmark-based organ
registration, is used for wireless interaction with the monitor scene. The laser system enables the calibration of the
monitor, which is out of the tracking system's range. Moreover, the laser beam can focus on any organ point defined on
the virtual model, which improves targeting or visual feedback during intervention. The calibration of the laser system,
monitor, and triggered pointer is achieved by an effective procedure, which can be easily repeated in the operating room. The
mathematical background of the calibration is based on the Levenberg-Marquardt and Umeyama's algorithms.
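Umeyama's algorithm, one of the two building blocks named above, has a closed form via SVD. A sketch of the similarity-transform variant (with scale), using synthetic corresponding points:

```python
import numpy as np

def umeyama(src, dst, with_scale=True):
    """Closed-form similarity transform: minimize ||dst - (s*R@src + t)||^2
    over corresponding point pairs (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)              # cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                        # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / sc.var(axis=0).sum() if with_scale else 1.0
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(2)
pts = rng.random((30, 3))
ang = 0.4
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.0 * pts @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = umeyama(pts, dst)
print(round(s, 6))                                   # recovers scale 2.0
print(np.abs(s * pts @ R.T + t - dst).max() < 1e-9)  # True
```

In the described system this closed form would seed the nonlinear Levenberg-Marquardt refinement of the laser/monitor/pointer calibration.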
Prostate
An elastic registration framework to estimate prostate deformation in endorectal MR scans
Show abstract
In an effort to improve the accuracy of transrectal ultrasound (TRUS)-guided needle biopsies of the prostate, it is
important to understand the non-rigid deformation of the prostate. To understand the deformation of the prostate when
an endorectal coil (ERC) is inserted, we develop an elastic registration framework to register prostate MR images with
and without ERC. Our registration framework uses robust point matching (RPM) to get the correspondence between the
surface landmarks in the source and target volumes followed by elastic body spline (EBS) registration based on the
corresponding landmark pairs. Together with the manual rigid alignment, we compared our registration framework
based on pure surface landmarks to the registration based on both surface and internal landmarks in the center of the
prostate. In addition, we assessed the impact of constraining the warping in the central zone of the prostate using a
Gaussian weighting function. Our results show that elastic surface-driven prostate registration is feasible, and that
internal landmarks further improve the registration in the central zone while they have little impact on the registration in
the peripheral zone of the prostate. Results varied case by case depending on the accuracy of the prostate segmentation
and the amount of warping present in each image pair. The most accurate results were obtained when using a Gaussian
weighting in the central zone to limit the EBS warping driven by surface points. This suggests that a Gaussian constraint
of the warping can effectively compensate for the limitations of the isotropic EBS deformation model, and for erroneous
warping inside the prostate created by inaccurate surface landmarks driving the EBS.
Implicit active contours for automatic brachytherapy seed segmentation in fluoroscopy
Show abstract
Motivation: In prostate brachytherapy, intra-operative dosimetry would be ideal to allow for rapid evaluation of
the implant quality while the patient is still in the treatment position. Such a mechanism, however, requires 3-D
visualization of the currently deposited seeds relative to the prostate. Thus, accurate, robust, and fully-automatic
seed segmentation is of critical importance in achieving intra-operative dosimetry. Methodology: Implanted
brachytherapy seeds are segmented by utilizing a region-based implicit active contour approach. Overlapping
seed clusters are then resolved using a simple yet effective declustering technique. Results: Ground-truth
seed coordinates were obtained via a published segmentation technique. A total of 248 clinical C-arm images
from 16 patients were used to validate the proposed algorithm resulting in a 98.4% automatic detection rate
with a corresponding 2.5% false-positive rate. The overall mean centroid error between the ground-truth and
automatic segmentations was measured to be 0.42 pixels, while the mean centroid error for overlapping seed
clusters alone was measured to be 0.67 pixels. Conclusion: Based on clinical data evaluation and validation,
robust, accurate, and fully-automatic brachytherapy seed segmentation can be achieved through the implicit
active contour framework and subsequent seed declustering method.
Deformable prostate registration from MR and TRUS images using surface error driven FEM models
Farheen Taquee,
Orcun Goksel,
S. Sara Mahdavi,
et al.
Show abstract
The fusion of TransRectal Ultrasound (TRUS) and Magnetic Resonance (MR) images of the prostate can aid
diagnosis and treatment planning for prostate cancer. Surface segmentations of the prostate are available in
both modalities. Our goal is to develop a 3D deformable registration method based on these segmentations and
a biomechanical model. The segmented source volume is meshed and a linear finite element model is created
for it. This volume is deformed to the target image volume by applying surface forces computed by assuming
a negative relative pressure between the non-overlapping regions of the volumes and the overlapping ones. This
pressure drives the model to increase the volume overlap until the surfaces are aligned. We tested our algorithm
on prostate surfaces extracted from post-operative MR and TRUS images for 14 patients, using a model with
elasticity parameters in the range reported in the literature for the prostate. We used three evaluation metrics
for validating our technique: the Dice Similarity Coefficient (DSC) (ideally equal to 1.0), which is a measure
of volume alignment, the volume change in source surface during registration, which is a measure of volume
preservation, and the distance between the urethras to assess the anatomical correctness of the method. We
obtained a DSC of 0.96±0.02 and a mean distance between the urethras of 1.5±1.4 mm. The change in the
volume of the source surface was 1.5±1.4%. Our results show that this method is a promising tool for
physically-based deformable surface registration.
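The Dice Similarity Coefficient used for validation is straightforward to compute from binary segmentations; a sketch with synthetic overlapping volumes standing in for the registered prostate surfaces:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient 2|A∩B| / (|A|+|B|) of binary volumes."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping spheres on a 40^3 voxel grid (illustrative volumes)
z, y, x = np.mgrid[:40, :40, :40]
s1 = (x - 19) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 100
s2 = (x - 21) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 100
print(round(dice(s1, s2), 2))   # partial overlap gives a DSC below 1.0
```

A DSC of 1.0 means perfect voxel overlap, so the reported 0.96±0.02 indicates near-complete volume alignment after registration.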
A molecular image-directed, 3D ultrasound-guided biopsy system for the prostate
Show abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate
cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to
30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound
image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization
system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsies in a
3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT
images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The
segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1 %. The registration method has
been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired
from two human patients. We are integrating the system for PET/CT directed, 3D ultrasound-guided, targeted biopsy in
human patients.
Development and preliminary evaluation of an ultrasonic motor actuated needle guide for 3T MRI-guided transperineal prostate interventions
Show abstract
Image guided prostate interventions have been accelerated by Magnetic Resonance Imaging (MRI) and robotic
technologies in the past few years. However, transrectal ultrasound (TRUS)-guided procedures still constitute the vast
majority in clinical practice, owing to the engineering and clinical complexity of MRI-guided robotic interventions.
Consequently, the great advantages and increasing availability of MRI have not been exploited to their full capacity in
the clinic. To let patients benefit from the advantages of MRI, we developed an MRI-compatible motorized needle guide device
"Smart Template" that resembles a conventional prostate template to perform MRI-guided prostate interventions with
minimal changes in the clinical procedure. The requirements and specifications of the Smart Template were identified
from our latest MRI-guided intervention system that has been clinically used in manual mode for prostate biopsy. Smart
Template consists of vertical and horizontal crossbars that are driven by two ultrasonic motors via timing-belt and
miter-gear transmissions. Navigation software that controls the crossbar position to provide needle insertion positions was also
developed. The software can be operated independently or interactively with an open-source navigation software, 3D
Slicer, that has been developed for prostate intervention. As a preliminary evaluation, MRI distortion and SNR tests were
conducted. Significant MRI distortion was found close to the threaded brass alloy components of the template. However,
the affected volume was limited outside the clinical region of interest. SNR values over routine MRI scan sequences for
prostate biopsy indicated insignificant image degradation during the presence of the robotic system and actuation of the
ultrasonic motors.
Cardiac and Vascular
Towards real-time 3D US-CT registration on the beating heart for guidance of minimally invasive cardiac interventions
Show abstract
Compared to conventional open-heart surgeries, minimally invasive cardiac interventions cause less trauma and
side effects to patients. However, the direct view of surgical targets and tools is usually not available in minimally invasive
procedures, which makes image-guided navigation systems essential. The choice of imaging modalities used in the
navigation systems must consider the capability of imaging soft tissues, spatial and temporal resolution, compatibility
and flexibility in the OR, and financial cost. In this paper, we propose a new means of guidance for minimally invasive
cardiac interventions that uses 3D real-time ultrasound images to show the intra-operative heart motion, together with
preoperative CT image(s) that provide high-quality 3D anatomical context. We also develop a method to
register intra-operative ultrasound and pre-operative CT images in close to real-time. The registration method has two
stages. In the first, anatomical features are segmented from the first frame of ultrasound images and the CT image(s). A
feature based registration is used to align those features. The result of this is used as an initialization in the second stage,
in which a mutual information based registration is used to register every ultrasound frame to the CT image(s). A GPU
based implementation is used to accelerate the registration.
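The two-stage scheme described above culminates in per-frame mutual-information (MI) registration. As a rough illustration of the metric itself (not the authors' GPU implementation; NumPy and the bin count are assumptions), MI can be estimated from a joint intensity histogram:

```python
import numpy as np

def mutual_information(us_frame, ct_slice, bins=32):
    """Mutual information between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(us_frame.ravel(), ct_slice.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint probability
    px = pxy.sum(axis=1, keepdims=True)   # marginal of first image
    py = pxy.sum(axis=0, keepdims=True)   # marginal of second image
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a full registration loop, the parameters of a rigid transform would be optimized to maximize this score for each incoming ultrasound frame against the CT.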
Multi-sequence magnetic resonance imaging integration framework for image-guided catheter ablation of scar-related ventricular tachycardia
Catheter ablation is an important option to treat ventricular tachycardias (VT). Scar-related VT is among the most
difficult to treat, because myocardial scar, which is the underlying arrhythmogenic substrate, is patient-specific and
often highly complex. The scar image from preprocedural late gadolinium enhancement magnetic resonance
imaging (LGE- MRI) can provide high-resolution substrate information and, if integrated at the early stage of the
procedure, can largely facilitate the procedure with image guidance. In clinical practice, however, early MRI
integration is difficult because available integration tools rely on matching the MRI surface mesh and
electroanatomical mapping (EAM) points, which is only possible after extensive EAM has been performed.
In this paper, we propose to use a priori information on patient posture and a multi-sequence MRI integration
framework to achieve accurate MRI integration that can be accomplished at an early stage of the procedure. From
the MRI sequences, the left ventricular (LV) geometry, myocardial scar characteristics, and an anatomical landmark
indicating the origin of the left main coronary artery are obtained preprocedurally using image processing techniques.
Thereby the integration can be realized at the beginning of the procedure after acquiring a single mapping point. The
integration method has been evaluated postprocedurally in terms of LV shape match and actual scar match.
Compared to the iterative closest point (ICP) method, which uses high-density mapping (225±49 points), our method
using a single mapping point reached a mean point-to-surface distance of 5.09±1.09 mm (vs. 3.85±0.60 mm, p<0.05)
and a scar correlation of -0.51±0.14 (vs. -0.50±0.14, p=NS).
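The point-to-surface distance used for the evaluation can be approximated, for a densely sampled mesh, by a brute-force nearest-vertex search. The sketch below (NumPy assumed) is an illustration, not the authors' implementation:

```python
import numpy as np

def mean_point_to_surface(points, surface_vertices):
    """Mean distance from each mapping point to its nearest surface
    vertex, a common surrogate for point-to-surface distance when
    the mesh is densely sampled. Shapes: (n, 3) and (m, 3)."""
    d = np.linalg.norm(points[:, None, :] - surface_vertices[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```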
An augmented reality platform for planning of minimally invasive cardiac surgeries
One of the fundamental components in all Image Guided Surgery (IGS) applications is a method for presenting
information to the surgeon in a simple, effective manner. This paper describes the first steps in our new
Augmented Reality (AR) information delivery program. The system makes use of new "off the shelf" AR glasses
that are both light-weight and unobtrusive, with adequate resolution for many IGS applications. Our first
application is perioperative planning of minimally invasive robot-assisted cardiac surgery. In this procedure,
a combination of tracking technologies and intraoperative ultrasound is used to map the migration of cardiac
targets prior to selection of port locations for trocars that enter the chest. The AR glasses will then be used to
present this heart migration data to the surgeon, overlaid onto the patient's chest. The current paper describes
the calibration process for the AR glasses, their integration into our IGS framework for minimally invasive robotic
cardiac surgery, and preliminary validation of the system. Validation results indicate a mean 3D triangulation
error of 2.9 ± 3.3 mm, a 2D projection error of 2.1 ± 2.1 pixels, and a Normalized Stereo Calibration Error of 3.3.
Extended contrast detection on fluoroscopy and angiography for image-guided trans-catheter aortic valve implantations (TAVI)
Navigation and deployment of the prosthetic valve during trans-catheter aortic valve implantation (TAVI) can be greatly
facilitated with 3-D models showing detailed anatomical structures. Fast and robust automatic contrast detection at the
aortic root on X-ray images is indispensable for automatically triggering a 2-D/3-D registration to align the 3-D model.
Previously, we have proposed an automatic method for contrast detection at the aortic root on fluoroscopic and
angiographic sequences [4]. In this paper, we extend that algorithm in several ways, making it more robust to handle
more general and difficult cases. Specifically, the histogram likelihood ratio test is multiplied with the histogram portion
computation to handle faint contrast cases. Histogram mapping corrects sudden changes in the global brightness, thus
avoiding potential false positives. A respiration and heartbeat check further reduces the false-positive rate. In addition,
a probe mask is introduced to enhance the contrast feature curve when the dark ultrasound probe partially occludes the
aortic root. Lastly, a semi-global registration method for aligning the aorta shape model is implemented to improve the
robustness of the algorithm with respect to the selection of region of interest (ROI) containing the aorta. The extended
algorithm was evaluated on 100 sequences, and improved the detection accuracy from 94% to 100%, compared to the
original method. Also, the robustness of the extended algorithm was tested with 20 different shifts of the ROI, and the
error rate was as low as 0.2%, in comparison to 6.6% for the original method.
Multiple capture locations for 3D ultrasound-guided robotic retrieval of moving bodies from a beating heart
Paul Thienphrapa, Bharat Ramachandran, Haytham Elhawary, et al.
Free-moving bodies in the heart pose a serious health risk, as they may be released into the arteries and disrupt blood
flow. These bodies may result from various medical conditions or trauma. The conventional
approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass,
and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible
robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving
body in a beating heart using a modified normalized cross-correlation method, with mean RMS errors of 2.3 mm.
We previously found the foreign body motion to be fast and abrupt, rendering infeasible a retrieval method based
on direct tracking. We proposed a strategy based on guiding a robot to the most spatially probable location of
the fragment and securing it upon its reentry into that location.
To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple
candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit
frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial
probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital
in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively
to select the best capture location based on constraints such as workspace, time, and device manipulability.
Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable
capability in an interventional system.
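The tracking step relies on normalized cross-correlation (NCC); the authors use a modified variant, but the plain exhaustive search below (NumPy assumed, all sizes illustrative) conveys the underlying matching idea:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two same-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def track(frame, template):
    """Exhaustive NCC search; returns the top-left corner of the
    best-matching window in the frame."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            s = ncc(frame[i:i + th, j:j + tw], template)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos
```

A real-time implementation would restrict the search window around the previous position rather than scanning the whole frame.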
Coronary arteries motion modeling on 2D x-ray images
During interventional procedures, 3D imaging modalities such as CT and MRI are not commonly used due to interference
with the surgery and radiation-exposure concerns. Therefore, real-time information is usually limited and
building models of cardiac motion is difficult. In such cases, vessel-motion modeling based on 2-D angiography
images becomes indispensable. Due to issues with existing vessel-segmentation algorithms and the lack of contrast
in occluded vessels, manual segmentation of certain branches is usually necessary. Moreover, such occluded
branches are the most important vessels during coronary interventions, and obtaining motion models for them
can greatly help in reducing procedure time and radiation exposure. Segmenting different cardiac phases independently
does not guarantee temporal consistency and is not efficient for occluded branches that require manual
segmentation. In this paper, we propose a coronary motion modeling system which extracts the coronary tree
for every cardiac phase, maintaining the segmentation by tracking the coronary tree during the cardiac cycle. It
is able to map every frame to the specific cardiac phase, thereby inferring the shape information of the coronary
arteries using the model corresponding to its phase. Our experiments show that our motion modeling system
can achieve promising results with real-time performance.
Neuro and Head
Variability of the temporal bone surface's topography: implications for otologic surgery
Otologic surgery is performed for a variety of reasons including treatment of recurrent ear infections, alleviation of
dizziness, and restoration of hearing loss. A typical ear surgery consists of a tympanomastoidectomy in which both the
middle ear is explored via a tympanic membrane flap and the bone behind the ear is removed via mastoidectomy to treat
disease and/or provide additional access. The mastoid dissection is performed using a high-speed drill to excavate bone
based on a pre-operative CT scan. Intraoperatively, the surface of the mastoid component of the temporal bone provides
visual feedback allowing the surgeon to guide the dissection. Dissection begins in "safe areas" which, based on surface
topography, are believed to be correlated with the greatest distance from surface to vital anatomy, thus decreasing the chance
of injury to the brain, large blood vessels (e.g., the internal jugular vein and internal carotid artery), the inner ear, and the
facial nerve. "Safe areas" have been identified based on surgical experience with no identifiable studies showing
correlation of the surface with subsurface anatomy. The purpose of our study was to investigate whether such a
correlation exists. Through a three-step registration process, we defined a correspondence between each of twenty-five
clinically applicable temporal bone CT scans of patients and an atlas, and explored displacement and angular differences
of surface topography and depth of critical structures from the surface of the skull. The results of this study reflect
current knowledge of osteogenesis and anatomy. Based on two features (distance and angular difference), two regions
(suprahelical and posterior) of the temporal bone show the least variability between surface and subsurface anatomy.
Registering stereovision surface with preoperative magnetic resonance images for brain shift compensation
Intraoperative brain deformation can significantly degrade the accuracy of image guidance using preoperative MR
images (pMR). To compensate for brain deformation, biomechanical models have been used to assimilate intraoperative
displacement data, compute the whole-brain deformation field, and produce updated MR images (uMR). Stereovision
(SV) is an important technique to capture both geometry and texture information of exposed cortical surface at the
craniotomy, from which surface displacement data (known as sparse data) can be extracted by registering with pMR to
drive the computational model. Approaches that solely utilize geometrical information (e.g., closest point distance (CPD)
and iterative closest point (ICP) method) do not seem to capture surface deformation accurately especially when
significant lateral shift occurs. In this study, we have developed a texture intensity-based method to register cortical
surface reconstructed from stereovision after dural opening with pMR to extract 3D sparse data. First, a texture map is
created from pMR using surface geometry before dural opening. Second, a mutual information (MI)-based registration
is performed between the texture map and the corresponding stereo image after dural opening to capture the global
lateral shift. A block-matching algorithm is then executed to resolve local displacements in smaller patches. The
global and local displacements are finally combined and transformed into 3D via stereopsis. We demonstrate the
application of the proposed method with a clinical patient case, and show that the accuracy of the technique is 1-2 mm in
terms of model-data misfit with a computation time <10 min.
A surgeon specific automatic path planning algorithm for deep brain stimulation
In deep brain stimulation surgeries, stimulating electrodes are placed at specific targets in the deep brain to treat
neurological disorders. Reaching these targets safely requires avoiding critical structures in the brain. Meticulous
planning is required to find a safe path from the cortical surface to the intended target. Choosing a trajectory
automatically is difficult because there is little consensus among neurosurgeons on what is optimal. Our goals are to
design a path planning system that is able to learn the preferences of individual surgeons and, eventually, to standardize
the surgical approach using this learned information. In this work, we take the first step towards these goals, which is to
develop a trajectory planning approach that is able to effectively mimic individual surgeons and is designed such that
parameters, which potentially can be automatically learned, are used to describe an individual surgeon's preferences. To
validate the approach, two neurosurgeons were asked to choose between their own manual trajectory and a computed one,
without knowing which was which. The results of this experiment showed that the neurosurgeons preferred the computed trajectory over
their own in 10 out of 40 cases. The computed trajectory was judged to be equivalent to the manual one or otherwise
acceptable in 27 of the remaining cases. These results demonstrate the potential clinical utility of computer-assisted path
planning.
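Planners of this kind often score candidate straight trajectories by a weighted sum of penalties over critical structures, with the weights standing in for a surgeon's learned preferences. The cost below is a hypothetical sketch (NumPy assumed), not the authors' exact formulation:

```python
import numpy as np

def trajectory_risk(entry, target, obstacles, weights, n_samples=50):
    """Weighted risk of a straight entry-to-target trajectory: for each
    critical structure (a point set), take the reciprocal of the closest
    approach along the sampled path, scaled by a per-structure weight."""
    ts = np.linspace(0.0, 1.0, n_samples)
    path = entry[None, :] + ts[:, None] * (target - entry)[None, :]
    risk = 0.0
    for pts, w in zip(obstacles, weights):
        d = np.linalg.norm(path[:, None, :] - pts[None, :, :], axis=2).min()
        risk += w / max(d, 1e-6)  # guard against passing through a structure
    return risk
```

Candidate entry points on the cortical surface would then be ranked by this risk, and the weights adjusted to mimic an individual surgeon's choices.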
Automatic pre- to intra-operative CT registration for image-guided cochlear implant surgery
Percutaneous cochlear implantation (PCI) is a minimally invasive image-guided cochlear implant approach, where
access to the cochlea is achieved by drilling a linear channel from the outer skull to the cochlea. The PCI approach
requires pre- and intra-operative planning. Segmentation of critical ear anatomy and computation of a safe drilling
trajectory are performed in a pre-operative CT. The computed safe drilling trajectory must then be mapped to the intraoperative
space. The mapping can be done using the transformation matrix that registers the pre- and intra-operative
CTs. However, the difference in orientation between the pre- and intra-operative CTs is too extreme to be recovered by
standard, gradient descent-based registration methods. Thus, we have so far relied on an expert to manually initialize the
registration. In this work, we present a method that aligns the scans automatically. We compared the performance of the
automatic approach to that of the expert manually initialized registration on ten pairs of scans. The maximum difference
between the entry and target points resulting from the automatically and manually initialized registrations was 0.19 mm.
This suggests that the automatic registration method is accurate enough to be used in PCI surgery.
A system for saccular intracranial aneurysm analysis and virtual stent planning
Recent studies have found correlations between the risk of rupture of saccular aneurysms and their morphological
characteristics, such as volume, surface area, and neck length. To exploit these parameters reliably in endovascular
treatment planning, it is crucial that they be accurately quantified. In this paper, we present
a novel framework to assist physicians in accurately assessing saccular aneurysms and efficiently planning for
endovascular intervention. The approach consists of automatically segmenting the pathological vessel, followed
by the construction of its surface representation. The aneurysm is then separated from the vessel surface
through a graph-cut based algorithm that is driven by local geometry as well as strong prior information. The
corresponding healthy vessel is subsequently reconstructed, and measurements representing the patient-specific
geometric parameters of the pathological vessel are computed. To better support clinical decisions on stenting and
device-type selection, a virtual stent is finally placed in conformity with the shape of the diseased vessel, using the
patient-specific measurements. We have implemented the proposed methodology as a
fully functional system, and extensively tested it with phantom and real datasets.
Lung and Liver
Bronchoscopy guidance system based on bronchoscope-motion measurements
Bronchoscopy-guidance systems assist physicians during bronchoscope navigation. However, these systems require
an attending technician and fail to track the bronchoscope continuously. We propose a real-time, technician-free
bronchoscopy-guidance system that employs continuous tracking. For guidance, our system presents directions
on virtual views that are generated from the bronchoscope's tracked location. The system achieves bronchoscope
tracking using a strategy that is based on a recently proposed method for sensor-based bronchoscope-motion
tracking [1]. Furthermore, a graphical indicator notifies the physician when he/she has maneuvered the bronchoscope
to an incorrect branch. Our proposed system uses the sensor data to generate virtual views through
multiple candidate routes and employs image matching in a Bayesian framework to determine the most probable
bronchoscope pose. Tests based on laboratory phantoms validate the potential of the system.
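The Bayesian selection among candidate routes amounts to weighting each route's prior by its image-matching likelihood and taking the maximum a posteriori route; schematically (all numbers below are placeholders, not values from the paper):

```python
def most_probable_route(priors, likelihoods):
    """MAP selection among candidate bronchial routes:
    posterior ∝ prior × image-matching likelihood.
    Returns the index of the best route and the normalized posteriors."""
    posts = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(posts)
    best = max(range(len(posts)), key=lambda i: posts[i])
    return best, [p / z for p in posts]
```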
Planning and visualization methods for effective bronchoscopic target localization
Bronchoscopic biopsy of lymph nodes is an important step in staging lung cancer. Lymph nodes, however, lie
behind the airway walls and near large vascular structures, all of which are hidden from the
bronchoscope's field of view. Previously, we had presented a computer-based virtual bronchoscopic navigation
system that provides reliable guidance for bronchoscopic sampling. While this system offers a major improvement
over standard practice, bronchoscopists told us that target localization, i.e., lining up the bronchoscope before
deploying a needle into the target, can still be challenging. We therefore address target localization in two
distinct ways: (1) automatic computation of an optimal diagnostic sampling pose for safe, effective biopsies, and
(2) a novel visualization of the target and surrounding major vasculature. The planning determines the final
pose for the bronchoscope such that the needle, when extended from the tip, maximizes the tissue extracted.
This automatically calculated local pose orientation is conveyed in endoluminal renderings by a 3D arrow. Additional
visual cues convey obstacle locations and target depths-of-sample from arbitrary instantaneous viewing
orientations. With the system, a physician can freely navigate in the virtual bronchoscopic world perceiving the
depth-of-sample and possible obstacle locations at any endoluminal pose, not just one pre-determined optimal
pose. We validated the system using mediastinal lymph nodes in eleven patients. The system successfully planned
for 20 separate targets in human MDCT scans. In particular, given the patient and bronchoscope constraints,
our method found that safe, effective biopsies were feasible in 16 of the 20 targets; the four remaining targets
required more aggressive safety margins than a "typical" target. In all cases, planning computation took only a
few seconds, while the visualizations updated in real time during bronchoscopic navigation.
High-performance C-arm cone-beam CT guidance of thoracic surgery
Localizing sub-palpable nodules in minimally invasive video-assisted thoracic surgery (VATS) presents a significant
challenge. To overcome inherent problems of preoperative nodule tagging using CT fluoroscopic guidance, an
intraoperative C-arm cone-beam CT (CBCT) image-guidance system has been developed for direct localization of
subpalpable tumors in the OR, including real-time tracking of surgical tools (including thoracoscope), and video-CBCT
registration for augmentation of the thoracoscopic scene. Acquisition protocols for nodule visibility in the inflated and
deflated lung were delineated in phantom and animal/cadaver studies. Motion compensated reconstruction was
implemented to account for motion induced by the ventilated contralateral lung. Experience in CBCT-guided targeting of
simulated lung nodules included phantoms, porcine models, and cadavers. Phantom studies defined low-dose acquisition
protocols providing contrast-to-noise ratio sufficient for lung nodule visualization, confirmed in porcine specimens with
simulated nodules (3-6 mm diameter PE spheres, ~100-150 HU contrast, 2.1 mGy). Nodule visibility in CBCT of the
collapsed lung, with reduced contrast according to air volume retention, was more challenging, but initial studies
confirmed visibility using scan protocols at slightly increased dose (~4.6-11.1 mGy). Motion-compensated reconstruction
employing a 4D deformation map in the backprojection process reduced artifacts associated with motion blur.
Augmentation of thoracoscopic video with renderings of the target and critical structures (e.g., pulmonary artery) showed
geometric accuracy consistent with camera calibration and the tracking system (2.4 mm registration error). Initial results
suggest a potentially valuable role for CBCT guidance in VATS: improving precision in minimally invasive,
lung-conserving surgeries, avoiding critical structures, obviating the burdens of preoperative localization, and improving
patient safety.
Fast CT-CT fluoroscopy registration with respiratory motion compensation for image-guided lung intervention
CT-fluoroscopy (CTF) is an efficient imaging method for guiding percutaneous lung interventions such as biopsy.
During CTF-guided biopsy procedure, four to ten axial sectional images are captured in a very short time period to
provide nearly real-time feedback to physicians, so that they can adjust the needle as it is advanced toward the target
lesion. Although widely used in clinics, this traditional CTF-guided intervention procedure may require frequent scans
and cause unnecessary radiation exposure to clinicians and patients. In addition, CTF only generates limited slices of
images and provides limited anatomical information. It also responds poorly to respiratory movements and captures only
narrow local anatomical dynamics. To better utilize CTF guidance, we propose a fast CT-CTF registration algorithm
with respiratory motion estimation for image-guided lung intervention using electromagnetic (EM) guidance. With the
pre-procedural exhale and inhale CT scans, it would be possible to estimate a series of CT images of the same patient at
different respiratory phases. Then, once a CTF image is captured during the intervention, our algorithm can pick the best
respiratory phase-matched 3D CT image and perform a fast deformable registration to warp the 3D CT toward the CTF.
The new 3D CT image can be used to guide the intervention by superimposing the EM-guided needle location on it.
Compared to the traditional repetitive CTF guidance, the registered CT integrates both 3D volumetric patient data and
nearly real-time local anatomy for more effective and efficient guidance. In this new system, CTF is used as a nearly
real-time sensor to overcome the discrepancies between static pre-procedural CT and the patient's anatomy, so as to
provide global guidance that may be supplemented with electromagnetic (EM) tracking and to reduce the number of CTF
scans needed. In the experiments, the comparative results showed that our fast CT-CTF algorithm can achieve better
registration accuracy.
Poster Session: Visualization, Segmentation, and Registration
Application of unscented Kalman filter for robust pose estimation in image-guided surgery
Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative
images (typically MRI or CT scans). IGS systems use localization systems to track and visualize surgical tools overlaid
on top of preoperative images of the patient during surgery. The most commonly used localization systems in the
Operating Room (OR) are optical tracking systems (OTS), due to their ease of use and cost effectiveness. However,
OTSs suffer from the major drawback of line-of-sight requirements. State-space approaches based on different
implementations of the Kalman filter have recently been investigated to compensate for short line-of-sight
occlusions. However, the proposed parameterizations of the rigid-body orientation suffer from singularities at certain
rotation angles. The purpose of this work is to develop a quaternion-based unscented Kalman filter (UKF) for
robust optical tracking of both position and orientation of surgical tools, in order to compensate for marker-occlusion issues.
This paper presents preliminary results towards a Kalman-based Sensor Management Engine (SME). The engine will
filter and fuse multimodal tracking streams of data. This work was motivated by our experience working in robot-based
applications for keyhole neurosurgery (the ROBOCAST project). The algorithm was evaluated using real data from an NDI
Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a
maximum error of 2.5° for orientation and 2.36 mm for position. The proposed approach will be useful in over-crowded
state-of-the-art ORs where achieving continuous visibility of all tracked objects will be difficult.
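The paper's filter is a quaternion-based UKF; the core occlusion-bridging behavior can nevertheless be illustrated with a plain linear constant-velocity Kalman filter on a single coordinate (NumPy assumed; noise parameters illustrative). During an occluded frame only the prediction step runs, carrying the state forward:

```python
import numpy as np

def kf_track(measurements, dt=1.0, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over 1-D positions.
    None marks an occluded frame, where only prediction runs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        if z is not None:                   # update when marker is visible
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

The UKF variant replaces the linear prediction/update with sigma-point propagation, which is what allows the orientation quaternion to be filtered without the singularities of angle parameterizations.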
Interactive GPU volume raycasting in a clustered graphics environment
This research focuses on performing interactive, real-time volume raycasting in a large clustered graphics environment
using custom GPU shaders for composite volume raycasting with trilinear interpolation. Working in this type of
environment presents unique challenges due to its distributed nature and the inherent need to synchronize data
and operations across the cluster. Invoking custom vertex and fragment shaders in a non-thread-safe manner becomes
increasingly complex in a large clustered graphics environment. Through use of an abstraction layer, all rendering
contexts are split up with no changes to the volume raycasting core. The volume raycasting core is therefore completely
independent of the computing platform. The application was tested on a 6-wall immersive VR system with 96 graphics
contexts coming from 48 cluster nodes. Interactive framerates of 60 frames per second were produced on 512x512x100
volumes, and an average of 30 frames per second for a 512x512x1000 volume. The use of custom configuration files
allows the same code to be highly scalable from a single screen VR system to a fully immersive 6-sided wall VR system.
Through the code abstraction, the same volume raycasting core can be implemented on any type of computing platform
including desktop and mobile.
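The trilinear interpolation that the custom fragment shaders perform at each ray sample can be written out explicitly; a CPU reference version (NumPy assumed; coordinates assumed non-negative and inside the volume):

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Trilinear interpolation of a 3-D scalar volume at fractional
    coordinate (x, y, z), mirroring what a fragment shader does per
    ray sample. Base indices are clamped so the 2x2x2 cell stays in bounds."""
    x0 = min(int(x), volume.shape[0] - 2)
    y0 = min(int(y), volume.shape[1] - 2)
    z0 = min(int(z), volume.shape[2] - 2)
    fx, fy, fz = x - x0, y - y0, z - z0
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c += w * volume[x0 + dx, y0 + dy, z0 + dz]
    return c
```

On the GPU this is a single hardware-filtered 3D texture fetch; the explicit form above is useful for validating shader output.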
Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams
Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of
view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high
definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution
equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand
for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on
the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new
demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed
up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes
we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results.
Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction
information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater
stability.
Nonlinear ray tracing for vessel enhanced visualization
3D visualization of angiography data is an important preprocessing step in the diagnosis of vascular disease. This paper
describes an efficient volume rendering method to emphasize feature-rich regions (the focus) in 3D angiography data.
The method takes the input 3D angiography data and computes the focus from user specification or feature-extraction
algorithms. A distance map is then constructed based on the description of the focused region(s). While
rendering the 3D angiography data, a nonlinear ray-tracing method is used, and the gradient of the distance volume is
applied to guide ray marching. In the resulting image, the focused region(s) appear larger than in a normal ray-casting
image, while the context (the other regions of the volume) is still preserved, possibly displayed at a reduced
size. This method avoids deforming the original volume to magnify the focus regions, which is expensive to compute, and
thus improves performance.
Graph-based surface extraction of the liver using locally adaptive priors for multimodal interventional image registration
The 3D fusion of tracked ultrasound with a diagnostic CT image has multiple benefits in a variety of interventional
applications for oncology. Still, manual registration is a considerable drawback to the clinical workflow and hinders the
widespread clinical adoption of this technique. In this paper, we propose a method to allow for an image-based
automated registration, aligning multimodal images of the liver. We adopt a model-based approach that rigidly matches
segmented liver shapes from ultrasound (U/S) and diagnostic CT imaging. Towards this end, a novel method which
combines a dynamic region-growing method with a graph-based segmentation framework is introduced to address the
challenging problem of liver segmentation from U/S. The method is able to extract the liver boundary from U/S images after
a partial surface is generated near the principal vector from an electromagnetically tracked U/S liver sweep. The liver
boundary is subsequently expanded by modeling the problem as a graph-cut minimization scheme, where the cost functions
used to detect the optimal surface topology are determined from adaptive priors of neighboring surface points. This allows
boundaries affected by shadow areas to be included, by compensating for varying levels of contrast. The segmentation of the
liver surface is performed in 3D space for increased accuracy and robustness. The method was evaluated in a study
involving 8 patients undergoing biopsy or radiofrequency ablation of the liver, yielding promising surface segmentation
results based on ground-truth comparison. The proposed extended segmentation technique improved the fiducial
landmark registration error compared to a point-based registration (7.2mm vs. 10.2mm on average, respectively), while
achieving tumor target registration errors that are statistically equivalent (p > 0.05) to state-of-the-art methods.
Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine
Show abstract
The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound
has been proposed to improve the success of placement. However, it has not become the standard-of-care because
of limitations in the depiction and interpretation of the key anatomical features. We propose to augment
the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model
is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to
register groups of point sets. The method is compared to a volume-based model building technique and it
demonstrates better generalization and compactness. We instantiate and register the shape model to a spine
surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The
achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an
epidural needle.
Poster Session: Tracking and Radiation Therapy
Automatic patient alignment for prostate radiation applying 3D ultrasound
Show abstract
Recent developments in radiation therapy, such as IGRT (image-guided radiation therapy) and IMRT
(intensity-modulated radiation therapy), promise to spare organs at risk by applying a better dose
distribution to the tumor. For any effective application of these methods, the exact positioning
of the patient and the localization of the exposed organ are crucial. Depending on the filling
of the rectum and bladder, the prostate can move by several millimeters up to centimeters. This implies
the need for daily determination and correction of the position of the prostate before irradiation.
We built a system that uses 3D US at both sites, the CT room and the intervention room, and
applied a 3D/3D US-US registration for fully automatic repositioning. In a first step, an appropriate
preprocessing of the US images is necessary. We implemented an importance image filter process to
improve the success of the subsequent registration. For the 3D/3D registration, five different objective functions
were implemented. To find the objective function that fits best for the particular patient, three 3D US
images were taken at the CT site and a US registration error was calculated. The most successful
objective function was then applied at the treatment site. The US registration error was found to be
3.48 ± 2.32 mm (eight patients) with respect to the Mutual Information metric by Mattes. For
complete repositioning, the distance error amounted to 5.0 ± 3.1 mm (four patients).
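The role of the similarity metric in such a registration can be illustrated with a plain joint-histogram mutual information estimate (Mattes MI additionally uses B-spline Parzen windowing; this simplified estimator and its bin count are assumptions):

```python
import numpy as np

# Plain joint-histogram mutual information between two images; higher MI
# indicates better intensity correspondence (better alignment).
def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over a
    py = pxy.sum(axis=0, keepdims=True)   # marginal over b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_aligned = mutual_information(img, img)               # identical images
mi_unrelated = mutual_information(img, rng.random((64, 64)))
```

An optimizer maximizing this quantity over rigid transform parameters is the essence of the US-US registration step.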
Automated fiducial marker planning for thoracic stereotactic body radiation therapy
Show abstract
Stereotactic body-radiation therapy (SBRT) has gained acceptance in treating lung cancer. Localization of a
thoracic lesion is challenging as tumors can move significantly with breathing. Some SBRT systems compensate
for tumor motion with the intrafraction tracking of targets by two stereo fluoroscopy cameras. However, many
lung tumors lack a fluoroscopic signature and cannot be directly tracked. Small radiopaque fiducial markers,
acting as fluoroscopically visible surrogates, are instead implanted nearby. The spacing and configuration of
the fiducial markers is important to the success of the therapy as SBRT systems impose constraints on the
geometry of a fiducial-marker constellation. It is difficult even for experienced physicians to mentally assess the
validity of a constellation a priori. To address this challenge, we present the first automated planning system
for bronchoscopic fiducial-marker placement. Fiducial-marker planning is posed as a constrained combinatorial
optimization problem. Constraints include requiring access from a navigable airway, having sufficient separation
in the fluoroscopic imaging planes to resolve each individual marker, and avoiding major blood vessels.
Automated fiducial-marker planning takes approximately fifteen seconds, fitting within the clinical workflow.
The resulting locations are integrated into a virtual bronchoscopic planning system, which provides guidance to
each location during the implantation procedure. To date, we have retrospectively planned over 50 targets for
treatment, and have implanted markers according to the automated plan in one patient who then underwent
SBRT treatment. To our knowledge, this approach is the first to address automated bronchoscopic fiducial-marker
planning for SBRT.
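The separation constraint on a marker constellation can be sketched as a pairwise check in the two stereo imaging planes (the orthogonal projection axes and the 10 mm threshold are illustrative assumptions, not the system's actual constraints):

```python
import numpy as np
from itertools import combinations

# Feasibility check: every marker pair must remain separable when projected
# into each of the two (assumed orthogonal) fluoroscopy imaging planes.
def projections(points):
    return points[:, 1:], points[:, [0, 2]]  # drop x for one view, y for the other

def constellation_ok(points, min_sep_mm=10.0):
    for plane in projections(points):
        for i, j in combinations(range(len(points)), 2):
            if np.linalg.norm(plane[i] - plane[j]) < min_sep_mm:
                return False  # two markers would overlap in this view
    return True

good = np.array([[0, 0, 0], [25, 25, 0], [0, 25, 25], [25, 0, 25]], float)
bad = np.array([[0, 0, 0], [25, 25, 0], [25, 25, 4]], float)  # near-coincident pair
ok_good = constellation_ok(good)
ok_bad = constellation_ok(bad)
```

A combinatorial planner would evaluate many candidate constellations with checks of this kind, together with airway-access and vessel-avoidance tests.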
Repeatable assessment protocol for electromagnetic trackers
Show abstract
In the past decades, many new trends appeared in interventional medicine. One of the most groundbreaking ones is
Image-Guided Surgery (IGS). The main benefit of IGS procedures is the reduction of the patient's pain and collateral
damage through improved accuracy and targeting. Electromagnetic Tracking (EMT) has been introduced to medical
applications as an effective tool for navigation. However, magnetic fields can be severely distorted by ferromagnetic
materials and electronic equipment, which is a major barrier towards their wider application. The focus of the study
is to determine and compensate the inherent errors of the different types of EMTs, in order to improve their accuracy.
Our aim is to develop a standardized, simple and repeatable assessment protocol; to determine tracking errors with
sub-millimeter accuracy, hence increasing the measurement precision and reliability. For initial experiments, the
NDI Aurora and the Ascension medSAFE systems were used in a standard laboratory environment. We aim to
advance the state of the art by describing and disseminating an easily reproducible calibration method, publishing
the CAD files of the accuracy phantom and the source of the evaluation data. This should allow the wider spread of
the technique, and eventually lead to the repeatable and comparable assessment of EMT systems.
A quantitative assessment of using the Kinect for Xbox 360 for respiratory surface motion tracking
Show abstract
This paper describes a quantitative assessment of the Microsoft Kinect for Xbox 360™ for potential application
in tracking respiratory and body motion in diagnostic imaging and external beam radiotherapy. However, the
results can also be used in many other biomedical applications. We consider the performance of the Kinect in
controlled conditions and find millimeter precision at depths of 0.8-1.5 m. We also demonstrate the use of the Kinect for
monitoring respiratory motion of the anterior surface. To improve the performance of respiratory monitoring,
we fit a spline model of the chest surface through the depth data as a method of marker-less monitoring of
respiratory motion. In addition, a comparison of the Kinect camera, with and without a zoom lens, against
a marker-based system was used to evaluate the accuracy of the Kinect camera as a respiratory tracking
system.
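The marker-less monitoring idea can be sketched by fitting a smooth surface to noisy per-frame depth samples and reading off a breathing signal (a least-squares quadratic stands in for the spline model here; all values are synthetic assumptions):

```python
import numpy as np

# Synthetic depth frames of a breathing chest; per frame, a low-order surface
# (quadratic, standing in for the spline) is fit by least squares and its
# centre value is used as the marker-less respiratory signal.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
basis = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                         (x ** 2).ravel(), (x * y).ravel(), (y ** 2).ravel()])

def fitted_centre_depth(frame):
    coeff, *_ = np.linalg.lstsq(basis, frame.ravel(), rcond=None)
    return coeff[0]  # fitted surface value at the chest centre (0, 0)

signal = []
for t in range(40):
    depth = 1.0 + 0.01 * np.sin(2 * np.pi * t / 20)   # 2 cm peak-to-peak breathing
    frame = depth - 0.05 * (x ** 2 + y ** 2) + 0.005 * rng.standard_normal(x.shape)
    signal.append(fitted_centre_depth(frame))
amplitude = max(signal) - min(signal)
```

Fitting a smooth surface averages out per-pixel depth noise, which is why the recovered amplitude stays close to the true breathing amplitude.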
Poster Session: Robotics
A high accuracy multi-image registration method for tracking MRI-guided robots
Show abstract
Recent studies have demonstrated an increasing number of functional surgical robots and other devices operating in the
Magnetic Resonance Imaging (MRI) environment. Calibration and tracking of the robotic device is essential during such
MRI-guided procedures. A fiducial tracking module is placed on the base or the end effector of the robot to localize it
within the scanner, and thus the patient coordinate system. The fiducial frame forms a Z shape and is made of seven
tubes filled with high-contrast fluid. The frame is highlighted in the MR images and is used for localization. Compared to
the former single-image registration method, this algorithm uses multiple images to calculate the position and
orientation of the frame, and thus the robot. Using multiple images together reduces measurement error and relaxes
the strict requirement for slow-to-acquire, high-quality images. Accuracy and performance were evaluated in
experiments performed with a Philips 3T MRI scanner. We present an accuracy comparison of the new
method with varying numbers of images, and a comparison to more traditional single-image registration techniques.
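The benefit of pooling fiducial detections from several images can be sketched with a least-squares rigid (Kabsch) fit; the frame geometry, noise level, and use of point-based registration below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

# Kabsch rigid fit pooled over several noisy detections of a Z-frame-like
# fiducial constellation; pooling averages out per-image detection noise.
rng = np.random.default_rng(2)
frame = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [60, 60, 0],
                  [30, 0, 30], [0, 30, 30], [30, 30, 0]], float)  # tube centres (mm)
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
t_true = np.array([5.0, -3.0, 12.0])

def kabsch(P, Q):
    # least-squares rigid transform R, t with Q ~= P @ R.T + t
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def rot_err_deg(R):
    return np.degrees(np.arccos(np.clip((np.trace(R_true.T @ R) - 1) / 2, -1, 1)))

def detections(n_images, noise_mm=0.5):
    return [frame @ R_true.T + t_true + noise_mm * rng.standard_normal(frame.shape)
            for _ in range(n_images)]

R1, _ = kabsch(frame, detections(1)[0])
R8, _ = kabsch(np.vstack([frame] * 8), np.vstack(detections(8)))
err_single = rot_err_deg(R1)
err_multi = rot_err_deg(R8)
```

Stacking the correspondences from all images into one least-squares problem is the simplest way to realize the "multiple images together" idea.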
System for robot-assisted real-time laparoscopic ultrasound elastography
Show abstract
Surgical robots provide many advantages for surgery, including minimal invasiveness, precise motion, high dexterity,
and crisp stereovision. One limitation of current robotic procedures, compared to open surgery, is the loss of haptic
information for such purposes as palpation, which can be very important in minimally invasive tumor resection.
Numerous studies have reported the use of real-time ultrasound elastography, in conjunction with conventional B-mode
ultrasound, to differentiate malignant from benign lesions. Several groups (including our own) have reported integration
of ultrasound with the da Vinci robot, and ultrasound elastography is a very promising image guidance method for
robot-assisted procedures that will further enable the role of robots in interventions where precise knowledge of sub-surface
anatomical features is crucial. We present a novel robot-assisted real-time ultrasound elastography system for minimally
invasive robot-assisted interventions. Our system combines a da Vinci surgical robot with a non-clinical experimental
software interface, a robotically articulated laparoscopic ultrasound probe, and our GPU-based elastography system.
Elasticity and B-mode ultrasound images are displayed as picture-in-picture overlays in the da Vinci console. Our system
minimizes dependence on human performance factors by incorporating computer-assisted motion control that
automatically generates the tissue palpation required for elastography imaging, while leaving high-level control in the
hands of the user. In addition to ensuring consistent strain imaging, the elastography assistance mode avoids the
cognitive burden of tedious manual palpation. Preliminary tests of the system with an elasticity phantom demonstrate the
ability to differentiate simulated lesions of varied stiffness and to clearly delineate lesion boundaries.
Magnetic resonance imaging properties of multimodality anthropomorphic silicone rubber phantoms for validating surgical robots and image guided therapy systems
Carling L. Cheung,
Thomas Looi,
James Drake,
et al.
Show abstract
The development of image guided robotic and mechatronic platforms for medical applications requires a phantom
model for initial testing. Finding an appropriate phantom becomes challenging when the targeted patient
population is pediatrics, particularly infants, neonates, or fetuses. Our group is currently developing a
pediatric-sized surgical robot that operates under fused MRI and laparoscopic video guidance. To support this work, we
describe a method for designing and manufacturing silicone rubber organ phantoms for the purpose of testing
the robotics and the image fusion system. A surface model of the organ is obtained and converted into a mold
that is then rapid-prototyped using a 3D printer. The mold is filled with a solution containing a particular
ratio of silicone rubber to slacker additive to achieve a specific set of tactile and imaging characteristics in
the phantom. The expected MRI relaxation times of different ratios of silicone rubber to slacker additive are
experimentally quantified so that the imaging properties of the phantom can be matched to those of the organ
that it represents. Samples of silicone rubber and slacker additive mixed in ratios ranging from 1:0 to 1:1.5 were
prepared and scanned using inversion recovery and spin echo sequences with varying TI and TE, respectively,
in order to fit curves to calculate the expected T1 and T2 relaxation times of each ratio. A set of infant-sized
abdominal organs was prepared, which were successfully sutured by the robot and imaged using different
modalities.
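The relaxometry step can be sketched as fitting the inversion-recovery magnitude model |S0(1 − 2e^(−TI/T1))| to signals measured at several TIs (all values are synthetic, and the grid search stands in for whatever curve-fitting routine was actually used):

```python
import numpy as np

# Synthetic inversion-recovery magnitude data for one silicone/slacker ratio.
rng = np.random.default_rng(3)
TI = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])  # ms
true_S0, true_T1 = 1000.0, 700.0
data = np.abs(true_S0 * (1 - 2 * np.exp(-TI / true_T1))) + 5.0 * rng.standard_normal(TI.size)

def fit_T1(TI, data, T1_grid):
    # grid search over T1; for each candidate, the best S0 is a closed-form
    # least-squares scale factor, so only T1 needs the search
    best_T1, best_err = None, np.inf
    for T1 in T1_grid:
        m = np.abs(1.0 - 2.0 * np.exp(-TI / T1))
        S0 = (m @ data) / (m @ m)
        err = np.sum((data - S0 * m) ** 2)
        if err < best_err:
            best_T1, best_err = T1, err
    return best_T1

T1_hat = fit_T1(TI, data, np.arange(100.0, 2000.0, 5.0))
```

Repeating such a fit per silicone-to-slacker ratio yields the lookup from mixture ratio to T1 (and analogously T2 from spin-echo data at varying TE) used to match phantom and organ imaging properties.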
Enabling technologies for natural orifice transluminal endoscopic surgery (N.O.T.E.S) using robotically guided elasticity imaging
Show abstract
Natural orifice transluminal endoscopic surgery (N.O.T.E.S) is a minimally invasive surgical technique that could benefit
greatly from additional methods for intraoperative detection of tissue malignancies (using elastography) along with more
precise control of surgical tools. Ultrasound elastography has proven itself as an invaluable imaging modality. However,
elasticity images typically suffer from low contrast when imaging organs from the surface of the body. In addition, the
palpation motions needed to generate elastography images useful for identifying clinically significant changes in tissue
properties are difficult to produce because they require precise axial displacements along the imaging plane.
Improvements in elasticity imaging necessitate an approach that simultaneously removes the need for imaging from the
body surface while providing more precise palpation motions. As a first step toward performing N.O.T.E.S in-vivo, we
integrated a phased ultrasonic micro-array with a flexible snake-like robot. The integrated system is used to create
elastography images of a spherical isoechoic lesion (approximately 5mm in cross-section) in a tissue-mimicking
phantom. Images are obtained by performing robotic palpation of the phantom at the location of the lesion.
A networked modular hardware and software system for MRI-guided robotic prostate interventions
Show abstract
Magnetic resonance imaging (MRI) provides high resolution multi-parametric imaging, large soft tissue contrast,
and interactive image updates making it an ideal modality for diagnosing prostate cancer and guiding surgical
tools. Although a substantial armamentarium of apparatuses and systems has been developed over the last decade to
assist surgical diagnosis and therapy in MRI-guided procedures, a unified method for developing robotic systems with
high fidelity in terms of accuracy, dynamic performance, size, robustness, and modularity that can work inside a
closed-bore MRI scanner still remains a challenge. In this work, we develop and evaluate an integrated modular
hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous
prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a
robot controller system for precision closed loop control of piezoelectric motors, 2) a robot control interface
software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands
and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment
to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment
with an ex-vivo phantom validates the system workflow and MRI-compatibility, and shows that the robotic system
achieves better than 0.01 mm positioning accuracy.
3D catheter reconstruction using non-rigid structure-from-motion and robotics modeling
Show abstract
Surgical guidance during minimally invasive intervention could be greatly enhanced if the 3D location and
orientation of instruments, especially catheters, is available. In this paper, we present a new method for the 3D
reconstruction of deforming curvilinear objects such as catheters, using the framework of Non-Rigid
Structure-from-Motion (NRSfM). We combine NRSfM with a kinematics model from the field of Robotics, which provides
a low-dimensional parametrization of the object deformation. This is used in the context of an X-ray imaging
system where multiple views are acquired with a small view separation. We show that using such a kinematics
model, a non-linear optimization scheme succeeds in retrieving the deformable 3D pose from the 2D projections.
Experiments on synthetic and real X-ray data show promising results of the proposed method as compared to
state-of-the-art NRSfM.
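The value of a low-dimensional kinematics parametrization can be sketched with a one-parameter constant-curvature catheter model recovered from a few closely separated views (a toy stand-in for the paper's formulation; the geometry, angles, and grid search are assumptions):

```python
import numpy as np

# One-parameter kinematics model: the catheter section is a planar
# constant-curvature arc; its curvature is recovered from 2D projections
# taken at small view separations, mimicking the low-dimensional NRSfM idea.
def arc_points(kappa, length=50.0, n=20):
    s = np.linspace(0.0, length, n)
    if abs(kappa) < 1e-9:
        return np.column_stack([s, np.zeros(n), np.zeros(n)])
    return np.column_stack([np.sin(kappa * s) / kappa,
                            (1.0 - np.cos(kappa * s)) / kappa,
                            np.zeros(n)])

def project(P, angle_deg):
    # orthographic projection after rotating about the y-axis (C-arm angulation)
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
    return (P @ R.T)[:, :2]

angles = (0.0, 5.0, 10.0)          # small view separation
true_kappa = 0.02                  # 1/mm
views = [project(arc_points(true_kappa), a) for a in angles]

def reprojection_error(kappa):
    return sum(np.sum((project(arc_points(kappa), a) - v) ** 2)
               for a, v in zip(angles, views))

kappa_hat = min(np.arange(0.0, 0.05, 0.0005), key=reprojection_error)
```

Because the deformation is described by a single kinematic parameter rather than free 3D point motion, even a handful of nearby views constrains the 3D shape, which is the core advantage the abstract describes.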
Poster Session: Simulation and Modeling
Initial study of breast tissue retraction toward image guided breast surgery
Michael J. Shannon,
Ingrid M. Meszoely,
Janet E. Ondrake,
et al.
Show abstract
Image-guided surgery may reduce the re-excision rate in breast-conserving tumor-resection surgery, but
image guidance is difficult since the breast undergoes significant deformation during the procedure. In
addition, any imaging performed preoperatively is usually conducted in a very different presentation from that in
surgery. Biomechanical models combined with low-cost ultrasound imaging and laser range scanning may
provide an inexpensive way to provide intraoperative guidance information while also compensating for soft
tissue deformations that occur during breast-conserving surgery. One major cause of deformation occurs after
an incision into the tissue is made and the skin flap is pulled back with the use of retractors. Since the next
step in the surgery would be to start building a surgical plane around the tumor to remove cancerous tissue, in
an image-guidance environment, it would be necessary to have a model that corrects for the deformation
caused by the surgeon to properly guide the application of resection tools. In this preliminary study, two
anthropomorphic breast phantoms were made, and retractions were performed on both with improvised
retractors. One phantom underwent a deeper retraction than the other. A laser range scanner (LRS) was used to
monitor phantom tissue change before and after retraction. The surface data acquired with the LRS and
retractors were then used to drive the solution of a finite element model. The results indicate an encouraging
level of agreement between model predictions and data. The surface target error for the phantom with the
deep retraction was 2.2 ± 1.2 mm (n=47 targets), with the average deformation of the surface targets at 4.2
± 1.6 mm. For the phantom with the shallow retraction, the surface target error was 2.1 ± 1.0 mm (n=70
targets), with the average deformation of the surface targets at 4.0 ± 2.0 mm.
Procedural wound geometry and blood flow generation for medical training simulators
Show abstract
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to
train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual
items, are commonly used. As with other training scenarios, computer simulations can be an effective means for
wound treatment training purposes. For immersive and high-fidelity virtual reality applications, realistic 3D models are
key components. However, creation of such models is a labor-intensive process. In this paper, we propose a procedural
wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training
scenarios without the need for labor-intensive remodeling of the 3D geometry. The procedural techniques described in
this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the
simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a
volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation,
we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is
composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound
geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a
virtual surgery scene.
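The dynamic blood texture described above (a height field plus a normal map, updated each simulation step) can be sketched on the CPU with numpy; the droplet shape, advection rule, and scale factor are illustrative assumptions:

```python
import numpy as np

# Height-field blood texture: each step advects the field along the flow
# direction and rebuilds the normal map from finite-difference gradients,
# as a GPU shader would per texel (numpy stands in for the GPU here).
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
height = np.exp(-((xs - 20) ** 2 + (ys - 32) ** 2) / 50.0)  # a blood droplet

def normal_map(field, scale=4.0):
    gy, gx = np.gradient(field)
    n = np.dstack([-scale * gx, -scale * gy, np.ones_like(field)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def advect(field, flow=(0, 1)):
    # move the height field one texel along the surface-flow direction
    return np.roll(field, shift=flow, axis=(0, 1))

frame1 = advect(height)
normals = normal_map(frame1)
peak = np.unravel_index(np.argmax(frame1), frame1.shape)
```

The per-texel normal from the height gradient is what gives the animated blood its shading; on the GPU the same two operations run in a fragment shader at each simulation step.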
Explicit contact modeling for surgical computer guidance and simulation
Show abstract
Realistic modelling of mechanical interactions between tissues is an important part of surgical simulation, and
may become a valuable asset in surgical computer guidance. Unfortunately, it is also computationally very
demanding. Explicit matrix-free FEM solvers have been shown to be a good choice for fast tissue simulation;
however, little work has been done on contact algorithms for such FEM solvers.
This work introduces such an algorithm that is capable of handling both deformable-deformable (soft-tissue interacting
with soft-tissue) and deformable-rigid (e.g. soft-tissue interacting with surgical instruments) contacts.
The proposed algorithm employs responses computed with a fully matrix-free, virtual node-based version of
the model first used by Taylor and Flanagan in PRONTO3D. For contact detection, a bounding-volume hierarchy
(BVH) capable of identifying self collisions is introduced. The proposed BVH generation and update
strategies comprise novel heuristics to minimise the number of bounding volumes visited in hierarchy update
and collision detection.
Aside from speed, stability was a major objective in the development of the algorithm; hence a novel method for
computation of response forces from C0-continuous normals, and a gradual application of response forces from
rate constraints, have been devised and incorporated in the scheme. The continuity of the surface normals has
advantages particularly in applications such as sliding over irregular surfaces, which occurs, e.g., in simulated
breathing.
The effectiveness of the scheme is demonstrated on a number of meshes derived from medical image data and
artificial test cases.
Poster Session: 2D/3D and Fluoroscopy
Fluoroscopic image-guided intervention system for transbronchial localization
Show abstract
Reliable transbronchial access of peripheral lung lesions is desirable for the diagnosis and potential treatment
of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps)
cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system
for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many
bronchoscopists, has a fundamental shortcoming - many lung lesions are invisible in its images. Our IGI
system aligns a digitally reconstructed radiograph (DRR) defined from a pre-operative computed tomography
(CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video,
while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed in the scene. The
IGI system processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT
anatomy with its depiction in the fluoroscopic scene; (3) optical tracking to continually update the DRR and
target positions as the fluoroscope is moved about the patient. The end result is a continuous correlation of the
DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets
and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is
straightforward. The system tracks in real-time with no computational lag. We have measured a mean projected
tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
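The core DRR idea, integrating CT attenuation along rays to synthesize a radiograph-like image for overlay, can be sketched with a parallel projection (clinical systems use a calibrated cone-beam geometry; the toy volume below is an assumption):

```python
import numpy as np

# Parallel-projection DRR: line integrals of CT attenuation along one axis,
# normalized like film exposure; a lesion visible in CT but lacking a
# fluoroscopic signature shows up in the synthetic view for superimposition.
ct = np.zeros((32, 32, 32))
ct[12:20, 12:20, 12:20] = 1.0   # dense "lesion" block
ct[4:28, 14:18, 14:18] += 0.3   # airway/vessel-like structure along x

def drr(volume, axis=0):
    integral = volume.sum(axis=axis)   # attenuation line integral per ray
    return integral / integral.max()

ap_view = drr(ct, axis=0)    # anterior-posterior-like view
lat_view = drr(ct, axis=1)   # lateral-like view
```

Registering such a synthetic view to the live fluoroscopic image, and re-rendering it as the tracked fluoroscope moves, is what keeps the superimposed target aligned with the video feed.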
A C-arm calibration method with application to fluoroscopic image-guided procedures
Show abstract
C-arm fluoroscopy units provide continuously updating X-ray video images during surgical procedure. The
modality is widely adopted for its low cost, real-time imaging capabilities, and its ability to display radio-opaque
tools in the anatomy. It is, however, important to correct for fluoroscopic image distortion and estimate camera
parameters, such as focal length and camera center, for registration with 3D CT scans in fluoroscopic image-guided
procedures. This paper describes a method for C-arm calibration and evaluates its accuracy in multiple
C-arm units and in different viewing orientations. The proposed calibration method employs a commercially
available unit to track the C-arm and a calibration plate. The method estimates both the internal calibration
parameters and the transformation between the coordinate systems of tracker and C-arm. The method was
successfully tested on two C-arm units (GE OEC 9800 and GE OEC 9800 Plus) of different image intensifier
sizes and verified with a rigid airway phantom model. The mean distortion-model error was found to be 0.14
mm and 0.17 mm for the respective C-arms. The mean overall system reprojection error (which measures the
accuracy of predicting an image using tracker coordinates) was found to be 0.63 mm for the GE OEC 9800.
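A distortion-model fit of the kind evaluated above can be sketched with a single-coefficient radial model estimated by least squares from grid-point correspondences (image-intensifier distortion also has sigmoidal and local components; the model, noise level, and simplified undistortion here are assumptions):

```python
import numpy as np

# Radial distortion p_obs = p * (1 + k * r^2) fitted by linear least squares
# to true/observed calibration-plate grid points; the mean residual after
# undistortion mimics the quoted distortion-model error. The undistortion
# below reuses the known grid radii for simplicity (a real implementation
# would iterate on the observed coordinates).
rng = np.random.default_rng(4)
g = np.linspace(-100.0, 100.0, 9)
gx, gy = np.meshgrid(g, g)
true_pts = np.column_stack([gx.ravel(), gy.ravel()])      # plate grid (mm)
r2 = np.sum(true_pts ** 2, axis=1, keepdims=True)
k_true = 2e-6
observed = true_pts * (1 + k_true * r2) + 0.05 * rng.standard_normal(true_pts.shape)

# closed-form least squares for k: residual (obs - p) ~= k * p * r^2
k_hat = (np.sum((observed - true_pts) * true_pts * r2)
         / np.sum((true_pts * r2) ** 2))
undistorted = observed / (1 + k_hat * r2)
mean_error = np.mean(np.linalg.norm(undistorted - true_pts, axis=1))
```

Because the model is linear in k, a single least-squares solve suffices; richer polynomial models extend the same idea with more basis terms.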
Real-time motion-adjusted augmented fluoroscopy system for navigation during electrophysiology procedures
Show abstract
Electrophysiology (EP) procedures are conducted by cardiac specialists to help diagnose and treat abnormal heart
rhythms. Such procedures are conducted under mono-plane and bi-plane x-ray fluoroscopy guidance to allow the
specialist to target ablation points within the heart. Ablation lesions are usually set by applying radio-frequency energy
to endocardial tissue using catheters placed inside a patient's heart. Recently we have developed a system capable of
overlaying information involving the heart and targeted ablation locations from pre-operative image data for additional
assistance. Although useful, such information offers only approximate guidance due to heart beat and breathing motion.
As a solution to this problem, we propose to make use of a 2D lasso catheter tracking method. We apply it to bi-plane
fluoroscopy images to dynamically update fluoro overlays. The dynamic overlays are computed at 3.5 frames per second
to offer real-time updates matching the heart motion. During the course of our experiments, we found an average 3-D
error of 1.6 mm. We present the workflow and features of the motion-adjusted, augmented fluoroscopy
system and demonstrate the dramatic improvement in the overlay quality provided by this approach.
Navigation for fluoroscopy-guided cryo-balloon ablation procedures of atrial fibrillation
Show abstract
Atrial fibrillation (AFib), the most common arrhythmia, has been identified as a major
cause of stroke. The current standard in interventional treatment of AFib is the pulmonary
vein isolation (PVI). PVI is guided by fluoroscopy or non-fluoroscopic electro-anatomic mapping
systems (EAMS). Either classic point-to-point radio-frequency (RF) catheter ablation or
so-called single-shot devices like cryo-balloons are used to achieve electrical isolation of the
pulmonary veins from the left atrium (LA). Fluoroscopy-based systems render overlay images
from pre-operative 3-D data sets which are then merged with fluoroscopic imaging, thereby
adding detailed 3-D information to conventional fluoroscopy. EAMS provide tracking and
visualization of RF catheters by means of electro-magnetic tracking. Unfortunately, current
navigation systems, fluoroscopy-based or EAMS, do not provide tools to localize and visualize
single-shot devices like cryo-balloon catheters in 3-D. We present prototype software
for fluoroscopy-guided ablation procedures that is capable of superimposing 3-D datasets as
well as reconstructing cryo-balloon catheters in 3-D. The 3-D cryo-balloon reconstruction was
evaluated on 9 clinical data sets, yielding a reprojected 2-D error of 1.72 mm ± 1.02 mm.
Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration
Show abstract
The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions
based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in
providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative
to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine
is hampered by the major requirement of having CT scans of individual patients, which are not available for most ACL
reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray
images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the
proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image
contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function
and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using
a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM)
algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment
using a plastic phantom showed accurate results with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55,
-0.03±0.54, -2.73±1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained
with a high accuracy of 0.53±0.30 mm distance error.
Intensity-based 3D/2D registration for percutaneous intervention of major aorto-pulmonary collateral arteries
Julien Couet,
David Rivest-Henault,
Joaquim Miro,
et al.
Show abstract
Percutaneous cardiac interventions rely mainly on the experience of the cardiologist to safely navigate inside
soft-tissue vessels under X-ray angiography guidance. Additional navigation guidance tools might contribute to
improving the reliability and safety of percutaneous procedures. This study focuses on major aorto-pulmonary collateral
arteries (MAPCAs), which are pediatric structures. We present a fully automatic intensity-based 3D/2D
registration method that accurately maps pre-operatively acquired 3D tomographic vascular data of a newborn
patient over intra-operatively acquired angiograms. The 3D pose of the tomographic dataset is evaluated by comparing
the angiograms with simulated X-ray projections, computed from the pre-operative dataset with a proposed
splatting-based projection technique. The rigid 3D pose is updated via a transformation matrix usually defined
with respect to the C-arm acquisition system reference frame, but it can also be defined with respect to the
projection plane's local reference frame. The optimization of the transformation is driven by two algorithms: first,
hill climbing local search, and second, a proposed variant, dense hill climbing. The latter makes the search space
denser by considering combinations of the registration parameters instead of neighboring solutions only.
Although this study focused on the registration of pediatric structures, the same procedure could be applied for
any cardiovascular structures involving a CT scan and X-ray angiography. Our preliminary results are promising,
showing that an accurate (3D TRE 0.265 ± 0.647 mm) and robust (99% success rate) bi-plane registration of the aorta
and MAPCAs from an initial displacement of up to 20 mm and 20° can be obtained within a reasonable amount of
time (13.7 seconds).
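The difference between plain hill climbing and the dense variant can be sketched on a toy cost function (the 3-parameter quadratic cost stands in for the image-similarity objective; the fixed step size and stopping rule are assumptions):

```python
import numpy as np
from itertools import product

# Plain hill climbing steps one parameter at a time (2n moves per iteration);
# the dense variant also tries combinations of parameter steps (3^n - 1
# moves), making the explored neighbourhood denser at a higher
# per-iteration cost.
target = np.array([4.0, -2.0, 7.0])     # stand-in for the optimal rigid pose
cost = lambda p: float(np.sum((p - target) ** 2))

def hill_climb(p0, step=1.0, dense=False, max_iters=200):
    p = np.array(p0, float)
    if dense:
        moves = [step * np.array(m) for m in product((-1, 0, 1), repeat=p.size)
                 if any(m)]
    else:
        moves = [step * e for e in np.vstack([np.eye(p.size), -np.eye(p.size)])]
    evals = 0
    for _ in range(max_iters):
        best, best_c = None, cost(p)
        for m in moves:
            evals += 1
            c = cost(p + m)
            if c < best_c:
                best, best_c = p + m, c
        if best is None:
            break          # local optimum for this move set
        p = best
    return p, evals

p_plain, evals_plain = hill_climb([0.0, 0.0, 0.0])
p_dense, evals_dense = hill_climb([0.0, 0.0, 0.0], dense=True)
```

On this toy problem both variants converge; the dense variant needs fewer iterations but spends more cost-function evaluations per iteration, which mirrors the trade-off the abstract describes.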
Poster Session: Acquisition Technologies
Single wall closed-form differential ultrasound calibration
Show abstract
In freehand 3D ultrasound, images are acquired while the position of the transducer is recorded with a tracking device.
Calibration is essential in this technique to find the transformation from the image coordinates to the reference
coordinate system. The single wall technique is a common calibration method because a simple plane phantom is used.
Despite its advantages, such as ease of phantom construction and image analysis, this method requires a large number of
images to converge to the solution. One reason is the lack of a closed-form solution. The technique also uses slightly
ill-conditioned sets of equations with a high condition number, owing to the limited range of scanning motions that
produce clear images of the plane. Here, a novel closed-form formulation is proposed for the single wall calibration technique.
Also, differential measurements of the plane image are used instead of absolute plane detection to improve accuracy. The
closed-form solution leads to more accurate and robust results while providing an insight into understanding error
propagation and finding the optimal set of transducer poses. Results have been compared to the conventional single wall
technique. A residual error of 0.14 mm is achieved for the proposed method compared to 0.91 mm in the conventional
approach.
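The closed-form idea, solving an over-determined linear system in one step via the normal equations rather than iterating, can be sketched for a toy two-unknown case. This is illustrative only; the actual calibration unknowns are the transform parameters, not a line fit:

```python
def closed_form_lsq(A, b):
    """Solve an over-determined 2-unknown system A x = b in closed form
    via the normal equations (A^T A) x = A^T b."""
    s00 = sum(r[0] * r[0] for r in A)
    s01 = sum(r[0] * r[1] for r in A)
    s11 = sum(r[1] * r[1] for r in A)
    t0 = sum(r[0] * y for r, y in zip(A, b))
    t1 = sum(r[1] * y for r, y in zip(A, b))
    det = s00 * s11 - s01 * s01  # near-zero det signals an ill-conditioned pose set
    return ((s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det)

# Noise-free observations of y = 2*u + 3 are recovered exactly.
A = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
b = [3.0, 5.0, 7.0, 9.0]
print(closed_form_lsq(A, b))  # (2.0, 3.0)
```

The condition-number issue mentioned in the abstract shows up here as `det` approaching zero when the equations (transducer poses) are too similar.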
Characterization of tissue-simulating phantom materials for ultrasound-guided needle procedures
Susan Buchanan,
John Moore,
Deanna Lammers,
et al.
Show abstract
Needle biopsies are standard protocols that are commonly performed under ultrasound (US) or computed
tomography (CT) guidance.1 Vascular access procedures such as central line insertions, and many spinal needle therapies, also rely on US
guidance. Phantoms for these procedures are crucial as both training tools for clinicians and research tools for developing
new guidance systems. Realistic imaging properties and material longevity are critical qualities for needle guidance
phantoms. However, current commercially available phantoms for use with US guidance have many limitations, the most
detrimental of which include harsh needle tracks obfuscating US images and a membrane comparable to human skin that
does not allow seepage of inner media. To overcome these difficulties, we tested a variety of readily available media and
membranes to evaluate optimal materials to fit our current needs. It was concluded that liquid hand soap was the best
medium, as it instantly left no needle tracks, had an acceptable depth of US penetration and portrayed realistic imaging
conditions, while 10 gauge vinyl was the optimal membrane because of its low leakage, low cost, acceptable durability
and transparency.
Localization of liver tumors in freehand 3D laparoscopic ultrasound
O. Shahin,
V. Martens,
A. Besirevic,
et al.
Show abstract
The aim of minimally invasive laparoscopic liver interventions is to completely resect or ablate tumors while
minimizing the trauma caused by the operation. However, restrictions such as limited field of view and reduced
depth perception can hinder the surgeon's capabilities to precisely localize the tumor. Typically, preoperative
data is acquired to find the tumor(s) and plan the surgery. Nevertheless, determining the precise position of
the tumor is required, not only before but also during the operation. The standard use of ultrasound in hepatic
surgery is to explore the liver and identify tumors. Meanwhile, the surgeon mentally builds a 3D context to
localize tumors. This work aims to upgrade the use of ultrasound in laparoscopic liver surgery. We propose an
approach to segment and localize tumors intra-operatively in 3D ultrasound. We reconstruct a 3D laparoscopic
ultrasound volume containing a tumor. The 3D image is then preprocessed and semi-automatically segmented
using a level set algorithm. During the surgery, for each subsequent reconstructed volume, a fast update of the
tumor position is accomplished via registration, using the previously segmented and localized tumor as prior
knowledge. The approach was tested on a liver phantom with artificial tumors. The tumors were localized in
approximately two seconds with a mean error of less than 0.5 mm. The strengths of this technique are that it
can be performed intra-operatively, it helps the surgeon to accurately determine the location, shape and volume
of the tumor, and it is repeatable throughout the operation.
Real-time registration of video with ultrasound using stereo disparity
Jihang Wang,
Samantha Horvath,
George Stetten,
et al.
Show abstract
Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical
imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is
essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in
their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the
past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator,
the Sonic Flashlight, which uses a half silvered mirror and miniature display mounted on an ultrasound probe to produce
a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion
approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this
concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating
the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful
operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct
relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated
computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does
today.
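The stereo-disparity step described above rests on the standard rectified-stereo relation z = f·B/d. A minimal sketch with hypothetical camera parameters (not the paper's calibration values):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """For a rectified stereo pair, depth z = f * B / d, where f is the
    focal length in pixels, B the camera baseline, d the disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 700 px focal length, 20 mm baseline between cameras.
# A phantom-surface point with 35 px disparity lies 400 mm away.
z = depth_from_disparity(35.0, 700.0, 20.0)
print(z)  # 400.0
```

Applying this per matched pixel pair yields the surface point cloud onto which the tracked ultrasound slice is rendered.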
Bent rigid endoscopes: a challenge for accurate distortion correction and 3D reconstruction
Show abstract
No investigation published so far describes distortion correction for bent rigid endoscopes. This work provides
a definition of endoscope bending states and a proof that inhomogeneous (section-varying) radial distortion correction
achieves better results than linear distortion correction. Precautions, or advanced distortion correction
techniques, should be applied when using bent or deflected endoscopes in applications of computer assisted diagnosis and
therapy.
Validation of an algorithm for planar surgical resection reconstruction
Federico E. Milano,
Lucas E. Ritacco,
Germán L. Farfalli,
et al.
Show abstract
Surgical planning followed by computer-assisted intraoperative navigation in orthopaedic oncology for tumor
resection has given acceptable results in the last few years. However, the accuracy of preoperative planning and
navigation is not yet clear. The aim of this study is to validate a method capable of reconstructing the nearly
planar surface generated by the cutting saw in the surgical specimen taken off the patient during the resection
procedure. This method estimates an angular and offset deviation that serves as a clinically useful resection
accuracy measure. The validation process targets the degree to which the automatic estimation is true, taking
as a validation criterion the accuracy of the estimation algorithm. For this purpose a manually estimated gold
standard (a bronze standard) data set is built by an expert surgeon. The results show that the manual and the
automatic methods consistently provide similar measures.
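The angular and offset deviation measure described above can be illustrated for two planes in Hesse normal form. The function and values are an illustration of the measure, not the validated reconstruction algorithm itself:

```python
import math

def plane_deviation(n_planned, d_planned, n_actual, d_actual):
    """Angular and offset deviation between two planes given in Hesse
    normal form n.x = d with unit normals n (e.g. planned vs. achieved cut)."""
    dot = sum(a * b for a, b in zip(n_planned, n_actual))
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, abs(dot)))))
    offset = abs(d_actual - d_planned)  # meaningful when planes are near-parallel
    return angle_deg, offset

# Planned cut: the plane z = 10 mm; achieved cut tilted 5 deg and shifted 2 mm.
tilt = math.radians(5.0)
n_act = (math.sin(tilt), 0.0, math.cos(tilt))
angle, offset = plane_deviation((0.0, 0.0, 1.0), 10.0, n_act, 12.0)
print(round(angle, 3), offset)  # 5.0 2.0
```

In the paper's setting the "actual" plane would first be fitted to the nearly planar saw surface extracted from the resected specimen.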
Two new ad-hoc models of detection physics and their evaluation for navigated beta probe surface imaging
Show abstract
Intra-operative surface imaging with navigated beta probes in conjunction with positron-emitting radiotracers
like 18F-FDG has been shown to enable control of tumor resection borders. We showed previously that
employing iterative reconstruction (MLEM) in conjunction with an ad-hoc model of the detection physics
(based on solid-angle geometry, SA) improves the image quality. In this study, we sampled the beta probe
readings of a point source using a precision step-motor to generate a look-up-table (LUT) model. We also
generated a simplified geometrical model (SG) based on this data set. To see how these two models influence
the image quality compared to the old SA model, we reconstructed images from sparsely sampled datasets of
a phantom with three hotspots using each model. The images yielded 76% (SA), 81% (SG), and 81% (LUT)
mean NCC compared to the ground truth. The SG and LUT models, however, could resolve the hotspots
better in the datasets where the detector-to-phantom distance was larger. Additionally, we compared the
deviations of the SA and SG analytical models to the measured LUT model, where we found that the SG
model gives estimates substantially closer to the actual beta probe readings than the previous SA model.
Freehand SPECT reconstructions using look up tables
Show abstract
Nuclear imaging is a commonly used tool in today's diagnostics and therapy planning. For interventional
use however it suffers from drawbacks which limit its application. Freehand SPECT was developed to overcome
these limitations and to provide 3D functional imaging during an intervention. It combines a nuclear
probe with an optical tracking system to obtain its position and orientation in space synchronized with its
reading. This information can be used to compute a 3D tomographic reconstruction of an activity distribution.
However, as there is no fixed geometry the system matrix has to be computed on-the-fly, using ad-hoc
models of the detection process. One solution for such a model is a reference look up table of previously
acquired measurements of a single source at different angles and distances. In this work two look up tables
with a one and four millimeter step size between the entries were acquired. Twelve datasets of a phantom
with two hollow spheres filled with a solution of Tc-99m were acquired with the Freehand SPECT system.
Reconstructions with the look up tables and two analytical models currently in use were performed with these
datasets and compared with each other. The finely sampled look up table achieved the qualitatively best
reconstructions, while one of the analytical models showed the best positional accuracy.
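The look-up-table model described here amounts to interpolating measured probe responses between sampled distance/angle entries. A minimal sketch with a hypothetical 2x2 table (real tables would use the 1 mm or 4 mm grids mentioned above):

```python
from bisect import bisect_right

def lut_response(lut, dists, angs, d, a):
    """Bilinearly interpolate a (distance, angle) look-up table of detector
    sensitivities; lut[i][j] is the measured response at dists[i], angs[j]."""
    def bracket(grid, v):
        i = min(max(bisect_right(grid, v) - 1, 0), len(grid) - 2)
        t = (v - grid[i]) / (grid[i + 1] - grid[i])
        return i, min(max(t, 0.0), 1.0)
    i, t = bracket(dists, d)
    j, u = bracket(angs, a)
    top = lut[i][j] * (1 - u) + lut[i][j + 1] * u
    bot = lut[i + 1][j] * (1 - u) + lut[i + 1][j + 1] * u
    return top * (1 - t) + bot * t

# Hypothetical table: sensitivity falls with distance (rows) and angle (cols).
lut = [[1.0, 0.5],
       [0.4, 0.2]]
print(lut_response(lut, [10.0, 20.0], [0.0, 30.0], 15.0, 15.0))
```

Each system-matrix entry for the on-the-fly reconstruction is then a lookup of this kind for the probe pose relative to a voxel.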
Poster Session: Technology Evaluation
Lightweight distributed computing for intraoperative real-time image guidance
Stefan Suwelack,
Darko Katic,
Simon Wagner,
et al.
Show abstract
In order to provide real-time intraoperative guidance, computer assisted surgery (CAS) systems often rely on
computationally expensive algorithms. The real-time constraint is especially challenging if several components such as
intraoperative image processing, soft tissue registration or context aware visualization are combined in a single system.
In this paper, we present a lightweight approach to distribute the workload over several workstations based on the
OpenIGTLink protocol. We use XML-based message passing for remote procedure calls and native types for transferring
data such as images, meshes or point coordinates. Two different, but typical scenarios are considered in order to evaluate
the performance of the new system. First, we analyze a real-time soft tissue registration algorithm based on a finite
element (FE) model. Here, we use the proposed approach to distribute the computational workload between a primary
workstation that handles sensor data processing and visualization and a dedicated workstation that runs the real-time FE
algorithm. We show that the additional overhead that is introduced by the technique is small compared to the total
execution time. Furthermore, the approach is used to speed up a context aware augmented reality based navigation
system for dental implant surgery. In this scenario, the additional delay for running the computationally expensive
reasoning server on a separate workstation is less than a millisecond. The results show that the presented approach is a
promising strategy to speed up real-time CAS systems.
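The XML-based remote procedure calls described above can be sketched as follows. The message schema and method names here are our own illustration, not the OpenIGTLink wire format or the authors' actual messages:

```python
import xml.etree.ElementTree as ET

def encode_rpc(method, **params):
    """Pack a remote procedure call as an XML string to be carried in a
    message payload (illustrative schema only)."""
    root = ET.Element("rpc", method=method)
    for name, value in params.items():
        ET.SubElement(root, "param", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def decode_rpc(message):
    """Recover the method name and string-valued parameters."""
    root = ET.fromstring(message)
    params = {p.get("name"): p.text for p in root.findall("param")}
    return root.get("method"), params

# Hypothetical call from the visualization workstation to the FE workstation.
msg = encode_rpc("registerSoftTissue", meshId="liver01", maxIterations=50)
print(decode_rpc(msg))
```

Bulk data such as images or meshes would, as the abstract notes, travel as native OpenIGTLink types rather than inside the XML payload.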
Simplified development of image-guided therapy software with MITK-IGT
Show abstract
Due to rapid developments in the research areas of medical imaging, medical image processing and robotics,
computer assistance is no longer restricted to diagnostics and surgical planning but has been expanded to surgical
and radiological interventions. From a software engineering point of view, the systems for image-guided therapy
(IGT) are highly complex. To address this issue, we presented an open source extension to the well-known
Medical Imaging Interaction Toolkit (MITK) for developing IGT systems, called MITK-IGT. The contribution
of this paper is two-fold: Firstly, we extended MITK-IGT such that it (1) facilitates the handling of navigation
tools, (2) provides reusable graphical user interface (UI) components, and (3) features standardized exception
handling. Secondly, we developed a software prototype for computer-assisted needle insertions, using the new
features, and tested it with a new Tabletop field generator (FG) for the electromagnetic tracking system NDI
Aurora ®. To our knowledge, we are the first to have integrated this new FG into a complete navigation system
and to have conducted tests under clinical conditions. In conclusion, we enabled simplified development of image-guided
therapy software and demonstrated the usability of applications developed with MITK-IGT in the
clinical workflow.
Simulation, design, and analysis for magnetic anchoring and guidance of instruments for minimally invasive surgery
Show abstract
The exploration of natural orifice transluminal endoscopic surgery (NOTES) has brought considerable interest in
magnetic anchoring of intracorporeal tools. Magnetic anchoring and guidance system (MAGS) is the concept of
anchoring miniature in-vivo tools and devices to the parietal peritoneum by coupling them with an external magnetic holder
module placed on the skin surface. MAGS has been shown to be effective in anchoring passive tools such as in-vivo
cameras or tissue retractors. The strength of the magnetic field and the magnet configurations employed depend on the size,
shape and weight of the in-vivo tools, the coupling distance between internal and external modules, and physiological
concerns such as tool interaction and tissue ischemia. This paper presents our effort to develop a better understanding of
the coupling dynamics between a small in-vivo robot designed for tissue manipulation and an external MAGS handle
used to position the in-vivo robot. Electromagnetic simulation software (Vizimag 3.19) was used to simulate
coupling forces for a two-magnet configuration of the MAGS handle. A prototype model of the in-vivo robot and a
two-magnet configuration of a MAGS handle were fabricated. Based on this study, we were able to identify an optimal
design solution for a MAGS module given the mechanical constraints of the internal module design.
A robust motion estimation system for minimal invasive laparoscopy
Show abstract
Laparoscopy is a reliable imaging method to examine the liver. However, due to the limited field of view,
a lot of experience is required from the surgeon to interpret the observed anatomy. Reconstruction of organ
surfaces provides valuable additional information to the surgeon for a reliable diagnosis. Without an additional
external tracking system the structure can be recovered from feature correspondences between different frames.
In laparoscopic images blurred frames, specular reflections and inhomogeneous illumination make feature tracking
a challenging task. We propose an ego-motion estimation system for minimally invasive laparoscopy that can cope
with specular reflection, inhomogeneous illumination and blurred frames.
To obtain robust feature correspondence, the approach combines SIFT and specular reflection segmentation with
a multi-frame tracking scheme. The calibrated five-point algorithm is used with the MSAC robust estimator to
compute the motion of the endoscope from multi-frame correspondence.
The algorithm is evaluated using endoscopic videos of a phantom. The small incisions and the rigid endoscope
limit the motion in minimally invasive laparoscopy. These limitations are considered in our evaluation and are
used to analyze the accuracy of pose estimation that can be achieved by our approach. The endoscope is moved
by a robotic system and the ground truth motion is recorded.
The evaluation on typical endoscopic motion gives precise results and demonstrates the practicability of the
proposed pose estimation system.
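The MSAC robust estimator mentioned above differs from plain RANSAC in how a hypothesis is scored: inliers contribute their squared residual rather than a flat zero. A minimal sketch of that scoring (model names and error values are hypothetical):

```python
def msac_score(errors, threshold):
    """MSAC cost of one motion hypothesis: inliers contribute their squared
    residual, outliers a constant penalty threshold**2 (RANSAC would count
    outliers only)."""
    t2 = threshold ** 2
    return sum(min(e ** 2, t2) for e in errors)

def best_model(models, point_errors, threshold=1.0):
    """Pick the hypothesis with the lowest MSAC cost."""
    return min(models, key=lambda m: msac_score(point_errors[m], threshold))

# Two hypothetical five-point hypotheses: "A" fits most correspondences
# tightly with one outlier, "B" fits everything loosely.
errors = {"A": [0.1, 0.2, 5.0], "B": [0.9, 0.9, 0.9]}
print(best_model(["A", "B"], errors))  # A
```

In the paper's pipeline, each hypothesis is an essential matrix from the calibrated five-point algorithm and the residuals are reprojection errors of the multi-frame feature correspondences.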
Poster Session: Prostate
Imaging of prostate cancer: a platform for 3D co-registration of in-vivo MRI, ex-vivo MRI and pathology
Clément Orczyk,
Artem Mikheev,
Andrew Rosenkrantz,
et al.
Show abstract
Objectives: Multi-parametric MRI is emerging as a promising method for prostate cancer diagnosis, prognosis and
treatment planning. However, the localization of in-vivo detected lesions and pathologic sites of cancer remains a
significant challenge. To overcome this limitation we have developed and tested a system for co-registration of in-vivo
MRI, ex-vivo MRI and histology.
Materials and Methods: Three men diagnosed with localized prostate cancer (ages 54-72, PSA levels 5.1-7.7 ng/ml)
were prospectively enrolled in this study. All patients underwent 3T multi-parametric MRI that included T2W, DCE-MRI,
and DWI prior to robotic-assisted prostatectomy. Ex-vivo multi-parametric MRI was performed on the fresh prostate
specimens. Excised prostates were then sliced at regular intervals and photographed both before and after fixation. Slices
were perpendicular to the main axis of the posterior capsule, i.e., along the direction of the rectal wall. Guided by the
location of the urethra, 2D digital images were assembled into 3D models. Cancer foci, extra-capsular extensions and
zonal margins were delineated by the pathologist and included in the 3D histology data. Locally developed software was
applied to register the in-vivo, ex-vivo and histology data using an over-determined set of anatomical landmarks placed in
the anterior fibro-muscular stroma and the central, transition and peripheral zones. The mean root square distance across
corresponding control points was used to assess co-registration error.
Results: Two specimens were pT3a and one pT2b (negative margin) at pathology. The software successfully fused
in-vivo MRI, ex-vivo MRI of the fresh specimen and histology using appropriate (rigid and affine) transformation models
with a mean square error of 1.59 mm. Co-registration accuracy was confirmed by multi-modality viewing using operator-guided
variable transparency.
Conclusion: The method enables successful co-registration of pre-operative MRI, ex-vivo MRI and pathology and it
provides initial evidence of feasibility of MRI-guided surgical planning.
Intra-operative Prostate Motion Tracking Using Surface Markers for Robot-Assisted Laparoscopic Radical Prostatectomy: A Phantom Study
Mehdi Esteghamatian,
Kripasindhu Sarkar,
Stephen E. Pautler,
et al.
Show abstract
Radical prostatectomy surgery (RP) is the gold standard for treatment of localized prostate cancer (PCa).
Recently, emergence of minimally invasive techniques such as Laparoscopic Radical Prostatectomy (LRP) and
Robot-Assisted Laparoscopic Radical Prostatectomy (RARP) has improved the outcomes for prostatectomy.
However, it remains difficult for the surgeons to make informed decisions regarding resection margins and nerve
sparing since the location of the tumor within the organ is not usually visible in a laparoscopic view. While
MRI enables visualization of the salient structures and cancer foci, its efficacy in LRP is reduced unless it is
fused into a stereoscopic view such that homologous structures overlap. Registration of the MRI image and
peri-operative ultrasound image using a tracked probe can potentially be exploited to bring the pre-operative
information into alignment with the patient coordinate system during the procedure. While doing so, prostate
motion needs to be compensated in real-time to synchronize the stereoscopic view with the pre-operative MRI
during the prostatectomy procedure. In this study, a point-based stereoscopic tracking technique is investigated
to compensate for rigid prostate motion so that the same motion can be applied to the pre-operative images.
This method benefits from stereoscopic tracking of the surface markers implanted over the surface of the prostate
phantom. The average target registration error using this approach was 3.25±1.43mm.
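The target registration error (TRE) reported above is simply the Euclidean distance between each true target and its position after the tracked motion is applied, summarized as mean ± standard deviation. A minimal sketch with made-up coordinates:

```python
import math

def target_registration_errors(targets, mapped):
    """Per-target Euclidean distance between true positions and the
    positions predicted by the tracked/registered transform."""
    return [math.dist(t, m) for t, m in zip(targets, mapped)]

# Hypothetical markers (mm): two targets, each off by 5 mm.
true_pts = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
mapped   = [(3.0, 4.0, 0.0), (10.0, 0.0, 5.0)]
errs = target_registration_errors(true_pts, mapped)
mean = sum(errs) / len(errs)
print(errs, mean)  # [5.0, 5.0] 5.0
```

The same computation, over the implanted surface markers and repeated phantom motions, yields the 3.25 ± 1.43 mm figure quoted in the abstract.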
3D prostate segmentation of ultrasound images combining longitudinal image registration and machine learning
Show abstract
We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based
on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we
register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are
used to extract texture features from each registered image. Patient-specific Gabor features from the registered images
are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The
segmentation method was tested on TRUS data from five patients. The average surface distance between automatic and manual
segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image
registration is feasible for segmenting the prostate in TRUS images.
Poster Session: Cardiac and Vascular
Calibration and evaluation of a magnetically tracked ICE probe for guidance of left atrial ablation therapy
Show abstract
The novel prototype system for advanced visualization for image-guided left atrial ablation therapy developed
in our laboratory permits ready integration of multiple imaging modalities, surgical instrument tracking, interventional
devices and electro-physiologic data. This technology allows subject-specific procedure planning and
guidance using dynamic 3D patient-specific models of the heart, augmented with real-time intracardiac
echocardiography (ICE). In order for the 2D ICE images to provide intuitive visualization for accurate
catheter to surgical target navigation, the transducer must be tracked, so that the acquired images can be appropriately
presented with respect to the patient-specific anatomy. Here we present the implementation of a
previously developed ultrasound calibration technique for a magnetically tracked ICE transducer, along with a
series of evaluation methods to ensure accurate imaging and faithful representation of the imaged structures.
Using an engineering-designed phantom, target localization accuracy is assessed by comparing known target
locations with their transformed locations inferred from the tracked US images. In addition, the 3D volume
reconstruction accuracy is also estimated by comparing a truth volume to that reconstructed from sequential 2D
US images. Validation studies emulating the clinical setting are conducted using a patient-specific left atrial phantom.
Target localization error of clinically-relevant surgical targets represented by nylon fiducials implanted within the
endocardial wall of the phantom was assessed. Our studies have demonstrated 2.4 ± 0.8 mm target localization
error in the engineering-designed evaluation phantoms, 94.8 ± 4.6 % volume reconstruction accuracy, and 3.1 ±
1.2 mm target localization error in the left atrial-mimicking phantom. These results are consistent with those
disseminated in the literature and also with the accuracy constraints imposed by the employed technology and
the clinical application.
Evaluation of mitral valve replacement anchoring in a phantom
A. Jonathan McLeod,
John Moore,
Pencilla Lang,
et al.
Show abstract
Conventional mitral valve replacement requires a median sternotomy and cardio-pulmonary bypass with aortic cross-clamping,
and is associated with significant mortality and morbidity that could be reduced by performing the procedure
off-pump. Replacing the mitral valve in the closed, off-pump, beating heart requires extensive development and
validation of surgical and imaging techniques. Image guidance systems and surgical access for off-pump mitral valve
replacement have been previously developed, allowing the prosthetic valve to be safely introduced into the left atrium
and inserted into the mitral annulus. The major remaining challenge is to design a method of securely anchoring the
prosthetic valve inside the beating heart. The development of anchoring techniques has been hampered by the expense
and difficulty in conducting large animal studies. In this paper, we demonstrate how prosthetic valve anchoring may be
evaluated in a dynamic phantom. The phantom provides a consistent testing environment where pressure measurements
and Doppler ultrasound can be used to monitor and assess the valve anchoring procedures, detecting paravalvular leak
when valve anchoring is inadequate. Minimally invasive anchoring techniques may be directly compared to the current
gold standard of valves sutured under direct vision, providing a useful tool for the validation of new surgical
instruments.
Cryo-balloon catheter position planning using AFiT
Show abstract
Atrial fibrillation (AFib) is the most common heart arrhythmia. In certain situations,
it can result in life-threatening complications such as stroke and heart failure. For paroxysmal
AFib, pulmonary vein isolation (PVI) by catheter ablation is the recommended choice of
treatment if drug therapy fails. During minimally invasive procedures, electrically active tissue
around the pulmonary veins is destroyed by either applying heat or cryothermal energy to the
tissue. The procedure is usually performed in electrophysiology labs under fluoroscopic guidance.
Besides radio-frequency catheter ablation devices, so-called single-shot devices, e.g., the
cryothermal balloon catheters, are receiving more and more interest in the electrophysiology
(EP) community. Single-shot devices may be advantageous for certain cases, since they can
simplify the creation of contiguous (gapless) lesion sets around the pulmonary vein which is
needed to achieve PVI. In many cases, a 3-D (CT, MRI, or C-arm CT) image of a patient's left
atrium is available. This data can then be used for planning purposes and for supporting catheter
navigation during the procedure. Cryo-thermal balloon catheters are commercially available in
two different sizes. We propose the Atrial Fibrillation Planning Tool (AFiT), which visualizes
the segmented left atrium as well as multiple cryo-balloon catheters within a virtual reality, to
find out how well cryo-balloons fit to the anatomy of a patient's left atrium. First evaluations
have shown that AFiT helps physicians in two ways. First, they can better assess whether cryo-balloon
ablation or RF ablation is the treatment of choice at all. Second, they can select the
proper-size cryo-balloon catheter with more confidence.
Simulation based patient-specific optimal catheter selection for right coronary angiography
Show abstract
Selecting the best catheter prior to coronary angiography significantly reduces the exposure time to radiation
as well as the risk of artery punctures and internal bleeding. In this paper we describe a simulation-based
technique for selecting an optimal catheter for right coronary angiography using the Simulation Open Framework
Architecture (SOFA). We simulate different catheters in a patient-specific artery model, obtain the final placement
of each catheter, and suggest the optimal one. The patient-specific artery model is computed
from patient image data acquired prior to the intervention, and the catheters are modeled using the Finite Element
Method (FEM).
Automatic contour and centerline extractions of single and bifurcated vessels in coronary angiogram
Show abstract
We propose automatic contour and centerline extraction methods for single and bifurcated vessels in coronary
angiograms. Our method consists of four steps. First, to enhance vascular structures, anisotropic diffusion filtering
and Hessian-based multi-scale filtering are performed. Second, an initial vessel region is segmented by region growing
with an adaptively defined threshold value. Third, the initial vessel region is thinned and pruned to extract an initial
vessel centerline. For bifurcated vessel analysis, a bifurcation polygon is defined at the bifurcated lesion and a
bifurcation point is detected by matching bifurcation patterns from the vessel centerline. Finally, contrast is stretched
in the ROI, excluding the lower 10 and upper 30 percent density ranges. Vessel contour points are then extracted by a
Canny edge detector and are selected as the contour points crossing perpendicular to the vessel centerline. In the inner contour with high
curvature and weak contrast of the bifurcation polygon, the vessel contour points are selected as the contour
points crossing radially to the vessel centerline. Experimental results show that our method provides accurate results in narrow
sections such as occluded or stenosed vessels and ignores overlapped regions such as the diaphragm and crossing vessels.
For bifurcated areas, the bifurcation point and vessels are well extracted under low contrast and non-uniform illumination.
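The percentile-clipped contrast stretching named in the final step can be sketched as follows (the clip fractions come from the abstract; the pixel values are made up):

```python
def stretch_contrast(pixels, lo_frac=0.10, hi_frac=0.30):
    """Linearly stretch intensities to 0..255, clipping the lowest lo_frac
    and highest hi_frac of the sorted density range, as in the abstract's
    'except lower 10 and upper 30 percent' rule."""
    s = sorted(pixels)
    lo = s[int(lo_frac * (len(s) - 1))]
    hi = s[int((1.0 - hi_frac) * (len(s) - 1))]
    span = max(hi - lo, 1e-9)
    return [round(255 * min(max((p - lo) / span, 0.0), 1.0)) for p in pixels]

# Toy ROI ramp: values at or below the 10th percentile map to 0,
# values at or above the 70th percentile saturate at 255.
print(stretch_contrast([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]))
```

Clipping the extremes keeps the stretch from being dominated by the darkest background and the brightest contrast-filled pixels before edge detection.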
Robust tracking of a virtual electrode on a coronary sinus catheter for atrial fibrillation ablation procedures
Show abstract
Catheter tracking in X-ray fluoroscopic images has become more important in interventional applications
for atrial fibrillation (AF) ablation procedures. It provides real-time guidance for the physicians and can
be used as reference for motion compensation applications. In this paper, we propose a novel approach to
track a virtual electrode (VE), which is a non-existing electrode on the coronary sinus (CS) catheter at a
more proximal location than any real electrodes. Successful tracking of the VE can provide more accurate
motion information than tracking of real electrodes. To achieve VE tracking, we first model the CS catheter
as a set of electrodes which are detected by our previously published learning-based approach.1 The tracked
electrodes are then used to generate the hypotheses for tracking the VE. Model-based hypotheses are fused
and evaluated by a Bayesian framework. Evaluation has been conducted on a database of clinical AF
ablation data including challenging scenarios such as low signal-to-noise ratio (SNR), occlusion and non-rigid
deformation. Our approach obtains a median error of 0.54 mm, and 90% of the evaluated data have errors
of less than 1.67 mm. The speed of our tracking algorithm reaches 6 frames per second on most data. Our
study on motion compensation shows that using the VE as reference provides a good point to detect
non-physiological catheter motion during the AF ablation procedures.2
Real-time circumferential mapping catheter tracking for motion compensation in atrial fibrillation ablation procedures
Show abstract
Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency
catheter ablation has become an increasingly important treatment option, especially
when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy.
It renders overlay images from pre-operative 3-D data sets which are then fused with
X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately,
these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various
methods to deal with motion have been proposed. To meet clinical demands, they have to be
fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered
suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of
2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual
motion compensated image can be displayed is about 300 ms. More recent algorithms can
achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach
involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed
up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard
workstation for medical applications. Our method uses a constrained 2-D/3-D registration to
perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
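The distance-transform idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: given a binary catheter segmentation, the distance transform gives a cost surface that is cheap to evaluate for any candidate pose of the projected 3-D catheter model (the function name `registration_cost` is ours).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def registration_cost(seg_mask, projected_pts):
    """Mean distance-transform value at the projected catheter model points.

    seg_mask: 2-D boolean array, True where the catheter was segmented.
    projected_pts: (N, 2) array of (row, col) image coordinates of the
    projected 3-D catheter model under a candidate pose.
    """
    # Distance from every pixel to the nearest segmented catheter pixel.
    dist = distance_transform_edt(~seg_mask)
    rows = np.clip(np.round(projected_pts[:, 0]).astype(int), 0, seg_mask.shape[0] - 1)
    cols = np.clip(np.round(projected_pts[:, 1]).astype(int), 0, seg_mask.shape[1] - 1)
    return float(dist[rows, cols].mean())
```

Because the distance map is precomputed once per frame, each pose evaluation reduces to a lookup, which is what makes a 25 fps registration loop plausible.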
Enhanced segmentation and skeletonization for endovascular surgical planning
Irene Cheng,
Amirhossein Firouzmanesh,
Arnaud Leleve,
et al.
Endovascular surgery is becoming widely deployed for many critical procedures, replacing invasive medical operations
with long recovery times. However, there are still many challenges in improving the efficiency and safety of its usage,
and reducing surgery time; namely, regular exposure to radiation, manual navigation of surgical tools, lack of 3D
visualization, and lack of intelligent planning and automatic tracking of a surgical end-effector. Thus, our goal is to
develop hardware and software components of a tele-operation system to alleviate the abovementioned problems. There
are three specific objectives in this project: (i) to reduce the need for a surgeon to be physically next to a patient during
endovascular surgery; (ii) to overcome the difficulties encountered in manual navigation; and, (iii) to improve the speed
and experience of performing such surgeries. To achieve (i) we will develop an electro-mechanical interface to
accurately guide mechanically controlled surgical tools from a close distance, along with a 3D visualization interface; for
(ii) we will replace the current surgical tools with an "intelligent wire" controlled by the electro-mechanical system; for
(iii) we will segment 3D medical images to extract precise shapes of blood vessels, following which we will perform
automatic path planning for a surgical end-effector.
Feature identification for image-guided transcatheter aortic valve implantation
Pencilla Lang,
Martin Rajchl,
A. Jonathan McLeod,
et al.
Transcatheter aortic valve implantation (TAVI) is a less invasive alternative to open-heart surgery, and is critically
dependent on imaging for accurate placement of the new valve. Augmented image-guidance for TAVI can be
provided by registering together intra-operative transesophageal echo (TEE) ultrasound and a model derived
from pre-operative CT. Automatic contour delineation on TEE images of the aortic root is required for real-time
registration. This study develops an algorithm to automatically extract contours on simultaneous cross-plane
short-axis and long-axis (XPlane) TEE views, and register these features to a 3D pre-operative model. A
continuous max-flow approach is used to segment the aortic root, followed by analysis of curvature to select
appropriate contours for use in registration. Results demonstrate a mean contour boundary distance error of
1.3 and 2.8mm for the short and long-axis views respectively, and a mean target registration error of 5.9mm.
Real-time image guidance has the potential to increase accuracy and reduce complications in TAVI.
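Registering extracted contour features to a pre-operative model, as described above, is commonly built on the classic least-squares rigid alignment (Kabsch/Procrustes) of corresponding points. The sketch below shows that building block only; the paper's actual registration pipeline may differ:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Classic Kabsch solution via SVD; src and dst are (N, 3) arrays of
    corresponding points (e.g. contour features and model landmarks).
    """
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    t = dst_c - R @ src_c
    return R, t
```

The target registration error reported in the abstract is then the residual distance at held-out landmarks after applying (R, t).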
Towards image-guided atrial septal defect repair: an ex vivo analysis
The use of medical images in the operating room for navigation and planning is well established in many clinical
disciplines. In cardiology, the use of fluoroscopy for the placement of catheters within the heart has become
the standard of care. While fluoroscopy provides a live video sequence with the current location, it poses risks
to the patient and clinician through exposure to radiation. Radiation dose is cumulative, and thus children are at
even greater risk from exposure. To reduce the use of radiation and improve surgical technique, we have begun
development of an image-guided navigation system, which can deliver therapeutic devices via catheter. In this
work we have demonstrated the intrinsic properties of our imaging system, which have led to the development
of a phantom emulating a child's heart with an ASD. Further investigation into the use of this information, in a
series of mock clinical experiments, will be performed to design procedures for inserting devices into the heart
while minimizing fluoroscopy use.
Poster Session: Neuro and Head
Optimizing the delivery of deep brain stimulation using electrophysiological atlases and an inverse modeling approach
The use of deep brain stimulation (DBS) for the treatment of neurological movement degenerative disorders requires the
precise placement of the stimulating electrode and the determination of optimal stimulation parameters that maximize
symptom relief (e.g. tremor, rigidity, movement difficulties, etc.) while minimizing undesired physiological side-effects.
This study demonstrates the feasibility of determining the ideal electrode placement and stimulation current amplitude
by performing a patient-specific multivariate optimization using electrophysiological atlases and a bioelectric finite
element model of the brain. Using one clinical case as a preliminary test, the optimization routine is able to find the most
efficacious electrode location while avoiding the high side-effect regions. Future work involves clinical validation
of the optimization and improvements to the accuracy of the model.
Visualizing the path of blood flow in static vessel images for image guided surgery of cerebral arteriovenous malformations
Cerebral arteriovenous malformations (AVMs) are a type of vascular anomaly consisting of a large intertwined
vascular growth (the nidus) that is prone to serious hemorrhaging and can result in patient death if left
untreated. Intervention through surgical clipping of feeding and draining vessels to the nidus is a common
treatment. However, identification of which vessels to clip is challenging even to experienced surgeons aided by
conventional image guidance systems. In this work, we describe our methods for processing static preoperative
angiographic images in order to effectively visualize the feeding and draining vessels of an AVM nidus. Maps from
level-set front propagation processing of the vessel images are used to label the vessels by colour. Furthermore,
images are decluttered using the topological distances between vessels. In order to aid the surgeon in the
vessel clipping decision-making process during surgery, the results are displayed to the surgeon using augmented
virtuality.
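The front-propagation labeling described above can be illustrated with a simple multi-source breadth-first front on a vessel mask; this is a stand-in for the paper's level-set propagation, with illustrative names and uniform voxel costs assumed:

```python
from collections import deque
import numpy as np

def propagate_labels(vessel_mask, seeds):
    """Label every vessel pixel with the seed whose front reaches it first.

    vessel_mask: 2-D boolean array of the segmented vasculature.
    seeds: dict mapping label -> (row, col) seed point inside the mask,
    e.g. one seed per feeding or draining vessel to be colour-coded.
    """
    labels = np.zeros(vessel_mask.shape, dtype=int)
    q = deque()
    for lab, (r, c) in seeds.items():
        labels[r, c] = lab
        q.append((r, c))
    while q:                                   # fronts advance in lockstep
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < labels.shape[0] and 0 <= cc < labels.shape[1]
                    and vessel_mask[rr, cc] and labels[rr, cc] == 0):
                labels[rr, cc] = labels[r, c]
                q.append((rr, cc))
    return labels
```

Mapping each label to a colour then produces the kind of vessel colour-coding the visualization relies on.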
Intraoperative brain tumor resection cavity characterization with conoscopic holography
Brain shift compromises the accuracy of neurosurgical image-guided interventions if not corrected by either intraoperative
imaging or computational modeling. The latter requires intraoperative sparse measurements for constraining and driving
model-based compensation strategies. Conoscopic holography, an interferometric technique that measures the distance
of a laser light illuminated surface point from a fixed laser source, was recently proposed for non-contact surface data
acquisition in image-guided surgery and is used here for validation of our modeling strategies. In this contribution, we
use this inexpensive, hand-held conoscopic holography device for intraoperative validation of our computational modeling
approach to correcting for brain shift. Laser range scan, instrument swabbing, and conoscopic holography data sets were
collected from two patients undergoing brain tumor resection therapy at Vanderbilt University Medical Center. The results
of our study indicate that conoscopic holography is a promising method for surface acquisition since it requires no contact
with delicate tissues and can characterize the extents of structures within confined spaces. We demonstrate that for two
clinical cases, the acquired conoprobe points align better with our model-updated images than with the uncorrected images, lending
further evidence that computational modeling approaches improve the accuracy of image-guided surgical interventions
in the presence of soft tissue deformations.
Analysis of electrodes' placement and deformation in deep brain stimulation from medical images
Deep brain stimulation (DBS) is used to reduce motor symptoms, such as rigidity or bradykinesia, in patients
with Parkinson's disease (PD). The Subthalamic Nucleus (STN) has emerged as the prime target of DBS in idiopathic PD.
However, DBS surgery is a difficult procedure requiring the exact positioning of electrodes in the pre-operative selected
targets. This positioning is usually planned using patients' pre-operative images, along with digital atlases, assuming that
the electrode's trajectory is linear. However, it has been demonstrated that anatomical brain deformations induce electrode
deformations, resulting in errors in the intra-operative targeting stage. In order to meet the need for a higher degree of
placement accuracy and to help construct a computer-aided placement tool, we studied electrode deformation with
respect to patients' clinical data (i.e., sex, mean PD duration, and brain atrophy index). First, we presented an automatic
algorithm for the segmentation of electrode's axis from post-operative CT images, which aims to localize the electrodes'
stimulated contacts. To assess our method, we applied our algorithm on 25 patients who had undergone bilateral STN-DBS.
We found a placement error of 0.91±0.38 mm. Then, from the segmented axis, we quantitatively analyzed the
electrodes' curvature and correlated it with patients' clinical data. We found a positive significant correlation between
mean curvature index of the electrode and brain atrophy index for male patients and between mean curvature index of the
electrode and mean PD duration for female patients. These results help in understanding DBS electrode deformations and
should enable better anticipation of electrode placement.
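A curvature index for a segmented electrode axis can be computed as below; a minimal sketch (the paper's exact curvature definition is not given here, so the discrete turning-angle form is an assumption):

```python
import numpy as np

def mean_curvature_index(points):
    """Mean discrete curvature of a 3-D polyline (e.g. an electrode axis).

    At each interior vertex, curvature is approximated as the turning angle
    divided by the mean length of the two incident segments; the result is
    averaged over all interior vertices. A straight axis gives 0.
    """
    pts = np.asarray(points, float)
    v1 = pts[1:-1] - pts[:-2]
    v2 = pts[2:] - pts[1:-1]
    n1 = np.linalg.norm(v1, axis=1)
    n2 = np.linalg.norm(v2, axis=1)
    cosang = np.sum(v1 * v2, axis=1) / (n1 * n2)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))   # turning angles
    seglen = 0.5 * (n1 + n2)
    return float(np.mean(angles / seglen))
```

The per-patient indices can then be correlated with clinical covariates (e.g. via `np.corrcoef` or a rank correlation) as in the study's analysis.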
A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or
anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)], can improve
navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to
critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the
engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a
clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to
other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is
underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time,
high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic
nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that
demonstrates mean re-projection accuracy (0.7±0.3) pixels and mean target registration error of (2.3±1.5) mm. An
IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which
each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented)
video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to
assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and
targets by means of video overlay during surgical approach, resection, and reconstruction.
Quantifying cortical surface harmonic deformation with stereovision during open cranial neurosurgery
Cortical surface harmonic motion is readily observed during image-guided open cranial neurosurgery.
Recently, we quantified cortical surface deformation noninvasively with synchronized blood pressure pulsation (BPP)
from a sequence of stereo image pairs using optical flow motion tracking. With three subjects, we found the average
cortical surface displacement can reach more than 1 mm and in-plane principal strains of up to 7% relative to the first
image pair. In addition, the temporal changes in deformation and strain were in concert with BPP and patient respiration
[1]. However, because deformation was essentially computed relative to an arbitrary reference, comparing cortical
surface deformation at different times was not possible. In this study, we extend the technique developed earlier by
establishing a more reliable reference profile of the cortical surface for each sequence of stereo image acquisitions.
Specifically, fast Fourier transform (FFT) was applied to the dynamic cortical surface deformation, and the fundamental
frequencies corresponding to patient respiration and BPP were identified, which were used to determine the number of
image acquisitions for use in averaging cortical surface images. This technique is important because it potentially allows
in vivo characterization of soft tissue biomechanical properties using intraoperative stereovision and motion tracking.
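The FFT step described above, picking out the fundamental frequencies of respiration and BPP from a displacement trace, can be sketched as follows (function name and peak-picking rule are illustrative):

```python
import numpy as np

def fundamental_frequencies(signal, fs, n_peaks=2):
    """Return the n_peaks strongest frequencies (Hz) in a displacement trace.

    signal: 1-D displacement samples; fs: sampling rate in Hz. Intended to
    pick out, e.g., the respiration and blood-pressure-pulsation components.
    """
    sig = np.asarray(signal, float)
    sig = sig - sig.mean()                         # drop the DC component
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    order = np.argsort(spec)[::-1]                 # strongest bins first
    return sorted(freqs[order[:n_peaks]])
```

The identified periods then determine how many stereo acquisitions to average when forming the reference cortical surface profile.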
Poster Session: Lung and Abdomen
Automated microwave ablation therapy planning with single and multiple entry points
Microwave ablation (MWA) has become a recommended treatment modality for interventional cancer treatment.
Compared with radiofrequency ablation (RFA), MWA provides more rapid and larger-volume tissue heating. It
allows simultaneous ablation from different entry points and allows users to change the ablation size by controlling
the power/time parameters. Ablation planning systems have been proposed in the past, mainly addressing the needs
for RFA procedures. Thus a planning system addressing MWA-specific parameters and workflows is highly
desirable to help physicians achieve better microwave ablation results. In this paper, we design and implement an
automated MWA planning system that provides precise probe locations for complete coverage of tumor and margin.
We model the thermal ablation lesion as an ellipsoidal object with three known radii varying with the duration of the
ablation and the power supplied to the probe. The search for the best ablation coverage can be seen as an iterative
optimization problem. The ablation centers are steered toward the location which minimizes both un-ablated tumor
tissue and the collateral damage caused to the healthy tissue. We assess the performance of our algorithm using
simulated lesions with known "ground truth" optimal coverage. The Mean Localization Error (MLE) between the
computed ablation center in 3D and the ground truth ablation center achieves 1.75mm (Standard deviation of the
mean (STD): 0.69mm). The Mean Radial Error (MRE) which is estimated by comparing the computed ablation radii
with the ground truth radii reaches 0.64mm (STD: 0.43mm). These preliminary results demonstrate the accuracy
and robustness of the described planning algorithm.
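A core building block of the coverage optimization described above is evaluating how much of the tumor an ellipsoidal ablation zone covers. A minimal sketch, assuming an axis-aligned ellipsoid (the full planner would also handle probe orientation and healthy-tissue damage):

```python
import numpy as np

def coverage_fraction(tumor_pts, center, radii):
    """Fraction of tumor voxel coordinates inside an axis-aligned ellipsoid.

    tumor_pts: (N, 3) voxel coordinates of tumor plus margin, in mm.
    center, radii: ellipsoid center and the three radii, in mm; the radii
    vary with ablation duration and probe power per the device model.
    """
    d = (np.asarray(tumor_pts, float) - np.asarray(center, float)) \
        / np.asarray(radii, float)
    # A point is covered when its normalized squared distance is <= 1.
    return float(np.mean(np.sum(d * d, axis=1) <= 1.0))
```

An iterative optimizer can then steer `center` (and the power/time-dependent `radii`) to drive this fraction to 1 while penalizing ablated healthy tissue.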
Optimization of CT-video registration for image-guided bronchoscopy
Global registration has been shown to be a potential reality for a bronchoscopy guidance system. Global registration
involves establishing the bronchoscope position by comparing a given real bronchoscopic (RB) video
view to target virtual bronchoscopic (VB) views derived from a patient's three-dimensional (3D) multi-detector
computed tomography (MDCT) chest scan. Registration performance depends significantly on the quality of
the computer-generated VB views and the error metric used to compare the VB and RB views. In particular,
the quality of the extracted endoluminal surfaces and the lighting model used during rendering are especially
important in determining VB view quality. Registration performance is also affected by the positioning of the
bronchoscope during acquisition of the RB frame and the error metric used. We present a study considering the
impact of these factors on global registration performance. Results show that using a direct-lighting-based model
gives slightly better results than a global illumination model. However, the VB views generated by the global
illumination model more closely resemble the RB views when using the weighted normalized sum-of-square error
(WNSSE) metric. Results also show that the best global registration results are obtained by using a computer-generated
bronchoscope-positioning target with a WNSSE metric and a direct-lighting model. We also identify
the best airway surface-extraction method for global registration.
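A plausible form of the weighted normalized sum-of-square error used to compare VB and RB views is sketched below. The zero-mean, unit-variance normalization and the uniform default weighting are our assumptions; the paper's exact weighting scheme is not reproduced here:

```python
import numpy as np

def wnsse(vb, rb, weights=None):
    """Weighted normalized sum-of-square error between two grayscale views.

    vb, rb: 2-D arrays (virtual and real bronchoscopic views). Each view is
    normalized to zero mean and unit variance so the metric compares
    structure rather than absolute brightness. weights: optional per-pixel
    weight array; uniform weights are used when omitted.
    """
    vb = np.asarray(vb, float)
    rb = np.asarray(rb, float)
    vb = (vb - vb.mean()) / vb.std()
    rb = (rb - rb.mean()) / rb.std()
    w = np.ones_like(vb) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * (vb - rb) ** 2) / np.sum(w))
```

Lower values indicate a better VB/RB match, so global registration searches the VB view set for the pose minimizing this score.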
Automatic segmentation and centroid detection of skin sensors for lung interventions
Electromagnetic (EM) tracking has been recognized as a valuable tool for locating the interventional devices in
procedures such as lung and liver biopsy or ablation. The advantage of this technology is its real-time connection
to the 3D volumetric roadmap, i.e. CT, of a patient's anatomy while the intervention is performed. EM-based
guidance requires tracking of the tip of the interventional device, transforming the location of the device onto
pre-operative CT images, and superimposing the device in the 3D images to assist physicians to complete the
procedure more effectively. A key requirement of this data integration is to find automatically the mapping
between EM and CT coordinate systems. Thus, skin fiducial sensors are attached to patients before acquiring
the pre-operative CTs. Then, those sensors can be recognized in both CT and EM coordinate systems and used to
calculate the transformation matrix. In this paper, to enable the EM-based navigation workflow and reduce
procedural preparation time, an automatic fiducial detection method is proposed to obtain the centroids of the
sensors from the pre-operative CT. The approach has been applied to 13 rabbit datasets derived from an animal
study and eight human images from an observation study. The numerical results show that it is a reliable and
efficient method for use in EM-guided application.
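Automatic fiducial centroid detection of this kind can be sketched with thresholding plus connected-component analysis; a minimal illustration, assuming the sensors are the bright components above a chosen Hounsfield threshold (the paper's detection method may be more involved):

```python
import numpy as np
from scipy import ndimage

def fiducial_centroids(ct_volume, threshold):
    """Centroids (voxel coordinates) of connected bright components in a CT.

    ct_volume: 3-D intensity array; threshold: intensity assumed to separate
    the skin sensors from surrounding soft tissue. Returns a list of
    (z, y, x) centroid tuples, one per detected component.
    """
    mask = np.asarray(ct_volume) > threshold
    labels, n = ndimage.label(mask)            # connected components
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))
```

The resulting CT-space centroids are paired with the EM-space sensor readings to solve for the EM-to-CT transformation (e.g. by least-squares point registration).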
Image processing of liver computed tomography angiographic (CTA) images for laser induced thermotherapy (LITT) planning
Yue Li,
Xiang Gao,
Qingyu Tang,
et al.
Analysis of patient images is highly desired for simulating and planning the laser-induced thermotherapy (LITT) to
study the cooling effect of big vessels around tumors during the procedure. In this paper, we present an image processing
solution for simulating and planning LITT on liver cancer using computed tomography angiography (CTA) images. This
includes first performing a 3D anisotropic filtering on the data to remove noise. The liver region is then segmented with
a level sets based contour tracking method. A 3D level sets based surface evolution driven by boundary statistics is then
used to segment the surfaces of vessels and tumors. The medial lines of the vessels are then extracted by a thinning
algorithm. Finally, the vessel tree is found on the thinning result by first constructing a shortest-path spanning tree with
Dijkstra's algorithm and then pruning the unnecessary branches. From the segmentation and vessel skeletonization results,
important geometric parameters of the vessels and tumors are calculated for simulation and surgery planning. The
proposed methods were applied to a patient's images, and the results are shown.
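The shortest-path spanning-tree step on the skeleton can be sketched as standard Dijkstra over the thinned voxel graph (adjacency representation and names are ours; branch pruning would follow as post-processing):

```python
import heapq

def shortest_path_tree(adj, root):
    """Dijkstra shortest-path spanning tree of a skeleton graph.

    adj: dict node -> list of (neighbor, edge_length); root: tree root,
    e.g. the vessel entry point. Returns a dict node -> parent on the
    shortest path from root (root maps to None).
    """
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                           # stale heap entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent
```

Branches shorter than a length threshold, or not ending at a true vessel terminus, can then be pruned from the parent map to leave the anatomical vessel tree.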
Tumor image extraction from fluoroscopy for a markerless lung tumor motion tracking and prediction
Noriyasu Homma,
Keita Ishihara,
Yoshihiro Takai,
et al.
We develop a markerless tumor motion tracking technique for accurate and safer image-guided radiation therapy (IGRT). The technique is implemented based on a new image model of the moving tumor and the background structure in an x-ray fluoroscopic image sequence. By using the technique, the moving tumor image can be extracted from the sequential fluoroscopic images. The extraction from the fluoroscopy is obviously ill-posed, but we have suggested that it can be regularized into a well-posed problem by temporally accumulating constraints that must be satisfied by the extracted tumor image and the background.
In this paper, the effect on tracking accuracy of extracting both the tumor and the background in the image model is studied extensively. The tracking accuracy of the proposed method, with extraction of both the moving tumor and the background, was within 0.2 mm of the spatial resolution for a phantom dataset. Accuracy within 1 mm can be clinically sufficient and is superior to the results of the previous method, whose extraction model covers only the moving tumor, and of a conventional method without extraction. Thus, the results clearly demonstrate the efficiency and usefulness of the proposed extraction model for IGRT.
Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation
Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is
needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM)
tracking, airway segmentation, and a continuous model of output. We augment a previously published approach
by including segmentation information in the tracking optimization instead of image similarity. Thus, the new
approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled
using splines and the control points are optimized with respect to displacement from EM tracking measurements
and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated
on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We
demonstrate the robustness of the output of the proposed method against added artificial noise in the input
data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of
Gaussian noise is added to the input. The approach is shown to be easily extensible to include other measures
like image similarity.
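The continuous-output idea can be sketched by fitting a smoothing spline per coordinate to the noisy EM positions and measuring the inter-frame distance of the result. This is a simplified stand-in: the paper jointly optimizes the spline control points against both EM measurements and the airway segmentation, which is not reproduced here.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_trajectory(positions, smoothing=1.0):
    """Fit one smoothing spline per coordinate to a noisy (N, 3) trajectory
    of EM position samples and return the smoothed positions."""
    positions = np.asarray(positions, float)
    t = np.arange(len(positions), dtype=float)
    return np.column_stack(
        [UnivariateSpline(t, positions[:, i], s=smoothing)(t) for i in range(3)]
    )

def max_interframe_distance(positions):
    """Largest jump between consecutive samples: the smoothness measure
    reported in the abstract."""
    d = np.diff(np.asarray(positions, float), axis=0)
    return float(np.linalg.norm(d, axis=1).max())
```

Raising the smoothing parameter trades fidelity to the raw EM samples for smaller inter-frame jumps, which is the trade-off the ex-vivo noise experiment probes.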
Combining supine MRI and 3D optical scanning for improved surgical planning of breast conserving surgeries
Matthew J. Pallone,
Steven P. Poplack,
Richard J. Barth Jr.,
et al.
Image-guided wire localization is the current standard of care for the excision of non-palpable carcinomas
during breast conserving surgeries (BCS). The efficacy of this technique depends upon the accuracy of wire placement,
maintenance of the fixed wire position (despite patient movement), and the surgeon's understanding of the spatial
relationship between the wire and tumor. Notably, breast shape can vary significantly between the imaging and surgical
positions. Despite this method of localization, re-excision is needed in approximately 30% of patients due to the
proximity of cancer to the specimen margins. These limitations make wire localization an inefficient and imprecise
procedure. Alternatively, we investigate a method of image registration and finite element (FE) deformation which
correlates preoperative supine MRIs with 3D optical scans of the breast surface.
MRI of the breast can accurately define the extents of very small cancers. Furthermore, supine breast MR
reduces the amount of tissue deformation between the imaging and surgical positions. At the time of surgery, the surface
contour of the breast may be imaged using a handheld 3D laser scanner. With the MR images segmented by tissue type,
the two scans are approximately registered using fiducial markers present in both acquisitions. The segmented MRI
breast volume is then deformed to match the optical surface using a FE mechanical model of breast tissue. The resulting
images provide the surgeon with 3D views and measurements of the tumor shape, volume, and position within the breast
as it appears during surgery which may improve surgical guidance and obviate the need for wire localization.
A novel external bronchoscope tracking model beyond electromagnetic localizers: dynamic phantom validation
Localization of a bronchoscope and estimation of its motion is a core component for constructing a bronchoscopic
navigation system that can guide physicians to perform any bronchoscopic interventions such as the
transbronchial lung biopsy (TBLB) and the transbronchial needle aspiration (TBNA). To overcome the limitations
of current methods, e.g., image registration (IR) and electromagnetic (EM) localizers, this study develops
a new external tracking technique on the basis of an optical mouse (OM) sensor and IR augmented by sequential
Monte Carlo (SMC) sampling (here called IR-SMC). We first construct an external tracking model by an OM
sensor that is used to directly measure the bronchoscope movement information, including the insertion depth
and the rotation of the viewing direction of the bronchoscope. To utilize OM sensor measurements, we employed
IR with SMC sampling to determine the bronchoscopic camera motion parameters. The proposed method was
validated on a dynamic phantom. Experimental results demonstrate that our external tracking prototype
is a promising means of estimating the bronchoscope motion compared to the state-of-the-art, especially
image-based methods, improving the tracking performance by 17.7% in terms of successfully processed video images.
Utilizing ultrasound as a surface digitization tool in image guided liver surgery
Intraoperative ultrasound imaging is a commonly used modality for image guided surgery and can be used to monitor
changes from pre-operative data in real time. Often a mapping of the liver surface is required to achieve image-to-physical
alignment for image guided liver surgery. Laser range scans and tracked optical stylus instruments have both
been utilized in the past to create an intraoperative representation of the organ surface. This paper proposes a method to
digitize the organ surface utilizing tracked ultrasound and to evaluate a relatively simple correction technique. Surfaces
are generated from point clouds obtained from the US transducer face itself during tracked movement. In addition, a
surface generated from a laser range scan (LRS) was used as the gold standard for evaluating the accuracy of the US
transducer swab surfaces. Two liver phantoms with varying stiffness were tested. The results showed that the average
deformation observed for a 60 second swab of the liver phantom was 3.7 ± 0.9 mm for the more rigid phantom and 4.6 ±
1.2 mm for the less rigid phantom. With respect to tissue targets below the surface, the average error in position due to
ultrasound surface digitization was 3.5 ± 0.5 mm and 5.9 ± 0.9 mm for the stiffer and softer phantoms respectively. With
the simple correction scheme, the surface error was reduced to 1.1 ± 0.8 mm and 1.7 ± 1.0 mm, respectively; and the
subsurface target error was reduced to 2.0 ± 0.9 mm and 4.5 ± 1.8 mm, respectively. These results are encouraging and
suggest that the ultrasound probe itself and the acquired images could serve as a comprehensive digitization approach for
image guided liver surgery.
Automatic alignment of pre- and post-interventional liver CT images for assessment of radiofrequency ablation
Image-guided radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor
treatment in clinical practice. To verify the treatment success of the therapy, reliable post-interventional assessment
of the ablation zone (coagulation) is essential. Typically, pre- and post-interventional CT images have to
be aligned to compare the shape, size, and position of tumor and coagulation zone. In this work, we present
an automatic workflow for masking liver tissue, enabling a rigid registration algorithm to perform at least as
accurately as experienced medical experts. To minimize the effect of global liver deformations, the registration is
computed in a local region of interest around the pre-interventional lesion and post-interventional coagulation
necrosis. A registration mask excluding lesions and neighboring organs is calculated to prevent the registration
algorithm from matching both lesion shapes instead of the surrounding liver anatomy. As an initial registration
step, the centers of gravity from both lesions are aligned automatically. The subsequent rigid registration method
is based on the Local Cross Correlation (LCC) similarity measure and Newton-type optimization. To assess the
accuracy of our method, 41 RFA cases are registered and compared with the manually aligned cases from four
medical experts. Furthermore, the registration results are compared with ground truth transformations based on
averaged anatomical landmark pairs. In the evaluation, we show that our method allows automatic alignment
of the data sets with accuracy equal to that of medical experts, while requiring significantly less time and
exhibiting less variability.
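The correlation-based similarity driving the registration can be sketched as normalized cross correlation restricted to the masked region of interest. Note this is a simplification: the paper's Local Cross Correlation aggregates correlations over local neighborhoods, whereas the sketch below computes a single ROI-wide score.

```python
import numpy as np

def roi_cross_correlation(pre, post, roi):
    """Normalized cross correlation between pre- and post-interventional
    images inside a region of interest.

    roi: boolean mask; voxels excluded from registration (lesion,
    coagulation necrosis, neighboring organs) should be False so the score
    reflects the surrounding liver anatomy only.
    """
    a = np.asarray(pre, float)[roi]
    b = np.asarray(post, float)[roi]
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
```

The rigid registration then searches over transforms of the post-interventional image, after the initial center-of-gravity alignment, for the pose maximizing this similarity.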