Proceedings Volume 7261

Medical Imaging 2009: Visualization, Image-Guided Procedures, and Modeling

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 27 February 2009
Contents: 18 Sessions, 116 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2009
Volume Number: 7261

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7261
  • Neuro
  • Minimally Invasive I
  • Liver
  • CT Guidance
  • Cardiac
  • Keynote and Modeling
  • Robotics and Guidance Systems
  • Ultrasound
  • Minimally Invasive II
  • Visualization and Geometry
  • Registration
  • Poster Session: Cardiac
  • Poster Session: CT Guidance
  • Poster Session: Modeling
  • Poster Session: Guidance and Technology
  • Poster Session: Visualization and Geometry
  • Poster Session: Registration
Front Matter: Volume 7261
This PDF file contains the front matter associated with SPIE Proceedings Volume 7261, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Neuro
Fiducial registration error and target registration error are uncorrelated
Image-guidance systems based on fiducial registration typically display some measure of registration accuracy based on the goodness of fit of the fiducials. A common measure is fiducial registration error (FRE), which equals the root-mean-square error in fiducial alignment between image space and physical space. It is natural for the surgeon to regard the displayed estimate of error as an indication of the accuracy of the system's ability to provide guidance to surgical targets for a given case. Thus, when the estimate is smaller than usual, it may be assumed that the target registration error (TRE) is likely to be smaller than usual. We show that this assumption, while intuitively convincing, is in fact wrong. We show it in two ways. First, we prove to first order that for a given system with a given level of normally distributed fiducial localization error, all measures of goodness of fit are statistically independent of TRE, and therefore FRE and TRE are uncorrelated. Second, we demonstrate by means of computer simulations that they are uncorrelated for the exact problem as well. Since TRE is the true measure of registration accuracy of importance to the success of the surgery, our results show that no estimate of accuracy for a given patient that is based on goodness of fiducial fit for that patient gives any information whatever about true registration accuracy for that patient. Therefore, surgeons should stop using such measures as indicators of registration quality for the patients on whom they are about to operate.
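The simulation result can be reproduced with a short Monte Carlo experiment: register noise-free fiducials to noisy copies, record FRE and the displacement of a target point, and check the correlation. Below is a minimal numpy sketch under assumed values (six fiducials, isotropic 0.5 mm localization error); it is not the authors' code.

```python
# Monte Carlo check that FRE and TRE are (nearly) uncorrelated.
# A minimal sketch assuming isotropic Gaussian fiducial localization error.
import numpy as np

rng = np.random.default_rng(0)

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

fiducials = rng.uniform(-80, 80, (6, 3))   # image-space fiducials (mm)
target = np.array([0.0, 0.0, 40.0])        # surgical target (mm)
sigma = 0.5                                # fiducial localization error (mm)

fre, tre = [], []
for _ in range(10000):
    measured = fiducials + rng.normal(0, sigma, fiducials.shape)
    R, t = rigid_fit(fiducials, measured)
    residual = (fiducials @ R.T + t) - measured
    fre.append(np.sqrt((residual ** 2).sum(1).mean()))       # RMS fiducial error
    tre.append(np.linalg.norm((R @ target + t) - target))    # target displacement
print("corr(FRE, TRE) =", np.corrcoef(fre, tre)[0, 1])        # close to zero
```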
Brain tumor resection guided by fluorescence imaging and MRI image guidance
Pablo Valdes, Brent T. Harris, Frederic Leblond, et al.
Recent evidence suggests a correlation between extent of tumor resection and patient prognosis, making maximal tumor resection a clinical ideal for neurosurgeons. Our group is currently undertaking a clinical study using fluorescence-based detection of tumor coupled with a standard 3-D image guidance system to study the effectiveness of fluorescence-based detection in the neurosurgical operating room. For fluorescence-based detection, we used 5-aminolevulinic acid to induce accumulation of protoporphyrin IX in malignant tissues. In this paper, we chose one prototypical, highly fluorescent case of glioblastoma multiforme, a high-grade glioma, to highlight some of the key findings and methodology used in our study of fluorescence-based detection and resection of brain tumors.
Automatic segmentation of cortical vessels in pre- and post-tumor resection laser range scan images
Siyi Ding, Michael I. Miga, Reid C. Thompson, et al.
Measurement of intra-operative cortical brain movement is necessary to drive mechanical models developed to predict sub-cortical shift. At our institution, this is done with a tracked laser range scanner. This device acquires both 3D range data and 2D photographic images. 3D cortical brain movement can be estimated if 2D photographic images acquired over time can be registered. Previously, we have developed a method that permits this registration using vessels visible in the images, but vessel segmentation required the localization of starting and ending points for each vessel segment. Here, we propose a method that automates the segmentation process further. This method involves several steps: (1) correction of lighting artifacts, (2) vessel enhancement, and (3) vessel centerline extraction. Results obtained on 5 images acquired in the operating room suggest that our method is robust and able to segment vessels reliably.
Towards real-time guidewire detection and tracking in the field of neuroradiology
Two-dimensional roadmapping is considered state-of-the-art in guidewire navigation during endovascular interventions. This paper presents a methodology for extracting the guidewire from a sequence of 2-D roadmap images in almost real time. The detected guidewire can be used to improve its visibility in noisy fluoroscopic images or to back-project the guidewire into a registered 3-D vessel tree. A lineness filter based on the Hessian matrix is used to detect only those line structures in the image that lie within the vessel tree. Loose wire fragments are properly linked by a novel connection method fulfilling clinical processing requirements. We show that Dijkstra's algorithm can be applied to efficiently compute the optimal connection path. The entire guidewire is finally approximated by a B-spline curve in a least-squares manner. The proposed method is both integrated into a commercial clinical prototype and evaluated on five different patient data sets containing up to 249 frames per image series.
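The fragment-linking step amounts to a minimum-cost path search. The sketch below shows a generic grid-based Dijkstra search over an assumed per-pixel cost image (low cost inside line structures); the paper's actual cost definition and connection heuristics are not reproduced here.

```python
# Minimum-cost path between two wire-fragment endpoints on a "lineness" cost
# image, via Dijkstra's algorithm. Simplified sketch; cost definition assumed.
import heapq
import numpy as np

def dijkstra_path(cost, start, goal):
    """cost: 2-D array of per-pixel step costs (low inside line structures)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Backtrack from goal to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Example: a flat cost image with one cheap horizontal "vessel" row.
cost = np.ones((64, 64))
cost[32, :] = 0.05
print(dijkstra_path(cost, (32, 5), (32, 60))[:5])
```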
Spinal cord stress injury assessment (SCOSIA): clinical applications of mechanical modeling of the spinal cord and brainstem
Kenneth H. Wong, Jae Choi, William Wilson, et al.
Abnormal stretch and strain are a major cause of injury to the spinal cord and brainstem. Such forces can develop from age-related degeneration, congenital malformations, occupational exposure, or trauma such as sporting accidents, whiplash and blast injury. While current imaging technologies provide excellent morphology and anatomy of the spinal cord, there is no validated diagnostic tool to assess mechanical stresses exerted upon the spinal cord and brainstem. Furthermore, there is no current means to correlate these stress patterns with known spinal cord injuries and other clinical metrics such as neurological impairment. We have therefore developed the spinal cord stress injury assessment (SCOSIA) system, which uses imaging and finite element analysis to predict stretch injury. This system was tested on a small cohort of neurosurgery patients. Initial results show that the calculated stress values decreased following surgery, and that this decrease was accompanied by a significant decrease in neurological symptoms. Regression analysis identified modest correlations between stress values and clinical metrics. The strongest correlations were seen with the Brainstem Disability Index (BDI) and the Karnofsky Performance Score (KPS), whereas the weakest correlations were seen with the American Spinal Injury Association (ASIA) scale. SCOSIA therefore shows encouraging initial results and may have wide applicability to trauma and degenerative disease involving the spinal cord and brainstem.
Minimally Invasive I
Fusion of MDCT-based endoluminal renderings and endoscopic video
Early lung cancer can cause structural and color changes to the airway mucosa. A three-dimensional (3D) multidetector CT (MDCT) chest scan provides 3D structural data for airway walls, but no detailed mucosal information. Conversely, bronchoscopy gives color mucosal information, reflecting airway-wall inflammation and early cancer formation. Unfortunately, each bronchoscopic video image provides only a limited local view of the airway mucosal surface and no 3D structural/location information. The physician has to mentally correlate the video images with each other and the airway surface data to analyze the airway mucosal structure and color. A fusion of the topographical information from the 3D MDCT data and the color information from the bronchoscopic video enables 3D visualization, navigation, localization, and combined color-topographic analysis of the airways. This paper presents a fast method for topographic airway-mucosal surface fusion of bronchoscopic video with 3D MDCT endoluminal views. Tests were performed on phantom sequences, real bronchoscopy patient video, and associated 3D MDCT scans. Results show that we can effectively accomplish mapping over a continuous sequence of airway images spanning several generations of airways in a few seconds. Real-time navigation and visualization of the combined data was performed. The average surface-point mapping error for a phantom case was estimated to be only on the order of 2 mm for a 20 mm diameter airway.
A method for accelerating bronchoscope tracking based on image registration by using GPU
This paper presents a method for accelerating bronchoscope tracking based on image registration by using the GPU (Graphics Processing Unit). Parallel techniques for efficient utilization of the CPU (Central Processing Unit) and GPU in image registration are presented. Recently, a bronchoscope navigation system has been developed for enabling a bronchoscopist to perform safe and efficient examination. In such a system, it is indispensable to track the motion of the bronchoscope camera at the tip of the bronchoscope in real time. We have previously developed a method for tracking a bronchoscope by computing image similarities between real and virtual bronchoscopic images. However, since image registration is quite time consuming, it is difficult to track the bronchoscope in real time. This paper presents a method for accelerating the process of image registration by utilizing the GPU of the graphics card and CUDA (Compute Unified Device Architecture). In particular, we accelerate two parts: (1) virtual bronchoscopic image generation by volume rendering and (2) image similarity calculation between a real bronchoscopic image and virtual bronchoscopic images. Furthermore, to efficiently use the GPU, we minimize (i) the amount of data transfer between the CPU and GPU, and (ii) the number of GPU function calls from the CPU. We applied the proposed method to bronchoscopic videos of 10 patients and their corresponding CT data sets. The experimental results showed that the proposed method can track a bronchoscope at 15 frames per second, which is 5.17 times faster than the same method using only the CPU.
Fusion of stereoscopic video and laparoscopic ultrasound for minimally invasive partial nephrectomy
Carling L. Cheung, Christopher Wedlake, John Moore, et al.
The development of an augmented reality environment that combines laparoscopic video and ultrasound imaging for image-guided minimally invasive abdominal surgical procedures, such as partial nephrectomy and radical prostatectomy, is an ongoing project in our laboratory. Our system overlays magnetically tracked ultrasound images onto endoscopic video to create a more intuitive visualization for mapping lesions intraoperatively and to give the ultrasound image context in 3D space. By presenting data in a common environment, this system will allow surgeons to visualize the multimodality information without having to switch between different images. A stereoscopic laparoscope from Visionsense Limited enhances our current system by providing surgeons with additional visual information through improved depth perception. In this paper, we develop and validate a calibration method that determines the transformation between the images from the stereoscopic laparoscope and the 3D locations of structures represented by a tracked laparoscopic ultrasound probe. We first calibrate the laparoscope with a checkerboard pattern and measure the accuracy of the transformation from image space to tracking space. We then perform a target localization task using our fused environment. Our initial experience has demonstrated an RMS registration accuracy in 3D of 2.21 mm for the laparoscope and 1.16 mm for the ultrasound in a working volume of 0.125 m³, indicating that stereoscopic laparoscope and ultrasound images may be appropriately combined using magnetic tracking, as long as steps are taken to ensure that the magnetic field generated by the system is not distorted by surrounding objects close to the working volume.
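The checkerboard calibration step follows the standard camera-calibration workflow, for which OpenCV provides ready-made routines. A generic sketch, not the authors' pipeline; the board size, square size and file paths are assumptions.

```python
# Intrinsic calibration of a laparoscope camera from checkerboard images.
import glob
import cv2
import numpy as np

board = (9, 6)          # inner corners per row/column (assumed)
square = 2.0            # checker size in mm (assumed)

# 3-D corner coordinates of the board in its own coordinate system.
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in glob.glob("calib_frames/*.png"):    # hypothetical frame folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        img_size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
print("reprojection RMS (px):", rms)
print("intrinsic matrix:\n", K)
```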
Automatic classification of minimally invasive instruments based on endoscopic image sequences
Stefanie Speidel, Julia Benzko, Sebastian Krappe, et al.
Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the activity being performed, the instruments in use, the surgical objects and the anatomical structures, and it defines the state of an intervention at a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective of gaining as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.
Absolute length measurement using manually decided stereo correspondence for endoscopy
In recent years, various kinds of endoscopes have been developed and are widely used for endoscopic biopsy, endoscopic surgery and general endoscopy. The size of an inflammatory lesion is important for determining the method of medical treatment. However, it is not easy to measure the absolute size of lesions such as ulcers, cancers and polyps from endoscopic images, so a means of measuring their size during endoscopy is required. In this paper, we propose a new method to measure the absolute straight-line length between two arbitrary points based on photogrammetry, using an endoscope equipped with a magnetic tracking sensor that provides the camera position and orientation. In this method, the stereo-corresponding points between two endoscopic images are determined manually by the endoscopist, without any projection apparatus or automatic computation of stereo correspondences; the absolute length can then be calculated on the basis of photogrammetry. An evaluation experiment using a checkerboard showed that the measurement errors are less than 2% of the target length when the baseline is sufficiently long.
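Once the endoscopist has picked a corresponding point pair in two tracked views, the 3-D points can be recovered by linear triangulation and the length measured directly. A pinhole-model sketch with made-up intrinsics, poses and pixel picks; not the authors' implementation.

```python
# Triangulating two manually picked corresponding points from a pair of
# tracked endoscope views, then measuring the distance between them (DLT).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics (assumed)
# Camera poses reported by the magnetic tracker (first view as reference).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[5.0], [0.0], [0.0]])])  # 5 mm baseline

# Pixel coordinates of the two picked lesion endpoints in both views (toy).
a1, a2 = np.array([300.0, 250.0]), np.array([380.0, 250.0])
b1, b2 = np.array([360.0, 260.0]), np.array([440.0, 260.0])

A3d = triangulate(P1, P2, a1, a2)
B3d = triangulate(P1, P2, b1, b2)
print("absolute length (mm):", np.linalg.norm(A3d - B3d))
```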
Validation of CT-video registration for guiding a novel ultrathin bronchoscope to peripheral lung nodules using electromagnetic tracking
Timothy D. Soper, David R. Haynor, Robb W. Glenny, et al.
The development of an ultrathin scanning fiber bronchoscope (SFB) at the University of Washington permits bronchoscopic examination of small peripheral airways inaccessible to conventional bronchoscopes. Due to the extensive branching in higher generation airways, a form of bronchoscopic guidance is needed. For accurate intraoperative localization of the SFB, we propose a hybrid approach, using electromagnetic tracking (EMT) and 2D/3D registration of bronchoscopic video images to a preoperative CT scan. Three similarity metrics were evaluated for CT-video registration, including normalized mutual information (NMI), dark-weighted NMI (dw-NMI), and a surface gradient matching (SGM) strategy. From four bronchoscopic sessions, CT-video registration using SGM proved to be more robust than the NMI-based metrics, averaging 320 frames of tracking before failure, as compared with averages of 100 and 160 frames for the NMI and dw-NMI metrics, respectively. In the hybrid configuration, EMT and CT-video registration were blended using a Kalman filter to recursively refine the registration error between the EMT system and the airway anatomy. In addition, respiratory motion compensation (RMC) was implemented by adaptively estimating respiratory phase-dependent deformation. With the addition of RMC, the average hybrid tracking disagreement with a set of manually registered key frames was 3.36 mm, as compared with 6.30 mm when RMC was not used. In peripheral airway regions that undergo larger respiratory-induced deformation, disagreement averaged only 2.01 mm with RMC, as compared with 8.65 mm otherwise.
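The Kalman-filter blending can be illustrated with a deliberately simplified model in which the state is a static 3-D registration offset and CT-video registration supplies noisy offset measurements. The noise parameters below are assumptions, not values from the paper.

```python
# Recursively refining an EMT-to-CT registration offset with a Kalman filter.
# Sketch only: static-offset state, diagonal noise models.
import numpy as np

class OffsetKalman:
    """Tracks a 3-D translational registration offset (state = offset)."""
    def __init__(self, q=0.01, r=4.0):
        self.x = np.zeros(3)          # estimated offset (mm)
        self.P = np.eye(3) * 100.0    # state covariance
        self.Q = np.eye(3) * q        # process noise (slow drift)
        self.R = np.eye(3) * r        # noise of the CT-video measurement

    def update(self, measured_offset):
        self.P = self.P + self.Q                      # predict (static model)
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (measured_offset - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x

kf = OffsetKalman()
rng = np.random.default_rng(1)
true_offset = np.array([2.0, -1.0, 3.0])
for _ in range(50):                   # one noisy measurement per key frame
    est = kf.update(true_offset + rng.normal(0, 2.0, 3))
print("estimated offset (mm):", np.round(est, 2))
```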
Liver
Automated RFA planning for complete coverage of large tumors
Karen Trovato, Sandeep Dalal, Jochen Krücker, et al.
Radiofrequency ablation (RFA) is a minimally invasive procedure used for the treatment of small-to-moderate sized tumors, most commonly in the liver, kidney and lung. An RFA procedure for successfully treating large or complex-shaped tumors may require many ablations in a non-obvious pattern. Tumor size > 3 cm predisposes to incomplete treatment [1] and potential recurrence; therefore, RFA is less often successful and less often used for treating large tumors. Mental planning is the current clinical practice standard, but defining complete 3D geometrical coverage of a tumor and margin (the planned target volume, PTV) with the fewest ellipsoidal ablation volumes, while also minimizing collateral damage to healthy tissue, is a daunting task. In order to generate a repeatable and reliable result, a solution must quantify precise locations. A new interactive planning system with an automated coverage algorithm is described. The planning system allows the interventional radiologist to segment the potentially complex PTV, select an RFA needle (which determines the specific 3D ablation shape), and identify the skin entry location that defines the shape's orientation. The algorithm generates a cluster of overlapping ablations from the periphery of the PTV, filling toward the center. The cluster is first tightened toward the center to reduce the overall number of ablations and collateral damage, and then pulled toward optimal attractors to further reduce the number of ablations. For most clinical applications, computation requires less than 15 seconds. This fast ablation planning enables rapid scenario assessment, including proper probe selection, skin entry location, collateral damage and procedure duration. The plan can be executed by transferring target locations to a navigation system.
A novel technique for the three-dimensional visualization of radio-frequency ablation lesions using delayed enhancement magnetic resonance imaging
Benjamin R. Knowles, Dennis Caulfield, Matthew Ginks, et al.
The detection of radio-frequency ablation lesions has been shown to be feasible using delayed enhancement magnetic resonance imaging (MRI). However, it is the determination of the lesion patterns that is of import for correlation with clinical outcome and location of gaps. Visualisation of ablation patterns on two-dimensional (2D) MR images is not intuitive. We present a technique for the three-dimensional (3D) visualisation of ablation patterns by creating a surface from a segmentation of the cardiac chamber of interest, fusing with the delayed enhancement MRI and integrating the MR signal along vectors normal to the cardiac surface. Areas of delayed enhancement will have a larger integral value than healthy myocardium. Maximum intensity projection (MIP) values were used to colour code the cardiac surface for 3D visualisation of the areas of delayed enhancement. The technique was applied to three patients with a cardiac arrhythmia, with successful visualisation of the ablation pattern. Patterns of delayed enhancement were correlated with ablation points derived from electro-anatomical mapping systems (EAMS) and were found to have similar patterns. This visualisation technique allows for the intuitive visualisation of ablation lesions and has many applications for use in electrophysiology.
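The core of the visualization is sampling the registered delayed-enhancement volume along each surface normal and keeping the maximum intensity. A sketch of that projection step only, assuming vertices and normals are already expressed in voxel coordinates of the registered MR volume; sampling depth and step are arbitrary.

```python
# Colour-coding a cardiac surface by the maximum delayed-enhancement MR
# intensity sampled along the surface normal at each vertex.
import numpy as np
from scipy.ndimage import map_coordinates

def surface_mip(volume, vertices, normals, depth_mm=5.0, spacing_mm=1.0, step=0.5):
    """Return one MIP value per vertex, sampled along +/- the normal."""
    offsets = np.arange(-depth_mm, depth_mm + step, step) / spacing_mm
    mips = np.empty(len(vertices))
    for i, (v, n) in enumerate(zip(vertices, normals)):
        n = n / np.linalg.norm(n)
        samples = v[None, :] + offsets[:, None] * n[None, :]     # (S, 3)
        vals = map_coordinates(volume, samples.T, order=1, mode="nearest")
        mips[i] = vals.max()
    return mips

# Toy example: a bright "scar" blob inside a dark volume.
vol = np.zeros((64, 64, 64))
vol[30:34, 30:34, 30:34] = 1.0
verts = np.array([[28.0, 32.0, 32.0], [10.0, 10.0, 10.0]])
norms = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(surface_mip(vol, verts, norms))     # first vertex picks up the scar
```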
Fast registration of pre- and peri-interventional CT images for targeting support in radiofrequency ablation of hepatic tumors
J. Bieberstein, C. Schumann, A. Weihusen, et al.
Radiofrequency (RF) ablation is an image-guided minimally invasive therapy which destroys a tumor by locally inducing electrical energy into the malignant tissue through a radiofrequency applicator. Treatment success is essentially dependent on the accurate placement of the RF applicator. In the case of CT-guided RF ablation of liver tumors, a central problem during monitoring is the reduced quality and information content in the peri-interventional images compared to the images used for planning. Therefore, the question of how to effectively transfer information from the planning scan into the peri-interventional scan in order to support the interventionalist is of high interest. Key to such an enhancement of peri-interventional scans is an adequate registration of the pre- and peri-interventional images, which also needs to be fast, since intervention duration is still a challenge. We present an approach for the fast and automatic registration of a high-quality CT volume scan of the liver to a spiral CT scan of lower quality. Our method combines an approximate pre-registration to compensate large displacements and a rigid registration of a liver subvolume for further refinement. The method focuses on the position of the tumor to be ablated and the corresponding access path, thereby achieving both fast and precise results in the region of interest. A preliminary evaluation on 37 data sets from 20 different patients shows that the registration is performed within a maximum of 18 seconds, while obtaining high accuracy in the relevant region of the liver comprising the tumor and the planned access path.
Matching CT and ultrasound data of the liver by landmark constrained image registration
In navigated liver surgery the key challenge is the registration of pre-operative planning and intra-operative navigation data. Due to the patient's individual anatomy, the planning is based on segmented pre-operative CT scans, whereas ultrasound captures the actual intra-operative situation. In this paper we derive a novel method based on variational image registration methods and additionally given anatomical landmarks. For the first time we embed the landmark information as inequality hard constraints, thereby allowing for inaccurately placed landmarks. The resulting optimization problem ensures the accuracy of the landmark fit while performing simultaneous intensity-based image registration. Following the discretize-then-optimize approach, the overall problem is solved by a generalized Gauss-Newton method. The resulting linear system is solved with the MinRes solver. We demonstrate the applicability of the new approach on clinical data, which leads to convincing results.
A variational method for vessels segmentation: algorithm and application to liver vessels visualization
M. Freiman, L. Joskowicz, J. Sosna
We present a new variational-based method for automatic liver vessel segmentation from abdominal CTA images. The segmentation task is formulated as a functional minimization problem within a variational framework. We introduce a new functional that incorporates both a geometrical vesselness measure and vessel surface properties. The functional describes the distance between the desired segmentation and the original image. To minimize the functional, we derive the Euler-Lagrange equation from it and solve it using the conjugate gradients algorithm. Our approach is automatic and improves upon other Hessian-based methods in the detection of bifurcations and complex vessel structures by incorporating a surface term into the functional. To assess our method, we conducted two comparative studies with an expert radiologist on 8 clinical abdominal CTA datasets. In the first study, the radiologist assessed the presence of 11 vascular bifurcations on each dataset, totaling 73 bifurcations. The radiologist qualitatively compared the bifurcation segmentations of our method and of a Hessian-based threshold method. Our method correctly segmented 88% of the bifurcations with a higher visibility score of 82%, as compared to only 55% for the Hessian-based method with a visibility score of 33%. In the second study, the radiologist assessed the individual vessel visibility on the 3D segmentation images and on the original CTA slices. Ten main liver vessels were examined in each dataset. The overall visibility score was 93%. These results indicate that our method is suitable for the automatic segmentation and visualization of the liver vessels.
CT Guidance
Fiducial localization in C-arm based cone-beam CT
C-arm based cone-beam CT (CBCT) imaging enables the in-situ acquisition of three-dimensional images. In the context of image-guided interventions this technology potentially reduces the complexity of a procedure's workflow. Instead of acquiring the preoperative volumetric images in a separate location and transferring the patient to the interventional suite, both imaging and intervention are carried out in the same location. A key component in image-guided interventions is image-to-patient registration. The most common registration approach in clinical use is based on fiducial markers placed on the patient's skin, which are then localized in the volumetric image and in the interventional environment. When using C-arm CBCT this registration approach is challenging, as in many cases the small size of the volumetric reconstruction cannot include both the skin fiducials and the organ of interest. In this paper we show that fiducial localization outside of the reconstructed volume is possible if the projection images from which the reconstruction was obtained are available. By replacing direct fiducial localization in the volumetric images with localization in the projection images, we obtain the fiducial coordinates in the volume's coordinate system even when the fiducials are outside of the reconstructed region. The approach was evaluated using two anthropomorphic phantoms. When using the projection images all fiducials were localized, including those that were outside the reconstruction volume. The method's maximal localization error, as evaluated using fiducials that could be directly localized in the CBCT reconstruction, was 0.67 millimeters.
High-performance intraoperative cone-beam CT on a mobile C-arm: an integrated system for guidance of head and neck surgery
A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and registration / visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video. Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial involving 12 patients undergoing head and neck surgery - the first implementation of the research prototype in the clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such systems in complex OR environments.
Automated segmentation of muscle and adipose tissue on CT images for human body composition analysis
Howard Chung, Dana Cobzas, Laura Birdsell, et al.
The ability to compute body composition in cancer patients lends itself to determining the specific clinical outcomes associated with fat and lean tissue stores. For example, a wasting syndrome of advanced disease is associated with shortened survival. Moreover, certain tissue compartments represent sites for drug distribution and are likely determinants of chemotherapy efficacy and toxicity. CT images are abundant, but these cannot be fully exploited unless there exist practical and fast approaches for tissue quantification. Here we propose a fully automated method for segmenting muscle, visceral and subcutaneous adipose tissues, taking the approach of shape modeling for the analysis of skeletal muscle. Muscle shape is represented using PCA-encoded Free Form Deformations with respect to a mean shape. The shape model is learned from manually segmented images and used in conjunction with a tissue appearance prior. VAT and SAT are segmented based on the final deformed muscle shape. In comparing the automatic and manual methods, coefficients of variation (COV) of 1-2% were similar to or smaller than the inter- and intra-observer COVs reported for manual segmentation.
C-arm cone beam CT guidance of sinus and skull base surgery: quantitative surgical performance evaluation and development of a novel high-fidelity phantom
A. D. Vescan, H. Chan, M. J. Daly, et al.
Surgical simulation has become a critical component of surgical practice and training in the era of high-precision image-guided surgery. While the ability to simulate surgery of the paranasal sinuses and skull base has been conventionally limited to 3D digital simulation or cadaveric dissection, we have developed novel methods employing rapid prototyping technology and 3D printing to create high-fidelity models from real patient images (CT or MR). Such advances allow creation of patient-specific models for preparation, simulation, and training before embarking on the actual surgery. A major challenge included the development of novel material formulations compatible with the rapid prototyping process while presenting anatomically realistic flexibility, cut-ability, drilling purchase, and density (CT number). Initial studies have yielded realistic models of the paranasal sinuses and skull base for simulation and training in image-guided surgery. The process of model development and material selection is reviewed along with the application of the phantoms in studies of high-precision surgery guided by C-arm cone-beam CT (CBCT). Surgical performance is quantitatively evaluated under CBCT guidance, with the high-fidelity phantoms providing an excellent test-bed for reproducible studies across a broad spectrum of challenging surgical tasks. Future work will broaden the atlas of models to include normal anatomical variations as well as a broad spectrum of benign and malignant disease. The role of high-fidelity models produced by rapid prototyping is discussed in the context of patient-specific case simulation, novel technology development (specifically CBCT guidance), and training of future generations of sinus and skull base surgeons.
Experimental comparison of landmark-based methods for 3D elastic registration of pre- and postoperative liver CT data
Thomas Lange, Stefan Wörz, Karl Rohr, et al.
The qualitative and quantitative comparison of pre- and postoperative image data is an important means of validating surgical procedures, in particular if computer-assisted planning and/or navigation is performed. Due to deformations after surgery, partially caused by the removal of tissue, a non-rigid registration scheme is a prerequisite for a precise comparison. Interactive landmark-based schemes are a suitable approach if high accuracy and reliability are difficult to achieve by automatic registration approaches. Incorporation of a priori knowledge about the anatomical structures to be registered may help to reduce interaction time and improve accuracy. For pre- and postoperative CT data of oncological liver resections, the intrahepatic vessels are suitable anatomical structures. In addition to using branching landmarks for registration, we here introduce quasi landmarks at vessel segments with high localization precision perpendicular to the vessels and low precision along the vessels. A comparison of interpolating thin-plate splines (TPS), interpolating Gaussian elastic body splines (GEBS) and approximating GEBS on landmarks at vessel branchings, as well as approximating GEBS on the introduced vessel segment landmarks, is performed. It turns out that the segment landmarks provide registration accuracies as good as branching landmarks and can improve accuracy if combined with branching landmarks. For a low number of landmarks, segment landmarks are even superior.
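For the interpolating TPS case, a landmark-driven displacement field can be built with off-the-shelf radial basis function tools; SciPy's thin-plate-spline kernel is used below as a stand-in (the GEBS variants compared in the paper have no standard library implementation). Landmark coordinates are made up.

```python
# Landmark-based non-rigid warp using SciPy's thin-plate-spline RBF.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding vessel-branching landmarks (mm) in pre- and postoperative CT.
pre_lm = np.array([[10.0, 12.0, 30.0], [40.0, 35.0, 28.0],
                   [25.0, 60.0, 45.0], [55.0, 20.0, 50.0],
                   [15.0, 45.0, 60.0], [48.0, 52.0, 62.0]])
post_lm = pre_lm + np.array([[1.5, -0.5, 0.8], [0.5, 1.0, -0.3],
                             [2.0, 0.2, 1.1], [-0.4, 1.6, 0.5],
                             [1.0, -1.2, 0.9], [0.3, 0.7, -0.6]])

# Interpolate the displacement field defined at the landmarks.
warp = RBFInterpolator(pre_lm, post_lm - pre_lm, kernel="thin_plate_spline")

# Map arbitrary preoperative points (e.g. a vessel centerline) to post space.
query = np.array([[20.0, 30.0, 40.0], [50.0, 40.0, 55.0]])
print(query + warp(query))
```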
Disablement of a surgical drill via CT guidance to protect vital anatomy
Christopher C. Heath, Ramya Balachandran, Omid Majdani, et al.
Applying image-guidance to an electronically-controlled surgical drill can prevent damage to patients' anatomy during resection. A system is presented that disables the drill when it nears pre-defined critical patient anatomy. The system consists of a tracking system, image-guidance software, and drill-control circuit. The software was developed in C++ with the help of the Image-Guided Surgery Toolkit, and was designed to track tools based on input from a MicronTracker (Claron Tech, Toronto, Ontario) tracking system. The system registers physical to image space using fiducial markers rigidly attached to the patient, tracks the drill, and automatically disables the drill when close to restricted regions. A coordinate reference frame is used for all physical acquisitions. Visual feedback of the tool's position in image space is provided during tracking. Two tests were performed to determine the feasibility of the system. Virtual restricted regions were defined inside a phantom, and an operator attempted to drill the phantom with the help of the application. No feedback was provided to the user except for the automatic disablement of the drill by the application when close to a restricted region. In the first test, the drill was disabled at 0.74 ± 0.46 mm from the restricted region and entered 5.3% of the surface area of the restricted region. In the second test, the drill was disabled 1.3 ± 0.69 mm from the restricted region and entered the restricted region 8.5% of the time. We conclude that the system shows promise and further testing should be conducted.
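The disable decision reduces to a distance test between the tracked drill tip and the surface of the restricted region, both expressed in the same registered coordinate frame. A minimal sketch with a toy spherical region and an assumed 1 mm safety margin; tracking, registration and the drill-control circuit are stubbed out.

```python
# Disabling a tracked drill when its tip approaches a restricted region.
import numpy as np
from scipy.spatial import cKDTree

SAFETY_MARGIN_MM = 1.0

# Toy restricted region: surface points of a sphere (stand-in for, e.g.,
# a segmented nerve), already registered to tracker space.
theta, phi = np.meshgrid(np.linspace(0, np.pi, 50), np.linspace(0, 2 * np.pi, 100))
center, radius = np.array([40.0, 40.0, 40.0]), 10.0
surface = center + radius * np.stack(
    [np.sin(theta) * np.cos(phi),
     np.sin(theta) * np.sin(phi),
     np.cos(theta)], axis=-1).reshape(-1, 3)
tree = cKDTree(surface)

def drill_allowed(tip_mm):
    """Return False when the tracked drill tip is within the safety margin."""
    distance, _ = tree.query(tip_mm)
    return distance > SAFETY_MARGIN_MM

for tip in ([25.0, 25.0, 25.0], [40.0, 40.0, 30.5]):
    print(tip, "drill enabled" if drill_allowed(tip) else "drill DISABLED")
```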
Cardiac
In vitro cardiac catheter navigation via augmented reality surgical guidance
Cristian A. Linte, John Moore, Andrew Wiles, et al.
Catheter-driven cardiac interventions have emerged in response to the need to reduce the invasiveness associated with traditional cut-and-sew techniques. Catheter manipulation is traditionally performed under real-time fluoroscopy imaging, resulting in an overall trade-off of procedure invasiveness for radiation exposure of both the patient and clinical staff. Our approach to reducing and potentially eliminating the use of fluoroscopy in the operating room entails the use of multi-modality imaging and magnetic tracking technologies, wrapped together into an augmented reality environment for enhanced intra-procedure visualization and guidance. Here we performed an in vitro study in which a catheter was guided to specific targets located on the endocardial atrial surface of a beating heart phantom. "Therapy delivery" was modeled in the context of a blinded procedure, mimicking a beating-heart, intracardiac intervention. The users navigated the tip of a magnetically tracked Freezor 5 CRYOCATH catheter to the specified targets. Procedure accuracy was determined as the distance between the tracked catheter tip and the tracked surgical target at the time of contact, and it was assessed under three different guidance modalities: endoscopic, augmented reality, and ultrasound image guidance. The overall RMS targeting accuracy achieved under augmented reality guidance averaged 1.1 mm. This guidance modality shows significant improvements in both procedure accuracy and duration over ultrasound image guidance alone, while maintaining an overall targeting accuracy comparable to that achieved under endoscopic guidance.
Computer-assisted LAD bypass grafting at the open heart
Christine Hartung, Claudia Gnahm, Reinhard Friedl, et al.
Open heart bypass surgery is the standard treatment in advanced coronary heart diseases. For an effective revascularization procedure, optimal placement of the bypass is very important. To accelerate the intraoperative localization of the anastomosis site and to increase the precision of the procedure, a concept for computer assistance in open heart bypass surgery has been developed comprising the following steps: 1. Preprocedural planning: A patient-specific coronary map with information on vessel paths and wall plaque formations is extracted from a multi-slice computed tomography (MSCT). On this basis, the heart surgeon and the cardiac radiologist define the optimal anastomosis site prior to surgery. 2. Intraoperative navigation: During surgery, data are recorded at the beating heart using a stereo camera system. After registering the pre- and intraoperative data sets, preprocedural information can be transferred to the surgical site by overlaying the coronary map and the planned anastomosis site on the live video stream. With this visual guidance system, the surgeon can navigate to the planned anastomosis site. In this work, the proposed surgical assistance system has been validated for the left anterior descending coronary artery (LAD). The accuracy of the registration mechanism has been evaluated retrospectively on patient data sets, and the effects of breathing motion were quantified. The promising results of the retrospective evaluation led to the in-vivo application of the computer assistance system during several bypass grafting procedures. Intraoperative navigation has been performed successfully and postoperative evaluation confirms that the bypass grafts were accurately positioned at the preoperatively planned anastomosis sites.
Echocardiography to magnetic resonance image registration for use in image-guided electrophysiology procedures
We present a novel method to register three-dimensional echocardiography (echo) images with magnetic resonance images (MRI) based on anatomical features, which could be used in the registration pipeline for overlaying MRI-derived roadmaps onto two-dimensional live X-ray images in electrophysiology (EP) procedures. The features used in image registration are the surface of the left ventricle and a manually defined centerline of the descending aorta. The MR-derived surface is generated using a fully automated algorithm, and the echo-derived surface is produced using a semi-automatic process. We tested our method on six volunteers and three patients. We validated registration accuracy using two methods. The first calculated a root mean square distance error using anatomical landmarks. The second method used catheters as landmarks in one clinical EP procedure. Results show a mean error of 4.24 mm, which is acceptable for our clinical application, and no failed registrations were observed. In addition, our algorithm works on clinical data, is fast and requires only a small amount of manual input, and so it is suitable for use during EP procedures.
Model-driven physiological assessment of the mitral valve from 4D TEE
Disorders of the mitral valve are the second most frequent valvular disorder, accounting for 14 percent of the total number of deaths caused by valvular heart disease each year in the United States, and they require elaborate clinical management. Visual and quantitative evaluation of the valve is an important step in the clinical workflow according to experts, as knowledge about mitral morphology and dynamics is crucial for interventional planning. Traditionally this involves examination and metric analysis of 2D images, with potential errors intrinsic to the method. Recent commercial solutions are limited to specific anatomic components and pathologies, and to a single phase of 4D cardiac acquisitions. This paper introduces a novel approach for morphological and functional quantification of the mitral valve based on a 4D model estimated from ultrasound data. A physiological model of the mitral valve, covering the complete anatomy and possible shape variations, is generated utilizing parametric spline surfaces constrained by topological and geometrical prior knowledge. The 4D model's parameters are estimated for each patient using the latest discriminative learning and incremental searching techniques. Precise evaluation of the anatomy using model-based dynamic measurements and advanced visualization are enabled through the proposed approach in a reliable, repeatable and reproducible manner. The efficiency and accuracy of the method is demonstrated through experiments and an initial validation based on clinical research results. To the best of our knowledge this is the first time such a patient-specific 4D mitral valve model has been proposed, covering all of the relevant anatomy and enabling the common pathologies to be modeled at once.
Curve-based 2D-3D registration of coronary vessels for image guided procedure
A 3D roadmap provided by pre-operative volumetric data aligned with fluoroscopy aids visualization and navigation in interventional cardiology (IC), especially when the contrast agent injection used to highlight the coronary vessels cannot be used systematically during the whole procedure, or when partially or totally occluded vessels have low visibility in fluoroscopy. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for the specific vessel(s) of interest during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid-body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and corresponding vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of the guidewire used to navigate during the procedure. Finally, the alignment problem is solved by the Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that the distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses with a ground truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even in difficult cases of occluded vessels without injection of contrast agent.
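The distance-transform-based cost can be evaluated as follows: project the 3-D centerline with the current pose and average the 2-D distance-to-centerline values at the projected pixels. The sketch shows only this cost term with toy data; the ICP pose search itself is omitted and the projection model is an assumed pinhole.

```python
# Evaluating a 2D-3D alignment cost for one candidate pose using a distance
# transform that is zero on the 2-D vessel centerline.
import numpy as np
from scipy.ndimage import distance_transform_edt

def alignment_cost(points_3d, R, t, K, dist_map):
    """Mean distance-to-centerline of the projected 3-D points (pixels)."""
    cam = points_3d @ R.T + t                    # world -> camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, dist_map.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, dist_map.shape[0] - 1)
    return dist_map[v, u].mean()

# 2-D centerline mask from the gated fluoroscopic frame (toy: a diagonal).
mask = np.zeros((512, 512), bool)
rr = np.arange(100, 400)
mask[rr, rr] = True
dist_map = distance_transform_edt(~mask)         # 0 on the centerline

K = np.array([[1000.0, 0, 256], [0, 1000.0, 256], [0, 0, 1]])
centerline_3d = np.stack([np.linspace(-30, 30, 50),
                          np.linspace(-30, 30, 50),
                          np.full(50, 200.0)], axis=1)
print(alignment_cost(centerline_3d, np.eye(3), np.zeros(3), K, dist_map))
```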
Keynote and Modeling
Accelerated statistical shape model-based technique for tissue deformation estimation
Iman Khalaji, Kaamran Rahemifar, Abbas Samani
A novel finite element (FE) based technique is introduced, which can be applied for real-time or near real-time soft tissue deformation calculation, irrespective of the complexities arising from the tissue constitutive law or loading conditions. Unlike classical FE methods, which are computationally slow, this technique is very fast and yet highly accurate. The proposed technique is based on statistical analysis of pre-processed FE models on a class of organ shapes similar to the object shape of interest. We show that FE analysis results of any new shape in the class of objects can be obtained by a linear combination of main modes of the FE output parameter space. Several examples are presented for validation and finally an application of this method in real-time elastography is demonstrated.
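The key idea, obtaining the FE result of a new shape as a linear combination of precomputed modes, can be sketched as a PCA of joint (shape, FE output) vectors followed by a least-squares fit of the mode weights to the new shape. This is one plausible formulation with synthetic data, not the authors' exact scheme.

```python
# Approximating an FE solution for a new organ shape from PCA modes of
# pre-computed (shape, FE output) pairs. Sketch with synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_train, n_shape, n_fe = 40, 300, 900
shapes = rng.normal(size=(n_train, n_shape))               # flattened shapes
fe_out = shapes @ rng.normal(size=(n_shape, n_fe))         # stand-in FE results

# PCA of the joint (shape, FE output) vectors.
joint = np.hstack([shapes, fe_out])
mean = joint.mean(0)
_, _, Vt = np.linalg.svd(joint - mean, full_matrices=False)
modes = Vt[:10]                                            # main modes kept

def predict_fe(new_shape):
    """Estimate the FE output of a new shape from the shape part of the modes."""
    A = modes[:, :n_shape].T                               # shape part of modes
    w, *_ = np.linalg.lstsq(A, new_shape - mean[:n_shape], rcond=None)
    return mean[n_shape:] + w @ modes[:, n_shape:]

estimate = predict_fe(rng.normal(size=n_shape))
print("predicted FE field size:", estimate.shape)
```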
Effect of heterogeneous material of the lung on deformable image registration
Adil Al-Mayah, Joanne Moseley, Mike Velec, et al.
Patient specific 3D finite element models have been developed to investigate the effect of heterogeneous material properties on modeling of the deformation of the lungs by including the bronchial trees of each lung. Each model consists of both lungs, body, tumor, and bronchial trees. Triangular shell elements with 0.1 cm wall thickness are used to model the bronchial trees. Body, lungs and tumor are modeled using 4-node tetrahedral elements. Experimental test data are used for the nonlinear material properties of the lungs. Three elastic modulii of 0.5, 10 and 18 MPa are used for the bronchial tree. Frictionless contact surfaces are applied to lung surfaces and cavities. The accuracy of the results is examined using an average of 40 bifurcation points. Preliminary results have shown an insignificant effect of modeling the bronchial trees explicitly on the overall accuracy of the model. However, local changes in the predicted motion of the bronchial tree of up to 5.2 mm were observed, indicating that modeling the bronchial tree explicitly, with unique material properties, may ensure a more accurately detailed model of the lung as well as reduced maximum residual errors.
Using a statistical appearance model to predict the fracture load of the proximal femur
Benedikt Schuler, Karl D. Fritscher, Volker Kuhn, et al.
Clinical diagnostic techniques such as dual-energy X-ray absorptiometry are currently used to quantify bone quality. However, bone mineral density alone is not sufficient to predict biomechanical properties like the fracture load for an individual patient. Therefore, the development of tools that can assess bone quality in order to predict the individual biomechanics of a bone would mean a significant improvement in the prevention of fractures. In this paper an approach to predict the fracture load of proximal femora by using a statistical appearance model is presented. For this purpose, 96 CT datasets of anatomical specimens of human femora are used to create statistical models for the prediction of the individual fracture load. Calculating statistical appearance models in different regions of interest by using principal component analysis (PCA) makes it possible to use geometric as well as structural information about the proximal femur. By regressing the PCA output against the individual fracture loads of the 96 femora, multilinear regression models were created using a leave-one-out cross-validation scheme. The resulting correlations are comparable to those of studies that partly use higher image resolutions.
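The validation scheme, PCA scores regressed against fracture load under leave-one-out cross-validation, can be sketched with synthetic data as follows; dimensions and noise levels are arbitrary and the data are not from the study.

```python
# Leave-one-out evaluation of a multilinear regression of fracture load on
# PCA scores of an appearance model. Synthetic data only.
import numpy as np

rng = np.random.default_rng(4)
n, p, k = 96, 500, 8                       # specimens, appearance features, modes

X = rng.normal(size=(n, p))                # appearance vectors (toy)
load = X @ rng.normal(size=p) + rng.normal(0, 5.0, n)   # "measured" loads (toy)

predictions = np.empty(n)
for i in range(n):
    train = np.delete(np.arange(n), i)
    Xt, yt = X[train], load[train]
    # PCA on the training set only.
    mean = Xt.mean(0)
    _, _, Vt = np.linalg.svd(Xt - mean, full_matrices=False)
    scores = (Xt - mean) @ Vt[:k].T
    # Multilinear regression of fracture load on the PCA scores.
    A = np.hstack([scores, np.ones((len(train), 1))])
    coef, *_ = np.linalg.lstsq(A, yt, rcond=None)
    s = (X[i] - mean) @ Vt[:k].T
    predictions[i] = np.append(s, 1.0) @ coef

print("leave-one-out correlation:", round(np.corrcoef(predictions, load)[0, 1], 3))
```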
Robotics and Guidance Systems
Development and evaluation of a new image-based user interface for robot-assisted needle placements with the Robopsy system
Alexander Seitel, Conor J. Walsh, Nevan C. Hanumara, et al.
The main challenges of Computed Tomography (CT)-guided organ puncture are the mental registration of the medical imaging data with the patient anatomy, required when planning a trajectory, and the subsequent precise insertion of a needle along it. An interventional telerobotic system, such as Robopsy, enables precise needle insertion; however, in order to minimize procedure time and the number of CT scans, this system should be driven by an interface that is directly integrated with the medical imaging data. In this study we have developed and evaluated such an interface that provides the user with point-and-click functionality for specifying the desired trajectory, segmenting the needle and automatically calculating the insertion parameters (angles and depth). In order to highlight the advantages of such an interface, we compared robotic-assisted targeting using the old interface (non-image-based), where the path planning was performed on the CT console and transferred manually to the interface, with the targeting procedure using the new interface (image-based). We found that the mean procedure time (n=5) was 22±5 min (non-image-based) and 19±1 min (image-based), with a mean number of CT scans of 6±1 (non-image-based) and 5±1 (image-based). Although the targeting experiments were performed in gelatin with homogeneous properties, our results indicate that an image-based interface can reduce procedure time as well as the number of CT scans for percutaneous needle biopsies.
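Given the skin-entry and target points picked in the CT image, the insertion depth and angles follow from simple geometry. A sketch under an assumed angle convention (off-axial tilt and in-plane angle in scanner coordinates), which need not match the Robopsy convention.

```python
# Computing needle insertion depth and two insertion angles from the
# skin-entry and target points picked on the CT image.
import numpy as np

def insertion_parameters(entry_mm, target_mm):
    """entry_mm, target_mm: 3-D points in scanner coordinates (x, y, z)."""
    d = np.asarray(target_mm, float) - np.asarray(entry_mm, float)
    depth = np.linalg.norm(d)
    # Angle out of the axial (x-y) plane, i.e. craniocaudal tilt.
    out_of_plane = np.degrees(np.arcsin(d[2] / depth))
    # Angle within the axial plane, measured from the anterior (y) direction.
    in_plane = np.degrees(np.arctan2(d[0], d[1]))
    return depth, in_plane, out_of_plane

depth, a_in, a_out = insertion_parameters([120.0, 80.0, 40.0],
                                          [150.0, 130.0, 55.0])
print(f"depth {depth:.1f} mm, in-plane {a_in:.1f} deg, off-axial {a_out:.1f} deg")
```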
Human vs. robot operator error in a needle-based navigation system for percutaneous liver interventions
Lena Maier-Hein, Conor J. Walsh, Alexander Seitel, et al.
Computed tomography (CT) guided percutaneous punctures of the liver for cancer diagnosis and therapy (e.g. tumor biopsy, radiofrequency ablation) are well-established procedures in clinical routine. One of the main challenges related to these interventions is the accurate placement of the needle within the lesion. Several navigation concepts have been introduced to compensate for organ shift and deformation in real-time, yet, the operator error remains an important factor influencing the overall accuracy of the developed systems. The aim of this study was to investigate whether the operator error and, thus, the overall insertion error of an existing navigation system could be further reduced by replacing the user with the medical robot Robopsy. For this purpose, we performed navigated needle insertions in a static abdominal phantom as well as in a respiratory liver motion simulator and compared the human operator error with the targeting error performed by the robot. According to the results, the Robopsy driven needle insertion system is able to more accurately align the needle and insert it along its axis compared to a human operator. Integration of the robot into the current navigation system could thus improve targeting accuracy in clinical use.
Real-time video fusion using a distributed architecture in robotic surgery
The use of medical robotics has been increasing in recent years. This increase in popularity can be attributed to the improvement in dexterity robots provide over traditional laparoscopy, as well as the increasing number of applications of robotic surgery. The da Vinci from Intuitive Surgical, one of the more commonly used robotic surgery systems, relies on stereo laparoscopic video for guidance, which restricts visualization to surface anatomy only. Oftentimes the localization of subsurface anatomic structures is critical to the success of surgical intervention. The implementation of image guidance in medical robotics adds the ability to see beneath the surface; however, current implementations are restrictive in terms of flexibility or scalability, especially in the ability to process real-time video data. We present a system architecture that allows multiple computers to be used through a centralized database and can fuse additional information into the real-time video stream. This architecture is independent of hardware or software and is extensible to a large number of clinical applications.
Time-of-flight sensor for patient positioning
Christian Schaller, Andre Adelt, Jochen Penne, et al.
In this paper we present a system that uses Time-of-Flight (ToF) technology to correct the position of a patient with respect to a previously acquired reference surface. A ToF sensor enables the acquisition of a 3-D surface model containing more than 25,000 points using a single sensor in real time. One advantage of this technology is that the high lateral resolution makes it possible to accurately compute the translation and rotation of the patient with respect to a reference surface. We use an Iterative Closest Point (ICP) algorithm to determine the 6 degrees of freedom (DOF) vector. Current results show that for rigid phantoms it is possible to obtain an accuracy of 2.88 mm in translation and 0.28° in rotation. Tests with human subjects validate the robustness and stability of the proposed system. We achieve a mean registration error of 3.38 mm for human subjects. Potential applications for this system can be found within radiotherapy or multimodal image acquisition with different devices.
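The 6-DOF estimate comes from iterating nearest-neighbour matching and a closed-form rigid update (ICP). A compact sketch on synthetic point clouds; real ToF data would additionally need outlier rejection and downsampling.

```python
# A basic ICP loop: nearest-neighbour correspondences plus a closed-form
# (SVD-based) rigid update, applied to a synthetic surface patch.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Closed-form rigid transform mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, cd - R @ cs

def icp(live, reference, iterations=30):
    tree = cKDTree(reference)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = live.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)                 # closest reference points
        R, t = best_rigid(current, reference[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: recover a small known misalignment of an identical point cloud.
rng = np.random.default_rng(5)
reference = rng.uniform(-50, 50, (3000, 3))          # reference surface (mm)
angle = np.radians(2.0)
R_mis = np.array([[np.cos(angle), -np.sin(angle), 0],
                  [np.sin(angle),  np.cos(angle), 0],
                  [0, 0, 1]])
live = reference @ R_mis.T + np.array([2.0, -1.5, 1.0])

R, t = icp(live, reference)
residual = np.linalg.norm(live @ R.T + t - reference, axis=1).mean()
print("mean residual after ICP (mm):", round(residual, 3))   # should be ~0 here
```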
Application of an image-guided navigation system in breast cancer localization
Tanja Alderliesten, Claudette Loo, Angelique T. E. F. Schlief, et al.
Image-guided navigation on the basis of pre-therapy images in a deformable organ, such as the breast, requires a survey of the factors that cause uncertainties. A deformable breast-tissue-mimicking phantom with simulated tumors was employed to investigate the accuracy of lesion localization with a needle instrument coupled to an optical measurement system. The RMS deviation was 1.1 mm, with errors ≤ 2.0 mm in 96% of the procedures. Ultrasonography data acquired during needle localization of breast tumors were analyzed in 20 patients (23 tumors; 12 benign, 11 malignant) to investigate the deformation due to the presence of instruments. The overall RMS tumor shift was 2.3 mm after release of pressure on the needle. To establish an optimal strategy for correcting breast motion due to breathing, experiments with a volunteer were performed. Tracking a single centre marker was found to be the most effective way to improve registration accuracy. Average deviations of 8.2 mm were reduced to 1.1 mm. The combined impact of these different uncertainties resulted in distributions defined by: μ = 2.5 mm, σ = 1.4 mm (benign and malignant), μ = 3.1 mm, σ = 1.8 mm (benign), μ = 1.7 mm, σ = 0.9 mm (malignant).
Implant alignment in total elbow arthroplasty: conventional vs. navigated techniques
Colin P. McDonald, James A. Johnson, Graham J. W. King, et al.
Incorrect selection of the native flexion-extension axis during implant alignment in elbow replacement surgery is likely a significant contributor to failure of the prosthesis. Computer- and image-assisted surgery is emerging as a useful surgical tool for improving the accuracy of orthopaedic procedures. This study evaluated the accuracy of implant alignment using an image-based navigation technique compared against a conventional non-navigated approach. Implant alignment error was 0.8 ± 0.3 mm in translation and 1.1 ± 0.4° in rotation for the navigated alignment, compared with 3.1 ± 1.3 mm and 5.0 ± 3.8° for the non-navigated alignment. Five of the 11 non-navigated alignments were malaligned by more than 5°, while none of the navigated alignments had an error greater than 2.0°. It is likely that improved implant positioning will lead to reduced implant loading and wear, resulting in fewer implant-related complications and revision surgeries.
Fast 3D vision with robust structured light coding
Chadi Albitar, Pierre Graebling, Christophe Doignon
In this paper we present a new monochromatic pattern for robust structured light coding based on the spatial neighborhood scheme and using the M-array approach. We tackle the design problem with the definition of a small set of symbols associated with simple geometrical features. One of these primitives embeds the local orientation of the pattern, which is helpful for the neighborhood detection during the decoding process. The pattern codification is robust, as it tolerates a high error rate, the codewords being characterized by an average Hamming distance greater than 6. The design of the pattern takes into account its integration into an endoscopic tool. Moreover, the color used in the projection is chosen after a study of the interaction between color and organ tissue. The aim of this work is to use this pattern for the real-time 3D reconstruction of dynamic scenes, particularly in endoscopic surgery, with fast and reliable detection and decoding stages. Initial results are presented to assess both the capabilities of the proposed pattern and the reliability of the decoding algorithm.
Ultrasound
Fast hybrid freehand ultrasound volume reconstruction
Athanasios Karamalis, Wolfgang Wein, Oliver Kutter, et al.
The volumetric reconstruction of a freehand ultrasound sweep, also called compounding, adds diagnostic value to the ultrasound acquisition by allowing 3D visualization and fast generation of arbitrary MPR (multi-planar reformatting) slices. Furthermore, reconstructing a sweep adds to the general availability of the ultrasound data, since volumes are more readily handled by a variety of clinical applications and systems such as PACS. Generally there are two reconstruction approaches, forward and backward, each with its respective advantages and disadvantages. In this paper we present a hybrid reconstruction method, partially implemented on the GPU, that combines the forward and backward approaches to efficiently reconstruct a continuous freehand ultrasound sweep while at the same time ensuring high reconstruction quality. The main goal of this work was to significantly decrease the waiting time from sweep acquisition to volume reconstruction in order to make an ultrasound examination more convenient for both the patient and the sonographer. Testing our algorithm demonstrated a significant performance gain, by an average factor of 197 for simple interpolation and 84 for advanced interpolation schemes, reconstructing a 256³ volume in 0.35 and 0.82 seconds, respectively.
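The forward/backward distinction can be illustrated with a toy compounding step: a forward pass scatters tracked B-scan pixels into the nearest voxels (accumulating sums and counts), after which any remaining holes could be filled by a backward interpolation from neighboring voxels. This is only a schematic CPU sketch under simplified assumptions (identity probe calibration, nearest-neighbor splatting, synthetic frame data), not the GPU hybrid method of the paper.

```python
import numpy as np

def forward_compound(volume_shape, voxel_size, frames):
    """Scatter freehand ultrasound pixels into a voxel grid (forward step).

    frames: list of (pixel_positions_mm (N,3), pixel_values (N,)) per tracked B-scan.
    """
    acc = np.zeros(volume_shape, dtype=np.float32)
    cnt = np.zeros(volume_shape, dtype=np.int32)
    for positions, values in frames:
        idx = np.floor(positions / voxel_size).astype(int)
        inside = np.all((idx >= 0) & (idx < np.array(volume_shape)), axis=1)
        idx, vals = idx[inside], values[inside]
        np.add.at(acc, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
        np.add.at(cnt, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
    # NaN marks holes that a backward interpolation pass would still have to fill.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)

# Hypothetical frame: 1000 tracked pixels scattered in a 50 mm cube, all value 100.
pos = np.random.rand(1000, 3) * 50.0
vol = forward_compound((64, 64, 64), 1.0, [(pos, np.full(1000, 100.0))])
print("filled voxels:", np.count_nonzero(~np.isnan(vol)))
```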
Validation of four-dimensional ultrasound for targeting in minimally-invasive beating-heart surgery
Danielle F. Pace, Andrew D. Wiles, John Moore, et al.
Ultrasound is garnering significant interest as an imaging modality for surgical guidance, due to its affordability, real-time temporal resolution and ease of integration into the operating room. Minimally-invasive intracardiac surgery performed on the beating-heart prevents direct vision of the surgical target, and procedures such as mitral valve replacement and atrial septal defect closure would benefit from intraoperative ultrasound imaging. We propose that placing 4D ultrasound within an augmented reality environment, along with a patient-specific cardiac model and virtual representations of tracked surgical tools, will create a visually intuitive platform with sufficient image information to safely and accurately repair tissue within the beating heart. However, the quality of the imaging parameters, spatial calibration, temporal calibration and ECG-gating must be well characterized before any 4D ultrasound system can be used clinically to guide the treatment of moving structures. In this paper, we describe a comprehensive accuracy assessment framework that can be used to evaluate the performance of 4D ultrasound systems while imaging moving targets. We image a dynamic phantom that is comprised of a simple robot and a tracked phantom to which point-source, distance and spherical objects of known construction can be attached. We also follow our protocol to evaluate 4D ultrasound images generated in real-time by reconstructing ECG-gated 2D ultrasound images acquired from a tracked multiplanar transesophageal probe. Likewise, our evaluation framework allows any type of 4D ultrasound to be quantitatively assessed.
Ultrasound goes GPU: real-time simulation using CUDA
Tobias Reichl, Josh Passenger, Oscar Acosta, et al.
Despite the increasing adoption of other imaging modalities, ultrasound guidance is widely used for surgical procedures and clinical imaging due to its low cost, non-invasiveness, and real-time visual feedback. Many ultrasound-guided procedures require extensive training, and where possible, training on simulations should be preferred over training on patients. Existing approaches to ultrasound simulation are usually limited by real-time requirements on computational resources. Unlike previous approaches, we simulate freehand ultrasound images from CT data on the Graphics Processing Unit (GPU). We build upon the method proposed by Wein et al. for estimating ultrasound reflection properties of tissue and modify it to a computationally more efficient form. Going beyond previous approaches, we also estimate ultrasound absorption properties from CT data. Using NVIDIA's "Compute Unified Device Architecture" (CUDA), we provide a physically plausible simulation of ultrasound reflection, shadowing artifacts, speckle noise and radial blurring. The same algorithm can be used for simulating either linear or radial imaging, and all parameters of the simulated probe are interactively configurable at runtime, including ultrasound frequency and intensity as well as field geometry. With current hardware we are able to achieve an image width of up to 1023 pixels from raw CT data in real-time, without any pre-processing and without any loss of information from the CT image other than from interpolation of the input data. Visual comparison to real ultrasound images indicates satisfactory results.
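The general idea of deriving reflection and absorption along a scanline from CT can be sketched as follows: CT intensities are mapped to an approximate acoustic impedance, the reflected fraction at each interface follows from the impedance jump, and a simple exponential attenuation term stands in for absorption. The impedance mapping and attenuation coefficient below are illustrative assumptions, not the calibrated model of Wein et al. or the CUDA implementation of the paper.

```python
import numpy as np

def simulate_scanline(hu, dz_mm=0.5, attenuation_per_mm=0.02):
    """Toy ultrasound scanline simulation from a 1D profile of CT values (HU)."""
    # Crude impedance proxy: a monotone mapping of HU to pseudo acoustic impedance
    # (an assumption for illustration, not a calibrated relation).
    z = 1.5 + 1e-3 * (hu + 1000.0)
    z1, z2 = z[:-1], z[1:]
    reflection = ((z2 - z1) / (z2 + z1)) ** 2        # intensity reflection coefficient
    depth = np.arange(reflection.size) * dz_mm
    transmitted = np.exp(-attenuation_per_mm * depth)  # crude cumulative absorption proxy
    return reflection * transmitted

# Synthetic profile: soft tissue, denser tissue, then bone-like values.
hu_profile = np.concatenate([np.full(40, 50.0), np.full(40, 300.0), np.full(40, 1000.0)])
echo = simulate_scanline(hu_profile)
print("strongest echoes at samples:", np.argsort(echo)[-2:])
```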
A GPU-based framework for simulation of medical ultrasound
Oliver Kutter, Athanasios Karamalis, Wolfgang Wein, et al.
Simulation of ultrasound (US) images from volumetric medical image data has been shown to be an important tool in medical image analysis. However, there is a trade-off between the accuracy of the simulation and its real-time performance. In this paper, we present a framework for acceleration of ultrasound simulation on the graphics processing unit (GPU) of commodity computer hardware. Our framework can accommodate ultrasound modeling with varying degrees of complexity. To demonstrate the flexibility of our proposed method, we have implemented several models of acoustic propagation through 3D volumes. We conducted multiple experiments to evaluate the performance of our method for its application in multi-modal image registration and training. The results demonstrate the high performance of the GPU-accelerated simulation, which outperforms CPU implementations by up to two orders of magnitude, and encourage the investigation of even more realistic acoustic models.
A guided wave technique for needle biopsy under ultrasound guidance
Needle biopsy under ultrasound guidance is routinely used in clinical applications. However, in order to track the position of the needle as it penetrates the tissue a particular alignment between the ultrasound probe and needle must be kept, thus requiring highly skilled radiologists. In this paper we present a new technique which leads to the detection of the needle regardless of its orientation relative to the imaging probe. We discuss the fundamental aspects of the method and present some preliminary results that show the potential of the technique.
Minimally Invasive II
A system for the registration of arthroscopic images to magnetic resonance images of the knee: for improved virtual knee arthroscopy
Chengliang Hu, Giancarlo Amati, Nicola Gullick, et al.
Knee arthroscopy is a minimally invasive procedure that is routinely carried out for the diagnosis and treatment of pathologies of the knee joint. A high level of expertise is required to carry out this procedure and therefore the clinical training is extensive. There are several reasons for this that include the small field of view seen by the arthroscope and the high degree of distortion in the video images. Several virtual arthroscopy simulators have been proposed to augment the learning process. One of the limitations of these simulators is the generic models that are used. We propose to develop a new virtual arthroscopy simulator that will allow the use of pathology-specific models with an increased level of photo-realism. In order to generate these models we propose to use registered magnetic resonance images (MRI) and arthroscopic video images collected from patients with a variety of knee pathologies. We present a method to perform this registration based on the use of a combined X-ray and MR imaging system (XMR). In order to validate our technique we carried out MR imaging and arthroscopy of a custom-made acrylic phantom in the XMR environment. The registration between the two modalities was computed using a combination of XMR and camera calibration, and optical tracking. Both two-dimensional (2D) and three-dimensional (3D) registration errors were computed and shown to be approximately 0.8 and 3 mm, respectively. Further to this, we qualitatively tested our approach using a more realistic plastic knee model that is used for the arthroscopy training.
Remote vs. manual catheter navigation: a comparison study of operator performance using a 2D multi-path phantom
Yogesh Thakur, Chris J. Norley, David W. Holdsworth, et al.
A remote catheter navigation system (RCNS) has been developed to permit fluoroscopic x-ray guidance of percutaneous catheters from a radiation-safe location. The RCNS employs a unique method to manipulate the remote catheter - namely, real-time motion sensing and motion replication of a local catheter. This maintains and utilizes the dexterous skills required for successful, conventional, bedside catheter navigation, while eliminating cumulative radiation exposure to the interventionalist. This paper presents a study investigating catheter navigation efficacy and learning effects during remote and manual catheter navigation. An operator with no interventional experience and no prior experience with the RCNS traversed 16 paths, containing 90 turns, in a custom-made 2D multi-path phantom using both conventional catheter manipulation and the RCNS. Each path was repeated 8 times in succession. Path success and navigation time were recorded for all trials. The operator successfully traversed all 16 paths and 90 turns using both navigation techniques. A mean increase of 12 seconds in navigation time was observed using the RCNS. Successive repeated trials of the same path did not exhibit any learning trends. The operator successfully traversed all paths in the multi-path model using both navigation techniques, with only a slight increase in navigation time using the remote navigation system. This suggests that the RCNS, which requires minimal operator training, is comparable to, and as robust as, conventional bedside navigation.
New vision based navigation clue for a regular colonoscope's tip
Anouar Mekaouar, Chokri Ben Amar, Tanneguy Redarce
Regular colonoscopy has always been regarded as a complicated procedure requiring a tremendous amount of skill to be safely performed. Indeed, the practitioner must contend with both the tortuousness of the colon and the handling of the colonoscope: taking the visual data acquired by the scope's tip into account, he has to rely mostly on experience and skill to steer it in a fashion that promotes safe insertion of the device's shaft. In that context, we propose a new navigation clue for the tip of a regular colonoscope in order to assist surgeons during a colonoscopic examination. Firstly, we consider a patch of the inner colon depicted in a regular colonoscopy frame. Then we perform a sketchy 3D reconstruction of the corresponding 2D data. A suggested navigation trajectory is then derived on the basis of the obtained relief. Both the visible and invisible lumen cases are considered. Due to its low computational cost, this strategy accommodates intraoperative configuration changes and thus mitigates the effect of the colon's non-rigidity. It also tends to provide a safe navigation trajectory through the whole colon, since the approach aims at keeping the extremity of the instrument as far as possible from the colon wall during navigation. To make this process effective, we replaced the original manual control system of a regular colonoscope with a motorized one allowing automatic pan and tilt motions of the device's tip.
Swallowable capsule with air channel for improved image-guided cancer detection in the esophagus
Eric J. Seibel, C. David Melville, Jonathan K. C. Lung, et al.
A new type of endoscope has been developed and tested in the human esophagus, a tethered-capsule endoscope (TCE) that requires no sedation for oral ingestion and esophageal inspection. The TCE uses scanned red, green, and blue laser light to image the upper digestive tract using a swallowable capsule of 6.4mm in diameter and 18mm in length on a 1.4mm diameter tether. The TCE has been modified for image-guided interventions in the lower esophagus, specifically for more effective detection and measurement of the extent of Barrett's esophagus, a precursor to esophageal cancer. Three modifications have been tested in vivo: (1) weighting the capsule so it is negatively buoyant in water, (2) increasing the frame rate of 500-line images to 30 Hz (video rate), and (3) adding a 1.0mm inner diameter working channel alongside the tether for distending the lower esophagus with air pressure during endoscopy. All three modifications proved effective for more clearly visualizing the lower esophagus in the first few human subjects. The air channel was especially useful because it did not change tolerability in the first subject for unsedated endoscopy and the air easily removed bubbles obscuring tissue from the field of view. The air provided a non-invasive intervention by stimulating the mechanosensor of the lower esophageal sphincter at the precise time that the TCE was positioned for most informative imaging. All three TCE modifications proved successful for improved visualization of esophageal pathology, such as suspected Barrett's esophagus, without the use of sedation.
Direct global adjustment methods for endoscopic mosaicking
Sharmishtaa Seshamani, Michael D. Smith, Jason J. Corso, et al.
Endoscopy is an invaluable tool for several surgical and diagnostic applications. It permits minimally invasive visualization of internal structures with little or no injury to surrounding tissue. However, this method of visualization restricts the size of the imaging device and therefore limits the field of view captured in a single image. The problem of a narrow field of view can be solved by capturing video sequences and stitching them to generate a mosaic of the scene under consideration. Registration of images in the sequence is therefore a crucial step. Existing methods compute frame-to-frame registration estimates and use these to resample images in order to generate a mosaic. However, the complexity of the appearance of internal structures and the accumulation of registration error in frame-to-frame estimates can be large enough to cause a cumulative drift that misrepresents the scene. These errors can be reduced by applying global adjustment schemes. In this paper, we present a set of techniques for overcoming this drift problem in pixel-based registration in order to achieve global consistency of mosaics. The algorithm uses the frame-to-frame estimates as an initialization and subsequently corrects them by setting up a large-scale optimization problem which simultaneously solves for all corrections of the estimates. In addition, we set up a graph and introduce loop-closure constraints in order to ensure consistency of registration. We present our method and results for semi-global and fully global graph-based adjustment, together with a validation of the results.
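The global adjustment idea (treating frame-to-frame estimates as noisy constraints and solving for all frame poses jointly, with loop-closure constraints tying distant frames together) reduces to a linear least-squares problem when the registrations are pure translations. The sketch below solves such a toy graph with synthetic measurements; it is only a simplified stand-in for the pixel-based optimization described in the paper.

```python
import numpy as np

def globally_adjust(n_frames, constraints):
    """Solve for per-frame 2D translations from relative measurements.

    constraints: list of (i, j, tij) meaning pose[j] - pose[i] ~ tij (2-vector).
    Frame 0 is fixed at the origin (gauge constraint).
    """
    A = np.zeros((2 * len(constraints) + 2, 2 * n_frames))
    b = np.zeros(2 * len(constraints) + 2)
    for k, (i, j, tij) in enumerate(constraints):
        A[2 * k:2 * k + 2, 2 * j:2 * j + 2] = np.eye(2)
        A[2 * k:2 * k + 2, 2 * i:2 * i + 2] = -np.eye(2)
        b[2 * k:2 * k + 2] = tij
    A[-2:, 0:2] = np.eye(2)                    # anchor frame 0 at the origin
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_frames, 2)

# Synthetic example: chain 0->1->2->3 plus a loop closure 3->0 that contradicts the drift.
chain = [(0, 1, [10.0, 0.2]), (1, 2, [10.3, -0.1]), (2, 3, [9.8, 0.0]), (3, 0, [-29.0, 0.0])]
print(globally_adjust(4, chain))
```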
A planning system for transapical aortic valve implantation
Michael Gessat, Denis R. Merk, Volkmar Falk, et al.
Stenosis of the aortic valve is a common cardiac disease. It is usually corrected surgically by replacing the valve with a mechanical or biological prosthesis. Transapical aortic valve implantation is an experimental minimally invasive surgical technique that is applied to patients with high operative risk to avoid pulmonary arrest. A stented biological prosthesis is mounted on a catheter. Through small incisions in the fifth intercostal space and the apex of the heart, the catheter is positioned under fluoroscopy in the aortic root. The stent is expanded and unfolds the valve, which is thereby implanted into the aortic root. Exact targeting is crucial, since major complications can arise from a misplaced valve. Planning software for perioperative use is presented that allows selection of the best-fitting implant and calculation of the safe target area for that implant. The software uses contrast-enhanced perioperative DynaCT images acquired under rapid pacing. In a semiautomatic process, a surface segmentation of the aortic root is created. User-selected anatomical landmarks are used to calculate the geometric constraints for the size and position of the implant. The software is integrated into a PACS network based on DICOM communication to query and receive the images and implant templates from a PACS server. The planning results can be exported to the same server and from there can be retrieved by an intraoperative catheter guidance device.
Visualization and Geometry
Uniscale multi-view registration using double dog-leg method
Chao-I Chen, Dusty Sargent, Chang-Ming Tsai, et al.
3D computer models of body anatomy can have many uses in medical research and clinical practice. This paper describes a robust method that uses videos of body anatomy to construct multiple partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme in which multi-RANSAC with a normalized eight-point algorithm is first performed and a few iterations of an over-determined five-point algorithm are then used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest point (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
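As a stand-in for the Double Dog-Leg step, the sketch below uses SciPy's trust-region least-squares solver (`method="dogbox"`, a related but not identical trust-region algorithm) to jointly estimate a global scale together with a 2D rigid motion that best aligns two partial point reconstructions. The point sets are synthetic, and this is only a minimal illustration of trust-region optimization over motion-plus-scale parameters, not the authors' full structure-from-motion pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, src, dst):
    """Residuals of a similarity transform (rotation, translation, global scale)."""
    theta, tx, ty, s = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pred = s * src @ R.T + np.array([tx, ty])
    return (pred - dst).ravel()

# Synthetic partial structures: dst is src rotated by 0.3 rad, scaled by 2, shifted.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 2))
theta_true = 0.3
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0]) + 0.01 * rng.normal(size=src.shape)

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0, 1.0], args=(src, dst), method="dogbox")
print("estimated [theta, tx, ty, scale]:", np.round(fit.x, 3))
```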
Optimal search guided by partial active shape model for prostate segmentation in TRUS images
Pingkun Yan, Sheng Xu, Baris Turkbey, et al.
Automatic prostate segmentation in transrectal ultrasound (TRUS) can be used to register TRUS with magnetic resonance (MR) images for TRUS/MR-guided prostate interventions. However, robust and automated prostate segmentation is challenging due not only to the low signal-to-noise ratio in TRUS but also to the missing boundaries in shadow areas caused by calcifications or hyper-dense prostate tissue. Lack of image information in those areas is a barrier for most existing segmentation methods, which normally leads to user interaction for manual correction. This paper presents a novel method that utilizes prior shapes estimated from partial contours to guide an optimal search for prostate segmentation. The proposed method is able to automatically extract the prostate boundary from 2D TRUS images without user interaction for correcting shapes in shadow areas. In our approach, the point distribution model was first used to learn shape priors of the prostate from manual segmentation results. During segmentation, the missing boundaries in shadow areas are estimated by using a new partial active shape model, which takes a partial contour as input but returns a complete estimated shape. The prostate boundary is then obtained by using a discrete deformable model with optimal search, implemented efficiently with dynamic programming to produce robust segmentation results. The segmentation of each frame is performed in multi-scale for robustness and computational efficiency. In our experiments on 162 images grabbed from ultrasound video sequences of 10 patients, the average mean absolute distance was 1.79 mm ± 0.95 mm. The proposed method was implemented in C++ based on ITK and took about 0.3 seconds to segment the prostate from a 640×480 image on a Core2 1.86 GHz PC.
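The "optimal search ... implemented efficiently by using dynamic programming" step can be illustrated on a polar cost image: for each angle around a seed point, a radius is chosen so that the summed boundary cost is minimal subject to a smoothness constraint between neighboring angles. This is a generic dynamic-programming contour search on synthetic data, not the authors' partial-ASM-guided implementation.

```python
import numpy as np

def dp_boundary(cost, max_jump=2):
    """Pick one radius per angle minimizing total cost with bounded radius jumps.

    cost: (n_angles, n_radii) array, lower = more boundary-like.
    """
    n_ang, n_rad = cost.shape
    total = cost.copy()
    back = np.zeros((n_ang, n_rad), dtype=int)
    for a in range(1, n_ang):
        for r in range(n_rad):
            lo, hi = max(0, r - max_jump), min(n_rad, r + max_jump + 1)
            prev = np.argmin(total[a - 1, lo:hi]) + lo
            back[a, r] = prev
            total[a, r] += total[a - 1, prev]
    path = [int(np.argmin(total[-1]))]
    for a in range(n_ang - 1, 0, -1):
        path.append(back[a, path[-1]])
    return np.array(path[::-1])            # one radius index per angle

# Synthetic polar cost image with a low-cost band around radius 20.
angles, radii = 72, 40
cost = np.abs(np.arange(radii)[None, :] - 20) + np.random.rand(angles, radii)
print("recovered radii (first 10 angles):", dp_boundary(cost)[:10])
```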
3D annotation and manipulation of medical anatomical structures
Although medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex and rather time-consuming task, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, improving the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel technologies, the Nintendo Wii controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We use a non-coplanar arrangement of four infrared LEDs with known, exact positions, which are tracked by the Wii controller and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips uses either the Z-value of a 3D volume, or computes the depth information from a 2D image, to provide a real 3D experience without special glasses. In this paper we present a new framework for manipulation and annotation of medical landmarks directly in a three-dimensional volume.
nD statistical shape model building via recursive boundary subdivision
Landmark-based statistical object modeling techniques, such as Active Shape Modeling (ASM), have proven useful in medical image analysis. Identification of the same homologous set of points in a training set of object shapes is the most crucial step in ASM, and it faces several challenges: (C1) defining and characterizing landmarks; (C2) ensuring homology; (C3) generalizing to n > 2 dimensions; (C4) achieving practical computations. In this paper, we propose a novel global-to-local strategy that attempts to address C3 and C4 directly and works in R^n. The 3D version of it attempts to address C1 and C2 indirectly by starting from three initial corresponding points determined in all training shapes via a method α, and subsequently subdividing the shapes into connected boundary segments by a plane determined by these points. A shape analysis method β is applied to each segment to determine a landmark on the segment. This point introduces more triplets of points, the planes defined by which are used to further subdivide the boundary segments. This recursive boundary subdivision (RBS) process continues simultaneously on all training shapes, maintaining synchrony of the level of recursion, and thereby keeping correspondence among generated points automatically through the correspondence of the homologous shape segments in all training shapes. The process terminates when no subdividing planes are left for which method β indicates that a point can still be selected on the associated segment. Several examples of α and β are provided, as well as some preliminary results on 3D shapes.
A GPU-based fiber tracking framework using geometry shaders
Alexander Köhn, Jan Klein, Florian Weiler, et al.
The clinical application of fiber tracking is becoming more widespread. Thus it is of high importance to be able to produce high-quality results in a very short time. Additionally, research in this field would benefit from fast implementation and evaluation of new algorithms. In this paper we present a GPU-based fiber tracking framework using the latest features of commodity graphics hardware, such as geometry shaders. The implemented streamline algorithm performs fiber reconstruction of a whole brain using 30,000 seed points in less than 120 ms on a high-end GeForce GTX 280 graphics board. Seed points are sent to the GPU, which emits up to a user-defined number of fiber points per seed vertex. These are recorded to a vertex buffer that can be rendered or downloaded to main memory for further processing. If the output limit of the geometry shader is reached before the stopping criteria are fulfilled, the last vertices generated are used in a subsequent pass in which the geometry shader continues the tracking. Since all the data resides in graphics memory, the intermediate steps can be visualized in real-time. The fast reconstruction not only allows for an interactive change of tracking parameters but, since the tracking code is implemented using GPU shaders, even for a runtime change of the algorithm. Thus, rapid development and evaluation of different algorithms and parameter sets becomes possible, which is of high value, e.g., for research on uncertainty in fiber tracking.
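The streamline algorithm itself (Euler integration along the principal eigenvector, terminating when anisotropy falls below a threshold or the ray leaves the volume) is shown below as a plain NumPy version. The GPU/geometry-shader machinery of the paper is not reproduced, and the synthetic vector field and thresholds are placeholders.

```python
import numpy as np

def track_fiber(vec_field, fa, seed, step=0.5, fa_thresh=0.2, max_steps=2000):
    """Euler streamline through the principal-eigenvector field.

    vec_field: (X, Y, Z, 3) unit vectors; fa: (X, Y, Z) anisotropy map.
    """
    pos = np.asarray(seed, dtype=float)
    points = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if (np.any(np.array(idx) < 0) or np.any(np.array(idx) >= fa.shape)
                or fa[idx] < fa_thresh):
            break                              # left the volume or anisotropy too low
        d = vec_field[idx]
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d                             # keep a consistent tracking orientation
        pos = pos + step * d
        prev_dir = d
        points.append(pos.copy())
    return np.array(points)

# Synthetic field: fibers aligned with the x axis everywhere.
shape = (32, 32, 32)
field = np.zeros(shape + (3,)); field[..., 0] = 1.0
fa = np.full(shape, 0.8)
print("fiber length (points):", len(track_fiber(field, fa, seed=(5, 16, 16))))
```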
Registration
Prostate brachytherapy seed localization using a mobile C-arm without tracking
The success of prostate brachytherapy depends on the faithful delivery of a dose plan. In turn, intraoperative localization and visualization of the implanted radioactive brachytherapy seeds enables more proficient and informed adjustments to the executed plan during therapy. Prior work has demonstrated adequate seed reconstructions from uncalibrated mobile C-arms using either external tracking devices or image-based fiducials for C-arm pose determination. These alternatives are either time-consuming or interfere with the clinical flow of the surgery, or both. This paper describes a seed reconstruction approach that avoids both tracking devices and fiducials. Instead, it uses the preoperative dose plan in conjunction with a set of captured images to obtain initial estimates of the C-arm poses, followed by an auto-focus technique using the seeds themselves as fiducials to refine the pose estimates. Intraoperative seed localization is achieved by iteratively solving for poses and seed correspondences across images and reconstructing the 3D implanted seeds. The feasibility of this approach was demonstrated through a series of simulations involving variable noise levels, seed densities, image separability and numbers of images. Preliminary results indicate mean reconstruction errors within 1.2 mm for noisy plans of 84 seeds or fewer. These are attained for additive noise, introduced to the plan to simulate the implant, whose standard deviation of the 3D mean error is within 3.2 mm.
Atlas-driven scan planning for high-resolution micro-SPECT data acquisition based on multi-view photographs: a pilot study
Martin Baiker, Brendan Vastenhouw, Woutjan Branderhorst, et al.
Highly focused micro-SPECT scanners enable the acquisition of functional small animal data with very high resolution. To acquire a maximum of emitted photons from a specific structure of interest and at the same time minimize the required acquisition time, typically only a small subvolume of the animal that contains the organs of interest is scanned. This volume of interest (VOI) can be defined manually based on photographs of the animal taken prior to SPECT scanning, for example two lateral views and a top view. In these photographs, however, only the surface of the animal is visible, and therefore visual estimation of the location of these organs may be difficult. In this paper, we propose a novel atlas-based technique for estimating the organ VOI for the major organs by mapping a small animal atlas to optical scout images. The user is required to outline the animal contour in one lateral view and to mark two lateral landmarks in the top-view photograph. These landmarks subsequently serve as fiducial landmarks to define a 3D thin-plate-spline mapping of an anatomical mouse atlas to the photographic coordinate space. Planar projections of the mapped atlas organs onto the photographs greatly facilitate the estimation of the size and position of the target organ. To validate the proposed approach, the estimated organ VOIs were compared to manually drawn organ outlines in a micro-CT scan, which was co-registered to the scout photographs using physical landmarks. The results demonstrate a highly promising volume correspondence between the real and the estimated organ VOIs.
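A thin-plate-spline mapping from atlas coordinates to the photographic coordinate space can be sketched with SciPy's radial-basis interpolator using a thin-plate-spline kernel (assuming SciPy ≥ 1.7, which provides `RBFInterpolator`). The landmark correspondences below are invented for illustration and do not reflect the paper's actual fiducial definition.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding landmarks: atlas space -> photographic/animal space (3D).
atlas_pts  = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10],
                       [10, 10, 0], [10, 0, 10], [0, 10, 10]], dtype=float)
animal_pts = atlas_pts * 1.1 + np.array([2.0, -1.0, 0.5])   # synthetic deformation

# One thin-plate-spline warp for the vector-valued mapping atlas -> animal space.
tps = RBFInterpolator(atlas_pts, animal_pts, kernel="thin_plate_spline")

# Map arbitrary atlas points (e.g. organ surface vertices) into animal space.
organ_surface = np.array([[5.0, 5.0, 5.0], [2.0, 8.0, 1.0]])
print(tps(organ_surface))
```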
Conoscopic holography for image registration: a feasibility study
Ray A. Lathrop, Tiffany T. Cheng, Robert J. Webster III
Preoperative image data can facilitate intrasurgical guidance by revealing interior features of opaque tissues, provided the image data can be accurately registered to the physical patient. Registration is challenging in organs that are deformable and lack features suitable for use as alignment fiducials (e.g. liver, kidneys, etc.). However, provided intraoperative sensing of surface contours can be accomplished, a variety of rigid and deformable 3D surface registration techniques become applicable. In this paper, we evaluate the feasibility of conoscopic holography as a new method to sense organ surface shape. We also describe potential advantages of conoscopic holography, including the promise of replacing open surgery with a laparoscopic approach. Our feasibility study investigated the use of a tracked off-the-shelf conoscopic holography unit to perform surface scans on several types of biological and synthetic phantom tissues. After first exploring baseline accuracy and repeatability of distance measurements, we performed a number of surface scan experiments on the phantom and ex vivo tissues with a variety of surface properties and shapes. These indicate that conoscopic holography is capable of generating surface point clouds of at least comparable (and perhaps eventually improved) accuracy in comparison to published experimental laser triangulation-based surface scanning results.
Cluster of workstation based nonrigid image registration using free-form deformation
Nonrigid image registration plays an important role in many medical applications but, owing to its complex computations, incurs a high computational cost. In this paper, a parallel algorithm schema is proposed for nonrigid image registration methods that use B-splines for deformation and mutual information as a similarity measure. It involves a complex interplay of various steps, which are analyzed in considerable detail from the viewpoint of parallelizing registration. The algorithms are implemented on a cluster of workstations. We present results on a 10-processor cluster of PCs and compare them with a sequential implementation. The results show a speed-up of n/2 for n processors when registering large images. The method is fully portable and seamlessly expandable.
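The core similarity measure, mutual information computed from a joint histogram, parallelizes naturally by accumulating partial histograms over image blocks and summing them, which is the essence of distributing the computation across workers. The snippet below is a shared-memory multiprocessing sketch of that idea, not the cluster-of-workstations implementation described in the paper.

```python
import numpy as np
from multiprocessing import Pool

BINS = 32

def partial_hist(args):
    """Joint intensity histogram of one block of the fixed/moving image pair."""
    fixed_block, moving_block = args
    h, _, _ = np.histogram2d(fixed_block.ravel(), moving_block.ravel(),
                             bins=BINS, range=[[0, 256], [0, 256]])
    return h

def mutual_information(fixed, moving, n_workers=4):
    blocks = list(zip(np.array_split(fixed, n_workers), np.array_split(moving, n_workers)))
    with Pool(n_workers) as pool:
        joint = sum(pool.map(partial_hist, blocks))   # reduce partial histograms
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    fixed = rng.integers(0, 256, size=(256, 256)).astype(float)
    noise = rng.integers(0, 256, size=(256, 256)).astype(float)
    print("MI(img, img):  ", mutual_information(fixed, fixed))
    print("MI(img, noise):", mutual_information(fixed, noise))
```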
Group-wise registration of ultrasound to CT images of human vertebrae
Automatic registration of ultrasound (US) to computed tomography (CT) datasets is a challenge of considerable interest, particularly in orthopaedic and percutaneous interventions. We propose an algorithm for group-wise volume-to-volume registration of US to CT images of the lumbar spine. Each vertebra in CT is treated as a sub-volume and transformed individually. The sub-volumes are then reconstructed into a single volume. The algorithm dynamically combines simulated US reflections from the vertebrae surfaces and surrounding soft tissue in the reconstructed CT, with scaled CT data, to simulate US images of the spine anatomy. The simulated US data is used to register preoperative CT data to intra-operative US images. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is utilized as the optimization strategy. The registration is tested using a phantom of the lumbar spine (L3-L5). Initial misalignments of up to 8 mm were registered with a mean target registration error of 1.87±0.73 mm for L3, 2.79±0.93 mm for L4, 1.72±0.70 mm for L5, and 2.08±0.55 mm across the entire volume. To select an appropriate optimization strategy, we performed a volume-to-volume registration of US to CT of the lumbar spine, allowing no relative motion between vertebrae. We compared the results of this registration using three optimization strategies: simplex, gradient descent and CMA-ES. CMA-ES was found to converge more slowly than gradient descent and simplex, but was more robust for rigid volume-to-volume registration for initial misalignments up to 20 mm.
Accuracy of non-rigid registration for local analysis of elasticity restrictions of the lungs
Daniel Stein, Ralf Tetzlaff, Ivo Wolf, et al.
Diseases of the lung often begin with regionally limited changes altering the tissue elasticity. Therefore, quantification of regional lung tissue motion would be desirable for early diagnosis, treatment monitoring, and follow-up. Dynamic MRI can capture such changes, but quantification requires non-rigid registration. However, analysis of dynamic MRI data of the lung is challenging due to inherently low image signal and contrast. Towards a computer-assisted quantification for regional lung diseases, we have evaluated two Demons-based registration methods for their accuracy in quantifying local lung motion on dynamic MRI data. The registration methods were applied on masked image data, which were pre-segmented with a graph-cut algorithm. Evaluation was performed on five datasets from healthy humans with nine time frames each. As gold standard, manually defined points (between 8 and 24) on prominent landmarks (essentially vessel structures) were used. The distance between these points and the predicted landmark location as well as the overlap (Dice coefficient) of the segmentations transformed with the deformation field were calculated. We found that the Demons algorithm performed better than the Symmetric Forces Demons algorithm with respect to average landmark distance (6.5 mm ± 4.1 mm vs. 8.6 mm ± 6.1 mm), but comparable regarding the Dice coefficient (0.946 ± 0.018 vs. 0.961 ± 0.018). Additionally, the Demons algorithm computes the deformation in only 10 seconds, whereas the Symmetric Forces Demons algorithm takes about 12 times longer.
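The two evaluation measures used here, the mean landmark distance after applying the deformation and the Dice coefficient of the transformed segmentation, are simple to compute. The following sketch assumes the deformation has already been applied and works on synthetic masks and point lists; it only illustrates the metrics, not the Demons registration itself.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap coefficient of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def landmark_error(pred_points, ref_points):
    """Mean Euclidean distance between predicted and reference landmarks (mm)."""
    return float(np.mean(np.linalg.norm(pred_points - ref_points, axis=1)))

# Synthetic example: two overlapping spheres and slightly displaced landmarks.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
a = (xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 20 ** 2
b = (xx - 34) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2 < 20 ** 2
pts_ref = np.array([[10.0, 20.0, 30.0], [40.0, 40.0, 40.0]])
pts_pred = pts_ref + np.array([[1.0, 0.0, 0.5], [0.0, -0.5, 0.0]])
print(f"Dice = {dice(a, b):.3f}, mean landmark distance = {landmark_error(pts_pred, pts_ref):.2f} mm")
```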
Poster Session: Cardiac
Localization and tracking of aortic valve prosthesis in 2D fluoroscopic image sequences
M. Karar, C. Chalopin, D. R. Merk, et al.
This paper presents a new method for localization and tracking of the aortic valve prosthesis (AVP) in 2D fluoroscopic image sequences, to assist the surgeon in reaching the safe zone of implantation during transapical aortic valve implantation. The proposed method includes four main steps. First, the fluoroscopic images are preprocessed using a morphological reconstruction and an adaptive Wiener filter to enhance the AVP edges. Second, a target window that includes the AVP, defined by the user on the first image of the sequence, is tracked in all images using a template matching algorithm. In a third step, the corners of the AVP are extracted based on the AVP dimensions and orientation in the target window. Finally, the AVP model is generated in the fluoroscopic image sequences. Although the proposed method has not yet been validated intraoperatively, it has been applied to different fluoroscopic image sequences with promising results.
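Two of the building blocks named here, adaptive Wiener filtering and template matching of a user-defined target window, have direct counterparts in common libraries. The sketch below uses `scipy.signal.wiener` and OpenCV's `cv2.matchTemplate` on a synthetic frame and is meant only to illustrate those two steps, not the full AVP localization pipeline or its parameters.

```python
import numpy as np
import cv2
from scipy.signal import wiener

# Synthetic fluoroscopic frame: noisy background with a darker rectangular "prosthesis".
rng = np.random.default_rng(3)
frame = rng.normal(128, 20, size=(256, 256)).astype(np.float32)
frame[100:140, 120:150] -= 60.0                      # the object to track

# Step 1 (preprocessing): adaptive Wiener filter to suppress noise before matching.
filtered = wiener(frame, mysize=5).astype(np.float32)

# Step 2 (tracking): match a user-defined target window in the filtered frame.
template = filtered[100:140, 120:150].copy()         # hypothetical user selection
result = cv2.matchTemplate(filtered, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best match top-left corner:", max_loc, "score:", round(float(max_val), 3))
```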
Locally homogenized and de-noised vector fields for cardiac fiber tracking in DT-MRI images
Alireza Akhbardeh, Fijoy Vadakkumpadan, Jason Bayer, et al.
In this study we develop a methodology to accurately extract and visualize cardiac microstructure from experimental Diffusion Tensor (DT) data. First, a test model was constructed using an image-based model generation technique on Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) data. These images were derived from a dataset with 122×122×500 µm³ voxel resolution. De-noising and image enhancement were applied to this high-resolution dataset to clearly define anatomical boundaries within the images. The myocardial tissue was segmented from structural images using edge detection, region growing, and level set thresholding. The primary eigenvector of the diffusion tensor for each voxel, which represents the longitudinal direction of the fiber, was calculated to generate a vector field. Then an advanced locally regularizing nonlinear anisotropic filter, termed Perona-Malik (PEM), was used to regularize this vector field to eliminate imaging artifacts inherent to DT-MRI from volume averaging of the tissue with the surrounding medium. Finally, the vector field was streamlined to visualize fibers within the segmented myocardial tissue and to compare the results with unfiltered data. With this technique, we were able to recover locally regularized (homogenized) fibers with high accuracy by applying the PEM regularization technique, particularly on anatomical surfaces where imaging artifacts were most apparent. This approach not only aids in the visualization of noisy complex 3D vector fields obtained from DT-MRI, but also eliminates volume averaging artifacts to provide a realistic cardiac microstructure for use in electrophysiological modeling studies.
Computer-aided patch planning for treatment of complex coarctation of the aorta
Urte Rietdorf, Eugénie Riesenkampff, Titus Kuehne, et al.
Between five and eight percent of all children born with congenitally malformed hearts suffer from coarctations of the aorta. Some severe coarctations can only be treated by surgical repair. Untreated, this defect can cause serious damage to organ development or even lead to death. Patch repair requires open surgery. It can affect patients of any age: newborns with severe coarctation and/or hypoplastic aortic arch as well as older patients with late diagnosis of coarctation of the aorta. Another patient group are patients of varying age with re-coarctation of the aorta or hypoplastic aortic arch after surgical and/or interventional repair. If anatomy is complex and interventional treatment by catheterization, balloon angioplasty or stent placement is not possible, surgery is indicated. The choice of type of surgery depends not only on the given anatomy but also on the experience the surgical team has with each method. One surgical approach is patch repair. A patch of a suitable shape and size is sewed into the aorta to expand the aortic lumen at the site of coarctation. At present, the shape and size of the patch are estimated intra-operatively by the surgeon. We have developed a software application that allows planning of the patch pre-operatively on the basis of magnetic resonance angiographic data. The application determines the diameter of the coarctation and/or hypoplastic segment and constructs a patch proposal by calculating the difference to the normal vessel diameter pre-operatively. Evaluation of MR angiographic datasets from 12 test patients with different kinds of aortic arch stenosis shows a divergence of only (1.5±1.2) mm in coarctation diameters between manual segmentations and our approach, with comparable time expenditure. Following this proposal the patch can be prepared and adapted to the patient's anatomy pre-operatively. Ideally, this leads to shorter operation times and a better long-term outcome with a reduced rate of residual stenosis and re-stenosis and aneurysm formation.
Left atrium pulmonary veins: segmentation and quantification for planning atrial fibrillation ablation
R. Karim, R. Mohiaddin, D. Rueckert
The paper presents a technique for detecting the left atrium as well as its pulmonary veins by tracing out their centerlines. A vessel detection and traversal process is initiated from the venoatrial junctions. Pulmonary veins draining into the left atrium via these junctions are thus detected, also enabling the detection of the ostium. Ostial diameters are measured from the detected centerlines using a best-fitting ellipse. Quantitative validation of the techniques is reported on nine patient datasets. In only two of the datasets were mis-detections identified. The ostial diameter measurements indicated an error of at most 5% in most of the cases. We envisage that the techniques presented will facilitate planning of the non-pharmacological treatment of atrial fibrillation using radio-frequency ablation therapy.
Quantification of abdominal aortic deformation after EVAR
Stefanie Demirci, Frode Manstad-Hulaas, Nassir Navab
Quantification of abdominal aortic deformation is an important requirement for the evaluation of endovascular stenting procedures and the further refinement of stent graft design. During endovascular aortic repair (EVAR) treatment, the aortic shape is subject to severe deformation imposed by medical instruments such as guide wires, catheters, and the stent graft itself. This deformation can affect the flow characteristics and morphology of the aorta, which have been shown to be elicitors of stent graft failure and a reason for the reappearance of aneurysms. We present a method for quantifying the deformation of an aneurysmatic aorta imposed by an inserted stent graft device. The procedure includes initial rigid alignment of the two abdominal scans, segmentation of the abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. This is accomplished by preprocessing and remodeling of the pre- and postoperative aortic shapes before performing a non-rigid registration. We further narrow the resulting displacement fields to include only local non-rigid deformation and therefore eliminate all remaining global rigid transformations. Finally, deformations for specified locations can be calculated from the resulting displacement fields. In order to evaluate our method, experiments for the extraction of aortic deformation fields were conducted on 15 patient datasets from EVAR treatment. A visual assessment of the registration results and an evaluation of the use of the deformation quantification were performed by two vascular surgeons and one interventional radiologist, all experts in EVAR procedures.
Numerical analysis of the hemodynamic effect of plaque ulceration in the stenotic carotid artery bifurcation
Emily Y. Wong, Jaques S. Milner, David A. Steinman, et al.
The presence of ulceration in carotid artery plaque is an independent risk factor for thromboembolic stroke. However, the associated pathophysiological mechanisms - in particular the mechanisms related to the local hemodynamics in the carotid artery bifurcation - are not well understood. We investigated the effect of carotid plaque ulceration on the local time-varying three-dimensional flow field using computational fluid dynamics (CFD) models of a stenosed carotid bifurcation geometry, with and without the presence of ulceration. CFD analysis of each model was performed with a spatial finite element discretization of over 150,000 quadratic tetrahedral elements and a temporal discretization of 4800 timesteps per cardiac cycle, to adequately resolve the flow field and pulsatile flow, respectively. Pulsatile flow simulations were iterated for five cardiac cycles to allow for cycle-to-cycle analysis following the damping of initial transients in the solution. Comparison between models revealed differences in flow patterns induced by flow exiting from the region of the ulcer cavity, in particular, to the shape, orientation and helicity of the high velocity jet through the stenosis. The stenotic jet in both models exhibited oscillatory motion, but produced higher levels of phase-ensembled turbulence intensity in the ulcerated model. In addition, enhanced out-of-plane recirculation and helical flow was observed in the ulcerated model. These preliminary results suggest that local fluid behaviour may contribute to the thrombogenic risk associated with plaque ulcerations in the stenotic carotid artery bifurcation.
Automated 3D heart segmentation by search rays for building individual conductor models
Jaeil Kim, Seokyeol Kim, Kiwoong Kim, et al.
Magnetocardiography (MCG) is one of the most useful diagnostic tools for myocardial ischemic disease and conduction abnormalities, since the technique non-invasively measures the magnetic fields generated by myocardial currents without distortion. To localize the current source accurately, building a patient-specific conductor model is indispensable. In this paper, we present a method to automatically construct a patient-specific three-dimensional (3D) mesh model of a human thorax and a heart consisting of the pericardium and four chambers. We represent the standard thorax model by simplex meshes and deform them to fit the individual CT data to reconstruct accurate surface representations for the MCG conductor model. The deformable simplex mesh model deforms based on the external forces exerted by the edge and gradient components of the source volume data, while its internal force acts to maintain the integrity of the shape. However, image-driven deformation is often very sensitive to its initial position. Therefore, we propose a solution for automatic region-of-interest (ROI) detection using search rays, which are cast into the 3D volume images to identify the region of the heart based on both the radiodensity values and their continuity along the path of the rays. Upon automatic ROI detection with search rays, the initial position and orientation of the standard mesh model are determined, and each vertex of the model is moved by the weighted sum of the internal and external forces to conform to each patient's own thorax and heart shape while minimizing user input.
Photo-consistency registration of a 4D cardiac motion model to endoscopic video for image guidance of robotic coronary artery bypass
The aim of the work described in this paper is registration of a 4D preoperative motion model of the heart to the video view of the patient through the intraoperative endoscope. The heart motion is cyclical and can be modelled using multiple reconstructions of cardiac-gated coronary CT. We propose the use of photoconsistency between the two views through the da Vinci endoscope to align the preoperative heart surface model from CT. The temporal alignment from the video to the CT model could in principle be obtained from the ECG signal. We propose averaging the photoconsistency over the cardiac cycle to improve the registration compared to a single view. Though there is considerable motion of the heart, we suggest that after correct temporal alignment the remaining motion should be close to rigid. Results are presented for simulated renderings and for real video of a beating heart phantom. We found a much smoother cost function around the minimum when using multiple phases for the registration; furthermore, convergence was found to be better when more phases were used.
Poster Session: CT Guidance
Preliminary experiments of a single x-ray view catheter 3D localization algorithm for targeted stem cell injections
M. Iovea, J. Creed, E. Perin, et al.
The aim of this study was to conduct a preliminary check of a new method for measuring the 3D catheter position based on only one X-ray view (image) and a simple pre-calibration procedure, for catheters that can be equipped with high-opacity, equally spaced markers. The application chosen for this experiment is the targeted delivery of cell-based therapeutics via a transendocardial retrograde approach into the left ventricle. This approach has shown promising therapeutic retention data when the agent is injected directly into the myocardial tissue, but under traditional fluoroscopic guidance the user lacks the ability to confidently manipulate a needle-based catheter within the left ventricular cavity. The need for a new technique arose from the potential for increased safety and therapeutic efficacy by improving the targeting of the agent. The new technique, intended for image-guided catheter navigation systems for cardiac interventions, is based on measuring the markers' sizes and the distances between them, followed by a comparison with the reference catheter position. Preliminary experiments made with a simple phantom are presented, emphasizing the ability of the new technique to measure the 3D positions of the markers and the catheter tip. An overall maximum error below 12% in positioning the markers and the catheter tip was obtained, a promising result that motivates future work on improving the algorithm's accuracy.
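Under a pinhole projection model, the principle of recovering depth from equally spaced high-opacity markers reduces to comparing the known physical marker spacing with its apparent spacing in the image: depth ≈ focal length × true spacing / projected spacing, ignoring foreshortening of tilted segments. The sketch below demonstrates only this geometric core with assumed calibration numbers and detections; it is not the calibrated method of the paper.

```python
import numpy as np

def marker_depths(pixel_positions, spacing_mm, focal_px):
    """Estimate per-gap depth of a marker chain from its projected spacing.

    pixel_positions: (N, 2) detected marker centroids in the image (pixels).
    spacing_mm: known physical distance between consecutive markers.
    focal_px: focal length of the X-ray projection expressed in pixels.
    """
    gaps_px = np.linalg.norm(np.diff(pixel_positions, axis=0), axis=1)
    return focal_px * spacing_mm / gaps_px          # one depth estimate per gap (mm)

# Assumed calibration and detections (illustrative only).
focal_px = 2000.0
spacing_mm = 5.0
markers_px = np.array([[100.0, 100.0], [100.0, 120.0], [100.0, 141.0], [100.0, 163.0]])
print("estimated depths per marker gap (mm):",
      np.round(marker_depths(markers_px, spacing_mm, focal_px), 1))
```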
Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device
Alexander Brost, Norbert Strobel, Liron Yatziv, et al.
C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the 3D positions reconstructed from the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm ± 0.24 mm (mean ± standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The corresponding experimental result for this view configuration, obtained on an AXIOM Artis C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany), was 0.98 mm ± 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X-ray based minimally invasive procedures.
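The 3D localization from two calibrated views is, at its core, linear triangulation from two projection matrices. A standard DLT solution is sketched below with synthetic projection matrices standing in for a calibrated C-arm system; two views roughly 90° apart are used to mirror the most accurate configuration reported above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two projections.

    P1, P2: 3x4 projection matrices; x1, x2: 2D image points (pixels).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic geometry: two views ~90 degrees apart.
K = np.array([[2000.0, 0, 512], [0, 2000.0, 512], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), [[0], [0], [1000]]])
Ry = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])          # 90 degree rotation about y
P2 = K @ np.hstack([Ry, [[0], [0], [1000]]])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([10.0, -5.0, 20.0])
print("reconstructed:", np.round(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)), 3))
```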
A method for semi-automatic segmentation and evaluation of intracranial aneurysms in bone-subtraction computed tomography angiography (BSCTA) images
Susanne Krämer, Hendrik Ditt, Christina Biermann, et al.
The rupture of an intracranial aneurysm has dramatic consequences for the patient. Hence early detection of unruptured aneurysms is of paramount importance. Bone-subtraction computed tomography angiography (BSCTA) has proven to be a powerful tool for the detection of aneurysms, in particular those located close to the skull base. Most aneurysms, though, are chance findings in BSCTA scans performed for other reasons. It is therefore highly desirable to have techniques operating on standard BSCTA scans which assist radiologists and surgeons in the evaluation of intracranial aneurysms. In this paper we present a semi-automatic method for segmentation and assessment of intracranial aneurysms. The only user interaction required is the placement of a marker into the vascular malformation. Termination ensues automatically as soon as the segmentation reaches the vessels which feed the aneurysm. The algorithm is derived from an adaptive region-growing which employs a growth gradient as the criterion for termination. Based on this segmentation, values of high clinical and prognostic significance, such as the volume, minimum and maximum diameter, and surface of the aneurysm, are calculated automatically. The segmentation itself as well as the calculated diameters are visualised. Further segmentation of the adjoining vessels provides the means for visualisation of the topographical situation of vascular structures associated with the aneurysm. A stereolithographic mesh (STL) can be derived from the surface of the segmented volume. The STL mesh, together with parameters such as the resiliency of the vascular wall tissue, provides an accurate wall model of the aneurysm and its associated vascular structures. Consequently, the haemodynamic situation in the aneurysm itself and close to it can be assessed by flow modelling. Significant haemodynamic values such as the pressure on the vascular wall, wall shear stress or pathlines of the blood flow can be computed. Additionally, a dynamic flow model can be generated. Thus the presented method supports a better understanding of the clinical situation and assists the evaluation of therapeutic options. Furthermore, it contributes to future research addressing intervention planning and prognostic assessment of intracranial aneurysms.
Tumor correlated CT: a new paradigm for motion compensated CT for image-guided therapy
Respiratory motion has significant effects on abdominal and lung tumor position, and incorporation of this uncertainty increases treatment volumes for focal cancer treatments. Respiratory-correlated CT, obtained by oversampling images throughout the respiratory cycle based on an external surrogate, is increasingly being used for radiation therapy planning. Respiratory-correlated CT is dependent on a fixed relationship between the external surrogate and the tumor, which may change due to weight loss, breathing pattern changes or non-respiratory motion. Moreover, the process decouples localization of the tumor (which is the goal of tumor-directed therapy) from respiratory motion management. Recently, implantable passive transponders (Calypso Medical Technologies) have been developed which can be tracked via an external electromagnetic array in real-time and without ionizing radiation. We aimed to integrate wireless electromagnetic tracking with multislice CT and create volumetric datasets that are correlated to tumor position, as opposed to an external surrogate. We call this process 'tumor correlated CT' (TCCT). Use of these images for treatment planning will allow localization of the tumor to predict the position of other organs during treatment delivery. We present preliminary work on the integration of electromagnetic tracking and CT imaging.
Comparison of pre/post-operative CT image volumes to preoperative digitization of partial hepatectomies: a feasibility study in surgical validation
Preoperative planning combined with image-guidance has shown promise towards increasing the accuracy of liver resection procedures. The purpose of this study was to validate one such preoperative planning tool for four patients undergoing hepatic resection. Preoperative computed tomography (CT) images acquired before surgery were used to identify tumor margins and to plan the surgical approach for resection of these tumors. Surgery was then performed, with intraoperative digitization data acquired by an FDA-approved image-guided liver surgery system (Pathfinder Therapeutics, Inc., Nashville, TN). Within 5-7 days after surgery, post-operative CT image volumes were acquired. Registration of the data within a common coordinate reference was achieved, and preoperative plans were compared to the postoperative volumes. Semi-quantitative comparisons are presented in this work, and preliminary results indicate that significant liver regeneration/hypertrophy may be present in the postoperative CT images. This could challenge pre/post-operative CT volume change comparisons as a means to evaluate the accuracy of preoperative surgical plans.
Evaluating optimal CNR as a preset criteria for nonlinear moidal blending of dual energy CT data
D. R. Holmes III, A. Apel, J. G. Fletcher, et al.
Nonlinear blending of dual-energy CT data is available on current scanners. Selection of the blending parameters can be time-consuming and challenging. The purpose of this study was to determine whether the contrast-to-noise ratio (CNR) may be used to automatically select blending parameters. A bovine liver phantom was built with six syringes filled with varying concentrations of CT contrast, yielding six 140 kV HU levels (15, 47, 64, 79, 116, and 145). The phantom was scanned using 95 mAs at 140 kV and 404 mAs at 80 kV. The 80 and 140 kV datasets were blended using a modified sigmoid (moidal) function which requires two parameters - level and width. Every combination of moidal level and width was applied to the data, and the CNR was calculated as (mean(syringe ROI) - mean(liver ROI)) / STD(water). The maximum CNR was determined for each of the 6 HU levels. Pairs of blended images were presented in a blinded manner to observers. Nine comparisons for each of the 6 HU settings were made by a staff radiologist, a resident, and a physicist. For each comparison, the observer selected the more "visually appealing" image. Outcomes from the study were compared using the Fisher sign test statistic. Analysis by observer showed a statistical (p<0.01) preference for the optimal-CNR image, ranging from 71%-81%. Using moidal settings which provide the maximal CNR within the image is thus consistent with visually appealing images. Optimization of the viewing parameters of nonlinearly blended dual-energy CT data may provide consistency across radiologists and facilitate the clinical review process.
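The blending and selection criterion can be sketched directly from the description above: a sigmoid ("moidal") weight, parameterized by level and width, mixes the 80 kV and 140 kV images, and the (level, width) pair maximizing CNR = (mean(syringe ROI) − mean(liver ROI)) / std(water) is selected. The images and ROIs below are synthetic, and the exact sigmoid form is an assumption rather than the scanner's implementation.

```python
import numpy as np

def moidal_blend(img80, img140, level, width):
    """Sigmoid-weighted blend of low/high kV images driven by the 140 kV HU value."""
    w = 1.0 / (1.0 + np.exp(-(img140 - level) / max(width, 1e-6)))
    return w * img80 + (1.0 - w) * img140

def cnr(blended, roi_syringe, roi_liver, roi_water):
    return (blended[roi_syringe].mean() - blended[roi_liver].mean()) / blended[roi_water].std()

# Synthetic phantom slices (HU): liver ~60, contrast syringe (brighter at 80 kV), water ~0.
rng = np.random.default_rng(4)
img140 = rng.normal(60, 10, (128, 128)); img80 = rng.normal(60, 15, (128, 128))
syringe = np.zeros((128, 128), bool); syringe[40:60, 40:60] = True
liver = np.zeros((128, 128), bool);  liver[80:100, 80:100] = True
water = np.zeros((128, 128), bool);  water[10:30, 10:30] = True
img140[syringe] += 80; img80[syringe] += 140
img140[water] -= 60;  img80[water] -= 60

# Exhaustive sweep over (level, width), keeping the combination with maximal CNR.
best = max(((lvl, wid) for lvl in range(0, 201, 20) for wid in range(5, 106, 20)),
           key=lambda p: cnr(moidal_blend(img80, img140, *p), syringe, liver, water))
print("level/width maximizing CNR:", best)
```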
Poster Session: Modeling
Determining material properties of the breast for image-guided surgery
We have previously proposed a system for image-guided breast surgery that compensates for the deformation of the breast during patient set-up. Since breast surgery is performed with the patient positioned supine, but MR imaging is performed with the patient positioned prone, a large soft tissue deformation must be accounted for. A biomechanical model can help to constrain the associated registrations. However, the necessary material properties for breast tissue under such strains are not available in the literature. This paper describes a method to determine these properties. We first show that the stress-free or 'reference' state of an object can be approximated by submerging it in a liquid of similar density. MR images of the breast submerged in water and in a pendulous prone position are acquired. An intensity-based non-rigid image registration algorithm is used to establish point-by-point correspondence between these images. A finite element model of the breast is then constructed from the submerged images and the deformation to the free pendulous position is simulated. The material properties for which the model deformation best fits the observed deformation are determined. Assuming neo-Hookean material properties, the initial shear moduli of fibroglandular and adipose tissue are found to be 0.4 kPa and 0.3 kPa respectively.
Recognition of surgical skills using hidden Markov models
Stefanie Speidel, Tom Zentek, Gunther Sudra, et al.
Minimally invasive surgery is a highly complex medical discipline and can be regarded as a major breakthrough in surgical technique. A minimally invasive intervention requires enhanced motor skills to deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To recognize and analyze the current situation for context-aware assistance, we need intraoperative sensor data and a model of the intervention. Characteristics of a situation are the performed activity, the used instruments, the surgical objects and the anatomical structures. Important information about the surgical activity can be acquired by recognizing the surgical gesture performed. Surgical gestures in minimally invasive surgery like cutting, knot-tying or suturing are here referred to as surgical skills. We use the motion data from the endoscopic instruments to classify and analyze the performed skill and even use it for skill evaluation in a training scenario. The system uses Hidden Markov Models (HMM) to model and recognize a specific surgical skill like knot-tying or suturing with an average recognition rate of 92%.
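As a rough illustration of the recognition step, the sketch below trains one Gaussian-emission HMM per skill on instrument motion sequences and classifies a new sequence by maximum log-likelihood. It uses the hmmlearn library as an assumed stand-in; the paper's own feature set, state count, and toolkit are not specified here.

```python
import numpy as np
from hmmlearn import hmm  # assumed dependency; any HMM library with Gaussian emissions would do

def train_skill_models(training_data, n_states=5):
    """Train one Gaussian-emission HMM per surgical skill (e.g. 'suturing', 'knot-tying').

    training_data: dict mapping skill name -> list of motion sequences,
    each sequence an (n_samples, n_features) array of instrument kinematics.
    """
    models = {}
    for skill, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[skill] = m
    return models

def recognize_skill(models, sequence):
    """Classify a new motion sequence as the skill whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda skill: models[skill].score(sequence))
```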
3D finite element model for treatment of cleft lip
Chun Jiao, Dongming Hong, Hongbing Lu, et al.
Cleft lip is a congenital facial deformity with a high occurrence rate in China. Surgical procedures based on the Millard or Tennison methods are usually employed for treatment of cleft lip. However, due to the elasticity of the soft tissues and the mechanical interaction between the skin and the maxilla, the occurrence rate of facial abnormality or dehiscence is still high after surgery, leading to multiple operations for the patient. In this study, a framework for constructing a realistic 3D finite element model (FEM) for the treatment of cleft lip has been established. It consists of two major steps. The first is the reconstruction of a 3D geometrical model of the cleft lip from CT scan data. The second step is the build-up of an FEM for the cleft lip using the geometric model, where the material property of each tetrahedron was calculated directly from the CT densities using an empirical curve. The simulation results demonstrated (1) the step-by-step deformation of the model when forces were applied, (2) the stress distribution inside the model, and (3) the displacement of all elements in the model. With the computer simulation, the minimal force needed to repair the cleft can be predicted, as well as whether a given force is sufficient for the treatment of a specific individual. This indicates that the proposed framework could integrate treatment planning with stress analysis based on a realistic patient model.
Deformable hollow organ models with self-collision processing between inner surfaces
This paper presents a deformable hollow organ model that accounts for self-collision between the inner surfaces of a hollow organ for real-time surgical simulation. The hollow organ was modeled by the finite element method with 10400 tetrahedral elements, 2160 nodes, and 1040 inner meshes. In the model, continuous collision detection is performed between the inner surfaces to prevent penetration between them. As a result, the model ran stably at about 40 fps on a standard PC (Pentium 4, 3 GHz, 2 GB RAM).
Accuracy of localization of prostate lesions using manual palpation and ultrasound elastography
Carmen Kut, Caitlin Schneider, Naima Carter-Monroe, et al.
Purpose: To compare the accuracy of detecting tumor location and size in the prostate using manual palpation and ultrasound elastography (UE). Methods: Tumors in the prostate were simulated using both synthetic and ex vivo tissue phantoms. 25 participants were asked to report the presence, size, and depth of these simulated lesions using manual palpation and UE. Ultrasound images were captured using a laparoscopic ultrasound probe fitted with a Gore-Tetrad transducer with a frequency of 7.5 MHz and an RF capture depth of 4-5 cm. A MATLAB GUI application was employed to process the RF data for the ex vivo phantoms and to generate UE images using a cross-correlation algorithm. Ultrasonix software was used to provide real-time elastography during laparoscopic palpation of the synthetic phantoms. Statistical analyses were performed based on a two-tailed Student's t-test with α = 0.05. Results: UE showed higher sensitivity and specificity in tumor detection (sensitivity = 84%, specificity = 74%). Tumor diameters and depths were better estimated using ultrasound elastography than with manual palpation. Conclusions: Our results indicate that UE has strong potential to assist surgeons in intra-operatively evaluating tumor depth and size. We have also demonstrated that ultrasound elastography can be implemented in a laparoscopic environment, in which manual palpation would not be feasible. With further work, this application can provide accurate and clinically relevant information for surgeons during prostate resection.
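For readers unfamiliar with cross-correlation elastography, the following minimal 1-D sketch estimates axial displacement between pre- and post-compression RF lines and differentiates it to obtain strain. It is not the authors' MATLAB implementation; the window size and search range are illustrative assumptions.

```python
import numpy as np

def axial_displacement(rf_pre, rf_post, win=64, step=32, search=16):
    """Estimate axial displacement between pre- and post-compression RF A-lines by
    normalized cross-correlation of short windows (a minimal 1-D sketch).

    rf_pre, rf_post: 1-D RF lines sampled along depth.
    Returns window-centre indices, per-window displacement (samples), and strain.
    """
    centres, shifts = [], []
    for start in range(search, len(rf_pre) - win - search, step):
        ref = rf_pre[start:start + win]
        best_shift, best_ncc = 0, -np.inf
        for lag in range(-search, search + 1):
            seg = rf_post[start + lag:start + lag + win]
            ncc = np.dot(ref - ref.mean(), seg - seg.mean()) / (
                np.linalg.norm(ref - ref.mean()) * np.linalg.norm(seg - seg.mean()) + 1e-12)
            if ncc > best_ncc:
                best_ncc, best_shift = ncc, lag
        centres.append(start + win // 2)
        shifts.append(best_shift)
    # Strain is the axial gradient of displacement; stiff inclusions show low strain.
    strain = np.gradient(np.asarray(shifts, float), np.asarray(centres, float))
    return np.asarray(centres), np.asarray(shifts), strain
```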
Curvature and shape variance based landmark tagging methods for building statistical object models
Model-based segmentation approaches, such as those employing Active Shape Models (ASMs), have proved to be useful for medical image segmentation and understanding. To build the model, however, we need an annotated training set of shapes wherein corresponding landmarks are identified in every shape. Manual positioning of landmarks is a tedious, time-consuming, and error-prone task, and almost impossible in 3D. In an attempt to overcome some of these drawbacks, we have devised several automatic methods under two approaches: c-scale based and shape-variance based. The c-scale based methods use the concept of local curvature to find landmarks on the mean shape of the training set. These landmarks are then propagated to all the shapes of the training set to establish correspondence in a local-to-global manner. The variance-based method is guided by the strategy of equalizing the shape variance contained in the training set when selecting landmarks. The main premise here is that this strategy itself takes care of the correspondence issue and at the same time deploys landmarks very frugally and optimally considering shape variations. The desired landmarks are positioned around each contour so as to equally distribute the total variance existing in the training set in a global-to-local manner. The methods are evaluated on 40 MRI foot data sets and compared in terms of compactness. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced methods of annotation, and the variance equalization method tops the list.
Investigating an approach to identifying the biomechanical differences between intercostal cartilage in subjects with pectus excavatum and normals in vivo: preliminary assessment of normal subjects
Krzysztof Rechowicz, Frederic McKenzie, Zhenzhen Yan, et al.
The cause of pectus excavatum (PE) is unknown, and little research has been done to assess the material properties of PE costal cartilage. One source reported, after studying various properties of costal cartilage ex vivo in cases of PE, that the biomechanical stability of PE cartilage is decreased when compared to that of normals. Building on this idea, it would be beneficial to measure the biomechanical properties of the costal cartilages in vivo to further determine the differences between PE subjects and normals. An approach to doing this would be to use a modified FARO arm, which can read applied loads and resulting deflections. These values can be used to establish a finite element model of the chest area of a person with PE. So far, a validated technique for the registration between a CT-based 3D model of the ribcage and a skin surface scan in cases of PE has been addressed. On the basis of data gathered from 10 subjects with normal chests using a robot arm, stylus, and 3D laser scanner, we evaluated the influence of a subject's respiration between measurements on the accuracy of the results and the possibility of using the stylus for deflection measurement. In addition, we established the best strategy for taking measurements.
3D reconstruction of the human spine from radiograph(s) using a multi-body statistical model
Jonathan Boisvert, Farida Cheriet, Xavier Pennec, et al.
Three-dimensional models of the spine are very important in diagnosing, assessing, and studying spinal deformities. These models are generally computed using multi-planar radiography, since it minimizes the radiation dose delivered to patients and allows them to assume a natural standing position during image acquisition. However, conventional reconstruction methods require at a minimum two sufficiently distant radiographs (e.g., posterior-anterior and lateral radiographs) to compute a satisfactory model. Still, it is possible to expand the applicability of 3D reconstructions by using a statistical model of the entire spine shape. In this paper, we describe a reconstruction method that takes advantage of a multi-body statistical model to reconstruct 3D spine models. This method can be applied to reconstruct a 3D model from any number of radiographs and can also integrate prior knowledge about spine length or preexisting vertebral models. Radiographs obtained from a group of 37 scoliotic patients were used to validate the proposed reconstruction method using a single posterior-anterior radiograph. Moreover, we present simulation results where 3D reconstructions obtained from two radiographs using the proposed method and using the direct linear transform method are compared. Results indicate that it is possible to reconstruct 3D spine models from a single radiograph, and that its accuracy is improved by the addition of constraints, such as a prior knowledge of spine length or of the vertebral anatomy. Results also indicate that the proposed method can improve the accuracy of 3D spine models computed from two radiographs.
Model-based brain shift compensation in image-guided neurosurgery
Songbai Ji, Fenghong Liu, Xiaoyao Fan, et al.
Intraoperative brain shift compensation is important for improving the accuracy of neuronavigational systems and, ultimately, the accuracy of brain tumor resection as well as patient quality of life. Biomechanical models are practical methods for brain shift compensation in the operating room (OR). These methods assimilate incomplete deformation data on the brain acquired from intraoperative imaging techniques (e.g., ultrasound and stereovision) and simulate whole-brain deformation under loading and boundary conditions in the OR. Preoperative images of the patient's head (e.g., preoperative magnetic resonance images (pMR)) are then deformed accordingly based on the computed displacement field to generate updated visualizations for subsequent surgical guidance. Clearly, the clinical feasibility of the technique depends on the efficiency as well as the accuracy of the computational scheme. In this paper, we identify the major steps involved in biomechanical simulation of whole-brain deformation and demonstrate the efficiency and accuracy of each step. We show that a combined computational cost of 5 minutes with an accuracy of 1-2 mm can be achieved, which suggests that the technique is feasible for routine application in the OR.
A PDE approach for quantifying and visualizing tumor progression and regression
Benjamin J. Sintay, J. Daniel Bourland
Quantification of changes in tumor shape and size allows physicians to determine the effectiveness of various treatment options, adapt treatment, predict outcome, and map potential problem sites. Conventional methods are often based on metrics such as volume, diameter, or maximum cross-sectional area. This work seeks to improve the visualization and analysis of tumor changes by simultaneously analyzing changes in the entire tumor volume. This method utilizes an elliptic partial differential equation (PDE) to provide a roadmap of boundary displacement that does not suffer from the discontinuities associated with other measures such as Euclidean distance. Streamline pathways defined by Laplace's equation (a commonly used PDE) are used to track tumor progression and regression at the tumor boundary. Laplace's equation is particularly useful because it provides a smooth, continuous solution that can be evaluated with sub-pixel precision on variable grid sizes. Several metrics are demonstrated, including maximum, average, and total regression and progression. This method provides many advantages over conventional means of quantifying change in tumor shape because it is observer independent, stable for highly unusual geometries, and provides an analysis of the entire three-dimensional tumor volume.
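A minimal 2-D illustration of the idea (the paper operates on 3-D volumes with sub-pixel streamline evaluation): relax Laplace's equation between the two tumor boundaries and integrate along the gradient to measure local boundary displacement. The grid handling and iteration count below are assumptions.

```python
import numpy as np

def solve_laplace(inner_mask, outer_mask, n_iter=2000):
    """Relax Laplace's equation on the region between two tumor boundaries (2-D sketch).

    inner_mask: True on/inside one tumor contour, held at potential 0.
    outer_mask: True on/outside the other contour, held at potential 1.
    The free region in between is relaxed by Jacobi iteration.
    """
    phi = np.where(outer_mask, 1.0, 0.0)
    free = ~(inner_mask | outer_mask)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi[free] = avg[free]
    return phi

def streamline_length(phi, start, step=0.5, n_steps=1000):
    """Follow the gradient of phi from a point on the inner boundary; the arc length
    approximates the local displacement (progression/regression) between the two surfaces."""
    gy, gx = np.gradient(phi)
    p = np.array(start, dtype=float)
    length = 0.0
    for _ in range(n_steps):
        iy, ix = int(round(p[0])), int(round(p[1]))
        if not (0 <= iy < phi.shape[0] and 0 <= ix < phi.shape[1]) or phi[iy, ix] >= 0.999:
            break
        g = np.array([gy[iy, ix], gx[iy, ix]])
        n = np.linalg.norm(g)
        if n < 1e-8:
            break
        p += step * g / n
        length += step
    return length
```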
Constrained hyperelastic parameters reconstruction of PVA (Polyvinyl Alcohol) phantom undergoing large deformation
Hatef Mehrabian, Abbas Samani
The nonlinear mechanical behavior of tissues that undergo large deformations, e.g. the breast, is characterized by hyperelastic parameters. These parameters take into account both types of nonlinearity: tissue intrinsic nonlinearity and geometric nonlinearity. Elastography techniques capable of tissue hyperelastic parameter reconstruction have important clinical applications such as cancer diagnosis and interventional procedure planning. In this study we report our progress on the development of a constrained reconstruction technique for breast tissue hyperelastic parameters [1]. The extension of this work is twofold: the inclusion of the popular Veronda-Westmann hyperelastic model and the use of a novel technique for tissue displacement tracking. This tracking technique is based on the Horn-Schunck optical flow method [2]. The objective of this paper is to validate the numerical analysis performed in [1] by phantom experiment. For this purpose, a PVA (polyvinyl alcohol) phantom that consists of three tissue types was constructed and tested. PVA exhibits nonlinear mechanical behavior and has recently been used for tissue mimicking purposes. Reconstruction results showed that it is feasible to find the relative hyperelastic parameters of the tissue with acceptable accuracy. The error reported for the relative parameter reconstruction was less than 20%, which may be sufficient for cancer diagnosis purposes.
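For reference, one commonly quoted isochoric form of the Veronda-Westmann strain-energy function (the exact variant and compressibility treatment used in the paper may differ) is

$$ W = C_1\left[e^{\,C_2 (I_1 - 3)} - 1\right] - \frac{C_1 C_2}{2}\,(I_2 - 3), $$

where $I_1$ and $I_2$ are the first and second invariants of the right Cauchy-Green deformation tensor and $C_1$, $C_2$ are the hyperelastic parameters to be reconstructed; reporting each tissue's parameters relative to a reference tissue yields the relative parameters mentioned above.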
Poster Session: Guidance and Technology
Collision-free 6D non-holonomic planning for nested cannulas
Karen Trovato, Aleksandra Popovic
Natural orifice access is the next frontier in minimally invasive technology. This requires dexterity for reaching through complex translumenal paths to a target. We propose a fast algorithm to define shapes of tiny nested cannula devices based on patient CT images to deliver diagnostic and therapeutic procedures, and apply it to deep lung access. Each pre-shaped tube is extended sequentially in either a curved or a straight direction, requiring the solution of a 6D non-holonomic problem with obstacle avoidance in order to reach through free anatomy. A 3D image of the lung provides the specification of free and forbidden regions as well as the core structure for a configuration space. By using an A* search, each state holds the detailed specification leading to the 'goal'. These specifications include the shape, 3D orientation, and 3D position, which can be stored in an adjacent structure at high precision. This allows the normally massive 6D configuration space to be stored in an augmented 3D structure, reducing memory requirements by about two orders of magnitude. The adapted configuration space and A* algorithm require under a minute on a desktop PC to compute a set of shaped tubes that can reach far inside a segmented lung. This paper describes three advances. The first defines new ways to structure searched configuration spaces so that they no longer require intractable memory. The second solves the non-holonomic 6D problem of defining shaped tubes that extend sequentially into the body while avoiding obstacles. The third incorporates the physics of the interacting tubes.
Ultrasound elastography: enabling technology for image guided laparoscopic prostatectomy
Ioana N. Fleming, Hassan Rivaz, Katarzyna Macura, et al.
Radical prostatectomy using the laparoscopic and robot-assisted approach lacks tactile feedback. Without palpation, the surgeon needs an affordable imaging technology which can be easily incorporated into the laparoscopic surgical procedure, allowing for precise real-time intraoperative tumor localization that will guide the extent of surgical resection. Ultrasound elastography (USE) is a novel ultrasound imaging technology that can detect differences in tissue density or stiffness based on tissue deformation. USE was evaluated here as an enabling technology for image-guided laparoscopic prostatectomy. USE using a 2D dynamic programming (DP) algorithm was applied to data from ex vivo human prostate specimens. It proved consistent in the identification of lesions: hard and soft, malignant and benign, located in the prostate's central gland or in the peripheral zone. We noticed the 2D DP method was able to generate low-noise elastograms using two frames belonging to the same compression or relaxation part of the palpation excitation, even at compression rates up to 10%. Good preliminary results were validated by pathology findings, and also by in vivo and ex vivo MR imaging. We also evaluated the use of ultrasound elastography for imaging cavernous nerves; here we present data from animal model experiments.
Improved navigation for image-guided bronchoscopy
Past work has shown that guidance systems help improve both the navigation through airways and final biopsy of regions of interest via bronchoscopy. We have previously proposed an image-based bronchoscopic guidance system. The system, however, has three issues that arise during navigation: 1) sudden disorienting changes can occur in endoluminal views; 2) more feedback could be afforded during navigation; and 3) the system's graphical user interface (GUI) lacks a convenient interface for smooth navigation between bifurcations. In order to alleviate these issues, we present an improved navigation system. The improvements offer the following: 1) an enhanced visual presentation; 2) smooth navigation; 3) an interface for handling registration errors; and 4) improved bifurcation-point identification. The improved navigation system thus provides significant ergonomic and navigational advantages over the previous system.
Direct endoscopic video registration for sinus surgery
Daniel Mirota, Russell H. Taylor, Masaru Ishii, et al.
Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These reconstructions accurately represent the visible anatomy and, once registered to pre-operative CT data, enable a navigation system to track directly through video, eliminating the need for an external tracking system. Video registration provides the means for a direct interface between an endoscope and a navigation system and allows a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To solve this registration step we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest Point (TrICP) [1] and the z-buffer algorithm [2]. The algorithm takes as input a 3D point cloud of relative scale with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location. Our algorithm utilizes only the polygons of the isosurface visible from the current camera location during each iteration, to minimize the search area of the target region and robustly reject outliers of the reconstruction. We present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To evaluate our algorithm's performance we compare it to registration via Optotrak and present the closest point-to-surface distance error. We show our algorithm has a mean closest distance error of 0.2268 mm.
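The core of a trimmed ICP step is easy to sketch. The fragment below is a minimal rigid TrICP loop (no scale estimation and no z-buffer visibility culling, both of which the paper adds): at each iteration only the best fraction of closest-point pairs is kept before the closed-form alignment, which is what rejects reconstruction outliers.

```python
import numpy as np
from scipy.spatial import cKDTree

def trimmed_icp(source, target, trim_fraction=0.6, n_iter=50):
    """Minimal rigid Trimmed-ICP sketch.

    source, target: (N,3) and (M,3) point arrays. Returns rotation R (3x3) and translation t (3,).
    """
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(n_iter):
        dists, idx = tree.query(src)
        # Keep only the best-matching fraction of correspondences (the "trimming" step).
        keep = np.argsort(dists)[: int(trim_fraction * len(src))]
        p, q = src[keep], target[idx[keep]]
        # Closed-form rigid alignment (Kabsch) of the trimmed correspondences.
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = q.mean(0) - R_step @ p.mean(0)
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```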
Using a wireless motion controller for 3D medical image catheter interactions
Dime Vitanovski, Dieter Hahn, Volker Daum, et al.
State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras, or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller only measures rough acceleration over a range of +/- 3 g with 10% sensitivity, and orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space with respect to 4 infrared LEDs. Current results show a mean translation error of (0.38 cm, 0.41 cm, 4.94 cm) and a mean rotation error of (0.16, 0.28). Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.
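The paper's pose estimation algorithm is not reproduced here; as an illustration, pose recovery from four imaged LEDs can be treated as a standard planar PnP problem. The LED layout and camera intrinsics below are assumed values, and OpenCV's solvePnP stands in for the authors' solver.

```python
import numpy as np
import cv2  # OpenCV, used here only to illustrate a standard PnP solution

# Assumed 3-D positions (in cm) of four IR LEDs arranged in a rectangle -- hypothetical layout.
LED_MODEL = np.array([[-10.0, -5.0, 0.0],
                      [ 10.0, -5.0, 0.0],
                      [ 10.0,  5.0, 0.0],
                      [-10.0,  5.0, 0.0]], dtype=np.float32)

# Rough pinhole intrinsics for the Wiimote IR camera (1024x768 native resolution) -- assumed values.
K = np.array([[1300.0,    0.0, 512.0],
              [   0.0, 1300.0, 384.0],
              [   0.0,    0.0,   1.0]], dtype=np.float32)

def estimate_pose(ir_points_px):
    """Estimate controller pose from the 2-D IR blob coordinates reported by the Wiimote.

    ir_points_px: (4,2) array of pixel coordinates of the four LEDs.
    Returns a rotation matrix and translation vector of the LED model in camera coordinates.
    """
    ok, rvec, tvec = cv2.solvePnP(LED_MODEL, ir_points_px.astype(np.float32), K, None)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel()
```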
Post-operative assessment in Deep Brain Stimulation based on multimodal images: registration workflow and validation
Florent Lalys, Claire Haegelen, Alexandre Abadie, et al.
Objective: Movement disorders in Parkinson's disease patients may require functional surgery when medical therapy is not effective. In Deep Brain Stimulation (DBS), electrodes are implanted within the brain to stimulate deep structures such as the SubThalamic Nucleus (STN). This paper describes the successive steps for constructing a digital atlas gathering patients' electrode and contact locations for post-operative assessment. Materials and Methods: 12 patients who had undergone bilateral STN DBS participated in the study. Contacts on post-operative CT scans were automatically localized, based on black artefacts. For each patient, post-operative CT images were rigidly registered to pre-operative MR images. Then, pre-operative MR images were registered to an MR template (super-resolution Colin27 average MRI template). This last registration was the combination of global affine, local affine, and local non-linear registrations, respectively. Four different studies were performed in order to validate the MR patient-to-template registration process, based on anatomical landmarks and clinical scores (i.e., the Unified Parkinson's Disease Rating Scale, UPDRS). Visualisation software was developed for displaying in the template images the stimulated contacts, represented as cylinders with a colour code related to the improvement of the UPDRS. Results: The automatic contact localization algorithm was successful for all patients. Validation studies for the registration process gave a placement error of 1.4 +/- 0.2 mm and coherence with UPDRS scores. Conclusion: The developed visualization tool allows post-operative assessment of previous interventions. Correlation with additional clinical scores will certainly help us learn more about DBS and better understand clinical side effects.
Optimal landmarks selection and fiducial marker placement for minimal target registration error in image-guided neurosurgery
Reuben R. Shamir, Leo Joskowicz, Yigal Shoshan
We describe a new framework and method for the optimal selection of anatomical landmarks and optimal placement of fiducial markers in image-guided neurosurgery. The method allows the surgeon to optimally plan the marker locations on routine diagnostic images before preoperative imaging and to intraoperatively select the fiducial markers and anatomical landmarks that minimize the Target Registration Error (TRE). The optimal fiducial marker configuration is selected by the surgeon on the diagnostic image, following target selection, based on a visual Estimated TRE (E-TRE) map. The E-TRE map is automatically updated when the surgeon interactively adds and deletes candidate markers and targets. The method takes the guesswork out of the registration process, provides a reliable localization uncertainty error for navigation, and can reduce the localization error without additional imaging and hardware. Our clinical experiments on five patients who underwent brain surgery with a navigation system show that optimizing one marker location and the anatomical landmark configuration reduces the average TRE from 4.7 mm to 3.2 mm, with a maximum improvement of 4 mm. The reduction of the target registration error has the potential to support safer and more accurate minimally invasive neurosurgical procedures.
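The abstract does not state which estimator underlies the E-TRE map, but visual TRE maps of this kind are typically built on the first-order expected-TRE approximation from the fiducial registration literature,

$$ \left\langle \mathrm{TRE}^2(\mathbf{r}) \right\rangle \approx \frac{\left\langle \mathrm{FLE}^2 \right\rangle}{N}\left(1 + \frac{1}{3}\sum_{k=1}^{3}\frac{d_k^2}{f_k^2}\right), $$

where $N$ is the number of fiducials, FLE is the fiducial localization error, $d_k$ is the distance of the target $\mathbf{r}$ from the $k$-th principal axis of the fiducial configuration, and $f_k$ is the RMS distance of the fiducials from that axis.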
Fusion of intraoperative cortical images with preoperative models for neurosurgical planning and guidance
An Wang, Seyed M. Mirsattari, Andrew G. Parrent, et al.
During surgery for epilepsy it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. We extend our previously presented visualization method to achieve this goal by fusing a direct (photographic) view of the surgical field with the 3D patient model. To correlate the preoperative plan with the intraoperative surgical scene, an intensity-based perspective 3D-2D registration was employed for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray-casting. This is advantageous compared to point-based or other feature-based registration since no intermediate processing is required. To validate our registration algorithm, we used a point-based 3D-2D registration that was validated using ground truth from simulated data, and then the intensity-based 3D-2D registration method was validated using the point-based registration result as the gold standard. The registration error of the intensity-based 3D-2D method was around 3 mm when the initial pose was close to the gold standard. Application of the proposed method for correlating fMRI maps with intraoperative cortical stimulation is shown for surgical planning in an epilepsy patient.
Transbronchial needle aspiration with a new electromagnetically-tracked TBNA needle
Jae Choi, Teo Popa, Lucian Gruionu
Transbronchial needle aspiration (TBNA) is a common method used to collect tissue for the diagnosis of different chest diseases and for staging lung cancer, but the procedure has technical limitations. These limitations are mostly related to the difficulty of accurately placing the biopsy needles into the target mass. Currently, pulmonologists plan TBNA by examining a number of Computed Tomography (CT) scan slices before the operation. Then, they manipulate the bronchoscope down the respiratory tract and blindly direct the biopsy. Thus, the biopsy success rate is low; the diagnostic yield of TBNA is approximately 70 percent. To enhance the accuracy of TBNA, we developed a TBNA needle with a tip position that can be electromagnetically tracked. The needle was used to estimate the bronchoscope's tip position and enable the creation of corresponding virtual bronchoscopic images from a preoperative CT scan. The TBNA needle was made from a flexible catheter embedding a Wang Transbronchial Histology Needle and a sensor tracked by an electromagnetic field generator. We used the Aurora system for electromagnetic tracking. We also constructed an image-guided research prototype system incorporating the needle and providing a user-friendly interface to assist the pulmonologist in targeting lesions. To test the feasibility and accuracy of the newly developed electromagnetically tracked needle, a phantom study was conducted in the interventional suite at Georgetown University Hospital. Five TBNA simulations with a custom-made phantom with a bronchial tree were performed. The experimental results show that our device has the potential to enhance the accuracy of TBNA.
A dual compute resource strategy for computational model-assisted therapeutic interventions
Acquiring and incorporating intraoperative data into image-guided surgical systems has been shown to increase the accuracy of these systems and the accuracy of image-guided surgical procedures. Even with the advent of powerful computers and parallel clusters, the ability to integrate highly resolved computer model information in the planning and execution of image-guided surgery is challenging. More often than not, the computational times required to process preoperative models and incorporate intraoperative data for feedback are too cumbersome and do not meet the real time constraints of surgery, for both planning and intraoperative guidance. To decrease the computational time for the surgeon and minimize the resources in the operating room, we have developed a dual compute node framework for image-guided surgical procedures: (i) a high-capability compute resource which acts as a server to facilitate preoperative planning, and (ii) a low-capability compute resource which acts as a server node/compute node to process the intraoperative data and rapidly integrate the model-based analysis for therapeutic/surgical feedback. In this framework, the preoperative planning utilities and intraoperative guidance system act as client-nodes/graphics-nodes that are assisted by the model-assistant. Processed data is transferred back to the graphics node for planning display or intraoperative feedback depending on which resource is engaged. In order to efficiently manage the data and the computational resources we also developed a novel software manager. This dual-capability resource compute node concept and the software manager are reported in this work, and the low-capability resource compute node is investigated within the context of image-guided liver surgery using data acquired during hepatic tumor resection therapies. Preliminary results indicate that the dual node concept can significantly decrease the computational resources and time required for image-guided surgical procedures.
An open-source framework for testing tracking devices using Lego Mindstorms
Julien Jomier, Luis Ibanez, Andinet Enquobahrie, et al.
In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of image-guided intervention systems is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with the widespread use of extreme programming methodology that emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention, such as tracking devices. Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application program interface (API) is cross-platform and runs on Windows, Linux, and MacOS. We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery, and have shown that regression testing of tracking devices can be performed at low cost and significantly improves software quality.
An improved method for compensating ultra-tiny electromagnetic tracker utilizing position and orientation information and its application to a flexible neuroendoscopic surgery navigation system
Zhengang Jiang, Kensaku Mori, Yukitaka Nimura, et al.
This paper presents an improved method for compensating ultra-tiny electromagnetic tracker (UEMT) outputs and its application to a flexible neuroendoscopic surgery navigation system. Recently, UEMTs have been widely used in surgical navigation systems based on flexible endoscopes to obtain the position and orientation of the endoscopic camera. However, due to distortion of the electromagnetic field, the accuracy of such UEMT systems becomes low. Several research groups have presented methods for compensating UEMT outputs that are deteriorated by ferromagnetic objects existing around the UEMT. These compensation methods first acquire positions and orientations (sample data) by sweeping a special tool (hybrid tool) carrying a UEMT and an optical tracker (OT) free-hand. A polynomial for compensating the UEMT outputs is then computed from both outputs. However, these methods have the following problems: 1) the compensation function is obtained as a function of position only, and orientation information is not used in the compensation; and 2) although the hybrid tool must be moved slowly to obtain better compensation results, this increases acquisition time. To overcome these problems, this paper presents a UEMT-output compensation function that depends not only on position but also on orientation. Also, a new sweeping method for the hybrid tool is proposed in order to reduce the sweeping time required to obtain the sample data. We evaluated the accuracy and feasibility of the proposed method by experiments in an OpenMR operating room. According to the results of the experiments, the accuracy of the compensation method is improved by about 20% compared with the previous method. We implemented the proposed method in a navigation system for flexible neuroendoscopic surgery and performed a phantom test and several clinical application tests. The results showed that the proposed method is effective for UEMT output compensation and improves the accuracy of a flexible neuroendoscopic surgery navigation system.
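The compensation fit itself can be sketched as an ordinary least-squares problem. The basis below (second-order polynomial in position plus raw quaternion components for orientation) is an assumption for illustration, not the authors' function; the hybrid-tool data supply the distorted UEMT readings and the optically tracked reference positions.

```python
import numpy as np

def design_matrix(pos, quat):
    """Second-order polynomial basis in position, augmented with orientation terms.

    pos: (N,3) UEMT positions; quat: (N,4) unit quaternions of the UEMT orientation.
    The choice of basis is an assumption for illustration only.
    """
    x, y, z = pos.T
    cols = [np.ones(len(pos)), x, y, z, x*x, y*y, z*z, x*y, y*z, x*z]
    cols += [quat[:, i] for i in range(4)]
    return np.column_stack(cols)

def fit_compensation(uemt_pos, uemt_quat, ot_pos):
    """Least-squares fit of a correction mapping distorted UEMT readings to the
    optically tracked (OT) reference positions of the hybrid tool."""
    A = design_matrix(uemt_pos, uemt_quat)
    coeffs, *_ = np.linalg.lstsq(A, ot_pos, rcond=None)  # shape (n_basis, 3)
    return coeffs

def apply_compensation(coeffs, pos, quat):
    """Apply the fitted correction to new UEMT readings."""
    return design_matrix(pos, quat) @ coeffs
```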
Evaluation of dynamic electromagnetic tracking deviation
Electromagnetic tracking systems (EMTSs) are widely used in clinical applications. Many reports have evaluated their static behavior and examined errors caused by metallic objects. Although some publications address the dynamic behavior of EMTSs, the measurement protocols are either difficult to reproduce with respect to the movement path or can only be accomplished with considerable technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective measurement protocol that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar; this assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as rotation center and length, were determined by static measurement at satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations describing pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using well-known equations for mechanical pendulums at different velocities. We repeated the measurements with different metal objects (rods made of stainless steel types 303 and 416) between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position from the fit plane (maximum error emax = 2.31 mm, minimum error emin = -2.36 mm). The eRMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms.
Elasticity-based three dimensional ultrasound real-time volume rendering
Emad M. Boctor, Mohammad Matinfar, Omar Ahmad, et al.
Volumetric ultrasound imaging has not gained wide recognition, despite the availability of real-time 3D ultrasound scanners and the anticipated potential of 3D ultrasound imaging in diagnostic and interventional radiology. Its use, however, has been hindered by the lack of real-time visualization methods that are capable of producing high-quality 3D renderings of the target/surface of interest. Volume rendering is a well-known visualization method which can display clear surfaces out of the acquired volumetric data and has an increasing number of applications utilizing CT and MRI data. The key element of any volume rendering pipeline is the ability to classify the target/surface of interest by setting an appropriate opacity function. Practical and successful real-time 3D ultrasound volume rendering can be achieved in obstetrics and angiography applications, where these opacity functions can be set rapidly and reliably. Unfortunately, 3D ultrasound volume rendering of soft tissues is a challenging task due to the presence of a significant amount of noise and speckle. Recently, several research groups have shown the feasibility of producing a 3D elasticity volume from two consecutive 3D ultrasound scans. This report describes a novel volume rendering pipeline utilizing elasticity information. The basic idea is to compute B-mode voxel opacity from the rapidly calculated strain values, which can also be mixed with a conventional gradient-based opacity function. We have implemented the volume renderer on the GPU, which gives an update rate of 40 volumes/sec.
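A strain-driven opacity function of the kind described can be sketched in a few lines; the thresholds and mixing weight below are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def strain_opacity(strain, grad_mag, s_lo=0.002, s_hi=0.02, mix=0.7):
    """Per-voxel opacity for B-mode volume rendering driven by strain (a hedged sketch).

    Voxels with low strain (stiff structures) are made more opaque; a conventional
    gradient-magnitude opacity is mixed in with weight (1 - mix). The thresholds
    s_lo/s_hi and the mixing weight are illustrative, not values from the paper.
    """
    # Map strain to [0, 1]: 1 for |strain| <= s_lo (stiff), 0 for |strain| >= s_hi (soft).
    stiff = np.clip((s_hi - np.abs(strain)) / (s_hi - s_lo), 0.0, 1.0)
    grad_op = grad_mag / (grad_mag.max() + 1e-12)
    return np.clip(mix * stiff + (1.0 - mix) * grad_op, 0.0, 1.0)
```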
Poster Session: Visualization and Geometry
Reliability of vascular geometry factors derived from clinical MRA
Recent work from our group has demonstrated that the amount of disturbed flow at the carotid bifurcation, believed to be a local risk factor for carotid atherosclerosis, can be predicted from luminal geometric factors. The next step along the way to a large-scale retrospective or prospective imaging study of such local risk factors for atherosclerosis is to investigate whether these geometric features are reproducible and accurate from routine 3D contrast-enhanced magnetic resonance angiography (CEMRA) using a fast and practical method of extraction. Motivated by this fact, we examined the reproducibility of multiple geometric features that are believed important in atherosclerosis risk assessment. We reconstructed three-dimensional carotid bifurcations from 15 clinical study participants who had previously undergone baseline and repeat CEMRA acquisitions. Certain geometric factors were extracted and compared between the baseline and the repeat scan. As the spatial resolution of the CEMRA data was noticeably coarse and anisotropic, we also investigated whether this might affect the measurement of the same geometric risk factors by simulating the CEMRA acquisition for 15 normal carotid bifurcations previously acquired at high resolution. Our results show that the extracted geometric factors are reproducible and faithful, with intra-subject uncertainties well below inter-subject variabilities. More importantly, these geometric risk factors can be extracted consistently and quickly for potential use as disturbed flow predictors.
Visualization of multiresolution model for volumetric medical data by using weighted alpha shapes
In real-world applications, given data points are located arbitrarily rather than in a regular distribution. Volumetric scattered data arise in numerous applications, such as computational fluid dynamics, medicine, terrain modeling, and oil exploration. Multiresolution is desirable for visualizing volumetric scattered data because the common problem with volumetric data is that the amount of data is very large. The modeling of such multiscale phenomena is computationally expensive. The mathematical model needs to reflect the different levels of detail by approximating the mathematical object on multiple scales, ranging from a coarse representation at a low resolution to a fine representation at a high resolution. The weighted alpha shapes method is defined for a finite set of weighted points; in other words, it is a polytope uniquely determined by the points, their weights, and a parameter α ∈ R that controls the desired level of detail. Therefore, we need to investigate how to achieve different levels of detail in a single shape by assigning weights to the data points. In this paper, Gaussian curvature is used as the weight value of each data point.
Interactive vessel-tracking with a hybrid model-based and graph-based approach
For the assessment of coronary artery disease (CAD) and peripheral artery disease (PAD), the automatic extraction of vessel centerlines is a crucial technology. In the most common approach, two seed points have to be manually placed in the vessel and the centerline is automatically computed between these points. This methodology is appropriate for the quantitative analysis of single vessel segments. However, for an interactive and fast reading of complete datasets, a more interactive approach would be beneficial. In this work we introduce an interactive vessel-tracking approach which eases the reading of cardiac and vascular CTA datasets. Starting with a single seed point, a local vessel-tracking is initialized and extended in both directions while the user "walks" along the vessel centerline. For robust tracking of a wide variety of vessel diameters, from coronaries to the aorta, we combine a local A* graph search for tiny vessels and a model-based tracking for larger vessels into a hybrid model-based and graph-based approach. In order to further ease the reading of cardiac and vascular CTA datasets, we introduce a subdivision of the interactively acquired centerline into segments that can be approximated by a single plane. This subdivision allows the visualization of the vessel in optimally oriented multi-planar reformations (MPR). The proposed visualization combines the advantage of a curved planar reformation (CPR), showing a large part of the vessel in one view, with the benefits of an MPR, giving a non-distorted, more reliable image.
A visualization system for CT based pulmonary fissure analysis
In this study we describe a visualization system of pulmonary fissures depicted on CT images. The purpose is to provide clinicians with an intuitive perception of a patient's lung anatomy through an interactive examination of fissures, enhancing their understanding and accurate diagnosis of lung diseases. This system consists of four key components: (1) region-of-interest segmentation; (2) three-dimensional surface modeling; (3) fissure type classification; and (4) an interactive user interface, by which the extracted fissures are displayed flexibly in different space domains including image space, geometric space, and mixed space using simple toggling "on" and "off" operations. In this system, the different visualization modes allow users not only to examine the fissures themselves but also to analyze the relationship between fissures and their surrounding structures. In addition, the users can adjust thresholds interactively to visualize the fissure surface under different scanning and processing conditions. Such a visualization tool is expected to facilitate investigation of structures near the fissures and provide an efficient "visual aid" for other applications such as treatment planning and assessment of therapeutic efficacy as well as education of medical professionals.
Quantitative and visual analysis of white matter integrity using diffusion tensor imaging
A new fiber tract-oriented quantitative and visual analysis scheme using diffusion tensor imaging (DTI) is developed to study the regional micro structural white matter changes along major fiber bundles which may not be effectively revealed by existing methods due to the curved spatial nature of neuronal paths. Our technique is based on DTI tractography and geodesic path mapping, which establishes correspondences to allow cross-subject evaluation of diffusion properties by parameterizing the fiber pathways as a function of geodesic distance. A novel isonodes visualization scheme is proposed to render regional statistical features along the fiber pathways. Assessment of the technique reveals specific anatomical locations along the genu of the corpus callosum paths with significant diffusion property changes in the amnestic mild cognitive impairment subjects. The experimental results show that this approach is promising and may provide a sensitive technique to study the integrity of neuronal connectivity in human brain.
Evaluation of topology correction methods for the generation of the cortical surface
The cerebral cortex is a highly convoluted anatomical structure. The folding pattern defined by sulci and gyri is complex and very heterogeneous across subjects. This heterogeneity has made the automated labeling of this structure into its constituent components a challenge to the field of neuroimaging. One way to approach this problem is to conformally map the surface to another representation such as a plane or sphere. Conformal mapping requires the surface to be topologically correct. However, noise and partial volume artifacts in the MR images frequently cause holes or handles to exist in the surface that must be removed. Topology correction techniques that operate on the cortical surface or on the original image data, as well as hybrid methods, have been proposed. This paper presents an experimental assessment of two different topology correction methods. The first approach is based on modification of the 3D voxel data. The second method is a hybrid approach that determines the location of defects from the surface representation while repairing the surface by modifying the underlying image data. These methods have been applied to 10 brains, and a comparison is made between them. In addition, detailed statistics are given for the voxel correction method. Based on these 10 MRI datasets, we have found the hybrid method incapable of correcting the cortical surface appropriately when handles and holes exist in close proximity. In several cases, holes in the anatomical surface were labeled as handles, resulting in discontinuities in the folding pattern. The image-based approach in this study was found to correct the topology in all ten cases within a reasonable time. Furthermore, the distance between the original and corrected surfaces, the thickness of the brain cortex, curvatures, and surface areas are provided as assessments of the approach based on our datasets.
Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences
Nils Daniel Forkert, Dennis Säring, Jens Fiehler, et al.
In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized color-coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and supports better understanding during the visual evaluation of cerebral vascular diseases.
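As a simplified stand-in for the curve-fitting step, the sketch below estimates each voxel's bolus arrival offset by scanning integer-frame delays of the patient-individual reference curve and keeping the shift with the smallest least-squares residual (with a free amplitude scale); the actual fitting model used in the paper is not reproduced here.

```python
import numpy as np

def bolus_arrival_shift(curve, reference, dt, max_shift_s=5.0):
    """Estimate the arrival-time offset of a voxel's intensity curve relative to a
    patient-individual reference curve (a simplified stand-in for the fitting step).

    curve, reference: 1-D arrays sampled every `dt` seconds.
    Returns the arrival-time offset in seconds.
    """
    max_shift = int(max_shift_s / dt)
    best_shift, best_res = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(reference, s)
        # Closed-form least-squares amplitude scale for this candidate delay.
        a = np.dot(curve, shifted) / (np.dot(shifted, shifted) + 1e-12)
        res = np.sum((curve - a * shifted) ** 2)
        if res < best_res:
            best_res, best_shift = res, s
    return best_shift * dt
```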
Visualization of risk structures for interactive planning of image guided radiofrequency ablation of liver tumors
Christian Rieder, Michael Schwier, Andreas Weihusen, et al.
Image-guided radiofrequency ablation (RFA) is becoming a standard procedure as a minimally invasive method for tumor treatment in clinical routine. The visualization of pathological tissue and potential risk structures like vessels or important organs gives essential support in image-guided pre-interventional RFA planning. In this work our aim is to present novel visualization techniques for interactive RFA planning that provide the physician with spatial information on pathological structures and support the finding of trajectories that do not harm vitally important tissue. Furthermore, we display three-dimensional applicator models of different manufacturers combined with the corresponding ablation areas in homogeneous tissue, as specified by the manufacturers, to support estimation of the amount of cell destruction caused by ablation. The visualization techniques are embedded in a workflow-oriented application designed for use in clinical routine. To allow high-quality volume rendering we integrated a visualization method using the fuzzy c-means algorithm. This method automatically defines a transfer function for volume visualization of vessels without the need for a segmentation mask. However, insufficient visualization results for the displayed vessels caused by low data quality can be improved using local vessel segmentation in the vicinity of the lesion. We also provide an interactive segmentation technique for liver tumors for volumetric measurement and for the visualization of pathological tissue combined with anatomical structures. In order to support coagulation estimation with respect to the heat-sink effect of the cooling blood flow, which decreases thermal ablation, a numerical simulation of the heat distribution is provided.
Poster Session: Registration
A contrast and registration template for magnetic resonance image data guided dental implant placement
Georg Eggers, Raluca Cosgarea, Marcus Rieker, et al.
An oral imaging template was developed to address the shortcomings of MR image data for image-guided dental implant planning and placement. The template was constructed as a gadolinium-filled plastic shell to give contrast to the dentition and also to be accurately re-attachable for use in image-guided dental implant placement. The result of segmentation and modeling of the dentition from MR image data with the template was compared to plaster casts of the dentition. In a phantom study, dental implant placement was performed based on MR image data. MR imaging with the contrast template allowed complete representation of the existing dentition. In the phantom study, a commercially available system for image-guided dental implant placement was used. Transformation of the imaging contrast template into a surgical drill guide based on the MR image data resulted in pilot burr hole placement with an accuracy of 2 mm. MRI-based imaging of the existing dentition for proper image-guided planning is possible with the proposed template. Using the image data and the template resulted in less accurate pilot burr hole placement in comparison to CT-based image-guided implant placement.
Feature-driven deformation for dense correspondence
Deboshmita Ghosh, Andrei Sharf, Nina Amenta
Establishing reliable correspondences between object surfaces is a fundamental operation, required in many contexts such as cleaning up and completing imperfect captured data, texture and deformation transfer, shape-space analysis and exploration, and the automatic generation of realistic distributions of objects. We present a method for matching a template to a collection of target meshes. Our method uses a very small number of user-placed landmarks, which we augment with automatically detected feature correspondences found using spin images. We deform the template onto the data using an ICP-like framework, smoothing the noisy correspondences at each step so as to produce an averaged motion. The deformation uses a differential representation of the mesh, with which the deformation can be computed at each iteration by solving a sparse linear system. We have applied our algorithm to a variety of data sets. Using only 11 landmarks between a template and one of the scans from the CAESAR data set, we are able to deform the template and correctly identify and transfer distinctive features which are not identified by user-supplied landmarks. We have also successfully established correspondences between several scans of monkey skulls, which have dangling triangles, non-manifold vertices, and self-intersections. Our algorithm does not require a clean target mesh, and can even generate correspondences without trimming extraneous pieces from the target mesh, such as scans of teeth.
Reduction of multi-fragment fractures of the distal radius using atlas-based 2D/3D registration
We describe a method to guide the surgical fixation of distal radius fractures. The method registers the fracture fragments to a volumetric intensity-based statistical anatomical atlas of the distal radius, reconstructed from human cadaver and patient data, using a few intra-operative X-ray fluoroscopy images of the fracture. No pre-operative Computed Tomography (CT) images are required, hence radiation exposure to patients is substantially reduced. Intra-operatively, each bone fragment is roughly segmented from the X-ray images by a surgeon, and a corresponding segmentation volume is created from the back-projections of the 2D segmentations. An optimization procedure positions each segmentation volume at the appropriate pose on the atlas, while simultaneously deforming the atlas such that the overlap of the 2D projection of the atlas with the individual fragments in the segmented regions is maximized. Our simulation results show that this method can accurately identify the pose of large fragments using only two X-ray views, but for small fragments more than two X-rays may be needed. The method does not assume any prior knowledge about the shape of the bone or the number of fragments, so it is also potentially suitable for the fixation of other types of multi-fragment fractures.
Surface-based determination of the pelvic coordinate system
Lorenz Fieten, Jörg Eschweiler, Stefan Heger, et al.
In total hip replacement (THR), one technical factor influencing the risk of dislocation is cup orientation. Computer-assisted surgery systems allow for cup navigation in anatomy-based reference frames. The pelvic coordinate system most commonly used for cup navigation in THR is based on the mid-sagittal plane (MSP) and the anterior pelvic plane (APP). From a geometrical point of view, the MSP can be considered a mirror plane, whereas the APP can be considered a tangent plane comprising the anterior superior iliac spines (ASIS) and the pubic tubercles. In most systems relying on the pelvic coordinate system, the most anterior points of the ASIS and the pubic tubercles are selected manually. As manual selection of landmark points is a tedious, time-consuming, and error-prone task, a surface-based approach for combined MSP and APP computation is presented in this paper: homologous points defining the MSP and the landmark points defining the APP are selected automatically from surface patches. It is investigated how MSP computation can benefit from APP computation and vice versa, and the clinical perspectives of combined MSP and APP computation are discussed. Experimental results on computed tomography data show that the surface-based approach can improve accuracy.
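For orientation, the following sketch builds a pelvic reference frame from manually picked landmark points (the two ASIS and the pubic tubercles), i.e., the step the paper's surface-based method is designed to replace. The axis-naming convention, coordinates, and helper name are illustrative assumptions, not the paper's definition.

```python
# Hypothetical construction of an APP-based pelvic frame from four landmarks.
import numpy as np

def pelvic_frame(asis_l, asis_r, tub_l, tub_r):
    asis_l, asis_r = np.asarray(asis_l, float), np.asarray(asis_r, float)
    pub_mid = (np.asarray(tub_l, float) + np.asarray(tub_r, float)) / 2.0
    x = asis_r - asis_l                                # left-right axis along the ASIS line
    x /= np.linalg.norm(x)
    n = np.cross(asis_r - asis_l, pub_mid - asis_l)    # APP normal (anterior-posterior)
    n /= np.linalg.norm(n)
    z = np.cross(x, n)                                 # third axis, lying in the APP
    origin = (asis_l + asis_r) / 2.0
    return origin, np.column_stack([x, n, z])          # 3x3 rotation: columns are the axes

# Illustrative landmark coordinates in millimetres.
origin, R = pelvic_frame([-120, 0, 0], [120, 0, 0], [-15, -5, -90], [15, -5, -90])
print(origin, R, sep="\n")
```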
Intraoperative localization of brachytherapy implants using intensity-based registration
In prostate brachytherapy, transrectal ultrasound (TRUS) shows the prostate boundary but not all the implanted seeds, while fluoroscopy shows all the seeds clearly but not the boundary. We propose an intensity-based registration between TRUS images and the implant reconstructed from fluoroscopy as a means of achieving accurate intra-operative dosimetry. The TRUS images are first filtered and compounded, and then registered to the fluoroscopy model via mutual information. A training phantom was implanted with 48 seeds and imaged. Various ultrasound filtering techniques were analyzed, and the best results were achieved with the Bayesian combination of adaptive thresholding, phase congruency, and compensation for the non-uniform ultrasound beam profile in the elevation and lateral directions. The average registration error between corresponding seeds relative to the ground truth was 0.78 mm. The effect of false positives and false negatives in ultrasound was investigated by masking true seeds in the fluoroscopy volume or adding false seeds. The registration error remained below 1.01 mm at a false positive rate of 31%, and below 0.96 mm at a false negative rate of 31%. This fully automated method delivers excellent registration accuracy and robustness in phantom studies, and promises to demonstrate clinically adequate performance on human data as well.
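A minimal sketch of the mutual-information similarity that drives such an intensity-based registration is given below; the ultrasound filtering and compounding pipeline and the pose optimizer are not shown, and the implementation details are assumptions rather than the authors' code.

```python
# Mutual information between two equally sized images, from their joint histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration loop would evaluate this for candidate TRUS-to-fluoroscopy poses
# and keep the pose that maximizes it.
a = np.random.rand(64, 64)
print(mutual_information(a, a), mutual_information(a, np.random.rand(64, 64)))
```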
A deformation model for non-rigid registration of the kidney
Rowena E. Ong, Courtenay L. Glisson, S. Duke Herrell, et al.
The development of an image-guided renal surgery system may aid tumor resection during partial nephrectomies. This system would require the registration of pre-operative kidney CT or MR scans to the physical kidney; however, the amount of non-rigid deformation occurring during surgery and whether it can be corrected for in an image-guided system is unknown. One possible source of non-rigid deformation is a change in pressure within the kidney: during surgery, clamping of the renal artery and vein results in a loss of perfusion, such that the subsequent cutting of the kidney and fluid outflow may cause a decrease in intrarenal pressure. In this work, we attempt to characterize the deformation due to cutting of the kidney and subsequent changes in intrarenal pressure. To accomplish this, we perfused a resected porcine kidney at a physiologically realistic pressure, clamped the renal vessels, and cut the kidney using a tracked scalpel. The resulting deformation was tracked in a CT scanner using 15-20 glass bead fiducials attached to the kidney surface. A modified form of Biot's consolidation model was used to simulate the deformation, and the accuracy was assessed by calculating the target registration error and image similarity.
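As a small illustration (not the authors' code), the snippet below computes the RMS target registration error between model-predicted and CT-tracked bead positions, the kind of accuracy measure described above; the data are synthetic.

```python
# RMS distance between corresponding model-predicted and tracked fiducial positions.
import numpy as np

def target_registration_error(predicted, measured):
    """RMS Euclidean distance between corresponding fiducial positions (n x 3 arrays)."""
    d = np.linalg.norm(np.asarray(predicted, float) - np.asarray(measured, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

predicted = np.random.rand(18, 3) * 10.0                           # stand-in model output (mm)
measured = predicted + np.random.normal(0, 0.5, predicted.shape)   # simulated tracked beads
print(target_registration_error(predicted, measured))
```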
Real-time estimation of FLE for point-based registration
In image-guided surgery, optimizing the accuracy of localizing the surgical tools within the virtual reality environment or 3D image is vitally important, and significant effort has been spent reducing the measurement error at the point of interest or target. This target registration error (TRE) is often defined by a root-mean-square statistic, which reduces the vector data to a single term that can be minimized. However, lost in the data reduction is the directionality of the error, which can be modelled using a 3D covariance matrix. Recently, we developed a set of expressions that model the TRE statistics for point-based registrations as a function of the fiducial marker geometry, the target location, and the fiducial localizer error (FLE). Unfortunately, these expressions are only as good as the definition of the FLE. To close this gap, we have subsequently developed a closed-form expression that estimates the FLE as a function of the estimated fiducial registration error (FRE, the error between the measured fiducials and the best-fit locations of those fiducials). The FRE covariance matrix is estimated using a sliding window technique and used as input to the closed-form expression to estimate the FLE. The estimated FLE can then be used to estimate the TRE, which can be given to the surgeon so that the procedure can be designed to minimize the errors associated with the point-based registration.
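The snippet below sketches only the sliding-window part of this idea, pooling recent FRE residual vectors and estimating their covariance; the paper's closed-form FRE-to-FLE and TRE expressions are not reproduced, and the window size and simulated residuals are illustrative assumptions.

```python
# Sliding-window estimate of the FRE covariance from recent tracking frames.
import numpy as np
from collections import deque

class SlidingFREEstimator:
    def __init__(self, window=100):
        self.buffer = deque(maxlen=window)     # keeps only the most recent residuals

    def update(self, fre_vectors):
        """fre_vectors: (n_fiducials, 3) residuals from the latest registration."""
        for v in np.asarray(fre_vectors, float):
            self.buffer.append(v)

    def fre_covariance(self):
        data = np.array(self.buffer)
        return np.cov(data, rowvar=False)      # 3x3 FRE covariance estimate

est = SlidingFREEstimator(window=50)
for _ in range(60):                            # simulated frames with 4 fiducials each
    est.update(np.random.normal(0, 0.2, (4, 3)))
print(est.fre_covariance())
```

In the paper's scheme, this covariance would then feed the closed-form expression to estimate the FLE, and in turn the TRE, in real time.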
Computer-aided method for automated selection of optimal imaging plane for measurement of total cerebral blood flow by MRI
A computer-aided method for finding an optimal imaging plane for simultaneous measurement of the arterial blood inflow through the 4 vessels leading blood to the brain by phase-contrast magnetic resonance imaging is presented. The method's performance is compared with manual selection by two observers. The skeletons of the 4 vessels, from which centerlines are generated, are first extracted. Then, a global direction of the relatively less curved internal carotid arteries is calculated to determine the main flow direction. This is then used as a reference direction to identify segments of the vertebral arteries that strongly deviate from the main flow direction. These segments are then used to identify anatomical landmarks for improved consistency of the imaging plane selection. An optimal imaging plane is then identified by finding the plane with the smallest error value, defined as the sum of the angles between the plane's normal and the vessel centerlines' directions at the locations of the intersections. Error values obtained using the automated and the manual methods were compared using 9 magnetic resonance angiography (MRA) data sets. The automated method considerably outperformed manual selection; the mean error value with the automated method was significantly lower than with the manual method, 0.09±0.07 vs. 0.53±0.45, respectively (p<.0001, Student's t-test). Reproducibility of repeated measurements was analyzed using Bland and Altman's test; the mean 95% limits of agreement for the automated and manual methods were 0.01 to 0.02 and 0.43 to 0.55, respectively.
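As a hedged illustration of the stated plane-quality measure, the snippet below sums the angles between a candidate plane's normal and unit centerline directions at the intersections; centerline extraction, landmark detection, and the search over candidate planes are not shown, and the sample directions are invented for the example.

```python
# Error value for a candidate imaging plane: sum of angles between the plane
# normal and each vessel centerline direction at its plane intersection.
import numpy as np

def plane_error(normal, centerline_dirs):
    """Sum of angles (radians) between the plane normal and unit tangent
    directions of the four vessels where they cross the plane."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    total = 0.0
    for d in centerline_dirs:
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        # Angle is 0 when the vessel runs along the normal, i.e. crosses the plane perpendicularly.
        total += np.arccos(np.clip(abs(n @ d), -1.0, 1.0))
    return total

dirs = [[0, 0.05, 1], [0, -0.03, 1], [0.1, 0, 1], [-0.08, 0, 1]]  # near-axial flow directions
print(plane_error([0, 0, 1], dirs))   # small error: plane nearly perpendicular to all four vessels
```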
Iterative solution for rigid-body point-based registration with anisotropic weighting
Rigid-body, point-based registration is commonly used for image-guided systems. Fiducial markers that can be localized in image and physical space are attached to patient anatomy. The fiducial marker locations in the two spaces are used to obtain the physical-to-image registration. It is a common practice to obtain physical positions via optical systems, whose localization error is anisotropic. Furthermore, the positions are often reckoned relative to a coordinate reference frame (CRF) that is rigidly attached to the patient. The use of a CRF enables patient movement relative to the tracking system, but it tends to exacerbate the anisotropy. It is common practice to ignore the localization anisotropy and employ a closed-form solution, which is available for isotropic weighting but not for anisotropic weighting. Iterative methods are available for anisotropic weighting but are quite complex. We present a new iterative algorithm for anisotropic weighting that is simple, intuitive, and has only one adjustable parameter. We show using simulations that our algorithm is more accurate than the isotropic solution for anisotropic localization error. In particular, we show that the new algorithm reduces target registration error when anisotropic localization error is present. When all the localization errors are isotropic, the new algorithm performs as well as the closed-form solution.
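For context, the sketch below gives the standard SVD-based closed-form solution for the isotropic case, i.e., the baseline the paper compares against; the paper's anisotropic iterative algorithm itself is not reproduced here.

```python
# Closed-form least-squares rigid registration for isotropic localization error.
import numpy as np

def rigid_register_isotropic(src, dst):
    """Return (R, t) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Quick check with a known rotation and translation.
src = np.random.rand(6, 3)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register_isotropic(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, 2.0, 3.0]))
```

An anisotropically weighted formulation replaces the plain cross-covariance with per-point weighting by the inverse localization covariance, which is what removes the closed-form solution and motivates iterative schemes like the one described above.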