Proceedings Volume 9415

Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling

Robert J. Webster III, Ziv R. Yaniv

Volume Details

Date Published: 1 May 2015
Contents: 13 Sessions, 91 Papers, 1 Presentation
Conference: SPIE Medical Imaging 2015
Volume Number: 9415

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9415
  • Cardiac Procedures
  • Endoscopy/Laparoscopy
  • Cranial Procedures
  • Treatment Planning and Robotic Systems
  • Registration
  • Ultrasound Image Guidance: Joint Session with Conferences 9415 and 9419
  • Tracking and Organ Motion Modeling
  • Segmentation
  • Intraoperative Imaging and Visualization
  • Keynote and 2D/3D Registration
  • Abdominal and Pelvic Procedures
  • Poster Session
Front Matter: Volume 9415
Front Matter: Volume 9415
This PDF file contains the front matter associated with SPIE Proceedings Volume 9415, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Cardiac Procedures
Integration of a biomechanical simulation for mitral valve reconstruction into a knowledge-based surgery assistance system
Nicolai Schoch, Sandy Engelhardt, Norbert Zimmermann, et al.
A mitral valve reconstruction (MVR) is a complex operation in which the functionality of incompetent mitral valves is re-established by applying surgical techniques. This work deals with predictive biomechanical simulations of operation scenarios for an MVR, and the simulation's integration into a knowledge-based surgery assistance system. We present a framework for the definition of the corresponding surgical workflow, which combines semantically enriched surgical expert knowledge with a biomechanical simulation. Using an ontology, 'surgical rules' which describe decision and assessment criteria for surgical decision-making are represented in a knowledge base. Through reasoning, these 'rules' can then be applied to patient-specific data in order to be converted into boundary conditions for the biomechanical soft tissue simulation, which is based on the Finite Element Method (FEM). The simulation, which is implemented in the open-source C++ FEM software HiFlow3, is controlled via the Medical Simulation Markup Language (MSML), and makes use of High Performance Computing (HPC) methods to cope with real-time requirements in surgery. The simulation results are presented to surgeons to assess the quality of the virtual reconstruction and the consequential remedial effects on the mitral valve and its functionality. The whole setup has the potential to support the intraoperative decision-making process during MVR, where the surgeon usually has to make fundamental decisions under time pressure.
Dynamic heart phantom with functional mitral and aortic valves
Claire Vannelli, John Moore, Jonathan McLeod, et al.
Cardiac valvular stenosis, prolapse and regurgitation are increasingly common conditions, particularly in an elderly population with limited potential for on-pump cardiac surgery. NeoChord©, MitraClip© and numerous stent-based transcatheter aortic valve implantation (TAVI) devices provide an alternative to intrusive cardiac operations; performed while the heart is beating, these procedures require surgeons and cardiologists to learn new image-guidance based techniques. Developing these visual aids and protocols is a challenging task that benefits from sophisticated simulators. Existing models lack features needed to simulate off-pump valvular procedures: functional, dynamic valves, apical and vascular access, and user flexibility for different activation patterns such as variable heart rates and rapid pacing. We present a left ventricle phantom with these characteristics. The phantom can be used to simulate valvular repair and replacement procedures with magnetic tracking, augmented reality, fluoroscopy and ultrasound guidance. This tool serves as a platform to develop image-guidance and image processing techniques required for a range of minimally invasive cardiac interventions. The phantom mimics in vivo mitral and aortic valve motion, permitting realistic ultrasound images of these components to be acquired. It also has a physiologically realistic left ventricular ejection fraction of 50%. Given its realistic imaging properties and non-biodegradable composition—silicone for tissue, water for blood—the system promises to reduce the number of animal trials required to develop image guidance applications for valvular repair and replacement. The phantom has been used in validation studies for both TAVI image-guidance techniques [1] and image-based mitral valve tracking algorithms [2].
Beating heart mitral valve repair with integrated ultrasound imaging
A. Jonathan McLeod, John T. Moore, Terry M. Peters
Beating heart valve therapies rely extensively on image guidance to treat patients who would be considered inoperable with conventional surgery. Mitral valve repair techniques including the MitraClip, NeoChord, and emerging transcatheter mitral valve replacement techniques rely on transesophageal echocardiography for guidance. These images are often difficult to interpret, as the tool will cause shadowing artifacts that occlude tissue near the target site. Here, we integrate ultrasound imaging directly into the NeoChord device. This provides an unobstructed imaging plane that can visualize the valve leaflets as they are engaged by the device and can aid in achieving both a proper bite and spacing between the neochordae implants. A user study in a phantom environment provides a proof of concept for this device.
Endocardial left ventricle feature tracking and reconstruction from tri-plane trans-esophageal echocardiography data
Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging, enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases clinicians do not access the entire 3D image volume when analyzing the data; rather, they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their names state, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 60° to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify the location of surgical targets for accurate tool-to-target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined workflow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features. We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed workflow against gold-standard results from the GE EchoPAC PC clinical software according to quantitative clinical LV characterization parameters, such as length, circumference, area and volume. Our proposed combined workflow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and "on-the-fly" computer-assisted assessment of ejection fraction for cardiac function monitoring.
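The contour-smoothing step in such a tracking pipeline can be illustrated with a small sketch. This is not the authors' robust spline smoother; it is a minimal Fourier low-pass stand-in that smooths a closed, ordered 2D boundary contour, and all data below are synthetic:

```python
import numpy as np

def smooth_closed_contour(points, n_harmonics=6):
    """Low-pass smooth an ordered, closed 2D contour by keeping only the
    lowest Fourier harmonics of its coordinate sequence."""
    z = points[:, 0] + 1j * points[:, 1]      # complex representation
    Z = np.fft.fft(z)
    keep = np.zeros_like(Z)
    keep[:n_harmonics + 1] = Z[:n_harmonics + 1]   # DC + positive harmonics
    keep[-n_harmonics:] = Z[-n_harmonics:]         # negative harmonics
    zs = np.fft.ifft(keep)
    return np.column_stack([zs.real, zs.imag])

# Noisy samples of a circular stand-in for an endocardial boundary
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
noisy = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.05, (120, 2))
smoothed = smooth_closed_contour(noisy)
```

Because the true boundary here is a single harmonic, the low-pass contour deviates far less from the unit circle than the raw noisy samples do.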
The effect of elastic modulus on ablation catheter contact area
Cardiac ablation consists of navigating a catheter into the heart and delivering RF energy to electrically isolate tissue regions that generate or propagate arrhythmia. Besides the challenges of accurate and precise targeting of the arrhythmic sites within the beating heart, limited information is currently available to the cardiologist regarding intricate electrode-tissue contact, which directly impacts the quality of produced lesions. Recent advances in ablation catheter design provide intra-procedural estimates of tissue-catheter contact force, but the most direct indicator of lesion quality for any particular energy level and duration is the tissue-catheter contact area, and that is a function of not only force, but catheter pose and material elasticity as well.

In this experiment, we have employed real-time ultrasound (US) imaging to determine the complete interaction between the ablation electrode and tissue to accurately estimate contact, which will help to better understand the effect of catheter pose and position relative to the tissue. By simultaneously recording tracked position, force reading and US image of the ablation catheter, the differing material properties of polyvinyl alcohol cryogel[1] phantoms are shown to produce varying amounts of tissue depression and contact area (implying varying lesion quality) for equivalent force readings. We have shown that the elastic modulus significantly affects the surface-contact area between the catheter and tissue at any level of contact force. Thus we provide evidence that a prescribed level of catheter force may not always provide sufficient contact area to produce an effective ablation lesion in the prescribed ablation time.
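The dependence of contact area on elastic modulus at fixed force can be illustrated with the classical Hertz model for a rigid spherical tip pressed into an elastic half-space. This is an idealized textbook model, not the paper's ultrasound-based measurement, and the tip radius, force, and moduli below are illustrative values only:

```python
import numpy as np

def hertz_contact_area(force, tip_radius, E_tissue, nu=0.5):
    """Contact area (m^2) for a rigid spherical tip of radius `tip_radius`
    (m) pressed with `force` (N) into tissue of Young's modulus `E_tissue`
    (Pa), using the classical Hertz model: a = (3FR / 4E*)^(1/3)."""
    E_star = E_tissue / (1.0 - nu ** 2)      # effective modulus, rigid tip
    a = (3.0 * force * tip_radius / (4.0 * E_star)) ** (1.0 / 3.0)
    return np.pi * a ** 2

# Same 0.2 N contact force and tip geometry, soft vs. stiff phantom
soft = hertz_contact_area(0.2, 1.15e-3, 25e3)     # ~25 kPa phantom
stiff = hertz_contact_area(0.2, 1.15e-3, 100e3)   # ~100 kPa phantom
```

Since contact area scales as E*^(-2/3), a fourfold change in modulus changes the contact area by a factor of about 2.5 at identical force, consistent with the abstract's claim that force alone does not determine contact area.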
Endoscopy/Laparoscopy
Multimodal system for the planning and guidance of bronchoscopy
William E. Higgins, Ronnarit Cheirsilp, Xiaonan Zang, et al.
Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging workflow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system’s potential.
Accuracy validation of an image guided laparoscopy system for liver resection
Stephen Thompson, Johannes Totz, Yi Song, et al.
We present an analysis of the registration component of a proposed image guidance system for image guided liver surgery, using contrast enhanced CT. The analysis is performed on a visually realistic liver phantom and in-vivo porcine data. A robust registration process that can be deployed clinically is a key component of any image guided surgery system. It is also essential that the accuracy of the registration can be quantified and communicated to the surgeon. We summarise the proposed guidance system and discuss its clinical feasibility. The registration combines an intuitive manual alignment stage, surface reconstruction from a tracked stereo laparoscope and a rigid iterative closest point registration to register the intra-operative liver surface to the liver surface derived from CT. Testing of the system on a liver phantom shows that subsurface landmarks can be localised to an accuracy of 2.9 mm RMS. Testing during five porcine liver surgeries demonstrated that registration can be performed during surgery, with an error of less than 10 mm RMS for multiple surface landmarks.
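The rigid iterative closest point (ICP) stage of such a registration pipeline can be sketched in a few lines. This is a generic, minimal ICP — brute-force nearest neighbours plus a Kabsch least-squares fit — not the authors' implementation, and the demo point set is synthetic:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/Procrustes, with reflection correction)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(source, target, n_iter=30):
    """Iterate closest-point matching and rigid least-squares alignment."""
    src = source.copy()
    for _ in range(n_iter):
        # brute-force nearest neighbours (adequate for small point sets)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(src, target[d2.argmin(1)])
        src = src @ R.T + t
    return src

# Demo: recover a small rigid perturbation of a random surface point set
rng = np.random.default_rng(42)
pts = rng.random((100, 3))
c, s = np.cos(0.03), np.sin(0.03)
R0 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t0 = np.array([0.02, -0.01, 0.015])
target = pts @ R0.T + t0
registered = icp(pts, target)
```

As in the paper's pipeline, ICP only refines a coarse alignment: it assumes the manual alignment stage has already brought the surfaces close enough that nearest-neighbour matches are mostly correct.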
Rendering-based video-CT registration with physical constraints for image-guided endoscopic sinus surgery
We present a system for registering the coordinate frame of an endoscope to pre- or intra-operatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes account of physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.
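The core idea — scoring many candidate poses by the similarity between the observed image and a rendering — can be sketched with a toy example. The random-search optimizer, normalized cross-correlation metric, Gaussian-blob "renderer", image size and sample count below are all invented for illustration and are not the paper's method:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return (a * b).mean()

def render(pose, grid):
    """Stand-in for a CT renderer: a bright blob centred at the 'pose'."""
    yy, xx = grid
    return np.exp(-((xx - pose[0]) ** 2 + (yy - pose[1]) ** 2) / 50.0)

def random_search(observed, grid, n_samples=2000, seed=0):
    """Score the similarity metric at many random candidate poses and keep
    the best one -- a serial toy version of a parallel stochastic search."""
    rng = np.random.default_rng(seed)
    poses = rng.uniform(0.0, 64.0, size=(n_samples, 2))
    scores = [ncc(observed, render(p, grid)) for p in poses]
    return poses[int(np.argmax(scores))]

grid = np.mgrid[0:64, 0:64]
observed = render((40.0, 22.0), grid)   # "endoscopic image" at a known pose
est = random_search(observed, grid)
```

In the paper the candidate evaluations run in parallel on a GPU and the search is initialized and constrained by collision checks; the toy above only shows why evaluating many independent candidates makes the metric optimization embarrassingly parallel.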
Image-based tracking of the suturing needle during laparoscopic interventions
S. Speidel, A. Kroehnert, S. Bodenstedt, et al.
One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite to assist the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. It is evaluated with in vivo images from da Vinci interventions.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
Yuanzheng Gong, Danying Hu, Blake Hannaford, et al.
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degrees of orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates feasibility of micro-camera 3D guidance of a robotic surgical tool.
Electrical impedance map (EIM) for margin assessment during robot-assisted laparoscopic prostatectomy (RALP) using a microendoscopic probe
Aditya Mahara, Shadab Khan, Alan R. Schned, et al.
Positive surgical margins (PSMs) found following prostate cancer surgery are a significant risk factor for post-operative disease recurrence. Noxious adjuvant radiation and chemical-based therapies are typically offered to men with PSMs. Unfortunately, no real-time intraoperative technology is currently available to guide surgeons to regions of suspicion during the initial prostatectomy where immediate surgical excisions could be used to reduce the chance of PSMs. A microendoscopic electrical impedance sensing probe was developed with the intention of providing real-time feedback regarding margin status to surgeons during robot-assisted laparoscopic prostatectomy (RALP) procedures. A radially configured 17-electrode microendoscopic probe was designed, constructed, and initially evaluated through use of gelatin-based phantoms and an ex vivo human prostate specimen. Impedance measurements are recorded at 10 frequencies (10 kHz - 100 kHz) using a high-speed FPGA-based electrical impedance tomography (EIT) system. Tetrapolar impedances are recorded from a number of different electrode configurations strategically chosen to sense tissue in a pre-defined sector underlying the probe face. A circular electrical impedance map (EIM) with several color-coded pie-shaped sectors is created to represent the impedance values of the probed tissue. Gelatin phantom experiments show an obvious distinction in the impedance maps between high and low impedance regions. Similarly, the EIM generated from the ex vivo prostate case shows distinguishing features between cancerous and benign regions. Based on successful development of this probe and these promising initial results, EIMs of additional prostate specimens are being collected to further evaluate this approach for intraoperative surgical margin assessment during RALP procedures.
Cranial Procedures
Thalamic nuclei segmentation in clinical 3T T1-weighted images using high-resolution 7T shape models
Yuan Liu, Pierre-François D'Haese, Allen T. Newton, et al.
Accurate and reliable identification of thalamic nuclei is important for surgical interventions and neuroanatomical studies. This is a challenging task due to their small sizes and low intra-thalamic contrast in standard T1-weighted or T2-weighted images. Previously proposed techniques rely on diffusion imaging or functional imaging. These require additional scanning and suffer from the low resolution and signal-to-noise ratio in these images. In this paper, we aim to directly segment the thalamic nuclei in standard 3T T1-weighted images using shape models. We manually delineate the structures in high-field MR images and build high resolution shape models from a group of subjects. We then investigate if the nuclei locations can be inferred from the whole thalamus. To do this, we hierarchically fit joint models. We start from the entire thalamus and fit a model that captures the relation between the thalamus and large nuclei groups. This allows us to infer the boundaries of these nuclei groups and we repeat the process until all nuclei are segmented. We validate our method in a leave-one-out fashion with seven subjects by comparing the shape-based segmentations on 3T images to the manual contours. Results we have obtained for major nuclei (Dice coefficients ranging from 0.57 to 0.88 and mean surface errors from 0.29 mm to 0.72 mm) suggest the feasibility of using such joint shape models for localization. This may have a direct impact on surgeries such as Deep Brain Stimulation procedures that require the implantation of stimulating electrodes in specific thalamic nuclei.
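The idea of inferring an interior structure's location from the shape of the whole structure can be sketched with a toy linear version. The data below are synthetic, and the held-out least-squares regression is only a stand-in for the paper's hierarchical joint shape models:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training set: per subject, flattened whole-structure landmark
# coordinates (4 landmarks x 3D) and the centroid of an interior
# substructure that depends (noisily) linearly on them -- a stand-in for
# the thalamus -> nucleus relation captured by a joint shape model.
n_subj = 20
thalamus = rng.normal(0.0, 5.0, (n_subj, 12))
true_W = rng.normal(0.0, 0.1, (12, 3))
nucleus = thalamus @ true_W + rng.normal(0.0, 0.05, (n_subj, 3))

# Fit the joint model on all but one subject (least squares), then infer
# the held-out subject's nucleus centroid from its thalamus shape alone --
# mirroring the paper's leave-one-out validation.
W, *_ = np.linalg.lstsq(thalamus[:-1], nucleus[:-1], rcond=None)
pred = thalamus[-1] @ W
err = np.linalg.norm(pred - nucleus[-1])
```

The point is only that, once a joint model relating the visible whole to the hidden part is learned from training subjects, the hidden part can be localized in a new subject without seeing it directly.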
Three-dimensional curvilinear device reconstruction from two fluoroscopic views
Charlotte Delmas, Marie-Odile Berger, Erwan Kerrien, et al.
In interventional radiology, navigating devices under the sole guidance of fluoroscopic images inside a complex architecture of tortuous and narrow vessels like the cerebral vascular tree is a difficult task. Visualizing the device in 3D could facilitate this navigation. For curvilinear devices such as guide-wires and catheters, a 3D reconstruction may be achieved using two simultaneous fluoroscopic views, as available on a biplane acquisition system. The purpose of this paper is to present a new automatic three-dimensional curve reconstruction method that has the potential to reconstruct complex 3D curves and does not require a perfect segmentation of the endovascular device. Using epipolar geometry, our algorithm translates the point correspondence problem into a segment correspondence problem. Candidate 3D curves can be formed and evaluated independently after identifying all possible combinations of compatible 3D segments. Correspondence is then inherently solved by looking in 3D space for the most coherent curve in terms of continuity and curvature. This problem can be cast into a graph problem where the most coherent curve corresponds to the shortest path of a weighted graph. We present quantitative results of curve reconstructions performed from numerically simulated projections of tortuous 3D curves extracted from cerebral vascular trees affected with brain arteriovenous malformations, as well as fluoroscopic image pairs of a guide-wire from both phantom and clinical sets. Our method was able to select the correct 3D segments in 97.5% of simulated cases, demonstrating its ability to handle complex 3D curves and to cope with imperfect 2D segmentation.
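Casting the most-coherent-curve search as a shortest path over a weighted graph can be sketched as follows. The graph below is a toy example whose nodes stand for candidate 3D segments and whose edge weights stand for continuity/curvature penalties; the node names and weight values are invented, and the Dijkstra routine is generic rather than the authors' implementation:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a weighted digraph {node: [(neighbour, cost), ...]}."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Nodes stand for candidate 3D segments; edge weights mix the endpoint gap
# and a bending penalty, so the cheapest path is the most coherent curve.
graph = {
    "start": [("a1", 0.2), ("a2", 1.5)],
    "a1": [("b1", 0.3), ("b2", 2.0)],
    "a2": [("b1", 0.4)],
    "b1": [("end", 0.1)],
    "b2": [("end", 0.1)],
}
path, cost = shortest_path(graph, "start", "end")
```

With weights of this form, selecting the cheapest path simultaneously resolves the segment correspondence and picks the smoothest curve, which is the appeal of the graph formulation.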
Localizing and tracking electrodes using stereovision in epilepsy cases
Xiaoyao Fan, Songbai Ji, David W. Roberts, et al.
In epilepsy cases, subdural electrodes are often implanted to acquire intracranial EEG (iEEG) for seizure localization and resection planning. However, the electrodes may shift significantly between implantation and resection, during the time that the patient is monitored for iEEG recording. As a result, the accuracy of surgical planning based on electrode locations at the time of resection can be compromised. Previous studies have only quantified the electrode shift with respect to the skull, but not with respect to the cortical surface, because tracking cortical shift between surgeries is challenging. In this study, we use an intraoperative stereovision (iSV) system to visualize and localize the cortical surface as well as electrodes, record three-dimensional (3D) locations of the electrodes in MR space at the time of implantation and resection, respectively, and quantify the raw displacements, i.e., with respect to the skull. Furthermore, we track the cortical surface and quantify the shift between surgeries using an optical flow (OF) based motion-tracking algorithm. Finally, we compute the electrode shift with respect to the cortical surface by subtracting the cortical shift from raw measured displacements. We illustrate the method using one patient example. In this particular patient case, the results show that the electrodes not only shifted significantly with respect to the skull (8.79 ± 3.00 mm in the lateral direction, ranging from 2.88 mm to 12.87 mm), but also with respect to the cortical surface (7.20 ± 3.58 mm), whereas the cortical surface did not shift significantly in the lateral direction between surgeries (2.23 ± 0.76 mm).
Real-time surgery simulation of intracranial aneurysm clipping with patient-specific geometries and haptic feedback
Wolfgang Fenz, Johannes Dirnberger
Providing suitable training for aspiring neurosurgeons is becoming more and more problematic. The increasing popularity of the endovascular treatment of intracranial aneurysms leads to a lack of simple surgical situations for clipping operations, leaving mainly the complex cases, which present even experienced surgeons with a challenge. To alleviate this situation, we have developed a training simulator with haptic interaction allowing trainees to practice virtual clipping surgeries on real patient-specific vessel geometries. By using specialized finite element (FEM) algorithms (fast finite element method, matrix condensation) combined with GPU acceleration, we can achieve the necessary frame rate for smooth real-time interaction with the detailed models needed for a realistic simulation of the vessel wall deformation caused by the clamping with surgical clips. Vessel wall geometries for typical training scenarios were obtained from 3D-reconstructed medical image data, while for the instruments (clipping forceps, various types of clips, suction tubes) we use models provided by manufacturer Aesculap AG. Collisions between vessel and instruments have to be continuously detected and transformed into corresponding boundary conditions and feedback forces, calculated using a contact plane method. After a training session, the achieved result can be assessed based on various criteria, including a simulation of the residual blood flow into the aneurysm. Rigid models of the surgical access and surrounding brain tissue, plus coupling a real forceps to the haptic input device, further increase the realism of the simulation.
Application and histology-driven refinement of active contour models to functional region and nerve delineation: towards a digital brainstem atlas
This paper presents a methodology for the digital formatting of a printed atlas of the brainstem and the delineation of cranial nerves from this digital atlas. It also describes on-going work on the 3D resampling and refinement of the 2D functional regions and nerve contours. In MRI-based anatomical modeling for neurosurgery planning and simulation, the complexity of the functional anatomy entails a digital atlas approach, rather than less descriptive voxel or surface-based approaches. However, there is an insufficiency of descriptive digital atlases, in particular of the brainstem. Our approach proceeds from a series of numbered, contour-based sketches coinciding with slices of the brainstem featuring both closed and open contours. The closed contours coincide with functionally relevant regions, whereby our objective is to fill in each corresponding label, which is analogous to painting numbered regions in a paint-by-numbers kit. Any open contour typically coincides with a cranial nerve. This 2D phase is needed in order to produce densely labeled regions that can be stacked to produce 3D regions, as well as identifying the embedded paths and outer attachment points of cranial nerves. Cranial nerves are modeled using an explicit contour-based technique called 1-Simplex. The relevance of cranial nerve modeling to this project is two-fold: i) this atlas will fill a void left by the brain segmentation communities, as no suitable digital atlas of the brainstem exists, and ii) this atlas is necessary to make explicit the attachment points of major nerves (except I and II) having a cranial origin.

Keywords: digital atlas, contour models, surface models
Treatment Planning and Robotic Systems
Comparison of tablet-based strategies for incision planning in laser microsurgery
Andreas Schoob, Stefan Lekon, Dennis Kundrat, et al.
Recent research has revealed that incision planning in laser surgery deploying stylus and tablet outperforms state-of-the-art micro-manipulator-based laser control. Providing more detailed quantitation regarding that approach, a comparative study of six tablet-based strategies for laser path planning is presented. The reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts deploying stereoscopic or a synthesized laser view, point-based path definition, real-time teleoperation or a pen display are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time and ease of use. Results demonstrate that significant differences exist between the proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenario. Real-time teleoperation performs best with respect to completion time without indicating any significant deviation in accuracy and usability. Point-based planning as well as the pen display provide the most accurate planning and increased ease of use compared to the reference strategy. As a result, combining the pen display approach with point-based planning has the potential to become a powerful strategy because it benefits from improved hand-eye coordination on the one hand and from a simple but accurate technique for path definition on the other. These findings, as well as the overall usability scale indicating high acceptance and consistency of the proposed strategies, motivate further advanced tablet-based planning in laser microsurgery.
Automatic electrode configuration selection for image-guided cochlear implant programming
Cochlear implants (CIs) are neural prosthetics that stimulate the auditory nerve pathways within the cochlea using an implanted electrode array to restore hearing. After implantation, the CI is programmed by an audiologist, who determines which electrodes are active, i.e., the electrode configuration, and selects other stimulation settings. Recent clinical studies by our group have shown that hearing outcomes can be significantly improved by using an image-guided electrode configuration selection technique we have designed. Our goal in this work is to automate the electrode configuration selection step, with the long-term goal of developing a fully automatic system that can be translated to the clinic. Until now, the electrode configuration selection step has been performed by an expert with the assistance of image analysis-based estimates of the electrode-neural interface. To automatically determine the electrode configuration, we have designed an optimization approach and propose the use of a cost function with feature terms designed to interpret the image analysis data in a similar fashion as the expert. Further, we have designed an approach to select parameters in the cost function using our database of existing electrode configuration plans as training data. Our results show that the automatic approach yields electrode configurations that are as good as or better than manually selected configurations in over 80% of the cases tested. This method represents a crucial step towards clinical translation of our image-guided cochlear implant programming system.
Nonholonomic catheter path reconstruction using electromagnetic tracking
Elodie Lugez, Hossein Sadjadi, Selim G. Akl, et al.
Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge to accurate path reconstruction. We address this challenge by means of a filtering technique that combines the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using Ascension’s 3D Guidance trakStar electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground-truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements and 3.3 mm with the manufacturer’s filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach improved the path reconstruction accuracy by exploiting the sensor’s nonholonomic motion constraints in its formulation. Our approach seems promising for a variety of clinical procedures involving reconstruction of a catheter path.
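The filtering idea can be sketched in a simplified 2D form (a minimal sketch, not the paper's 3D formulation): a sensor advancing through a catheter cannot translate sideways, so its state, position plus heading, evolves only along the heading, and noisy electromagnetic position readings correct the dead-reckoned prediction. The motion model, noise levels, and the straight test path below are our assumptions.

```python
import numpy as np

def ekf_step(x, P, z, ds, Q, R):
    """One EKF predict/update. State x = [px, py, heading]; the sensor can
    only advance a distance ds along its heading (nonholonomic constraint).
    z is a noisy electromagnetic position measurement [px, py]."""
    px, py, th = x
    x_pred = np.array([px + ds * np.cos(th), py + ds * np.sin(th), th])
    F = np.array([[1.0, 0.0, -ds * np.sin(th)],
                  [0.0, 1.0,  ds * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Synthetic straight insertion: 50 steps of 1 mm along x, noisy readings.
rng = np.random.default_rng(0)
truth = np.column_stack([np.arange(1, 51, dtype=float), np.zeros(50)])
meas = truth + rng.normal(0.0, 0.7, truth.shape)
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-4, np.eye(2) * 0.49
est = []
for z in meas:
    x, P = ekf_step(x, P, z, ds=1.0, Q=Q, R=R)
    est.append(x[:2])
est = np.array(est)
rms_raw = float(np.sqrt(np.mean(np.sum((meas - truth) ** 2, axis=1))))
rms_ekf = float(np.sqrt(np.mean(np.sum((est - truth) ** 2, axis=1))))
```

On this synthetic insertion the filtered RMS error falls well below that of the raw measurements, mirroring the kind of improvement reported above.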
Methods for intraoperative, sterile pose-setting of patient-specific microstereotactic frames
Benjamin Vollmann, Samuel Müller, Dennis Kundrat, et al.
This work proposes new methods for a microstereotactic frame based on bone cement fixation. Microstereotactic frames are under investigation for minimally invasive temporal bone surgery, e.g., cochlear implantation, and for deep brain stimulation, where products are already on the market. The correct pose of the microstereotactic frame is adjusted either outside or inside the operating room, and the frame is then used, e.g., for drill or electrode guidance. We present a patient-specific, disposable frame that allows intraoperative, sterile pose-setting. The key idea of our approach is bone cement between two plates that cures while the plates are held in the desired pose by a mechatronic positioning system. This paper includes new designs of microstereotactic frames, a system for alignment, and first measurements to analyze accuracy and applicable load.
Robot-assisted, ultrasound-guided minimally invasive navigation tool for brachytherapy and ablation therapy: initial assessment
Srikanth Bhattad, Abelardo Escoto, Richard Malthaner, et al.
Brachytherapy and thermal ablation are relatively new approaches in robot-assisted minimally invasive interventions for treating malignant tumors. Ultrasound remains the most favored choice for imaging feedback, its benefits being cost-effectiveness, freedom from ionizing radiation, and easy access in an OR. However, it does not generally provide high-contrast, noise-free images. Distortion occurs when the sound waves pass through a medium that contains air and/or when the target organ is deep within the body. The distorted images make it quite difficult to recognize and localize tumors and surgical tools. Tools such as bevel-tipped needles often deflect from their path during insertion, making it difficult to detect the needle tip using a single perspective view. Shifting of the target due to cardiac and/or respiratory motion can add further errors in reaching the target. This paper describes a comprehensive system that uses robot dexterity to capture 2D ultrasound images in various pre-determined modes for generating 3D ultrasound images and assists in maneuvering a surgical tool. An interactive 3D virtual reality environment is developed that visualizes various artifacts present in the surgical site in real time. The system helps to avoid image distortion by grabbing images from multiple positions and orientations to provide a 3D view. Using the methods developed for this application, an accuracy of 1.3 mm was achieved in target attainment in an in-vivo experiment subject to tissue motion. Accuracies of 1.36 mm and 0.93 mm were achieved for the ex-vivo experiments with and without externally induced motion, respectively. An ablation monitor widget that visualizes the changes during the complete ablation process and enables evaluation of the process in its entirety is integrated.
Generating patient-specific pulmonary vascular models for surgical planning
Daniel Murff, Jennifer Co-Vu, Walter G. O'Dell
Each year in the U.S., 7.4 million surgical procedures involving the major vessels are performed. Many of these patients require multiple surgeries, and many of the procedures include “surgical exploration”. Procedures of this kind carry a significant amount of risk, with up to a 17.4% predicted mortality rate. This is especially concerning for our target population of pediatric patients with congenital abnormalities of the heart and major pulmonary vessels. This paper offers a novel approach to surgical planning that includes studying virtual and physical models of an individual patient's pulmonary vasculature, obtained from conventional 3D X-ray computed tomography (CT) scans of the chest, before the operation. These models would provide clinicians with a non-invasive, intricately detailed representation of patient anatomy, and could reduce the need for invasive planning procedures such as exploratory surgery. Researchers involved in the AirPROM project have already demonstrated the utility of virtual and physical models in treatment planning for the airways of the chest, and clinicians have acknowledged the potential benefit of such a technology. A method for creating patient-derived physical models is demonstrated on pulmonary vasculature extracted from a contrast-enhanced CT scan of an adult human. Using a modified version of the NIH ImageJ program, a series of image processing functions are used to extract and mathematically reconstruct the vascular tree structures of interest. An auto-generated STL file is sent to a 3D printer to create a physical model of the major pulmonary vasculature generated from the 3D CT scans.
Registration
Scaphoid fracture fixation: localization of bones through statistical model to ultrasound registration
Emran Mohammad Abu Anas, Abtin Rasoulian, Paul St. John, et al.
Percutaneous treatment of scaphoid fractures has attracted increasing interest in recent years, as it promises to minimize soft-tissue damage and to reduce the risk of infection and loss of joint stability. However, as this procedure is mostly performed under 2D fluoroscopic guidance, accurate localization of the scaphoid bone for fracture fixation is extremely challenging. In this work, we therefore propose the integration of a statistical wrist model with 3D intraoperative ultrasound for accurate localization of the scaphoid bone. We utilize a previously developed statistical wrist model and register it to bone surfaces in ultrasound images using a probabilistic approach that involves expectation-maximization. We utilize local phase symmetry to detect features in noisy ultrasound images; in addition, we use shadow information to enhance bone features and set them apart from others. Feasibility experiments were performed by registering the wrist model to 3D ultrasound volumes of two different wrists at two different wrist positions. The results indicate the potential of the proposed technique for localization of the scaphoid bone in ultrasound images.
Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model
Y. Otake, R. J. Murphy, R. B. Grupp, et al.
A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient’s intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross-correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross-validation studies. Both studies suggested that mutual information with CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, with 26,102 function evaluations in 180 seconds on average.
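As a sketch of the similarity-metric side of this comparison, the snippet below computes histogram-based mutual information between two intensity volumes, the quantity the optimizer maximizes; the bin count and synthetic volumes are arbitrary choices, and CMA-ES itself is not reproduced here.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two intensity volumes."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal over a's bins
    py = pxy.sum(axis=0, keepdims=True)        # marginal over b's bins
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI peaks when a volume is compared with itself and collapses for a
# shuffled (misaligned) copy -- the signal the optimizer climbs.
rng = np.random.default_rng(1)
vol = rng.normal(size=(20, 20, 20))
shuffled = rng.permutation(vol.ravel()).reshape(vol.shape)
mi_self = mutual_information(vol, vol)
mi_mis = mutual_information(vol, shuffled)
```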
Incorporating target registration error into robotic bone milling
Michael A. Siebold, Neal P. Dillon, Robert J. Webster III, et al.
Robots have been shown to be useful in assisting surgeons in a variety of bone drilling and milling procedures. Examples include commercial systems for joint repair or replacement surgeries, with in vitro feasibility recently shown for mastoidectomy. Typically, the robot is guided along a path planned on a CT image that has been registered to the physical anatomy in the operating room, which is in turn registered to the robot. The registrations often take advantage of the high accuracy of fiducial registration, but, because no real-world registration is perfect, the drill guided by the robot will inevitably deviate from its planned path. The extent of the deviation can vary from point to point along the path because of the spatial variation of target registration error. The allowable deviation can also vary spatially based on the necessary safety margin between the drill tip and various nearby anatomical structures along the path. Knowledge of the expected spatial distribution of registration error can be obtained from theoretical models or experimental measurements and used to modify the planned path. The objective of such modifications is to achieve desired probabilities for sparing specified structures. This approach has previously been studied for drilling straight holes but has not yet been generalized to milling procedures, such as mastoidectomy, in which cavities of more general shapes must be created. In this work, we present a general method for altering any path to achieve specified probabilities for any spatial arrangement of structures to be protected. We validate the method via numerical simulations in the context of mastoidectomy.
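A one-dimensional toy version of the path-modification idea, assuming (our assumption, not the authors' model) a Gaussian registration-error distribution with a known per-point standard deviation: each path point is pulled away from a critical structure until its sparing probability reaches a target.

```python
import numpy as np
from math import erf

def sparing_prob(clearance, sigma):
    """P(Gaussian error toward the structure stays below the clearance)."""
    return 0.5 * (1.0 + erf(clearance / (sigma * np.sqrt(2.0))))

def required_clearance(sigma, z=1.96):
    """Clearance giving roughly 97.5% one-sided sparing probability."""
    return z * sigma

# Drill path running parallel to a critical-structure plane at y = -0.5,
# with a predicted registration error (TRE surrogate) at each point.
path = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
sigma = np.array([0.2, 0.3, 0.4, 0.5])          # mm, grows along the path
structure_y = -0.5
clearance = path[:, 1] - structure_y
shift = np.maximum(required_clearance(sigma) - clearance, 0.0)
safe_path = path + np.column_stack([np.zeros(len(path)), shift])
probs = [sparing_prob(c, s)
         for c, s in zip(safe_path[:, 1] - structure_y, sigma)]
```

Points with low predicted error keep the planned path, while points with high predicted error are retracted, which is the spatially varying margin the paper generalizes to milled cavities.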
Adaptive deformable image registration of inhomogeneous tissues
Jing Ren
Physics based deformable registration can provide physically consistent image match of deformable soft tissues. In order to help radiologist/surgeons to determine the status of malicious tumors, we often need to accurately align the regions with embedded tumors. This is a very challenging task since the tumor and the surrounding tissues have very different tissue properties such as stiffness and elasticity. In order to address this problem, based on minimum strain energy principle in elasticity theory, we propose to partition the whole region of interest into smaller sub-regions and dynamically adjust weights of vessel segments and bifurcation points in each sub-region in the registration objective function. Our previously proposed fast vessel registration is used as a component in the inner loop. We have validated the proposed method using liver MR images from human subjects. The results show that our method can detect the large registration errors and improve the registration accuracy in the neighborhood of the tumors and guarantee the registration errors to be within acceptable accuracy. The proposed technique has the potential to significantly improve the registration capability and the quality of clinical diagnosis and treatment planning.
Validation of model-based deformation correction in image-guided liver surgery via tracked intraoperative ultrasound: preliminary method and results
Soft tissue deformation represents a significant error source in current surgical navigation systems used for open hepatic procedures. While numerous algorithms have been proposed to rectify the tissue deformation that is encountered during open liver surgery, clinical validation of the proposed methods has been limited to surface based metrics and sub-surface validation has largely been performed via phantom experiments. Tracked intraoperative ultrasound (iUS) provides a means to digitize sub-surface anatomical landmarks during clinical procedures. The proposed method involves the validation of a deformation correction algorithm for open hepatic image-guided surgery systems via sub-surface targets digitized with tracked iUS. Intraoperative surface digitizations were acquired via a laser range scanner and an optically tracked stylus for the purposes of computing the physical-to-image space registration within the guidance system and for use in retrospective deformation correction. Upon completion of surface digitization, the organ was interrogated with a tracked iUS transducer where the iUS images and corresponding tracked locations were recorded. After the procedure, the clinician reviewed the iUS images to delineate contours of anatomical target features for use in the validation procedure. Mean closest point distances between the feature contours delineated in the iUS images and corresponding 3-D anatomical model generated from the preoperative tomograms were computed to quantify the extent to which the deformation correction algorithm improved registration accuracy. The preliminary results for two patients indicate that the deformation correction method resulted in a reduction in target error of approximately 50%.
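The validation metric itself is compact; a minimal sketch, with synthetic points standing in for the iUS contours and a flat grid standing in for the preoperative model surface:

```python
import numpy as np

def mean_closest_point(contour, surface):
    """Mean distance from each contour point to its nearest surface point."""
    d = np.linalg.norm(contour[:, None, :] - surface[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Flat 10x10 grid at z = 0 stands in for the preoperative organ model.
surface = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
contour_before = np.array([[2.0, 3.0, 4.0],       # iUS targets, deformed
                           [5.0, 5.0, 3.0]])
contour_after = contour_before - [0.0, 0.0, 3.0]  # after a mock correction
e_before = mean_closest_point(contour_before, surface)
e_after = mean_closest_point(contour_after, surface)
```

A correction that moves the contours toward the model surface drives this distance down, analogous to the roughly 50% target-error reduction reported above.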
Ultrasound Image Guidance: Joint Session with Conferences 9415 and 9419
Needle detection in ultrasound using the spectral properties of the displacement field: a feasibility study
Parmida Beigi, Tim Salcudean, Robert Rohling, et al.
This paper presents a new needle detection technique for ultrasound-guided interventions based on the spectral properties of small displacements arising from hand tremor or intentional motion. In a block-based approach, the displacement map is computed for each block of interest versus a reference frame, using an optical flow technique. To compute the flow parameters, the Lucas-Kanade approach is used in a multiresolution and regularized form. A least-squares fit is used to estimate the flow parameters from the overdetermined system of spatial and temporal gradients. Lateral and axial components of the displacement are obtained for each block of interest at consecutive frames. Magnitude-squared spectral coherency is derived between the median displacements of the reference block and each block of interest, to determine the spectral correlation. In-vivo images were obtained from tissue near the abdominal aorta to capture extreme intrinsic body motion, and insertion images were captured from a tissue-mimicking agar phantom. According to the analysis, both involuntary and intentional movement of the needle produces coherent displacement with respect to a reference window near the insertion site. Intrinsic body motion also produces coherent displacement with respect to a reference window in the tissue; however, the coherency spectra of intrinsic and needle motion are spectrally distinguishable. Blocks with high spectral coherency at high frequencies are selected, yielding an initial estimate of the needle trajectory channel. The needle trajectory is then detected from the locally thresholded absolute displacement map within this initial estimate. Experimental results show RMS localization accuracies of 1.0 mm, 0.7 mm, and 0.5 mm for hand-tremor, vibrational, and rotational needle movements, respectively.
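The core discriminator, magnitude-squared coherence between displacement traces, can be sketched with a segment-averaged FFT estimate; the tremor frequency, sampling, and noise levels below are invented for illustration.

```python
import numpy as np

def msc(x, y, nseg=8):
    """Magnitude-squared coherence from segment-averaged cross-spectra."""
    n = len(x) // nseg
    X = np.fft.rfft(x[: n * nseg].reshape(nseg, n), axis=1)
    Y = np.fft.rfft(y[: n * nseg].reshape(nseg, n), axis=1)
    sxy = np.mean(X * np.conj(Y), axis=0)
    sxx = np.mean(np.abs(X) ** 2, axis=0)
    syy = np.mean(np.abs(Y) ** 2, axis=0)
    return np.abs(sxy) ** 2 / (sxx * syy + 1e-12)

# Blocks moving with the needle share its tremor component, so their
# displacement traces are coherent; unrelated tissue blocks are not.
rng = np.random.default_rng(2)
k = np.arange(1024)
tremor = np.sin(2.0 * np.pi * 8.0 * k / 128.0)     # 8 cycles per segment
ref = tremor + 0.3 * rng.normal(size=k.size)       # reference block
needle = tremor + 0.3 * rng.normal(size=k.size)    # block on the needle
tissue = 0.3 * rng.normal(size=k.size)             # unrelated tissue block
c_needle = msc(ref, needle)
c_tissue = msc(ref, tissue)
```

Thresholding such coherence spectra is what separates needle-coupled blocks from background tissue motion in the method above.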
Active point out-of-plane ultrasound calibration
Alexis Cheng, Xiaoyu Guo, Haichong K. Zhang, et al.
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and absence of ionizing radiation, ultrasound is a common intraoperative medical imaging modality in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid-body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to a single physical point; in our approach, we minimize the distances between the circular subsets of each image, which ideally intersect at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.
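The geometric core can be illustrated in 2D: each image fixes the point's axial distance from a tracked pose but not its elevational offset, so the point lies on a circle about that pose, and intersecting several circles recovers it. The sketch below solves a noiseless toy problem by linearizing the squared circle equations; poses and radii are synthetic, and the paper's full calibration is not reproduced.

```python
import numpy as np

def fit_point(centers, radii):
    """Each observation constrains the point to a circle |p - c_i| = r_i.
    Subtracting the first squared equation from the rest linearizes the
    system, which is then solved in least squares."""
    A = 2.0 * (centers[1:] - centers[0])
    b = (np.sum(centers[1:] ** 2, axis=1) - np.sum(centers[0] ** 2)
         - radii[1:] ** 2 + radii[0] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

true_pt = np.array([3.0, 4.0])                      # unknown phantom point
rng = np.random.default_rng(3)
centers = rng.uniform(-5.0, 5.0, size=(6, 2))       # tracked transducer poses
radii = np.linalg.norm(true_pt - centers, axis=1)   # axial distances from images
est = fit_point(centers, radii)
err = float(np.linalg.norm(est - true_pt))
```

With noisy radii the same least-squares system yields the minimum-residual point, the 2D analogue of the circle-intersection objective described above.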
Tracking and Organ Motion Modeling
Calibration of a needle tracking device with fiber Bragg grating sensors
Koushik K. Mandal, Francois Parent, Sylvain Martel, et al.
Accurate needle placement is essential in percutaneous procedures such as radiofrequency ablation (RFA) of liver tumors. Real-time navigation of an interventional needle can improve targeting accuracy and yield precise measurements of the needle tip inside the body. An emerging technology based on Fiber Bragg Grating (FBG) sensors has demonstrated the potential of estimating shapes at high frequencies (up to 20 kHz), fast enough for real-time applications. In this paper, we present a calibration procedure for this novel needle tracking technology using strain measurements obtained from FBGs. Three glass fibers equipped with two FBGs each were incorporated into a 19G needle. The 3D needle shape is reconstructed based on a polynomial fit of the strain measurements obtained from the fibers. The real-time information on needle tip position and shape allows tracking of needle deflections during tissue insertion. An experimental setup was designed to yield a calibration that is insensitive to ambient temperature fluctuations and robust to slight external disturbances. We compare the shape of the 3D reconstructed needle to measurements obtained from camera images, and assess needle tip tracking accuracy on a ground-truth phantom. Initial results show that the tracking errors for the needle tip are under 1 mm, while 3D shape deflections are minimal near the needle tip. This accuracy is appropriate for applications such as RFA of liver tumors.
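A one-plane sketch of the reconstruction idea (the actual system uses three fibers for full 3D shape; the positions, curvatures, and small-deflection assumption here are ours): strain at a grating is proportional to local curvature, so two gratings give two curvature samples, which are fit with a first-order polynomial in arc length and integrated twice under a clamped base.

```python
import numpy as np

def deflection(s_fbg, kappa, s_eval):
    """Fit curvature as kappa(s) = a + b*s through the grating readings,
    then integrate twice with a clamped base: y(0) = y'(0) = 0."""
    b, a = np.polyfit(s_fbg, kappa, 1)
    return a * s_eval ** 2 / 2.0 + b * s_eval ** 3 / 6.0

s_fbg = np.array([40.0, 90.0])    # grating positions along the needle (mm)
kappa = np.array([0.002, 0.002])  # curvature from strain (1/mm), constant bend
s = np.linspace(0.0, 120.0, 25)   # evaluation points along a 120 mm needle
y = deflection(s_fbg, kappa, s)
tip = float(y[-1])                # small-deflection tip estimate (mm)
```

For this constant-curvature example the small-deflection tip estimate at 120 mm is 14.4 mm, close to the exact circular-arc value of about 14.3 mm.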
Virtual rigid body: a new optical tracking paradigm in image-guided interventions
Alexis Cheng, David S. Lee, Nishikant Deshmukh, et al.
Tracking technology is often necessary for image-guided surgical interventions. Optical tracking is one of the options, but it suffers from line-of-sight and workspace limitations. Optical tracking is accomplished by attaching a rigid-body marker, carrying a pattern for pose detection, onto a tool or device. A larger rigid body results in more accurate tracking, but at the same time its size limits its usage in a crowded surgical workspace. This work presents a prototype of a novel optical tracking method using a virtual rigid body (VRB). We define the VRB as a 3D rigid-body marker in the form of a pattern projected onto a surface from a light source. Its pose can be recovered by observing the projected pattern with a stereo-camera system. The rigid body's size is no longer physically limited, as we can manufacture small light sources. Conventional optical tracking also requires line of sight to the rigid body. The VRB overcomes these limitations by detecting a pattern projected onto the surface: we can project the pattern onto a region of interest, allowing the pattern to always be in the view of the optical tracker and decreasing the occurrence of occlusions. This manuscript describes the method and results compared with conventional optical tracking in an experimental setup using known motions. Experiments using an optical tracker and a linear stage resulted in targeting errors of 0.38±0.28 mm with our method compared to 0.23±0.22 mm with conventional optical markers. Another experiment, replacing the linear stage with a robot arm, resulted in rotational errors of 0.50±0.31° and 2.68±2.20°, and translational errors of 0.18±0.10 mm and 0.03±0.02 mm, respectively.
Integration of fiber optical shape sensing with medical visualization for minimal-invasive interventions
Torben Paetz, Christian Waltermann, Martin Angelmahr, et al.
We present a fiber optical shape sensing system that can track the shape of a standard telecom fiber equipped with fiber Bragg gratings. The shape sensing data are combined with a medical visualization platform so that they can be displayed together with medical images and post-processing results such as 3D models, vessel graphs, or segmentation results. The framework is modular, so it can be used for various medical applications such as catheter- or needle-based interventions. The technology has potential in the medical area, as it is MR-compatible and, due to its small size, can easily be integrated into catheters and needles.
4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy
Salam Dhou, Martina Hurwitz, Pankaj Mishra, et al.
A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and to use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of the 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average tumor localization error (TLE) (95th percentile) across two datasets was 0.95 (2.2) mm. For digital phantoms, assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to the planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, versus 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones. 4DCBCT-based models were thus shown to perform better when there are positioning and tumor-baseline-shift uncertainties at treatment time, so generating 3D fluoroscopic images from 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
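The model-building step can be sketched with plain SVD-based PCA on stacked DVFs; the synthetic two-mode "breathing" data below stand in for real deformable-registration output.

```python
import numpy as np

# Synthetic stand-in for registration output: 10 breathing phases, each a
# flattened DVF that is a phase-weighted mix of two spatial motion modes.
rng = np.random.default_rng(4)
phases = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
modes = rng.normal(size=(2, 300))
dvfs = (np.column_stack([np.sin(phases), np.cos(phases)]) @ modes
        + 0.01 * rng.normal(size=(10, 300)))

# PCA via SVD of the mean-centered DVF matrix.
mean = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean, full_matrices=False)
k = 2                                  # keep two principal motion modes
coeffs = U[:, :k] * S[:k]              # per-phase PCA coefficients
recon = mean + coeffs @ Vt[:k]         # DVFs rebuilt from k numbers each
err = float(np.abs(recon - dvfs).max())
var_frac = float((S[:k] ** 2).sum() / (S ** 2).sum())
```

At treatment time the method above optimizes exactly these few coefficients against measured 2D projections instead of reading them off known phases.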
Surgical tool detection and tracking in retinal microsurgery
Mohamed Alsheakhali, Mehmet Yigitsoy, Abouzar Eslami, et al.
Visual tracking of surgical instruments is an essential part of eye surgery; it plays an important role for the surgeons and is a key component of robotic assistance during the operation. The difficulty of detecting and tracking medical instruments in in-vivo images comes from their deformable shape, changes in brightness, and the presence of the instrument's shadow. This paper introduces a new approach to detect the tip of a surgical tool and its width regardless of its head shape and the presence of shadows or vessels. The approach relies on integrating structural information about the strong edges from the RGB color model with tool location-based information from the L*a*b color model. The probabilistic Hough transform is applied to find the strongest straight lines in the RGB images, and based on information from the L* and a* channels, one of these candidate lines is selected as the edge of the tool shaft. From that line, the tool slope, centerline, and tip can be detected. Tracking is performed by keeping track of the last detected tool tip and slope, and filtering the Hough lines within a box around the last detected tool tip based on the slope differences. Experimental results demonstrate high accuracy in terms of detecting the tool tip position, the tool joint point position, and the tool centerline. The approach also meets real-time requirements.
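The line-finding core can be sketched with a minimal numpy Hough accumulator (the paper uses the probabilistic Hough transform plus L*a*b cues; the voting scheme and synthetic edge image below are illustrative only):

```python
import numpy as np

def strongest_line(edges, n_theta=180):
    """Vote every edge pixel into a (theta, rho) accumulator and return
    the strongest line in normal form x*cos(t) + y*sin(t) = rho."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.round(xs[:, None] * np.cos(thetas)
                    + ys[:, None] * np.sin(thetas)).astype(int)
    diag = int(np.ceil(np.hypot(*edges.shape)))
    acc = np.zeros((n_theta, 2 * diag + 1), dtype=int)
    np.add.at(acc,
              (np.tile(np.arange(n_theta), len(xs)), (rhos + diag).ravel()),
              1)
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], int(r) - diag

# Synthetic edge image with a diagonal "tool shaft" along y = x.
img = np.zeros((100, 100), dtype=bool)
for i in range(100):
    img[i, i] = True
theta, rho = strongest_line(img)   # expect theta ~ 3*pi/4, rho ~ 0
```

In the full method the color-channel cues then pick among such candidate lines and the tip is localized along the chosen shaft edge.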
A biomechanical approach for in vivo lung tumor motion prediction during external beam radiation therapy
Elham Karami, Stewart Gaede, Ting-Yim Lee, et al.
Lung cancer is the leading cause of cancer death in both men and women. Among the various treatment methods currently used in the clinic, External Beam Radiation Therapy (EBRT) is widely used not only as the primary treatment method, but also in combination with chemotherapy and surgery. However, this method may lack the desired dosimetric accuracy because of respiration-induced tumor motion. Recently, biomechanical modeling of the respiratory system has become a popular approach for tumor motion prediction and compensation. This approach requires reasonably accurate data pertaining to thoracic pressure variation, diaphragm position, and the biomechanical properties of lung tissue in order to predict lung tissue deformation and tumor motion. In this paper, we present preliminary results of an in vivo study obtained from a Finite Element Model (FEM) of the lung developed to predict tumor motion during respiration.
Can coffee improve image guidance?
Raul Wirz, Ray A. Lathrop, Isuru S. Godage, et al.
Anecdotally, surgeons sometimes observe large errors when using image guidance in endonasal surgery. We hypothesize that one contributing factor is the possibility that operating room personnel might accidentally bump the optically tracked rigid body attached to the patient after registration has been performed. In this paper we explore the registration error at the skull base that can be induced by simulated bumping of the rigid body, and find that large errors can occur when simulated bumps are applied to the rigid body. To address this, we propose a new fixation method for the rigid body based on granular jamming (i.e. using particles like ground coffee). Our results show that our granular jamming fixation prototype reduces registration error by 28%-68% (depending on bump direction) in comparison to a standard Brainlab reference headband.
Segmentation
Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images
Amin Suzani, Abtin Rasoulian, Alexander Seitel, et al.
This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative Expectation Maximization technique is used to register the vertebral bodies of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.
Grading remodeling severity in asthma based on airway wall thickening index and bronchoarterial ratio measured with MSCT
Catalin Fetita, Pierre-Yves Brillet, Christopher Brightling, et al.
Defining therapeutic protocols in asthma and monitoring patient response require more in-depth knowledge of disease severity and treatment outcome based on quantitative indicators. This paper aims at grading severity in asthma based on objective morphological measurements obtained in an automated fashion from 3-D multi-slice computed tomography (MSCT) image datasets. These measures attempt to capture and quantify the airway remodeling process involved in asthma, both at the level of the airway wall thickness and the airway lumen. Two morphological changes are thus targeted here: (1) airway wall thickening, measured as a global index characterizing the increase of wall thickness above a normal value of the wall-to-lumen-radius ratio, and (2) the bronchoarterial ratio index, assessed globally from numerous locations in the lungs. The combination of these indices provides a grading of the severity of the remodeling process in asthma which correlates with the known phenotype of the patients investigated. Preliminary application to assessing patient response in thermoplasty trials is also considered from the point of view of the defined indices.
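A toy computation of the two indices described, with all measurements and the normal wall-to-lumen-radius ratio threshold invented for illustration:

```python
import numpy as np

wall_thickness = np.array([1.2, 1.5, 0.9, 1.8])   # mm, sampled airways
lumen_radius = np.array([3.0, 2.5, 3.2, 2.0])     # mm
artery_diameter = np.array([4.0, 3.8, 4.5, 3.2])  # mm, paired arteries

NORMAL_RATIO = 0.30  # assumed normal wall-to-lumen-radius ratio

# (1) Wall thickening index: mean excess of wall/lumen ratio above normal.
ratios = wall_thickness / lumen_radius
thickening_index = float(np.mean(np.maximum(ratios - NORMAL_RATIO, 0.0)))

# (2) Bronchoarterial ratio: bronchial lumen diameter over artery diameter,
# averaged over the sampled bronchus-artery pairs.
ba_ratio = float(np.mean(2.0 * lumen_radius / artery_diameter))
```

The actual pipeline extracts these measurements automatically from segmented MSCT airway and vessel trees before combining them into a severity grade.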
Evaluation metrics for bone segmentation in ultrasound
Tracked ultrasound is a safe alternative to X-ray for imaging bones. The interpretation of bony structures is challenging, as ultrasound has no intensity characteristic specific to bone. Several image segmentation algorithms have been devised to identify bony structures. We propose an open-source framework to aid the development and comparison of such algorithms by quantitatively measuring their segmentation performance in ultrasound images. True-positive and false-negative metrics used in the framework quantify algorithm performance based on correctly segmented bone and correctly segmented boneless regions. Ground truth for these metrics is defined manually and, along with the corresponding automatically segmented image, used for the performance analysis. Manually created ground-truth tests were generated to verify the accuracy of the analysis. Further evaluation metrics for the average performance per slice and its standard deviation are considered. These metrics provide a means of evaluating the accuracy of frames along the length of a volume, which aids in assessing the accuracy of the volume itself and the image acquisition approach (probe positioning and frame frequency). The framework was implemented as an open-source module of the 3D Slicer platform. The ground-truth tests verified that the framework correctly calculates the implemented metrics. The developed framework provides a convenient way to evaluate bone segmentation algorithms, and its implementation fits in a widely used application for segmentation algorithm prototyping. Future algorithm development will benefit from monitoring the effects of algorithm adjustments in a standard evaluation framework.
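The per-frame metrics described above reduce to simple operations on binary masks. The following sketch is not the paper's 3D Slicer module (function and variable names are our own); it illustrates how true-positive and false-negative rates for one ultrasound frame could be computed against a manual ground truth:

```python
import numpy as np

def segmentation_rates(ground_truth, prediction):
    """True-positive and false-negative rates for one binary bone mask
    against a manually defined ground truth of the same shape."""
    gt = np.asarray(ground_truth, dtype=bool)
    pred = np.asarray(prediction, dtype=bool)
    tp = np.logical_and(gt, pred).sum()    # bone pixels correctly segmented
    fn = np.logical_and(gt, ~pred).sum()   # bone pixels missed
    bone_total = gt.sum()
    if bone_total == 0:
        return 1.0, 0.0  # frame contains no bone: nothing to miss
    return tp / bone_total, fn / bone_total
```

Averaging these rates over all frames of a tracked sweep, and taking their standard deviation, would yield per-volume summary metrics of the kind the framework reports.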
Fast and intuitive segmentation of gyri of the human brain
Florian Weiler, Horst K. Hahn
The cortical surface of the human brain consists of a large number of folds forming ridges and valleys, the gyri and sulci. It is often desirable to segment a brain image into these underlying structures in order to assess parameters relative to these functional components. Typical examples include measurements of cortical thickness for individual functional areas, or the correlation of functional areas derived from fMRI data with corresponding anatomical areas seen in structural imaging. In this paper, we present a novel interactive technique that allows fast and intuitive segmentation of these functional areas from T1-weighted MR images of the brain. Our segmentation approach is based exclusively on morphological image processing operations, eliminating the requirement for explicit reconstruction of the brain's surface.
Body-wide anatomy recognition in PET/CT images
Huiqian Wang, Jayaram K. Udupa, Dewey Odhner, et al.
With the rapid growth of positron emission tomography/computed tomography (PET/CT)-based medical applications, body-wide anatomy recognition on whole-body PET/CT images becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem and seldom studied due to unclear anatomy reference frame and low spatial resolution of PET images as well as low contrast and spatial resolution of the associated low-dose CT images. We previously developed an automatic anatomy recognition (AAR) system [15] whose applicability was demonstrated on diagnostic computed tomography (CT) and magnetic resonance (MR) images in different body regions on 35 objects. The aim of the present work is to investigate strategies for adapting the previous AAR system to low-dose CT and PET images toward automated body-wide disease quantification. Our adaptation of the previous AAR methodology to PET/CT images in this paper focuses on 16 objects in three body regions – thorax, abdomen, and pelvis – and consists of the following steps: collecting whole-body PET/CT images from existing patient image databases, delineating all objects in these images, modifying the previous hierarchical models built from diagnostic CT images to account for differences in appearance in low-dose CT and PET images, automatically locating objects in these images following object hierarchy, and evaluating performance. Our preliminary evaluations indicate that the performance of the AAR approach on low-dose CT images achieves object localization accuracy within about 2 voxels, which is comparable to the accuracies achieved on diagnostic contrast-enhanced CT images. Object recognition on low-dose CT images from PET/CT examinations without requiring diagnostic contrast-enhanced CT seems feasible.
Intraoperative Imaging and Visualization
Atlas and feature based 3D pathway visualization enhancement for skull base pre-operative fast planning from head CT
Nava Aghdasi, Yangming Li, Angelique Berens, et al.
Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide great benefit to the patient through shorter ICU stays, decreased post-operative pain, and quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization settings to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience, and it does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery using the following two novel concepts: importance-based highlighting and mobile portals. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real-time with improved visualization of critical structures located inside the pathway. To achieve this we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient’s scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures.
The proposed idea was fully implemented as standalone planning software, and additional data were used for verification and validation. The experimental results show that: (1) the proposed methods greatly improved planning efficiency while optimal surgical plans were still successfully achieved, (2) the proposed methods successfully highlighted important structures and facilitated planning, (3) the proposed methods require shorter processing time than classical segmentation algorithms, and (4) these methods can be used to improve surgical safety for surgical robots.
Design and first implementation of business process visualization for a task manager supporting the workflow in an operating room
An operating room is a stressful work environment. Nevertheless, everyone involved has to work safely, as there is no room for mistakes. To ensure a high level of concentration and seamless interaction, all involved persons have to know their own tasks and the tasks of their colleagues. The entire team must work synchronously at all times. To optimize the overall workflow, a task manager supporting the team was developed. In parallel, a common conceptual design of a business process visualization was developed, which makes all relevant information accessible in real time during a surgery. In this context, an overview of all processes in the operating room was created and different concepts for the graphical representation of these user-dependent processes were developed. This paper describes the concept of the task manager as well as the general concept in the field of surgery.
Quantitative wavelength analysis and image classification for intraoperative cancer diagnosis with hyperspectral imaging
Guolan Lu, Xulei Qin, Dongsheng Wang, et al.
Complete surgical removal of tumor tissue is essential for a good postoperative prognosis. Intraoperative tumor imaging and visualization are an important step in aiding surgeons to evaluate and resect tumor tissue in real time, thus enabling more complete resection of diseased tissue and better conservation of healthy tissue. As an emerging modality, hyperspectral imaging (HSI) holds great potential for comprehensive and objective intraoperative cancer assessment. In this paper, we explored the possibility of intraoperative tumor detection and visualization during surgery using HSI in the wavelength range of 450 nm - 900 nm in an animal experiment. We proposed a new algorithm for glare removal and cancer detection on surgical hyperspectral images, and detected the tumor margins in five mice with an average sensitivity and specificity of 94.4% and 98.3%, respectively. The hyperspectral imaging and quantification method has the potential to provide an innovative tool for image-guided surgery.
Methods for a fusion of optical coherence tomography and stereo camera image data
Jan Bergmeier, Dennis Kundrat, Andreas Schoob, et al.
This work investigates the combination of Optical Coherence Tomography (OCT) and two cameras observing a microscopic scene. Stereo vision provides realistic images but is limited in terms of penetration depth. OCT enables access to subcutaneous structures, but 3D-OCT volume data do not give the surgeon a familiar view. Extending the stereo camera setup with OCT imaging combines the benefits of both modalities. In order to provide the surgeon with a convenient integration of OCT into the vision interface, we present an automated image processing analysis of OCT and stereo camera data as well as combined imaging as an augmented reality visualization. To this end, we address OCT image noise, perform segmentation, and develop suitable registration objects and methods. The registration between stereo camera and OCT yields a Root Mean Square error of 284 μm as the average of five measurements. The presented methods are fundamental for fusion of the two imaging modalities, and augmented reality is shown as an application of the results. Further developments will lead to fused visualization of subcutaneous structures, as information from OCT images, into stereo vision.
Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to tasked-based imaging with a robotic C-arm
S. Ouadah, J. W. Stayman, G. Gang, et al.
Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry.

Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9 degree of freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting “self-calibration” was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction.

Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard (“true”) calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the “self” and “true” calibration methods were on the order of 10⁻³ mm⁻¹. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm.

Conclusion: The proposed geometric “self” calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced “task-based” 3D imaging methods now in development for robotic C-arms.
A multimodal imaging framework for enhanced robot-assisted partial nephrectomy guidance
Robot-assisted laparoscopic partial nephrectomies (RALPN) are performed to treat patients with locally confined renal carcinoma. There are well-documented benefits to performing partial (opposed to radical) kidney resections and to using robot-assisted laparoscopic (opposed to open) approaches. However, there are challenges in identifying tumor margins and critical benign structures including blood vessels and collecting systems during current RALPN procedures. The primary objective of this effort is to couple multiple image and data streams together to augment visual information currently provided to surgeons performing RALPN and ultimately ensure complete tumor resection and minimal damage to functional structures (i.e. renal vasculature and collecting systems). To meet this challenge we have developed a framework and performed initial feasibility experiments to couple pre-operative high-resolution anatomic images with intraoperative MRI, ultrasound (US) and optical-based surface mapping and kidney tracking. With these registered images and data streams, we aim to overlay the high-resolution contrast-enhanced anatomic (CT or MR) images onto the surgeon’s view screen for enhanced guidance. To date we have integrated the following components of our framework: 1) a method for tracking an intraoperative US probe to extract the kidney surface and a set of embedded kidney markers, 2) a method for co-registering intraoperative US scans with pre-operative MR scans, and 3) a method for deforming pre-op scans to match intraoperative scans. These components have been evaluated through phantom studies to demonstrate protocol feasibility.
Keynote and 2D/3D Registration
Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement
A. Uneri, J. W. Stayman, T. De Silva, et al.
Purpose. To extend the functionality of radiographic / fluoroscopic imaging systems already within the standard spine surgery workflow to: 1) provide guidance of surgical devices analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product.

Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREΦ) from planned trajectory.
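Gradient correlation, the similarity metric the CMA-ES search maximizes, can be computed as the mean normalized cross-correlation of the two images' orthogonal gradient fields. The sketch below is our own simplified formulation of that metric, not the authors' implementation:

```python
import numpy as np

def gradient_correlation(fixed, moving):
    """Gradient correlation (GC) between a measured projection and a
    simulated forward projection: the mean of the normalized
    cross-correlations of the row-wise and column-wise image gradients."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0
    gx_f, gy_f = np.gradient(np.asarray(fixed, dtype=float))
    gx_m, gy_m = np.gradient(np.asarray(moving, dtype=float))
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))
```

An optimizer such as CMA-ES would perturb the 6-DOF component pose, re-render the forward projection, and keep the pose that maximizes this score; GC's use of gradients makes it insensitive to the low-frequency intensity mismatch between measured radiographs and DRRs.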

Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx <2 mm and TREΦ <0.5° given projection views separated by at least 30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx <1 mm, with a trend of improved accuracy correlated with the fidelity of the component model employed.

Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.
Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms
Hennadii Madan, Boštjan Likar, Franjo Pernuš, et al.
Translation of novel and existing 3D-2D image registration methods into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or “gold standard” registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and propose an automated pipeline comprising 3D and 2D image processing, analysis, and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., “gold standard”, registration of 3D and 2D images. The device and methods were used to create the “gold standard” on 15 datasets of 3D and 2D cerebral angiograms, where each dataset was acquired on a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The use of the device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis, or annotation. In this way, the time to obtain the “gold standard” was reduced from 30 minutes to less than one minute, and the “gold standard” 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.
Twenty-five years of error (Presentation Recording)
Today, the phrase, “Target Registration Error”, typically shortened to TRE, is an integral part of the vernacular of both surgical guidance and image registration, but it was not always so. This terminology, along with “Fiducial Registration Error” and “Fiducial Localization Error” was developed circa 1990 to facilitate the communication of information among researchers who were contending with the errors that arise when one view of a patient is aligned with another, particularly when that alignment is based on fiducial markers. The work required to develop a theoretical understanding of these errors and to develop algorithms and experimental methods to probe them has involved many people and many institutions, and it continues today. This twenty-five year effort is the subject of this address, but we will not dwell on the details, almost all of which have been presented first at this very same symposium. Instead we will focus on the backstory. It is a story of people and events, of lab rivalry and cooperation, of heroes and villains, of sour reviews and sweet vindication, of disappointment when things keep going wrong, and gratification when they finally go right. And it even includes a murder mystery. This address is meant to be entertaining, but it is hoped that it might also send an encouraging message to those researchers, particularly students, who are having troubles of their own. And that message is that setbacks and criticism today do not mean that success won’t come tomorrow.
Abdominal and Pelvic Procedures
Data fusion for planning target volume and isodose prediction in prostate brachytherapy
Saman Nouranian, Mahdi Ramezani, S. Sara Mahdavi, et al.
In low-dose prostate brachytherapy treatment, a large number of radioactive seeds is implanted in and adjacent to the prostate gland. Planning of this treatment involves the determination of a Planning Target Volume (PTV), followed by defining the optimal number of seeds and needles and their coordinates for implantation. The two major planning tasks, i.e. PTV determination and seed definition, are associated with inter- and intra-expert variability. Moreover, since these two steps are performed in sequence, the variability accumulates in the overall treatment plan. In this paper, we introduce a model based on a data fusion technique that enables joint determination of the PTV and the minimum Prescribed Isodose (mPD) map. The model captures the correlation between different information modalities consisting of transrectal ultrasound (TRUS) volumes, PTV contours, and isodose contours. We take advantage of joint Independent Component Analysis (jICA) as a linear decomposition technique to obtain a set of joint components that optimally describe such correlation. We perform a component stability analysis to generate a model with stable parameters that predicts the PTV and isodose contours solely based on a new patient TRUS volume. We propose a framework for both the modeling and prediction processes and evaluate it on a dataset of 60 brachytherapy treatment records. We show a PTV prediction error of 10.02±4.5% and a V100 isodose overlap of 97±3.55% with respect to the clinical gold standard.
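The fusion step can be pictured as a joint decomposition of per-patient feature vectors from all three modalities concatenated side by side, so that each learned component couples TRUS appearance with PTV and isodose shape. The sketch below uses an SVD as a simple stand-in for the paper's joint ICA; all names and shapes are illustrative:

```python
import numpy as np

def joint_components(trus_feats, ptv_feats, iso_feats, n_components=3):
    """Joint linear decomposition for data fusion: per-subject feature
    vectors from each modality are stacked along the feature axis and
    decomposed together, so each component spans all three modalities.
    SVD is used here as a stand-in for jICA."""
    X = np.hstack([trus_feats, ptv_feats, iso_feats])  # subjects x features
    X = X - X.mean(axis=0)                             # center per feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    mixing = U[:, :n_components] * S[:n_components]    # per-subject loadings
    components = Vt[:n_components]                     # joint components
    return mixing, components
```

At prediction time, only the TRUS columns of the component basis would be fitted to a new patient's volume, and the coupled PTV and isodose columns read off from the recovered loadings.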
Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling
Peter R. Martin, Derek W. Cool M.D., Cesare Romagnoli M.D., et al.
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy aims to reduce the 21–47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician’s desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system’s lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
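The effect of guidance-system error on the single-core sampling probability P can be illustrated with a small Monte Carlo experiment. This is a deliberately simplified geometry (spherical tumor, point sample, independent zero-mean Gaussian error per axis), not the authors' model:

```python
import numpy as np

def sampling_probability(tumor_radius_mm, sigma_mm, n_trials=200_000, seed=0):
    """Monte Carlo estimate of the probability P that a single biopsy
    sample lands inside a spherical tumor centered at the targeted point,
    given per-axis standard deviations sigma_mm = (lateral, elevational,
    axial) of the needle-placement error."""
    rng = np.random.default_rng(seed)
    errors = rng.normal(0.0, sigma_mm, size=(n_trials, 3))
    hits = (errors ** 2).sum(axis=1) <= tumor_radius_mm ** 2
    return hits.mean()
```

Repeating the estimate while scaling one axis's sigma at a time is a direct way to probe the anisotropy effect the paper reports, i.e. how much more the lateral and elevational errors cost than the axial error for a given tumor shape.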
Navigated marker placement for motion compensation in radiotherapy
A. Winterstein, K. März, A. M. Franz, et al.
Radiotherapy is frequently used to treat unoperated or partially resected tumors. Tumor movement, e.g. caused by respiration, is a major challenge in this context. Markers can be implanted around the tumor prior to radiation therapy for accurate tracking of tumor movement. However, accurate placement of these markers while keeping a secure margin around the target and taking critical structures into account is a difficult task. Computer-assisted needle insertion has been an active field of research in the past decades, but the challenge of navigated marker placement for motion-compensated radiotherapy has not yet been addressed. This work presents a system to support marker implantation for radiotherapy under consideration of safety margins and optimal marker configuration. It is designed to allow placement of markers both percutaneously and during open liver surgery. To this end, we adapted the previously proposed EchoTrack system, which integrates ultrasound (US) imaging and electromagnetic (EM) tracking in a single mobile modality. The potential of our new marker insertion concept was evaluated in a phantom study by inserting sets of three markers around dedicated targets (n=22), simultaneously spacing the markers evenly around the target and placing them at a defined distance from the target. In all cases the markers were successfully placed in a configuration fulfilling the predefined criteria, including a minimum distance of 18.9 ± 2.4 mm between marker and tumor and a divergence of 2.1 ± 1.5 mm from the planned marker positions. We conclude that our system has high potential to facilitate the placement of markers in suitable configurations by surgeons without extensive experience in needle punctures, as high-quality configurations were obtained even by medical non-experts.
Image guidance improves localization of sonographically occult colorectal liver metastases
Universe Leung, Amber L. Simpson, Lauryn B. Adams, et al.
Assessing the therapeutic benefit of surgical navigation systems is a challenging problem in image-guided surgery. The exact clinical indications for patients who may benefit from these systems are not always clear, particularly for abdominal surgery, where image-guidance systems have failed to take hold in the same way as in orthopedic and neurosurgical applications. We report interim analysis of a prospective clinical trial for localizing small colorectal liver metastases using the Explorer system (Path Finder Technologies, Nashville, TN). Colorectal liver metastases are small lesions that can be difficult to identify with conventional intraoperative ultrasound due to echogenicity changes in the liver resulting from chemotherapy and other preoperative treatments. Interim analysis with eighteen patients shows that 9 of 15 (60%) of these occult lesions could be detected with image guidance. Image guidance changed intraoperative management in 3 (17%) cases. These results suggest that image guidance is a promising tool for localization of small occult liver metastases and that the indications for image-guided surgery are expanding.
Poster Session
Medical image segmentation using object atlas versus object cloud models
Medical image segmentation is crucial for quantitative organ analysis and surgical planning. Since interactive segmentation is not practical in a production-mode clinical setting, automatic methods based on 3D object appearance models have been proposed. Among them, approaches based on object atlas are the most actively investigated. A key drawback of these approaches is that they require a time-costly image registration process to build and deploy the atlas. Object cloud models (OCM) have been introduced to avoid registration, considerably speeding up the whole process, but they have not been compared to object atlas models (OAM). The present paper fills this gap by presenting a comparative analysis of the two approaches in the task of individually segmenting nine anatomical structures of the human body. Our results indicate that OCM achieve a statistically significant better accuracy for seven anatomical structures, in terms of Dice Similarity Coefficient and Average Symmetric Surface Distance.
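Of the two reported accuracy measures, the Dice Similarity Coefficient is the simpler: twice the overlap of the two binary masks divided by the sum of their sizes. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / total
```

The Average Symmetric Surface Distance complements Dice by measuring boundary placement in millimeters, which is why papers such as this one typically report both.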
Interactive non-uniformity correction and intensity standardization of MR images
Yubing Tong, Jayaram K. Udupa, Dewey Odhner, et al.
Image non-uniformity and intensity non-standardness are two major hurdles encountered in human and computer interpretation and analysis of magnetic resonance (MR) images. Automated methods for image non-uniformity correction (NC) and intensity standardization (IS) may fail because solutions for them require identifying regions representing the same tissue type for several different tissues, and the automatic strategies, irrespective of the approach, may fail in this task. This paper presents interactive strategies to overcome this problem: interactive NC and interactive IS. The methods require sample tissue regions to be specified for several different types of tissues. Interactive NC estimates the degree of non-uniformity at each voxel in a given image, builds a global function for non-uniformity correction, and then corrects the image to improve quality. Interactive IS includes two steps: a calibration step and a transformation step. In the first step, tissue intensity signatures of each tissue from a few subjects are utilized to set up key landmarks in a standardized intensity space. In the second step, a piecewise linear intensity mapping function is built between the same tissue signatures derived from the given image and those in the standardized intensity space to transform the intensity of the given image into standardized intensity. Preliminary results on abdominal T1-weighted and T2-weighted MR images of 20 subjects show that interactive NC and IS are feasible and can significantly improve image quality over automatic methods. Interactive IS for MR images combined with interactive NC can substantially improve numeric characterization of tissues.
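The transformation step described above is a piecewise linear map anchored at tissue landmarks. A minimal sketch, assuming the per-tissue medians and the standardized landmark values have already been obtained in the calibration step (names are illustrative):

```python
import numpy as np

def standardize_intensities(image, tissue_medians, standard_landmarks):
    """Piecewise linear intensity standardization: map the median intensity
    of each user-specified tissue region in the given image onto the
    corresponding landmark in the standardized intensity space, with linear
    interpolation between landmarks.  Intensities outside the landmark
    range are clamped to the end landmarks by np.interp."""
    src = np.asarray(tissue_medians, dtype=float)     # medians in this image
    dst = np.asarray(standard_landmarks, dtype=float) # calibrated landmarks
    order = np.argsort(src)                           # np.interp needs sorted x
    return np.interp(np.asarray(image, dtype=float), src[order], dst[order])
```

With more than two tissue landmarks the map bends at each landmark, which is what lets the same tissue type land on the same standardized intensity across subjects and scanners.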
Evaluation of input devices for teleoperation of concentric tube continuum robots for surgical tasks
Carolin Fellmann, Daryoush Kashi, Jessica Burgner-Kahrs
For minimally invasive surgeries in which conventional surgical instruments cannot reach the surgical site due to their straight structure and rigidity, concentric tube continuum robots are a promising technology because of their small size (comparable to a needle) and maneuverability. These flexible, compliant manipulators can easily access hard-to-reach anatomical structures, e.g. by turning around corners. By teleoperating the robot, the surgeon stays in direct control at all times. In this paper, three off-the-shelf input devices are considered for teleoperation of a concentric tube continuum robot: a 3D mouse, a gamepad, and a 3-degrees-of-freedom haptic input device. Three tasks which mimic relevant surgical maneuvers were performed by 12 subjects using each input device: reaching specific locations, picking and placing objects from one location to another, and approaching the surgical site through a restricted pathway. We present quantitative results (task completion time, accuracy, etc.), a statistical analysis, and empirical results (questionnaires). Overall, the performance of subjects using the 3D mouse was superior to their performance with the other input devices. The subjects' subjective ranking of the 3D mouse confirms this result.
Additive manufacturing of patient-specific tubular continuum manipulators
Ernar Amanov, Thien-Dang Nguyen, Jessica Burgner-Kahrs
Tubular continuum robots, which are composed of multiple concentric, precurved, elastic tubes, provide more dexterity than traditional surgical instruments at the same diameter. The tubes can be precurved such that the resulting manipulator fulfills surgical task requirements. Until now, the only material used for the component tubes of these manipulators has been NiTi, a super-elastic shape-memory alloy of nickel and titanium. NiTi is a cost-intensive material and its fabrication processes are complex, requiring (proprietary) technology, e.g. for shape setting. In this paper, we evaluate component tubes made of 3 different thermoplastic materials (PLA, PCL, and nylon) using fused filament fabrication technology (3D printing). This enables quick and cost-effective production of custom, patient-specific continuum manipulators, produced on site on demand. Stress-strain and deformation characteristics were evaluated experimentally for 16 fabricated tubes of each thermoplastic, with diameters and shapes equivalent to those of NiTi tubes. Tubes made of PCL and nylon exhibit properties comparable to those made of NiTi. We further demonstrate a tubular continuum manipulator composed of 3 nylon tubes in a transnasal, transsphenoidal skull base surgery scenario in vitro.
Towards the development of a spring-based continuum robot for neurosurgery
Yeongjin Kim, Shing Shin Cheng, Jaydev P. Desai
Brain tumors are usually life-threatening due to the uncontrolled growth of abnormal cells native to the brain or the spread of tumor cells from outside the central nervous system into the brain. The risks involved in operating within such a complex organ can cause severe anxiety in cancer patients. However, neurosurgery, which remains one of the more effective ways of treating brain tumors confined to a limited volume, can have a greatly increased success rate if an appropriate imaging modality is used to achieve complete tumor removal. Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast and is the imaging modality of choice for brain tumor imaging. MRI combined with continuum soft robotics has immense potential as a treatment technique in the field of brain cancer: it eliminates the concern of hand tremor and enables a more precise procedure. One prototype of the Minimally Invasive Neurosurgical Intracranial Robot (MINIR-II), which can be classified as a continuum soft robot, consists of a snake-like body made of three segments of rapid-prototyped plastic springs. It provides improved dexterity with higher degrees of freedom and independent joint control. It is MRI-compatible, allowing surgeons to track and determine the real-time location of the robot relative to the brain tumor target. The robot was manufactured in a single piece using rapid prototyping technology at a low cost, allowing it to be disposed of after each use. MINIR-II has two DOFs at each segment, with both joints controlled by two pairs of MRI-compatible SMA spring actuators. Preliminary motion tests were carried out using a vision-tracking method, and the robot was able to move to different positions based on user commands.
Reconstruction of surfaces from planar contours through contour interpolation
Kyle Sunderland, Boyeong Woo, Csaba Pinter, et al.
Segmented structures such as targets or organs at risk are typically stored as 2D contours on evenly spaced cross-sectional images (slices). Contour interpolation algorithms are implemented in radiation oncology treatment planning software to turn 2D contours into a 3D surface; however, the results differ between algorithms, causing discrepancies in analysis. Our goal was to create an accurate and consistent contour interpolation algorithm that can handle issues such as keyhole contours, rapid changes, and branching. This work was primarily motivated by radiation therapy research using the open-source SlicerRT extension for the 3D Slicer platform. The implemented algorithm triangulates the mesh by minimizing the length of edges spanning the contours with dynamic programming. The first step in the algorithm is removing keyholes from contours. Correspondence is then found between contour layers and branching patterns are determined. The final step is triangulating the contours and sealing the external contours. The algorithm was tested on contours segmented on computed tomography (CT) images. Cases such as inner contours, rapid changes in contour size, and branching were handled well by the algorithm when encountered individually. In some special cases, the simultaneous occurrence of several of these problems in the same location could cause the algorithm to produce a suboptimal mesh. An open-source contour interpolation algorithm was implemented in SlicerRT for reconstructing surfaces from planar contours. The implemented algorithm was able to generate qualitatively good 3D meshes from the sets of 2D contours for most tested structures.
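The core triangulation step described above can be sketched as a small dynamic program: treating each cell (i, j) as the spanning edge between upper[i] and lower[j], the cheapest monotone path from the first to the last correspondence minimizes the total spanning-edge length. This is a minimal sketch under simplifying assumptions (open point lists, known correspondence, no keyholes or branching), not the SlicerRT implementation:

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def triangulate_band(upper, lower):
    """Return the spanning edges (i, j) of a minimal-total-edge-length
    triangulation of the band between two contours, plus the total length.
    Consecutive edges on the returned path share a vertex and bound one
    triangle of the surface."""
    m, n = len(upper), len(lower)
    INF = float("inf")
    # cost[i][j]: cheapest triangulation using upper[:i+1] and lower[:j+1]
    cost = [[INF] * n for _ in range(m)]
    cost[0][0] = dist(upper[0], lower[0])
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                continue
            prev = min(cost[i - 1][j] if i > 0 else INF,
                       cost[i][j - 1] if j > 0 else INF)
            cost[i][j] = prev + dist(upper[i], lower[j])
    # Backtrack the optimal monotone path of spanning edges.
    edges, i, j = [(m - 1, n - 1)], m - 1, n - 1
    while i > 0 or j > 0:
        if j == 0 or (i > 0 and cost[i - 1][j] <= cost[i][j - 1]):
            i -= 1
        else:
            j -= 1
        edges.append((i, j))
    return list(reversed(edges)), cost[m - 1][n - 1]
```

Closed contours additionally require searching over the starting correspondence, and branching requires splitting contours before this step; both are omitted here.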
Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery
S. Bodenstedt, D. Reichard, S. Suwelack, et al.
The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention using augmented reality (AR). To display preoperative data correctly, soft tissue deformations that occur during surgery have to be taken into consideration. Optical laparoscopic sensors, such as stereo endoscopes, can produce a 3D reconstruction of single stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just a single frame will in general not provide enough detail to register and update preoperative data, due to ambiguities. In this paper, we propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. By using GPU-based methods we achieve near real-time performance. We evaluated the system on an ex-vivo porcine liver (4.21 ± 0.63 mm) and on two synthetic silicone livers (3.64 ± 0.31 mm and 1.89 ± 0.19 mm) using three different methods for estimating the camera pose (no tracking, optical tracking, and a combination of both).
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
Xinyang Liu, He Su, Sukryool Kang, et al.
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is a promising fit for our augmented reality visualization system for laparoscopic surgery.
Towards real-time remote processing of laparoscopic video
Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in delivering therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the da Vinci Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will enable real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
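The data-rate figures quoted above can be checked with a quick back-of-the-envelope calculation, assuming an uncompressed stereo 1080p RGB stream (an assumption about the pixel format made here for illustration, not a specification of the da Vinci system):

```python
# Rough budget check: per-frame size, sustained rate, and the round-trip
# deadline implied by 30 fps remote processing.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 3        # 24-bit RGB (assumed)
CHANNELS = 2               # stereo: left + right eye
FPS = 30

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL * CHANNELS
frame_mib = frame_bytes / 2**20     # ~11.9 MiB per stereo frame
stream_mib_s = frame_mib * FPS      # ~356 MiB/s sustained stream
deadline_ms = 1000.0 / FPS          # ~33.3 ms per-frame round-trip budget
```

Under these assumptions the per-frame size and sustained rate line up with the 11.9 MB and roughly 360 MB/s figures in the abstract.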
A novel method and workflow for stereotactic surgery with a mobile intraoperative CT imaging device
The xCAT® (Xoran Technologies, LLC, Ann Arbor, MI) is a CT imaging device that has been used for minimally invasive surgeries. Designed with a flat panel detector and a cone-beam imaging technique, it provides a fast, low-dose CT imaging alternative for diagnosis and examination at hospitals. Its uniquely compact and mobile design allows scanning inside crowded operating rooms (OR). The xCAT allows acquisition of images in the OR that show the most recent morphology during the procedure. This can potentially improve outcomes of surgical procedures such as deep brain stimulation (DBS) and other neurosurgeries, since brain displacement and deformation (brain shift) often occur between pre-operative imaging and electrode placement during surgery. However, the small gantry of the compact scanner obstructs scanning of patients wearing stereotactic frames or skull clamps. In this study, we explored a novel method in which we first utilized the xCAT to obtain CT images with fiducial markers, then registered the stereotactic frame with those markers, and finally calculated target measurements and set them up on the frame. The new procedure workflow provides a means to use CT images obtained inside the OR for stereotactic surgery and can be used in current intraoperative settings. Our phantom validation study shows that the procedure workflow with this method is easy to conduct.
Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences
Jan Marek Marcinczak, Sven Painer, Rolf-Rainer Grigat
Mini-laparoscopy is a technique used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, so far no quantitative measures based on mini-laparoscopic sequences are possible. This paper presents a Structure from Motion (SfM) based methodology for 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection, and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included in the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.
Development and clinical application of surgical navigation system for laparoscopic hepatectomy
Yuichiro Hayashi, Tsuyoshi Igami, Tomoaki Hirose, et al.
This paper describes a surgical navigation system for laparoscopic surgery and its application to laparoscopic hepatectomy. The proposed surgical navigation system presents virtual laparoscopic views using a 3D positional tracker and preoperative CT images. We use an electromagnetic tracker for obtaining positional information of a laparoscope and a forceps. The point-pair matching registration method is performed for aligning coordinate systems between the 3D positional tracker and the CT images. Virtual laparoscopic views corresponding to the laparoscope position are generated from the obtained positional information, the registration results, and the CT images using a volume rendering method. We performed surgical navigation using the proposed system during laparoscopic hepatectomy for fourteen cases. The proposed system could generate virtual laparoscopic views in synchronization with the laparoscope position during surgery.
A MR-TRUS registration method for ultrasound-guided prostate interventions
Xiaofeng Yang, Peter Rossi, Hui Mao, et al.
In this paper, we report an MR-TRUS prostate registration method that uses a subject-specific prostate strain model to improve MR-targeted, US-guided prostate interventions (e.g., biopsy and radiotherapy). The proposed algorithm combines a subject-specific prostate strain model with a B-spline transformation to register the prostate gland in the MRI to the TRUS images. The prostate strain model was obtained through US elastography, and a 3D strain map of the prostate was generated. The B-spline transformation was calculated by minimizing the Euclidean distance between the MR and TRUS prostate surfaces. The prostate strain map was used to constrain the B-spline-based transformation to predict and compensate for internal prostate-gland deformation. The method was validated with a prostate-phantom experiment and a pilot study of 5 prostate-cancer patients. In the phantom study, the mean target registration error (TRE) was 1.3 mm. MR-TRUS registration was also successfully performed for the 5 patients, with a mean TRE of less than 2 mm. The proposed registration method may provide an accurate and robust means of estimating internal prostate-gland deformation, and could be valuable for prostate-cancer diagnosis and treatment.
3D/2D image registration using weighted histogram of gradient directions
Soheil Ghafurian, Ilker Hacihaliloglu, Dimitris N. Metaxas, et al.
Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) that maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to the large search space and the complicated DRR generation process. Finding a similarity measure that converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods require a manual initialization, which demands user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numerical simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess: it can tolerate up to a ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
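The feature underlying the method above — a histogram over gradient directions in which each pixel votes with a weight equal to its gradient magnitude — can be sketched as follows. The binning and the central-difference gradient are illustrative assumptions, not the paper's exact formulation:

```python
import math

def weighted_gradient_histogram(image, n_bins=36):
    """image: 2D list of intensities. Returns a histogram over gradient
    directions in [0, 2*pi), with each pixel's vote weighted by its
    gradient magnitude (central differences, borders skipped)."""
    h, w = len(image), len(image[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
            gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
            mag = math.hypot(gx, gy)
            if mag == 0.0:
                continue  # flat region: no direction to vote for
            angle = math.atan2(gy, gx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += mag
    return hist
```

Such a histogram is invariant to translation and changes predictably under in-plane rotation, which is what makes a sequential parameter search feasible.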
A metric for evaluation of deformable image registration
Akihiro Takemura, Hironori Kojima, Shinichi Ueda, et al.
We propose a new metric, local uncertainty (LU), for the evaluation of deformable image registration (DIR) for dose accumulation in radiotherapy. LU measures the uncertainty of the placement of each voxel in an image set after a DIR. The underlying concept of LU is that the distance between a focused voxel and a surrounding voxel on an image feature, such as an edge, is locally unchanged when the organ that includes these voxels is deformed. A candidate position for the focused voxel after DIR can be calculated from three surrounding voxels and their distances. The candidate positions calculated from different groups of three surrounding voxels will vary, and this variation indicates the uncertainty in the focused voxel's position. Thus, the standard deviation of the candidate positions is treated as the LU value. The LU can be calculated in uniform signal regions, and assessment of DIR results in such regions is important for dose accumulation. The LU calculation was applied to a pair of computed tomography (CT) head and neck examinations after DIR. These CT examinations were acquired for initial radiotherapy planning and for re-planning of a treatment course in which the tumor shrank during treatment. We generated an LU image showing high LU values in the shrinking tumor region and low LU values in non-deforming bone. We have thus proposed the LU as a new metric for DIR.
An anatomically oriented breast model for MRI
Dominik Kutra, Martin Bergtholdt, Jörg Sabczynski, et al.
Breast cancer is the most common cancer in women in the western world. In the breast cancer care cycle, MRI is employed, e.g., in lesion characterization and therapy assessment. Reading a single three-dimensional image, or comparing a multitude of such images in a time series, is a time-consuming task. Radiological reporting is done manually by translating the spatial position of a finding in an image to a generic representation in the form of a breast diagram, outlining quadrants or clock positions. Currently, registration algorithms are employed to aid the reading and interpretation of longitudinal studies by providing positional correspondence. To aid the reporting of findings, knowledge about breast anatomy has to be introduced to translate from patient-specific positions to a generic representation. In our approach we fit a geometric primitive, the semi-super-ellipsoid, to patient data. Anatomical knowledge is incorporated by fixing the tip of the super-ellipsoid to the mammilla position and constraining its center point to a reference plane defined by landmarks on the sternum. A coordinate system is then constructed by linearly scaling the fitted super-ellipsoid, assigning a unique set of parameters to each point in the image volume. By fitting such a coordinate system to a different image of the same patient, positional correspondence can be generated. We validated our method on eight pairs of baseline and follow-up scans (16 breasts) that were acquired for the assessment of neo-adjuvant chemotherapy. On average, the predicted and actual locations of manually set landmarks are within 5.6 mm of each other. Our proposed method allows for automatic reporting simply by uniformly dividing the super-ellipsoid around its main axis.
Surface-based registration of liver in ultrasound and CT
Ehsan Dehghan, Kongkuo Lu, Pingkun Yan, et al.
Ultrasound imaging is an attractive modality for real-time image-guided interventions. Fusion of US imaging with a diagnostic imaging modality such as CT shows great potential in minimally invasive applications such as liver biopsy and ablation. However, the significantly different representation of the liver in US and CT makes this image fusion a challenging task, in particular when some of the CT scans are obtained without contrast agents. The liver surface, including the diaphragm immediately adjacent to it, typically appears as a hyper-echoic region in the ultrasound image if the proper imaging window and depth setting are used. The liver surface is also well visualized in both contrast and non-contrast CT scans, making the diaphragm or liver surface one of the few attractive common features for registration of US and non-contrast CT. We propose a fusion method based on point-to-volume registration of the liver surface segmented in CT to a processed, electromagnetically (EM) tracked US volume. In this approach, the US image is first pre-processed to enhance the liver surface features. In addition, non-imaging information from the EM-tracking system is used to initialize and constrain the registration process. We tested our algorithm against a manually corrected vessel-based registration method using 8 pairs of tracked US and contrast CT volumes. The registration method achieved an average deviation of 12.8 mm from the ground truth, measured as the root mean square Euclidean distance for control points distributed throughout the US volume. Our results show that if the US image acquisition is optimized for imaging of the diaphragm, high registration success rates are achievable.
Smooth extrapolation of unknown anatomy via statistical shape models
R. B. Grupp, H. Chiang, Y. Otake, et al.
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge in the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of the known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement over the baseline approach of 1.46 mm for the skull and 1.38 mm for the mandible.
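The feathering technique compared above can be sketched as a per-vertex linear blend between the known patient surface and the shape-model estimate; the exact weighting scheme used in the paper is an assumption here. The sketch also makes the reported drawback visible: any weight below 1 in the transition band perturbs known vertex values.

```python
def feather(known, estimate, weights):
    """Blend per-vertex 3D positions across a transition band.
    weight 1.0 keeps the known patient surface exactly;
    weight 0.0 keeps the statistical-shape-model estimate."""
    blended = []
    for (kx, ky, kz), (ex, ey, ez), w in zip(known, estimate, weights):
        blended.append((w * kx + (1 - w) * ex,
                        w * ky + (1 - w) * ey,
                        w * kz + (1 - w) * ez))
    return blended
```

The Thin Plate Spline alternative instead warps only the estimated region so that it meets the known surface, leaving known vertices untouched.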
Collision detection and modeling of rigid and deformable objects in laparoscopic simulator
Mary-Clare Dy, Kazuyoshi Tagawa, Hiromi T. Tanaka, et al.
Laparoscopic simulators are viable alternatives for surgical training and rehearsal. Haptic devices can be incorporated with virtual reality simulators to provide additional cues to the users. However, to provide realistic feedback, the haptic device must be updated at 1 kHz. On the other hand, realistic visual cues, that is, the collision detection and deformation between interacting objects, must be rendered at a rate of at least 30 fps. Our current laparoscopic simulator detects collisions between a point on the tool tip and the organ surfaces, with haptic devices attached to actual tool tips for realistic tool manipulation. The triangular-mesh organ model is rendered using a mass-spring deformation model or finite element method-based models. In this paper, we investigated multi-point collision detection on the rigid tool rods. Based on the preliminary results, we propose a method to improve the collision detection scheme and speed up the organ deformation response. We discuss our proposal for an efficient method to compute simultaneous multiple collisions between rigid (laparoscopic tools) and deformable (organ) objects, and to perform the subsequent collision response, with haptic feedback, in real time.
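A single update of a mass-spring deformation model of the kind mentioned above can be sketched as follows (a 1D particle chain for brevity; the spring constant, damping factor, and the 1 ms time step matching the 1 kHz haptic rate are illustrative assumptions, not the simulator's actual parameters):

```python
def mass_spring_step(pos, vel, springs, rest,
                     k=10.0, mass=1.0, damping=0.98, dt=0.001):
    """One explicit-Euler step of a 1D mass-spring chain.
    pos, vel: per-particle state; springs: (i, j) index pairs with
    pos[j] >= pos[i]; rest: rest lengths. Returns new (pos, vel)."""
    force = [0.0] * len(pos)
    for (i, j), L0 in zip(springs, rest):
        stretch = (pos[j] - pos[i]) - L0   # > 0: spring pulls the pair together
        f = k * stretch
        force[i] += f                      # equal and opposite spring forces
        force[j] -= f
    new_vel = [damping * (v + dt * F / mass) for v, F in zip(vel, force)]
    new_pos = [p + dt * v for p, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

At dt = 1 ms this step must complete, together with collision detection, within one haptic cycle, which is why the abstract treats the 1 kHz haptic loop and the 30 fps visual loop as separate budgets.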
Surgical instrument similarity metrics and tray analysis for multi-sensor instrument identification
Bernhard Glaser, Tobias Schellenberg, Stefan Franke, et al.
A robust identification of the instrument currently used by the surgeon is crucial for the automatic modeling and analysis of surgical procedures. Various approaches for intra-operative surgical instrument identification have been presented, mostly based on radio-frequency identification (RFID) or endoscopic video analysis. A novel approach is to identify the instruments on the instrument table of the scrub nurse with a combination of video and weight information. In a previous article, we successfully followed this approach and applied it to multiple instances of an ear, nose and throat (ENT) procedure and the surgical tray used therein. In this article, we present a metric for the suitability of the instruments of a surgical tray for identification by video and weight analysis, and apply it to twelve trays from four different surgical domains (abdominal surgery, neurosurgery, orthopedics, and urology). The trays were digitized at the central sterile services department of the hospital. The results illustrate that surgical trays differ in their suitability for the approach. In general, additional weight information can contribute significantly to the successful identification of surgical instruments. Additionally, for ten different surgical instruments, ten exemplars of each instrument were tested for weight differences. The samples indicate high weight variability among instruments with identical brand and model number. These results provide a new metric for approaches aiming at intra-operative surgical instrument detection and have consequences for algorithms exploiting video and weight information for identification purposes.
Which pivot calibration?
Estimating the location of a tracked tool's tip relative to its Dynamic Reference Frame (DRF), and localizing a specific point in a tracking system's coordinate frame, are fundamental tasks in image-guided procedures. The most common approach to estimating these values is to pivot a tool around a fixed point. The transformations from the tracking system's frame to the tool's DRF are the input. The output is the translation from the DRF to the tool's tip and the translation from the tracker's frame to the pivoting point. While the input and output are unique, there are multiple mathematical formulations for performing this estimation task. The question is: are these formulations equivalent in terms of precision and accuracy? In this work we empirically evaluate three common formulations, a geometry-based sphere-fitting formulation and two algebraic formulations. In addition, we evaluate robust variants of these formulations using the RANSAC framework. Our evaluation shows that the algebraic formulations yield estimates that are more precise and accurate than those of the sphere-fitting formulation. Using the Vicra optical tracking system from Northern Digital Inc., we observed that the algebraic approaches have a mean(std) precision of 0.25(0.11) mm when localizing the pivoting point relative to the tracked DRF, and yield a fiducial registration error with a mean(std) of 0.15(0.08) mm when registering a precisely constructed divot phantom to the localized points in the tracking system's frame. The sphere-fitting formulation yielded less precise and accurate results, with a mean(std) of 0.35(0.21) mm for precision and 0.25(0.14) mm for accuracy. The robust versions of these formulations yield similar results even when the data is contaminated with 30% outliers.
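An algebraic pivot-calibration formulation of the kind evaluated above can be sketched directly: each tracked pose (R_i, p_i) of the DRF satisfies R_i·t_tip + p_i = p_pivot, which stacks into the linear system [R_i | -I]·(t_tip, p_pivot) = -p_i in six unknowns, solved here in the least-squares sense via the normal equations. A minimal sketch, not the authors' implementation:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def pivot_calibration(rotations, translations):
    """rotations: 3x3 matrices R_i; translations: 3-vectors p_i of the DRF.
    Accumulates the 6x6 normal equations of the stacked system
    [R_i | -I] x = -p_i and returns (t_tip, p_pivot)."""
    AtA = [[0.0] * 6 for _ in range(6)]
    Atb = [0.0] * 6
    for R, p in zip(rotations, translations):
        for r in range(3):
            row = list(R[r]) + [-1.0 if c == r else 0.0 for c in range(3)]
            rhs = -p[r]
            for a in range(6):
                Atb[a] += row[a] * rhs
                for b in range(6):
                    AtA[a][b] += row[a] * row[b]
    x = solve(AtA, Atb)
    return x[:3], x[3:]
```

The poses must include rotation about at least two distinct axes; pivoting about a single axis leaves the tip and pivot under-determined along that axis.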
A model-free method for annotating on vascular structure in volume rendered images
Wei He, Yanfang Li, Weili Shi, et al.
Precise annotation of vessels is desired in computer-assisted systems to help surgeons identify each vessel branch. A previously reported method annotates vessels on volume rendered images by rendering their names on them using a two-pass rendering process. In that method, however, cylinder surface models of the vessels must be generated for writing the vessel names. In fact, vessels are not true cylinders, so their surfaces cannot be represented accurately by such models. This paper presents a model-free method for annotating vessels on volume rendered images by rendering their names on them using a two-pass rendering process: surface rendering and volume rendering. In the surface rendering pass, docking points for the vessel names are estimated using properties such as centerlines, running directions, and vessel regions, which are obtained in preprocessing. The vessel names are then pasted on the vessel surfaces at the docking points. In the volume rendering pass, the volume image is rendered using a fast volume rendering algorithm with the depth buffer of the image rendered in the surface rendering pass. Finally, the rendered images are blended into a single resulting image. To validate the proposed method, a visualization system for the automated annotation of abdominal arteries was implemented. The experimental results show that vessel names are drawn on the corresponding vessels in the volume rendered images correctly. The proposed method has great potential to be adopted for annotating other organs that cannot be modeled using regular geometrical surfaces.
Line fiducial material and thickness considerations for ultrasound calibration
Golafsoun Ameri, A. Jonathan McLeod, John S. H. Baxter, et al.
Ultrasound calibration is a necessary procedure in many image-guided interventions, relating the positions of tools and anatomical structures in the ultrasound image to a common coordinate system. It is a necessary component of augmented reality environments for image-guided interventions, as it allows for a 3D visualization in which other surgical tools outside the imaging plane can be found. The accuracy of ultrasound calibration fundamentally affects the total accuracy of such an interventional guidance system. Many ultrasound calibration procedures have been proposed, based on a variety of phantom materials and geometries. These differences lead to differences in the representation of the phantom in the ultrasound image, which subsequently affect the ability to accurately and automatically segment the phantom. For example, taut wires are commonly used as line fiducials in ultrasound calibration. However, at large depths or oblique angles, the fiducials appear blurred and smeared in ultrasound images, making it hard to localize their cross-sections with the ultrasound image plane. Intuitively, larger-diameter phantoms with lower echogenicity are more accurately segmented in ultrasound images than highly reflective thin phantoms. In this work, an evaluation of a variety of calibration phantoms with different geometrical and material properties was performed for the phantomless calibration procedure. The phantoms used in this study include braided wire, plastic straws, and polyvinyl alcohol cryogel tubes of different diameters. Conventional B-mode and synthetic aperture images of the phantoms at different positions were obtained. The phantoms were automatically segmented from the ultrasound images using an ellipse-fitting algorithm, the centroid of which is subsequently used as a fiducial for calibration. Calibration accuracy was evaluated for these procedures based on the leave-one-out target registration error.
It was shown that larger-diameter phantoms with lower echogenicity are more accurately segmented than highly reflective thin phantoms. This improvement in segmentation accuracy leads to a lower fiducial localization error, which ultimately results in a lower target registration error. This has a substantial effect on calibration procedures and on the feasibility of different calibration procedures in the context of image-guided interventions.
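The segmentation step above fits ellipses to the fiducial cross-sections and uses the centroid as the calibration fiducial. As a simpler illustrative stand-in (a circle rather than an ellipse — an assumption for brevity, not the paper's algorithm), an algebraic Kasa-style fit recovers center and radius from boundary points:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix (used for Cramer's rule below)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def kasa_circle_fit(points):
    """Least-squares algebraic circle fit: x^2 + y^2 = A*x + B*y + C,
    giving center (A/2, B/2) and radius sqrt(C + (A/2)^2 + (B/2)^2)."""
    n = float(len(points))
    Sx = sum(x for x, y in points); Sy = sum(y for x, y in points)
    Sxx = sum(x * x for x, y in points); Syy = sum(y * y for x, y in points)
    Sxy = sum(x * y for x, y in points)
    Sxz = sum(x * (x * x + y * y) for x, y in points)
    Syz = sum(y * (x * x + y * y) for x, y in points)
    Sz = sum(x * x + y * y for x, y in points)
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    rhs = [Sxz, Syz, Sz]
    D = det3(M)
    def replaced(k):  # M with column k replaced by the right-hand side
        return [[rhs[r] if c == k else M[r][c] for c in range(3)] for r in range(3)]
    A, B, C = (det3(replaced(k)) / D for k in range(3))
    cx, cy = A / 2.0, B / 2.0
    return (cx, cy), math.sqrt(C + cx * cx + cy * cy)
```

The fitted centroid, not the raw intensity peak, serves as the fiducial, which is why blur and smear at depth degrade thin, highly reflective phantoms more than thicker low-echogenicity ones.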
Live ultrasound volume reconstruction using scout scanning
Amelie Meyer, Andras Lasso, Tamas Ungi, et al.
Ultrasound-guided interventions often necessitate scanning of deep-seated anatomical structures that may be hard to visualize. Visualization can be improved using reconstructed 3D ultrasound volumes. High-resolution 3D reconstruction of a large area during clinical interventions is challenging if the region of interest is unknown. We propose a two-stage scanning method allowing the user to perform quick low-resolution scouting followed by high-resolution live volume reconstruction. Scout scanning is accomplished by stacking 2D tracked ultrasound images into a low-resolution volume. Then, within a region of interest defined in the scout scan, live volume reconstruction can be performed by continuous scanning until sufficient image density is achieved. We implemented the workflow as a module of the open-source 3D Slicer application, within the SlicerIGT extension and building on the PLUS toolkit. Scout scanning is performed in a few seconds using 3 mm spacing to allow region of interest definition. Live reconstruction parameters are set to provide good image quality (0.5 mm spacing, hole filling enabled) and feedback is given during live scanning by regularly updated display of the reconstructed volume. Use of scout scanning may allow the physician to identify anatomical structures. Subsequent live volume reconstruction in a region of interest may assist in procedures such as targeting needle interventions or estimating brain shift during surgery.
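The two-stage workflow above — inserting tracked 2D pixels into a volume grid at a chosen spacing, then filling holes — can be sketched as follows. The sparse-dictionary layout and the single-pass hole filling are simplifying assumptions for illustration, far cruder than the PLUS toolkit's reconstructor:

```python
def insert_frames(shape, spacing, samples):
    """samples: iterable of ((x, y, z) world point, intensity), e.g. tracked
    pixels of 2D frames. Returns a sparse volume: dict voxel index -> value."""
    vol = {}
    for (x, y, z), value in samples:
        idx = (int(round(x / spacing)),
               int(round(y / spacing)),
               int(round(z / spacing)))
        if all(0 <= i < s for i, s in zip(idx, shape)):
            vol[idx] = value  # last-write-wins compounding; averaging is another option
    return vol

def fill_holes(vol, shape):
    """One pass of nearest-neighbor hole filling: an empty voxel with at
    least one filled 6-neighbor takes the average of those neighbors."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    filled = dict(vol)
    for i in range(shape[0]):
        for j in range(shape[1]):
            for k in range(shape[2]):
                if (i, j, k) in vol:
                    continue
                nb = [vol[(i + a, j + b, k + c)] for a, b, c in offsets
                      if (i + a, j + b, k + c) in vol]
                if nb:
                    filled[(i, j, k)] = sum(nb) / len(nb)
    return filled
```

Choosing a coarse spacing (e.g. 3 mm) makes the scout pass fast and sparse; the fine pass (e.g. 0.5 mm) restricts the grid to the region of interest so that continuous scanning can keep filling it live.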
Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes
Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, e.g. for biopsy guidance, regional anesthesia or brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing the Gabor transformation. Both algorithms use supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle to the selected voxels. The major differences between the two approaches lie in the feature vectors extracted for classification and the criterion used for fitting. We evaluate the performance of the two techniques against manually annotated ground truth in several ex-vivo situations of different complexity, containing three different needle types at various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, guiding the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better at distinguishing the needle voxels in all datasets. Moreover, the complete processing chain of the Gabor-based method outperforms line filtering in accuracy and stability of the detection results.
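The "fit a model of the needle on the selected voxels" step common to both methods can be sketched as a RANSAC line fit over candidate voxel coordinates (a generic illustration, not either paper's exact fitting criterion):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, seed=0):
    """Robust 3-D line fit to candidate needle voxels: sample point pairs,
    count voxels within `tol` of the implied line, refine on best inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-9:
            continue
        d = d / np.linalg.norm(d)
        r = points - a
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)  # point-to-line
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    pts = points[best]
    centroid = pts.mean(0)
    _, _, Vt = np.linalg.svd(pts - centroid)    # principal axis of inliers
    return centroid, Vt[0], best

# Toy data: 40 voxels along a needle axis plus 10 scattered false positives.
rng = np.random.default_rng(1)
axis = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
needle = np.linspace(0.0, 20.0, 40)[:, None] * axis
clutter = rng.uniform(0.0, 20.0, size=(10, 3))
centroid, direction, inliers = ransac_line(np.vstack([needle, clutter]))
```

The classifier's job, in this picture, is to make the candidate set clean enough that the fit converges on the true shaft despite false positives.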
B-Mode ultrasound pose recovery via surgical fiducial segmentation and tracking
Alessandro Asoni, Michael Ketcha, Nathanael Kuo, et al.
Ultrasound Doppler imaging may be used to detect blood clots after surgery, a common problem. However, this requires consistent probe positioning over multiple time instances and therefore significant sonographic expertise. Analysis of ultrasound B-mode images of a fiducial implanted at the surgical site offers a landmark to guide a user to the same location repeatedly. We demonstrate that such an implanted fiducial may be successfully detected and tracked to calculate pose and guide a clinician consistently to the site of surgery, potentially reducing the ultrasound experience required for point of care monitoring.
Validation of percutaneous puncture trajectory during renal access using 4D ultrasound reconstruction
Pedro L. Rodrigues, Nuno F. Rodrigues, Jaime C. Fonseca, et al.
An accurate percutaneous puncture is essential for disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target can be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed on 4D ultrasound imaging for safety assessment of the percutaneous puncture trajectory (PPT). A PPT is first generated from the skin puncture site towards an anatomical target, using information retrieved by electromagnetic motion-tracking sensors coupled to the surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT using GPU processing. Volume hole filling was performed at different processing time intervals by a trilinear interpolation method. At spaced time intervals, the anatomical structures in the volume were segmented to ascertain whether any vital structure lies along the PPT and might compromise surgical success. Different render transfer functions were used to enhance the visualization of the reconstructed structures. Results: real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views. Rendering the whole reconstructed volume achieved 8-15 frames/s, and 3 frames/s were reached once segmentation and detection of structures intersecting the PPT were added. The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT so that the puncture can be performed safely and accurately in percutaneous nephrolithotomy.
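Hole filling in a swept-volume reconstruction can be sketched as repeated neighbour averaging — a simplified CPU stand-in for the paper's GPU trilinear interpolation (note that np.roll wraps at the borders, which is acceptable when the volume has an empty margin):

```python
import numpy as np

def fill_holes(vol, filled, max_passes=3):
    """Fill empty voxels from their filled 6-neighbours (mean of available
    values); repeated passes grow the fill into larger holes."""
    vol, filled = vol.copy(), filled.copy()
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(max_passes):
        acc = np.zeros_like(vol)
        cnt = np.zeros(vol.shape, dtype=np.int32)
        for off in offsets:
            shifted = np.roll(filled, off, axis=(0, 1, 2))
            vals = np.roll(vol, off, axis=(0, 1, 2))
            acc += np.where(shifted, vals, 0.0)
            cnt += shifted.astype(np.int32)
        hole = (~filled) & (cnt > 0)          # empty voxels with known neighbours
        vol[hole] = acc[hole] / cnt[hole]
        filled |= hole
    return vol, filled

# Toy volume: uniform intensity with one unfilled voxel in the middle.
vol = np.ones((5, 5, 5), dtype=np.float32)
mask = np.ones((5, 5, 5), dtype=bool)
vol[2, 2, 2], mask[2, 2, 2] = 0.0, False
out, out_mask = fill_holes(vol, mask)
```

The frame-rate trade-off in the abstract maps directly onto how often and over how large a region a pass like this is executed.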
3D-printed surface mould applicator for high-dose-rate brachytherapy
Mark Schumacher, Andras Lasso, Ian Cumming, et al.
In contemporary high-dose-rate brachytherapy treatment of superficial tumors, catheters are placed in a wax mould. The creation of current wax moulds is a difficult and time-consuming process. The irradiation plan can only be computed post-construction and requires a second CT scan; if no satisfactory dose plan can be created, the mould is discarded and the process is repeated. The objective of this work was to develop an automated method to replace suboptimal wax moulding: we developed a method to design and manufacture moulds that yield satisfactory dosimetry by construction. A 3D-printed mould with channels for the catheters is designed from the patient's CT and mounted on a patient-specific thermoplastic mesh mask. The mould planner was implemented as an open-source module in the 3D Slicer platform. A series of test moulds was created to accommodate standard brachytherapy catheters of 1.70 mm diameter. A calibration object was used to determine that tunnels with a diameter of 2.25 mm, a minimum 12 mm radius of curvature, and a 1.0 mm open channel gave the best fit for this printer/catheter combination. Moulds were then created from the CT scans of thermoplastic mesh masks of actual patients and were visually verified to fit on the corresponding meshes. Next, the resulting dosimetry will have to be compared with treatment plans and dosimetry achieved with conventional wax moulds in order to validate the 3D-printed moulds.
Method for evaluation of predictive models of microwave ablation via post-procedural clinical imaging
Jarrod A. Collins, Daniel Brown M.D., T. Peter Kingham M.D., et al.
Development of a clinically accurate predictive model of microwave ablation (MWA) procedures would represent a significant advancement and would facilitate patient-specific treatment planning to achieve optimal probe placement and ablation outcomes. While studies have been performed to evaluate predictive models of MWA, quantifying the performance of such models against clinical data has been limited to comparing geometric measurements of the predicted and actual ablation zones; placement accuracy, as determined by the degree of spatial overlap between the zones, has not been assessed. To overcome this limitation, a method of evaluation is proposed in which the actual location of the MWA antenna is tracked and recorded during the procedure with a surgical navigation system. Predictive models of the MWA are then computed using the known position of the antenna within the preoperative image space. Two predictive MWA models were used for the preliminary evaluation of the proposed method: (1) a geometric model based on the labeling associated with the ablation antenna and (2) a 3-D finite-element computational model of MWA implemented in COMSOL. From follow-up tomographic images acquired approximately 30 days after the procedure, a 3-D surface model of the necrotic zone was generated to represent the true ablation zone. The overlap between the predicted ablation zones and the true ablation zone was quantified after a rigid registration was computed between the pre- and post-procedural tomograms. While both models show significant overlap with the true ablation zone, these preliminary results suggest a slightly higher degree of overlap for the geometric model.
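Once the predicted and actual ablation zones are registered into the same space, their spatial overlap is naturally quantified with a Dice coefficient over the voxelized zones. A sketch with hypothetical ellipsoidal zones (illustrative shapes and sizes only):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical voxelized zones: predicted ellipsoid vs. "true" necrotic zone,
# slightly offset and differently sized (voxel units).
z, y, x = np.mgrid[-20:21, -20:21, -20:21]
true_zone = (x**2 / 15**2 + y**2 / 10**2 + z**2 / 10**2) <= 1.0
pred_zone = ((x - 2)**2 / 14**2 + y**2 / 10**2 + z**2 / 9**2) <= 1.0
d = dice(true_zone, pred_zone)
print(f"Dice overlap: {d:.3f}")
```

A pure geometric comparison (diameters, volumes) would call these two zones near-identical; the overlap measure additionally penalizes the 2-voxel placement offset.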
Needle position estimation from sub-sampled k-space data for MRI-guided interventions
Sebastian Schmitt, Morwan Choli, Heinrich M. Overhoff
MRI-guided interventions have gained much interest, as they profit from intervention-synchronous data acquisition and image visualization. Due to long data acquisition durations, ergonomic limitations may occur. For a trueFISP MRI data acquisition sequence, a time-saving sub-sampling strategy has been developed that is adapted to amagnetic needle detection. A symmetrical and contrast-rich susceptibility needle artifact, i.e. an approximately rectangular gray-scale profile, is assumed. The 1-D Fourier transform of a rectangular function is a sinc function, and its periodicity is exploited by sampling only along a few orthogonal trajectories in k-space. Because the needle moves during the intervention, its tip region resembles a rectangle in a time-difference image reconstructed from such sub-sampled k-spaces acquired at different time stamps. In phantom experiments, a needle was pushed forward along a reference trajectory, which was determined from the needle holder's geometric parameters. In addition, the trajectory of the needle tip was estimated by the method described above. Only ca. 4-5% of the entire k-space data was used for needle tip estimation. The misalignment of needle orientation and needle tip position, i.e. the difference between reference and estimated values, is small, and even in the worst case less than 2 mm. The results show that the method is applicable under nearly real conditions. The next steps address validation of the method on clinical data.
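The rect-to-sinc relationship that the sub-sampling exploits can be checked numerically in 1-D: the DFT of a width-w rectangle is a periodic sinc whose zeros are spaced N/w samples apart, and its phase ramp encodes the rectangle's position, so a handful of k-space samples constrain both width and location (illustrative only; the paper works with 2-D k-space trajectories):

```python
import numpy as np

N, w, c = 256, 16, 100          # grid size, rect width, rect start-of-centre (all hypothetical)
profile = np.zeros(N)
profile[c - w // 2: c + w // 2] = 1.0   # rect over samples 92..107

F = np.fft.fft(profile)
# |F[k]| follows the periodic (Dirichlet) sinc |sin(pi*k*w/N) / sin(pi*k/N)|,
# with zeros every N/w samples -> width from the first spectral zero.
first_zero = np.argmax(np.isclose(np.abs(F[1:]), 0.0, atol=1e-9)) + 1
est_width = N / first_zero
# The rect's position shows up as a linear phase ramp: angle(F[1]) gives the
# centre of mass of the sampled rect (99.5 here, as it spans samples 92..107).
c_eff = (-np.angle(F[1]) * N / (2 * np.pi)) % N
print(f"estimated width {est_width:.1f}, centre of mass {c_eff:.1f}")
```

This is why a few well-chosen k-space lines suffice for the tip estimate: the sinc's zero spacing and phase slope are recoverable from a small fraction of the full acquisition.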
Intraoperative visualization and assessment of electromagnetic tracking error
Vinyas Harish, Tamas Ungi, Andras Lasso, et al.
Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. Our goal was to achieve this in an open-source software implementation in a plug-and-play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot-calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool-tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into a thin-plate spline transform visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess the reproducibility of the method, both with and without ferromagnetic objects placed in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. The results demonstrate the potential for visualizing electromagnetic tracking error in real time in intraoperative environments during feasibility clinical trials of image-guided interventions.
Combining marker-less patient setup and respiratory motion monitoring using low cost 3D camera technology
F. Tahavori, E. Adams, M. Dabbs, et al.
Patient set-up misalignment and motion can be a significant source of error in external beam radiotherapy, leading to unwanted dose to healthy tissues and sub-optimal dose to the target tissue. Such inadvertent displacement or motion of the target volume may be caused by treatment set-up error, respiratory motion or involuntary movement, potentially decreasing the therapeutic benefit. The conventional approach to managing abdominal-thoracic patient set-up is via skin markers (tattoos) and laser-based alignment. Alignment of the internal target volume with its position in the treatment plan can be achieved using Deep Inspiration Breath Hold (DIBH) in conjunction with marker-based respiratory motion monitoring.

We propose a marker-less single-system solution for patient set-up and respiratory motion management based on low-cost 3D depth camera technology (such as the Microsoft Kinect). In this new work we assess this approach in a study group of six volunteer subjects. Separate simulated treatment "fractions", or set-ups, are compared for each subject, undertaken using conventional laser-based alignment and using the intrinsic depth images produced by the Kinect. The Microsoft Kinect is also compared with the well-known RPM system for respiratory motion management in terms of monitoring free breathing and DIBH. Preliminary results suggest that the Kinect can produce mm-level surface alignment and respiratory motion management for DIBH comparable to the popular RPM system. Such an approach may also yield significant benefits in terms of patient throughput, as alignment and respiratory motion monitoring can be automated in a single marker-less system.
Quantification of intraventricular blood clot in MR-guided focused ultrasound surgery
Maggie Hess, Thomas Looi, Andras Lasso, et al.
Intraventricular hemorrhage (IVH) affects nearly 15% of preterm infants and can lead to ventricular dilation and cognitive impairment. MR-guided focused ultrasound surgery (MRgFUS) is being investigated for ablating IVH clots. This procedure requires accurate, fast and consistent quantification of ventricle and clot volumes. We developed a semi-autonomous segmentation (SAS) algorithm for measuring changes in the ventricle and clot volumes. Images are normalized, and ventricle and clot masks are registered to them. Voxels of the registered masks and voxels obtained by thresholding the normalized images serve as candidate seed points; the user selects the areas of interest from these candidates, and the selections become the final seeds for competitive region growing, which provides the final segmentation. SAS was evaluated on an IVH porcine model and compared to ground-truth manual segmentation (MS) for accuracy, efficiency, and consistency. Accuracy was determined by comparing clot and ventricle volumes produced by SAS and MS, and by comparing contours via the 95% Hausdorff distance between the two labels. In a two one-sided test (TOST), SAS and MS were found to be statistically equivalent (p < 0.01). SAS was on average 15 times faster than MS (p < 0.01). Consistency was determined by repeated segmentation of the same image with both methods, with SAS significantly more consistent than MS (p < 0.05). SAS is a viable method to quantify the IVH clot and the lateral brain ventricles, and it is being used in a large-scale porcine study of MRgFUS treatment of IVH clot lysis.
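The competitive region-growing step can be sketched as parallel breadth-first expansion of the labelled seeds, with an intensity gate per region. This is a simplified illustration with hypothetical intensities, not the paper's implementation:

```python
import numpy as np
from collections import deque

def competitive_region_grow(img, seeds, tol):
    """Labelled seeds expand in parallel (BFS over the 6-neighbourhood);
    an unlabelled voxel joins the first front that reaches it, provided its
    intensity is within `tol` of that region's seed mean."""
    labels = np.zeros(img.shape, dtype=np.int32)
    q = deque()
    means = {}
    for lab, pts in seeds.items():
        means[lab] = np.mean([img[p] for p in pts])
        for p in pts:
            labels[p] = lab
            q.append((p, lab))
    offs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        (x, y, z), lab = q.popleft()
        for dx, dy, dz in offs:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < img.shape[i] for i in range(3)) \
                    and labels[n] == 0 and abs(img[n] - means[lab]) <= tol:
                labels[n] = lab
                q.append((n, lab))
    return labels

# Toy image: dark ventricle half (0.2) next to a bright clot half (0.8).
img = np.full((8, 8, 8), 0.2)
img[4:] = 0.8
labels = competitive_region_grow(img, {1: [(1, 4, 4)], 2: [(6, 4, 4)]}, tol=0.3)
```

The "competitive" aspect is that the fronts race for ambiguous voxels, so each voxel ends up with the label of the region that both reaches it first and matches its intensity.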
Targeting of deep-brain structures in nonhuman primates using MR and CT Images
Antong Chen, Catherine Hines, Belma Dogdas, et al.
In vivo gene delivery in the central nervous system of nonhuman primates (NHP) is an important approach for gene therapy and for developing animal models of human disease. To achieve more accurate delivery of genetic probes, precise stereotactic targeting of brain structures is required. However, even with assistance from multi-modality 3D imaging techniques (e.g. MR and CT), precise targeting is often challenging due to difficulties in identifying deep brain structures such as the striatum, which consists of multiple substructures, and the nucleus basalis of Meynert (NBM), which often lack clear boundaries relative to supporting anatomical landmarks. Here we demonstrate a 3D-image-based intracranial stereotactic approach for reproducible targeting of the bilateral NBM and striatum of rhesus macaques, and discuss the feasibility of an atlas-based automatic approach. Delineated originally on a high-resolution 3D histology-MR atlas set, the NBM and the striatum could be located on the MR image of a rhesus subject through affine and nonrigid registrations. The atlas-based targeting of the NBM was compared with targeting conducted manually by an experienced neuroscientist. Based on the targeting, the trajectories and entry points for delivering the genetic probes to the targets could be established on the CT images of the subject after rigid registration. The accuracy of the targeting was assessed quantitatively by comparing NBM locations obtained automatically and manually, and demonstrated qualitatively via post mortem analysis of slices labelled via Evans Blue infusion and immunohistochemistry.
Analysis of left atrial respiratory and cardiac motion for cardiac ablation therapy
M. E. Rettmann, D. R. Holmes III, S. B. Johnson, et al.
Cardiac ablation therapy is often guided by models built from preoperative computed tomography (CT) or magnetic resonance imaging (MRI) scans. One of the challenges in guiding a procedure from a preoperative model is properly synchronizing the model with cardiac and respiratory motion through computational motion models. In this paper, we describe a methodology for evaluating cardiac and respiratory motion in the left atrium and pulmonary veins of a beating canine heart. Cardiac catheters were used to place metal clips within and near the pulmonary veins and left atrial appendage under fluoroscopic and ultrasound guidance, and a contrast-enhanced, 64-slice multidetector CT scan was collected with the clips in place. Each clip was segmented from the CT scan at each of five phases of the cardiac cycle at both end-inspiration and end-expiration. The centroid of each segmented clip was computed and used to evaluate both cardiac and respiratory motion of the left atrium. A total of three canine studies were completed, with 4 clips analyzed in the first study, 5 clips in the second, and 2 clips in the third. Mean respiratory displacement was 0.2±1.8 mm in the medial/lateral direction, 4.7±4.4 mm in the anterior/posterior direction (moving anterior on inspiration), and 9.0±5.0 mm in the superior/inferior direction (moving inferior on inspiration). At end-inspiration, the mean left atrial cardiac motion at the clip locations was 1.5±1.3 mm medial/lateral, 2.1±2.0 mm anterior/posterior, and 1.3±1.2 mm superior/inferior. At end-expiration, it was 2.0±1.5 mm medial/lateral, 3.0±1.8 mm anterior/posterior, and 1.5±1.5 mm superior/inferior.
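The displacement analysis reduces to subtracting clip-centroid positions between respiratory or cardiac phases. A sketch with hypothetical masks and voxel spacing (not the study's data):

```python
import numpy as np

def clip_centroid(mask, spacing, origin=(0.0, 0.0, 0.0)):
    """Centroid of a segmented clip mask in physical (mm) coordinates."""
    idx = np.argwhere(mask)                    # voxel indices of the clip
    return idx.mean(axis=0) * np.asarray(spacing) + np.asarray(origin)

# Hypothetical clip segmentations at end-expiration and end-inspiration.
expir = np.zeros((50, 50, 50), dtype=bool)
inspir = np.zeros_like(expir)
expir[20:23, 20:23, 20:23] = True
inspir[20:23, 26:29, 32:35] = True            # clip displaced on inspiration

spacing = (0.6, 0.6, 0.6)                     # mm/voxel (hypothetical CT)
disp = clip_centroid(inspir, spacing) - clip_centroid(expir, spacing)
print("respiratory displacement (mm):", disp)
```

Repeating this per clip and per cardiac phase yields exactly the kind of per-direction mean ± standard deviation summary reported in the abstract.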
Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts
Xulei Qin, Silun Wang, Ming Shen, et al.
Two-dimensional (2D) ultrasound, or echocardiography, is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it supplies only geometric and structural information about the myocardium. To provide more detailed microstructural information, this paper proposes a registration method that maps cardiac fiber orientations from a three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It uses a 2D/3D intensity-based registration procedure, including rigid, log-demons, and affine transformations, to search for the most similar slice in the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. The method was validated on six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) exceeded 90% after the geometric registrations, and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standard were less than 15 degrees. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and has the potential to supply additional information for the diagnosis of cardiac diseases.
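The "fiber relocations and reorientations" step can be illustrated with a finite-strain-style reorientation under the linear part of the mapping, together with the inclination-angle-error metric used for evaluation (a generic sketch with hypothetical vectors, not the paper's pipeline):

```python
import numpy as np

def reorient_fibers(dirs, A):
    """Reorient unit fiber vectors under the linear part A of a mapping
    (finite-strain style: transform the vector, then renormalize)."""
    out = dirs @ A.T
    return out / np.linalg.norm(out, axis=1, keepdims=True)

def inclination_angle_error(u, v):
    """Angle (degrees) between corresponding unit vectors, sign-insensitive
    because fiber orientations are axial (v and -v are the same fiber)."""
    d = np.clip(np.abs(np.sum(u * v, axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(d))

# Two hypothetical in-plane fibers, reoriented by a 30-degree rotation.
fibers = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
th = np.deg2rad(30.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
mapped = reorient_fibers(fibers, Rz)
iae = inclination_angle_error(mapped, fibers)
print("IAE vs. unmapped fibers (deg):", iae)
```

Renormalizing after the transform matters when the mapping contains shear or scaling (e.g. the affine stage), since only the direction of the fiber is meaningful.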
Simulated evaluation of an intraoperative surface modeling method for catheter ablation by a real phantom simulation experiment
Deyu Sun, Maryam E. Rettmann, Douglas Packer, et al.
In this work, we propose a phantom experiment to quantitatively evaluate an intraoperative left atrial model-update method. In prior work, we proposed an update procedure that refines a preoperative surface model with information from real-time tracked 2D ultrasound; those studies, however, did not evaluate the reconstruction using an anthropomorphic phantom. Here, a silicone heart phantom (based on a high-resolution human atrial surface model reconstructed from CT images) was made to serve as the simulated atria. A surface model of the phantom's left atrium was deformed by a morphological operation, simulating the shape difference caused by organ deformation between preoperative scanning and intraoperative guidance. During the simulated procedure, a tracked ultrasound catheter was inserted into the right atrial phantom, scanning the left atrial phantom in a manner mimicking the cardiac ablation procedure. By merging the preoperative model and the intraoperative ultrasound images, an intraoperative left atrial model was reconstructed. According to the results, the reconstruction error of the modeling method is smaller than the initial geometric difference caused by organ deformation, and as the area of the left atrial phantom scanned by ultrasound increases, the reconstruction error of the intraoperative surface model decreases. The study validated the efficacy of the modeling method.