Proceedings Volume 9036

Medical Imaging 2014: Image-Guided Procedures, Robotic Interventions, and Modeling

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 23 April 2014
Contents: 14 Sessions, 98 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2014
Volume Number: 9036

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9036
  • Abdominal Procedures
  • Laparoscopy/Endoscopy/Bronchoscopy/Colonoscopy
  • Novel Intraoperative Imaging and Visualization
  • Respiratory and Cardiac Motion Compensation
  • Segmentation
  • Registration
  • Keynote and Bench to Bedside
  • Robotics and Tracking
  • Simulation and Modeling
  • Pelvic Procedures
  • Ultrasound Image Guidance: Joint Session with Conferences 9036 and 9040
  • Cardiac Procedures
  • Poster Session
Front Matter: Volume 9036
Front Matter: Volume 9036
This PDF file contains the front matter associated with SPIE Proceedings Volume 9036, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Abdominal Procedures
Innovative approach for in-vivo ablation validation on multimodal images
O. Shahin, G. Karagkounis, D. Carnegie, et al.
Radiofrequency ablation (RFA) is an important therapeutic procedure for small hepatic tumors. To make sure that the target tumor is effectively treated, RFA monitoring is essential. While several imaging modalities can observe the ablation procedure, it is not clear how ablated lesions on the images correspond to actual necroses. This uncertainty contributes to the high local recurrence rates (up to 55%) after radiofrequency ablative therapy. This study investigates a novel approach to correlate images of ablated lesions with actual necroses. We mapped both intraoperative images of the lesion and a slice through the actual necrosis in a common reference frame. An electromagnetic tracking system was used to accurately match lesion slices from different imaging modalities. To minimize the liver deformation effect, the tracking reference frame was defined inside the tissue by anchoring an electromagnetic sensor adjacent to the lesion. A validation test was performed using a phantom and proved that the end-to-end accuracy of the approach was within 2 mm. In an in-vivo experiment, intraoperative magnetic resonance imaging (MRI) and ultrasound (US) ablation images were correlated to gross and histopathology. The results indicate that the proposed method can accurately correlate in-vivo ablations on different modalities. Ultimately, this will improve the interpretation of the ablation monitoring and reduce the recurrence rates associated with RFA.
Model-based formalization of medical knowledge for context-aware assistance in laparoscopic surgery
Darko Katić, Anna-Laura Wekerle, Fabian Gärtner, et al.
The increase of technological complexity in surgery has created a need for novel man-machine interaction techniques. Specifically, context-aware systems which automatically adapt themselves to the current circumstances in the OR have great potential in this regard. To create such systems, models of surgical procedures are vital, as they allow analyzing the current situation and assessing the context. For this purpose, we have developed a Surgical Process Model based on Description Logics. It incorporates general medical background knowledge as well as intraoperatively observed situational knowledge. The representation consists of three parts: the Background Knowledge Model, the Preoperative Process Model and the Integrated Intraoperative Process Model. All models depend on each other and create a concise view on the surgery. As a proof of concept, we applied the system to a specific intervention, the laparoscopic distal pancreatectomy.
Software-assisted post-interventional assessment of radiofrequency ablation
Christian Rieder, Benjamin Geisler, Philipp Bruners, et al.
Radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. Due to its common technical procedure, low complication rate, and low cost, RFA has become an alternative to surgical resection in the liver. To evaluate the therapy success of RFA, thorough follow-up imaging is essential. Conventionally, shape, size, and position of tumor and coagulation are visually compared in a side-by-side manner using pre- and post-interventional images. To objectify the verification of the treatment success, a novel software assistant allowing for fast and accurate comparison of tumor and coagulation is proposed.

In this work, the clinical value of the proposed assessment software is evaluated. In a retrospective clinical study, 39 cases of hepatic tumor ablation are evaluated using the prototype software and conventional image comparison by four radiologists with different levels of experience. The cases are randomized and evaluated in two sessions to avoid any recall-bias. Self-confidence of correct diagnosis (local recurrence vs. no local recurrence) on a six-point scale is given for each case by the radiologists. Sensitivity, specificity, positive and negative predictive values as well as receiver operating curves are calculated for both methods. It is shown that the software-assisted method allows physicians to correctly identify local tumor recurrence with a higher percentage than the conventional method (sensitivity: 0.6 vs. 0.35), whereas the percentage of correctly identified successful ablations is slightly reduced (specificity: 0.83 vs. 0.89).
Anatomical parameterization for volumetric meshing of the liver
A coordinate system describing the interior of organs is a powerful tool for a systematic localization of injured tissue. If the same coordinate values are assigned to specific anatomical landmarks, the coordinate system allows integration of data across different medical image modalities. Harmonic mappings have been used to produce parametric coordinate systems over the surface of anatomical shapes, given their flexibility to set values at specific locations through boundary conditions. However, most existing implementations in medical imaging are restricted either to anatomical surfaces, or to a depth coordinate whose boundary conditions are given at sites of limited geometric diversity. In this paper we present a method for anatomical volumetric parameterization that extends current harmonic parameterizations to the interior anatomy using information provided by the volume medial surface. We have applied the methodology to define a common reference system for the liver shape and functional anatomy. This reference system sets a solid base for creating anatomical models of the patient's liver, and allows comparing livers from several patients in a common frame of reference.
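The harmonic parameterizations the abstract builds on amount to solving Laplace's equation with Dirichlet boundary conditions. As a minimal, hypothetical illustration (not the authors' volumetric method), the sketch below relaxes a 2D grid by Jacobi iteration to obtain a smooth interior coordinate from prescribed boundary values:

```python
import numpy as np

def harmonic_coordinate(boundary_vals, n_iter=2000):
    """Solve Laplace's equation on a 2D grid by Jacobi relaxation.
    Dirichlet values are fixed wherever boundary_vals is not NaN; the
    result is a smooth depth-like coordinate over the free interior."""
    u = np.where(np.isnan(boundary_vals), 0.5, boundary_vals)
    fixed = ~np.isnan(boundary_vals)
    for _ in range(n_iter):
        nb = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                     + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed, u, nb)  # relax only the free interior nodes
    return u

# Toy cross-section: coordinate 0 on the left wall, 1 on the right wall
n = 20
bv = np.full((n, n), np.nan)
bv[:, 0], bv[:, -1] = 0.0, 1.0
bv[0, :] = bv[-1, :] = np.linspace(0.0, 1.0, n)
coord = harmonic_coordinate(bv)
```

By the maximum principle the interior values stay between the boundary extremes, which is what makes such a coordinate usable for landmark-consistent localization.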
Preliminary clinical trial in percutaneous nephrolithotomy using a real-time navigation system for percutaneous kidney access
Pedro L. Rodrigues, António H. J. Moreira, Nuno F. Rodrigues, et al.
Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory, by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle on a preoperative volume, the patient, 3D image data and tracker system were first registered intraoperatively using seven points that were strategically chosen based on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and the minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed correctly until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
Laparoscopy/Endoscopy/Bronchoscopy/Colonoscopy
Construction of a multimodal CT-video chest model
Bronchoscopy enables a number of minimally invasive chest procedures for diseases such as lung cancer and asthma. For example, using the bronchoscope’s continuous video stream as a guide, a physician can navigate through the lung airways to examine general airway health, collect tissue samples, or administer a disease treatment. In addition, physicians can now use new image-guided intervention (IGI) systems, which draw upon both three-dimensional (3D) multi-detector computed tomography (MDCT) chest scans and bronchoscopic video, to assist with bronchoscope navigation. Unfortunately, little use is made of the acquired video stream, a potentially invaluable source of information. In addition, little effort has been made to link the bronchoscopic video stream to the detailed anatomical information given by a patient’s 3D MDCT chest scan. We propose a method for constructing a multimodal CT-video model of the chest. After automatically computing a patient’s 3D MDCT-based airway-tree model, the method next parses the available video data to generate a positional linkage between a sparse set of key video frames and airway path locations. Next, a fusion/mapping of the video’s color mucosal information and MDCT-based endoluminal surfaces is performed. This results in the final multimodal CT-video chest model. The data structure constituting the model provides a history of those airway locations visited during bronchoscopy. It also provides for quick visual access to relevant sections of the airway wall by condensing large portions of endoscopic video into representative frames containing important structural and textural information. When examined with a set of interactive visualization tools, the resulting fused data structure provides a rich multimodal data source. We demonstrate the potential of the multimodal model with both phantom and human data.
Visual tracking of da Vinci instruments for laparoscopic surgery
S. Speidel, E. Kuhn, S. Bodenstedt, et al.
Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness is evaluated with in vivo data.
Computer-assisted polyp matching between optical colonoscopy and CT colonography: a phantom study
Holger R. Roth, Thomas E. Hampshire, Emma Helbren, et al.
Potentially precancerous polyps detected with CT colonography (CTC) need to be removed subsequently, using an optical colonoscope (OC). Due to large colonic deformations induced by the colonoscope, even very experienced colonoscopists find it difficult to pinpoint the exact location of the colonoscope tip in relation to polyps reported on CTC. This can cause unduly prolonged OC examinations that are stressful for the patient, colonoscopist and supporting staff. We developed a method, based on monocular 3D reconstruction from OC images, that automatically matches polyps observed in OC with polyps reported on prior CTC. A matching cost is computed, using rigid point-based registration between surface point clouds extracted from both modalities. A 3D printed and painted phantom of a 25 cm long transverse colon segment was used to validate the method on two medium sized polyps. Results indicate that the matching cost is smaller at the correct corresponding polyp between OC and CTC: the cost at the incorrect polyp was 3.9 times higher than at the correct match. Furthermore, we evaluate the matching of the reconstructed polyp from OC with other colonic endoluminal surface structures such as haustral folds and show that there is a minimum at the correct polyp from CTC. Automated matching between polyps observed at OC and prior CTC would facilitate the biopsy or removal of true-positive pathology or exclusion of false-positive CTC findings, and would reduce colonoscopy false-negative (missed) polyps. Ultimately, such a method might reduce healthcare costs, patient inconvenience and discomfort.
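The rigid point-based registration behind the matching cost can be sketched with the standard least-squares (Kabsch) alignment. This is a generic illustration that assumes known row-wise point correspondences, not the authors' exact pipeline:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch): R, t mapping src onto dst,
    assuming row-wise point correspondences are known."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

def matching_cost(src, dst):
    """RMS residual after rigid alignment; lower means a better match."""
    R, t = rigid_register(src, dst)
    return float(np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1))))
```

A correct polyp pairing leaves a near-zero residual, while a wrong pairing leaves a residual no rigid transform can remove, which is the intuition behind the 3.9x cost ratio reported above.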
Adaptive fiducial-free registration using multiple point selection for real-time electromagnetically navigated endoscopy
This paper proposes an adaptive fiducial-free registration method that uses a multiple point selection strategy based on sensor orientation and endoscope radius information. In a flexible endoscopy navigation system that uses an electromagnetic tracker with positional sensors to estimate bronchoscope movements, the tracker and pre-operative image coordinate systems must be aligned using either marker-based or fiducial-free registration methods. Fiducial-free methods assume that bronchoscopes are operated along bronchial centerlines. Unfortunately, this assumption is easily violated during interventions. To relax the assumption, we utilize an adaptive strategy that generates multiple points in terms of sensor measurements and bronchoscope radius information. From these generated points, we adaptively choose the optimal point, which is the closest to its assigned bronchial centerline, to perform registration. The experimental results from phantom validation demonstrate that our proposed adaptive strategy significantly improved the fiducial-free registration accuracy from at least 5.4 mm to 2.2 mm compared to currently available methods.
Geometric estimation of intestinal contraction for motion tracking of video capsule endoscope
Liang Mi, Guanqun Bao, Kaveh Pahlavan
Wireless video capsule endoscopy (VCE) provides a noninvasive method to examine the entire gastrointestinal (GI) tract, especially the small intestine, where other endoscopic instruments can barely reach. VCE is able to continuously provide clear pictures at short fixed intervals, and as such researchers have attempted to use image processing methods to track the video capsule in order to locate abnormalities inside the GI tract. To correctly estimate the speed of the motion of the endoscope capsule, the radius of the intestinal tract must be known a priori. Physiological factors such as intestinal contraction, however, dynamically change the radius of the small intestine, which can introduce large errors in speed estimation. In this paper, we aim to estimate the radius of the contracted intestinal tract. First, a geometric model is presented for estimating the radius of the small intestine based on the black hole in endoscopic images. To validate the proposed model, a 3-dimensional virtual testbed that emulates intestinal contraction is then introduced in detail. After measuring the size of the black holes in the test images, we used our model to estimate the radius of the contracted intestinal tract. Comparison between analytical results and the emulation model parameters has verified that our proposed method can precisely estimate the radius of the contracted small intestine based on endoscopic images.
Motion magnification for endoscopic surgery
A. Jonathan McLeod, John S. H. Baxter, Sandrine de Ribaupierre, et al.
Endoscopic and laparoscopic surgeries are used for many minimally invasive procedures but limit the visual and haptic feedback available to the surgeon. This can make vessel sparing procedures particularly challenging to perform. Previous approaches have focused on hardware intensive intraoperative imaging or augmented reality systems that are difficult to integrate into the operating room. This paper presents a simple approach in which motion is visually enhanced in the endoscopic video to reveal pulsating arteries. This is accomplished by amplifying subtle, periodic changes in intensity coinciding with the patient’s pulse. This method is then applied to two procedures to illustrate its potential. The first, endoscopic third ventriculostomy, is a neurosurgical procedure where the floor of the third ventricle must be fenestrated without injury to the basilar artery. The second, nerve-sparing robotic prostatectomy, involves removing the prostate while limiting damage to the neurovascular bundles. In both procedures, motion magnification can enhance subtle pulsation in these structures to aid in identifying and avoiding them.
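The amplification of subtle, periodic intensity changes can be illustrated with a temporal band-pass filter around the pulse frequency, in the Eulerian-magnification style. The sketch below is a simplified, hypothetical version operating on a grayscale frame stack; the paper's actual processing may differ:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_pulse(frames, fps, lo=0.8, hi=2.0, alpha=20.0):
    """Amplify periodic intensity changes in a grayscale frame stack.
    frames: (T, H, W); lo/hi: pass band in Hz bracketing the pulse rate;
    alpha: gain applied to the band-passed per-pixel temporal signal."""
    b, a = butter(2, [lo, hi], btype="bandpass", fs=fps)
    pulse = filtfilt(b, a, frames.astype(float), axis=0)
    return frames + alpha * pulse

# Synthetic clip: a faint 1.2 Hz pulsation confined to one pixel region
fps = 30.0
t = np.arange(120) / fps
frames = np.full((120, 4, 4), 100.0)
frames[:, 1, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)
magnified = magnify_pulse(frames, fps)
```

Because the filter has zero DC gain, static anatomy is untouched while anything pulsating at the patient's heart rate is boosted, which is how a hidden artery becomes visible in the video.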
Novel Intraoperative Imaging and Visualization
Reconstruction and feature selection for desorption electrospray ionization mass spectroscopy imagery
Yi Gao, Liangjia Zhu, Isaiah Norton, et al.
Desorption electrospray ionization mass spectrometry (DESI-MS) provides a highly sensitive imaging technique for differentiating normal and cancerous tissue at the molecular level. This can be very useful, especially under intra-operative conditions where the surgeon has to make crucial decisions about the tumor boundary. In such situations, the time it takes for imaging and data analysis becomes a critical factor. Therefore, in this work we utilize compressive sensing to perform sparse sampling of the tissue, which halves the scanning time. Furthermore, sparse feature selection is performed, which reduces the dimension of the data from about 10^4 to fewer than 50 and thus significantly shortens the analysis time. This procedure also identifies biochemically important molecules for further pathological analysis. The methods are validated on brain and breast tumor data sets.
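Sparse feature selection of the kind described (reducing ~10^4 channels to a few informative ones) is commonly done with an L1 penalty. The following sketch uses synthetic stand-in spectra, not real DESI-MS data, to show how an L1-regularized classifier drives most channel weights to exactly zero:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic stand-in: 60 spectra x 500 m/z channels, where only three
# channels actually discriminate the two (e.g. normal/tumor) classes
X = rng.normal(size=(60, 500))
informative = [10, 50, 200]
w = np.zeros(500)
w[informative] = 3.0
y = (X @ w + rng.normal(size=60) > 0).astype(int)

# The L1 penalty zeroes out uninformative channel weights
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
```

The surviving nonzero channels are the candidate "biochemically important molecules"; in the synthetic setup above only a small subset of the 500 channels is retained.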
Automatic standard plane adjustment on mobile C-Arm CT images of the calcaneus using atlas-based feature registration
Michael Brehler, Joseph Görres, Ivo Wolf, et al.
Intraarticular fractures of the calcaneus are routinely treated by open reduction and internal fixation, followed by intraoperative imaging to validate the repositioning of bone fragments. C-Arm CT offers surgeons the possibility to directly verify the alignment of the fracture parts in 3D. Although the device provides more mobility, there is insufficient information about the device-to-patient orientation for standard plane reconstruction. Hence, physicians have to manually align the image planes in a position that intersects with the articular surfaces. This can be a time-consuming step, and imprecise adjustments lead to diagnostic errors. We address this issue by introducing novel semi-/automatic methods for adjustment of the standard planes on mobile C-Arm CT images. With the semi-automatic method, physicians can quickly adjust the planes by setting six points based on anatomical landmarks. The automatic method reconstructs the standard planes in two steps: first, SURF keypoints (2D and newly introduced pseudo-3D) are generated for each image slice; second, these features are registered to an atlas point set and the parameters of the image planes are transformed accordingly. The accuracy of our method was evaluated on 51 mobile C-Arm CT images from clinical routine, with standard planes manually adjusted by three physicians of different expertise. The average time of the experts (46 s) deviated from the intermediate user (55 s) by 9 seconds. By applying 2D SURF keypoints, 88% of the articular surfaces were intersected correctly by the transformed standard planes, with a calculation time of 10 seconds. The pseudo-3D features performed even better, with 91% and 8 seconds.
Mechanically assisted 3D ultrasound for pre-operative assessment and guiding percutaneous treatment of focal liver tumors
Hamid Sadeghi Neshat, Jeffery Bax, Kevin Barker, et al.
Image-guided percutaneous ablation is the standard treatment for focal liver tumors deemed inoperable and is commonly used to maintain eligibility for patients on transplant waitlists. Radiofrequency (RFA), microwave (MWA) and cryoablation technologies are all delivered via one or a number of needle-shaped probes inserted directly into the tumor. Planning is mostly based on contrast CT/MRI. While intra-procedural CT is commonly used to confirm the intended probe placement, 2D ultrasound (US) remains the main, and in some centers the only imaging modality used for needle guidance. Corresponding intraoperative 2D US with planning and other intra-procedural imaging modalities is essential for accurate needle placement. However, identification of matching features of interest among these images is often challenging given the limited field-of-view (FOV) and low quality of 2D US images. We have developed a passive tracking arm with a motorized scan-head and software tools to improve guiding capabilities of conventional US by large FOV 3D US scans that provides more anatomical landmarks that can facilitate registration of US with both planning and intra-procedural images. The tracker arm is used to scan the whole liver with a high geometrical accuracy that facilitates multi-modality landmark based image registration. Software tools are provided to assist with the segmentation of the ablation probes and tumors, find the 2D view that best shows the probe(s) from a 3D US image, and to identify the corresponding image from planning CT scans. In this paper, evaluation results from laboratory testing and a phase 1 clinical trial for planning and guiding RFA and MWA procedures using the developed system will be presented. Early clinical results show a comparable performance to intra-procedural CT that suggests 3D US as a cost-effective alternative with no side-effects in centers where CT is not available.
Optoacoustic sensing for target detection inside cylindrical catheters
Behnoosh Tavakoli, Xiaoyu Guo, Russell H. Taylor, et al.
Optoacoustic sensing is a hybrid technique that combines the high sensing depth of ultrasound with the contrast of optical absorption. In this study a miniature optoacoustic probe that can characterize target properties at the distal end of a catheter is investigated. The probe includes an optical fiber to illuminate the target with pulsed laser light and a hydrophone to detect the generated optoacoustic signal. The probe is designed for forward-sensing, and therefore the acoustic signal propagates along the tube before being detected. Due to the circular geometry, the waves inside the tube are highly complex. A three-dimensional numerical simulation is performed to model optoacoustic wave generation and propagation inside water-filled cylindrical tubes. The effect of the boundary condition, tube diameter and target size on the detected signal is systematically evaluated. A prototype of the probe was made and tested for detecting an absorbing target inside a 2 mm diameter tube submerged in water. Preliminary experimental results corresponding to the simulation were acquired. Although many different medical applications for this miniature probe may exist, our main focus is on detecting occlusions inside ventricular shunts. These catheters are used to divert excess cerebrospinal fluid to the absorption site and regulate the intracranial pressure of hydrocephalus patients. Unfortunately, the malfunction rate of these catheters due to blockage is very high. This sensing tool could locate the occluding tissue non-invasively and can potentially characterize the occlusion composition by scanning at different wavelengths of light.
Polarization-sensitive multispectral tissue characterization for optimizing intestinal anastomosis
Jaepyeong Cha, Brian Triana, Azad Shademan, et al.
A novel imaging system that recommends potential suture placement for anastomosis to surgeons is developed. This is achieved by a multispectral imaging system coupled with polarizers and image analysis software. We performed preliminary imaging of ex vivo porcine intestine to evaluate the system. Vulnerable tissue regions including blood vessels were successfully identified and segmented. Thickness of different tissue areas is visualized. Strategies towards optimal points for suture placements have been discussed. Preliminary data suggest our imaging platform and analysis algorithm may be useful in avoiding blood vessels, identifying optimal regions for suture placements to perform safer operations in possibly reduced time.
Respiratory and Cardiac Motion Compensation
Optical surface scanning for respiratory motion monitoring in radiotherapy: a feasibility study
Susanne Lise Bekke, Faisal Mahmood, Jakob Helt-Hansen, et al.
Purpose. We evaluated the feasibility of a surface scanning system (Catalyst) for respiratory motion monitoring of breast cancer patients treated with radiotherapy in deep inspiration breath-hold (DIBH). DIBH is used to reduce the radiation dose to the heart and lung. In contrast to RPM, a competing marker-based system, Catalyst does not require an object marker on the patient's skin.

Materials and Methods. Experiment 1: a manikin was used to simulate sinusoidal breathing. The amplitude, period and baseline (signal value at end-expiration) were estimated with RPM and Catalyst. Experiment 2 and 3: the Quasar phantom was used to study if the angle of the monitored surface affects the amplitude of the recorded signal.

Results. Experiment 1: we observed comparable period estimates for both systems. The amplitudes were 8 ± 0.1 mm (Catalyst) and 4.9 ± 0.1 mm (RPM). Independent check with in-room lasers showed an amplitude of approximately 8 mm, supporting Catalyst measurements. Large baseline errors were seen with RPM. Experiment 2: RPM underestimated the amplitude if the object-marker was angled during vertical motion. This result explains the amplitude underestimation by RPM seen in Experiment 1. Experiment 3: an increased (fixed) surface angle during breathing motion resulted in an overestimated amplitude with RPM, while the amplitude estimated by Catalyst was unaffected.

Conclusion. Our study showed that Catalyst can be used as a better alternative to RPM. With Catalyst, the amplitude estimates are more accurate and do not depend on the angle of the tracked surface, and the baseline errors are smaller.
Statistical analysis of surrogate signals to incorporate respiratory motion variability into radiotherapy treatment planning
Matthias Wilms, Jan Ehrhardt, René Werner, et al.
Respiratory motion and its variability lead to location uncertainties in radiation therapy (RT) of thoracic and abdominal tumors. Current approaches for motion compensation in RT are usually driven by respiratory surrogate signals, e.g., spirometry. In this contribution, we present an approach for statistical analysis, modeling and subsequent simulation of surrogate signals on a cycle-by-cycle basis. The simulated signals represent typical patient-specific variations of, e.g., breathing amplitude and cycle period. For the underlying statistical analysis, all breathing cycles of an observed signal are consistently parameterized using approximating B-spline curves. Statistics on breathing cycles are then performed by using the parameters of the B-spline approximations. Assuming that these parameters follow a multivariate Gaussian distribution, realistic time-continuous surrogate signals of arbitrary length can be generated and used to simulate the internal motion of tumors and organs based on a patient-specific diffeomorphic correspondence model. As an example, we show how this approach can be employed in RT treatment planning to calculate tumor appearance probabilities and to statistically assess the impact of respiratory motion and its variability on planned dose distributions.
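The cycle-by-cycle pipeline described above (B-spline approximation of each breathing cycle, a multivariate Gaussian over the coefficients, then sampling of new cycles) can be sketched as follows. The sine-like synthetic cycles and knot choices are illustrative assumptions, not the paper's data or surrogate signals:

```python
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(0)
u = np.linspace(0.0, 1.0, 50)                  # normalized time within a cycle
# Observed cycles: sine-like breathing with patient-specific amplitude spread
cycles = [(10.0 + rng.normal(0.0, 0.5)) * np.sin(np.pi * u) for _ in range(30)]

knots = np.linspace(0.0, 1.0, 8)[1:-1]         # interior knots, cubic spline
tcks = [splrep(u, c, t=knots, k=3) for c in cycles]
n_coef = len(knots) + 4                        # meaningful coefficients per cycle
params = np.array([tck[1][:n_coef] for tck in tcks])

# Multivariate Gaussian over cycle parameters -> simulate a realistic new cycle
mean, cov = params.mean(axis=0), np.cov(params, rowvar=False)
new_params = rng.multivariate_normal(mean, cov)
t_full, c_full, k = tcks[0]
c_sim = np.r_[new_params, np.zeros(len(c_full) - n_coef)]
sim_cycle = splev(u, (t_full, c_sim, k))
```

Concatenating many such sampled cycles would yield the time-continuous surrogate signals of arbitrary length that drive the correspondence model.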
Marker-less respiratory motion modeling using the Microsoft Kinect for Windows
Patient respiratory motion is a major problem during external beam radiotherapy of the thoracic and abdominal regions due to the associated organ and target motion. In addition, such motion introduces uncertainty in both radiotherapy planning and delivery and may potentially vary between the planning and delivery sessions. The aim of this work is to examine subject-specific external respiratory motion and its associated drift from an assumed average cycle, which is the basis for many respiratory motion compensated applications including radiotherapy treatment planning and delivery. External respiratory motion data were acquired from a group of 20 volunteers using a marker-less 3D depth camera, the Kinect for Windows. The anterior surface encompassing the thoracic and abdominal regions was subjected to principal component analysis (PCA) to investigate dominant variations. The first principal component typically describes more than 70% of the motion data variance in the thoracic and abdominal surfaces. Across all of the subjects used in this study, 58% demonstrated largely abdominal breathing and 33% exhibited largely thoracic-dominated breathing. In most cases there is observable drift in respiratory motion during the 300 s capture period, which is visually demonstrated using kernel density estimation. This study demonstrates that for this cohort of apparently healthy volunteers, there is significant respiratory motion drift in most cases, in terms of amplitude and relative displacement between the thoracic and abdominal respiratory components. This has implications for the development of effective motion compensation methodology.
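The PCA step on surface motion data reduces to an SVD of the mean-centered time-by-points matrix. A minimal sketch, with a synthetic surface in which one motion mode dominates (mirroring the >70% first-component finding above):

```python
import numpy as np

def principal_motions(samples):
    """PCA of surface motion.
    samples: (time points) x (stacked surface-point coordinates).
    Returns explained-variance ratios and principal motion modes (rows of Vt)."""
    X = samples - samples.mean(axis=0)          # remove the mean surface
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s**2 / np.sum(s**2), Vt

# Synthetic surface motion: a dominant "abdominal" mode plus a weaker one
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
mode1, mode2 = rng.normal(size=300), rng.normal(size=300)
motion = 5.0 * np.outer(np.sin(t), mode1) + 1.0 * np.outer(np.cos(2 * t), mode2)
var_ratio, modes = principal_motions(motion)
```

Projecting each acquired frame onto the first few modes gives the low-dimensional breathing trace whose slow drift the study quantifies.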
Separating complex compound patient motion tracking data using independent component analysis
C. Lindsay, K. Johnson, M. A. King
In SPECT imaging, motion from respiration and body motion can reduce image quality by introducing motion-related artifacts. A minimally-invasive way to track patient motion is to attach external markers to the patient's body and record their locations throughout the imaging study. If a patient exhibits multiple movements simultaneously, such as respiration and body movement, each marker's location data will contain a mixture of these motions. Decomposing this complex compound motion into separate simplified motions has the benefit of allowing a more robust motion correction targeted at each specific type of motion. Most motion tracking and correction techniques target a single type of motion and either ignore compound motion or treat it as noise. A few methods that account for compound motion exist, but they fail to disambiguate superpositions within the compound motion (e.g., inspiration in addition to body movement in the positive anterior/posterior direction). We propose a new method for decomposing complex compound patient motion using an unsupervised learning technique called Independent Component Analysis (ICA). Our method can automatically detect and separate different motions while preserving nuanced features of the motion, without the drawbacks of previous methods. Our main contributions are the development of a method for addressing multiple compound motions, the novel use of ICA in detecting and separating mixed independent motions, and the generation of a motion transform with 12 DOFs to account for twisting and shearing. We show that our method works with clinical datasets and can be employed to improve motion correction in single photon emission computed tomography (SPECT) images.
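The core idea of separating compound marker motion with ICA can be sketched with scikit-learn's FastICA on synthetic marker traces; the mixing matrix and source signals below are illustrative assumptions, not clinical data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 1500)
resp = np.sin(2 * np.pi * 0.25 * t)              # ~15 breaths/min respiration
body = np.clip(0.1 * (t - 30.0), 0.0, None)      # gradual one-off body shift
sources = np.c_[resp, body]

# Three external markers, each recording a different mixture of the motions
mixing = np.array([[1.0, 0.5],
                   [0.7, 1.0],
                   [0.3, 0.8]])
markers = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(markers)           # columns ~ separated motions
```

Up to sign and scale, each recovered column tracks one underlying motion, so respiration-specific and body-motion-specific corrections can then be applied independently.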
Segmentation
icon_mobile_dropdown
Surgical screw segmentation for mobile C-arm CT devices
Joseph Görres, Michael Brehler, Jochen Franke, et al.
Calcaneal fractures are commonly treated by open reduction and internal fixation. An anatomical reconstruction of the involved joints is mandatory to prevent cartilage damage and premature arthritis. To avoid intra-articular screw placements, mobile C-arm CT devices are used intraoperatively. However, analyzing the screw placement in detail requires time-consuming human-computer interaction to navigate through the 3D images and view each screw individually. Established interaction procedures of repeatedly positioning and rotating sectional planes are inconvenient and impede the intraoperative assessment of the screw positioning. To simplify the interaction with 3D images, we propose an automatic screw segmentation that allows for an immediate selection of the relevant sectional planes. Our algorithm consists of three major steps. First, cylindrical characteristics are determined from local gradient structures with the help of RANSAC. Second, a DBSCAN clustering algorithm is applied to group similar cylinder characteristics. Each detected cluster represents a screw, whose location is then refined by a cylinder-to-image registration in a third step. Our evaluation with 309 screws in 50 images shows robust and precise results. The algorithm detected 98% (303) of the screws correctly. Thirteen clusters led to falsely identified screws. The mean distance error was 0.8 ± 0.8 mm for the screw tip and 1.2 ± 1.0 mm for the screw head. The mean orientation error was 1.4 ± 1.2 degrees.
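The clustering step (grouping similar cylinder candidates so each cluster corresponds to one screw) can be sketched with a minimal DBSCAN. This is a generic implementation, not the authors' code; the 2-D points below stand in for cylinder parameter vectors:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster id per point (-1 = noise)."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                      # not an unvisited core point
        visited[i] = True
        labels[i] = cluster
        stack = [i]
        while stack:                      # grow the cluster from core points
            j = stack.pop()
            if len(neighbors[j]) < min_pts:
                continue                  # border point: labelled, not expanded
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                if not visited[k]:
                    visited[k] = True
                    stack.append(k)
        cluster += 1
    return labels

# Two tight groups of candidate parameters ("screws") plus one outlier.
pts = np.array([[0, 0], [0, 0.5], [0.5, 0], [0.5, 0.5],
                [10, 10], [10, 10.5], [10.5, 10], [10.5, 10.5],
                [50, 50]], dtype=float)
labels = dbscan(pts, eps=1.0, min_pts=3)
print(labels)
```

The two dense groups are assigned distinct cluster ids and the isolated candidate is marked as noise, analogous to spurious RANSAC detections being discarded.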
A compact method for prostate zonal segmentation on multiparametric MRIs
Y. Chi, H. Ho, Y. M. Law, et al.
Automatic segmentation of the prostate zones has great potential to improve the accuracy of lesion detection during image-guided prostate interventions. In this paper, we present a novel compact method to segment the prostate and its zones using multi-parametric magnetic resonance imaging (MRI) and anatomical priors. The proposed method comprises a prostate tissue representation using a Gaussian mixture model (GMM), prostate localization using mean shift with the prostate atlas as the kernel, and a prostate partition using the probabilistic valley between zones. The proposed method was tested on four sets of multi-parametric MRIs. The average Dice coefficient of the segmentation was 0.80 ± 0.03 for the prostate, 0.83 ± 0.04 for the central zone, and 0.52 ± 0.09 for the peripheral zone. The average computing time of the online segmentation is 1 min 10 s per dataset on a PC with a 2.4 GHz CPU and 4.0 GB of RAM. The proposed method is fast and has the potential to be used in clinical practice.
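The GMM tissue representation can be sketched with plain expectation-maximization in one dimension. The paper fits multi-parametric MR intensities (i.e. a multivariate mixture); the 1-D version and the synthetic intensity values below are illustrative only:

```python
import numpy as np

def fit_gmm_1d(x, k=2, n_iter=200):
    """Plain EM for a 1-D Gaussian mixture: weights pi, means mu,
    variances var, fitted to samples x."""
    mu = np.quantile(x, (np.arange(k) + 1.0) / (k + 1))  # spread-out init
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = r.sum(axis=0)
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two synthetic "tissue classes" with overlapping intensity distributions.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(100, 10, 2000), rng.normal(160, 12, 1000)])
pi, mu, var = fit_gmm_1d(x)
print(np.sort(mu))        # component means recovered near 100 and 160
```

The fitted responsibilities are what a GMM-based tissue model would use to assign voxels to classes.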
Segmentation of risk structures for otologic surgery using the Probabilistic Active Shape Model (PASM)
Meike Becker, Matthias Kirschner, Georgios Sakas
Our research project investigates a multi-port approach for minimally invasive otologic surgery. For planning such a surgery, an accurate segmentation of the risk structures is crucial. However, segmenting these risk structures is a challenging task: the anatomical structures are very small, some have a complex shape, contrast is low, and they vary both in shape and appearance. Prior knowledge is therefore needed, which is why we apply model-based approaches. In the present work, we use the Probabilistic Active Shape Model (PASM), a more flexible and specific variant of the Active Shape Model (ASM), to segment the following risk structures: cochlea, semicircular canals, facial nerve, chorda tympani, ossicles, internal auditory canal, external auditory canal, and internal carotid artery. For the evaluation we trained and tested the algorithm on 42 computed tomography data sets using leave-one-out tests. Visual assessment of the results shows in general a good agreement between manual and algorithmic segmentations. Further, we achieve a good average symmetric surface distance, although the maximum error is comparatively large due to low contrast at start and end points. Finally, we compare the PASM to the standard ASM and show that the PASM leads to higher accuracy.
Semi-automatic segmentation of vertebral bodies in volumetric MR images using a statistical shape+pose model
Amin Suzani, Abtin Rasoulian, Sidney Fels, et al.
Segmentation of vertebral structures in magnetic resonance (MR) images is challenging because of poor contrast between bone surfaces and surrounding soft tissue. This paper describes a semi-automatic method for segmenting vertebral bodies in multi-slice MR images. In order to achieve a fast and reliable segmentation, the method takes advantage of the correlation between shape and pose of different vertebrae in the same patient by using a statistical multi-vertebrae anatomical shape+pose model. Given a set of MR images of the spine, we initially reduce the intensity inhomogeneity in the images by using an intensity-correction algorithm. Then a 3D anisotropic diffusion filter smooths the images. Afterwards, we extract edges from a relatively small region of the pre-processed image with a simple user interaction. Subsequently, an iterative Expectation Maximization technique is used to register the statistical multi-vertebrae anatomical model to the extracted edge points in order to achieve a fast and reliable segmentation for lumbar vertebral bodies. We evaluate our method in terms of speed and accuracy by applying it to volumetric MR images of the spine acquired from nine patients. Quantitative and visual results demonstrate that the method is promising for segmentation of vertebral bodies in volumetric MR images.
Registration
icon_mobile_dropdown
Piecewise-rigid 2D-3D registration for pose estimation of snake-like manipulator using an intraoperative x-ray projection
Y. Otake, R. J. Murphy, M. D. Kutzer, et al.
Background: Snake-like dexterous manipulators may offer significant advantages in minimally invasive surgery in areas not reachable with conventional tools. Precise control of a wire-driven manipulator is challenging due to factors such as cable deformation and unknown internal (cable friction) and external forces, thus requiring intraoperative correction of the calibration by determining the actual pose of the manipulator. Method: A method for simultaneously estimating the pose and kinematic configuration of a piecewise-rigid object such as a snake-like manipulator from a single x-ray projection is presented. The method parameterizes the kinematics using a small number of variables (e.g., 5), and optimizes them simultaneously with the 6 degree-of-freedom pose of the base link using an image similarity between digitally reconstructed radiographs (DRRs) of the manipulator’s attenuation model and the real x-ray projection. Result: Simulation studies assumed various geometric magnifications (1.2–2.6) and out-of-plane angulations (0°–90°) in a scenario of hip osteolysis treatment, and demonstrated a median joint angle error of 0.04° (for 2.0 magnification, ±10° out-of-plane rotation). Average computation time was 57.6 s with 82,953 function evaluations on a mid-range GPU. The joint angle error remained lower than 0.07° for out-of-plane rotations of 0°–60°. An experiment using video images of a real manipulator demonstrated a trend similar to the simulation study, except for a slightly larger error around the tip, attributed to the accumulation of errors induced by deformation around each joint not modeled by a simple pin joint.
Conclusions: The proposed approach enables high-precision tracking of a piecewise-rigid object (i.e., a series of connected rigid structures) using a single projection image by incorporating prior knowledge about the shape and kinematic behavior of the object (e.g., each rigid structure connected by a pin joint parameterized by a low-degree polynomial basis). Potential applications of the proposed approach include pose estimation of vertebrae in the spine and of a series of electrodes in a coronary sinus catheter. Improvements in GPU performance are expected to further increase computational speed.
Deformable registration for image-guided spine surgery: preserving rigid body vertebral morphology in free-form transformations
Purpose: Deformable registration of preoperative and intraoperative images facilitates accurate localization of target and critical anatomy in image-guided spine surgery. However, conventional deformable registration fails to preserve the morphology of rigid bone anatomy and can impart distortions that confound high-precision intervention. We propose a constrained registration method that preserves rigid morphology while allowing deformation of surrounding soft tissues. Method: The registration method aligns preoperative 3D CT to intraoperative cone-beam CT (CBCT) using free-form deformation (FFD) with penalties on rigid body motion imposed according to a simple intensity threshold. The penalties enforced three properties of a rigid transformation – namely, constraints on affinity (AC), orthogonality (OC), and properness (PC). The method also incorporated an injectivity constraint (IC) to preserve topology. Physical experiments (involving phantoms, an ovine spine, and a human cadaver) as well as digital simulations were performed to evaluate the sensitivity to registration parameters, the preservation of rigid body morphology, and the overall registration accuracy of constrained FFD in comparison to conventional unconstrained FFD (denoted uFFD) and Demons registration. Result: FFD with orthogonality and injectivity constraints (denoted FFD+OC+IC) demonstrated improved performance compared to uFFD and Demons. Affinity and properness constraints offered little or no additional improvement. The FFD+OC+IC method preserved rigid body morphology at near-ideal values of zero dilatation (D = 0.05, compared to 0.39 and 0.56 for uFFD and Demons, respectively) and shear (S = 0.08, compared to 0.36 and 0.44 for uFFD and Demons, respectively). Target registration error (TRE) was similarly improved for FFD+OC+IC (0.7 mm), compared to 1.4 and 1.8 mm for uFFD and Demons.
Results were validated in human cadaver studies using CT and CBCT images, with FFD+OC+IC providing excellent preservation of rigid morphology and equivalent or improved TRE. Conclusions: A promising method for deformable registration in CBCT-guided spine surgery has been identified incorporating a constrained FFD to preserve bone morphology. The approach overcomes distortions intrinsic to unconstrained FFD and could better facilitate high-precision image-guided spine surgery.
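The three rigid-body properties named above can be illustrated as penalties on a local deformation gradient (Jacobian) F. The formulations below are generic stand-ins of my own, not the paper's exact energy terms:

```python
import numpy as np

def rigidity_penalties(F):
    """Penalties on a local deformation gradient (Jacobian) F:
    orthogonality ||F^T F - I|| (zero for pure rotations), properness
    (penalise det F <= 0, i.e. folding or reflection), and dilatation
    |det F - 1| (local volume change)."""
    ortho = np.linalg.norm(F.T @ F - np.eye(3))
    det = np.linalg.det(F)
    proper = max(0.0, -det)
    dilat = abs(det - 1.0)
    return ortho, proper, dilat

theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
pen_rot = rigidity_penalties(R)       # a pure rotation: all penalties ~ 0
S = np.diag([1.2, 0.9, 1.0])
pen_stretch = rigidity_penalties(S)   # anisotropic stretch: penalised
print(pen_rot, pen_stretch)
```

A registration that drives such penalties toward zero inside bone voxels keeps the vertebrae rigid while the surrounding soft tissue deforms freely.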
Hyperspectral imaging for cancer surgical margin delineation: registration of hyperspectral and histological images
Guolan Lu, Luma Halig, Dongsheng Wang, et al.
The determination of tumor margins during surgical resection remains a challenging task. A complete removal of malignant tissue and conservation of healthy tissue is important for the preservation of organ function, patient satisfaction, and quality of life. Visual inspection and palpation are not sufficient for discriminating between malignant and normal tissue types. Hyperspectral imaging (HSI) technology has the potential to noninvasively delineate surgical tumor margins and can be used as an intra-operative visual aid. Since histological images provide the ground truth of cancer margins, it is necessary to warp the cancer regions in ex vivo histological images back to the in vivo hyperspectral images in order to validate the tumor margins detected by HSI and to optimize the imaging parameters. In this paper, principal component analysis (PCA) is utilized to extract the principal component bands of the HSI images, which are then used to register the HSI images with the corresponding histological image. Affine registration is chosen to model the global transformation. A B-spline free-form deformation (FFD) method is used to model the local non-rigid deformation. Registration experiments were performed on animal hyperspectral and histological images. The experimental results demonstrate the feasibility of the hyperspectral imaging method for cancer margin detection.
Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration
Bahram Marami, Shahin Sirouspour, Aaron Fenster, et al.
Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation, where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom based on real-time 2D images obtained from a US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of the 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm, depending on the size of the deformation.
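The MIND metric mentioned above builds, at each voxel, a descriptor from patch distances to a small search region; because the distances are normalized by a local variance estimate, the descriptor is largely invariant to intensity changes, which is what makes it usable across MR and US. A minimal 2-D sketch with single-pixel patches and a 4-neighbour search region (the published descriptor uses Gaussian-weighted patches and handles image borders more carefully than `np.roll` does):

```python
import numpy as np

def mind_descriptor(img, eps=1e-12):
    """Minimal 2-D MIND: exp(-D/V) over a 4-neighbour search region,
    where D is the squared difference to each neighbour and V is a
    local variance estimate; normalised per pixel."""
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    d = np.stack([(img - np.roll(img, s, axis=(0, 1)))**2 for s in shifts])
    v = d.mean(axis=0) + eps          # local variance estimate
    m = np.exp(-d / v)
    return m / m.max(axis=0)          # normalise per pixel

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
m1 = mind_descriptor(img)
m2 = mind_descriptor(2.0 * img + 5.0)       # linear intensity change
print(np.allclose(m1, m2))                  # descriptor is (near-)invariant
```

Two images related by a linear intensity transform yield essentially the same descriptors, so the registration error can be driven by structure rather than raw intensity.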
Target registration error for rigid shape-based registration with heteroscedastic noise
Burton Ma, Joy Choi, Hong Ming Huai
We propose an analytic equation for approximating expected root mean square (RMS) target registration error (TRE) for rigid shape-based registration where measured noisy data points are matched to a rigid shape. The noise distribution of the data points is assumed to be zero-mean, independent, and non-identical; i.e., the noise covariance may be different for each data point. The equation was derived by extending a previously published spatial stiffness model of registration. The equation was validated by performing registration experiments with both synthetic registration data and data collected using an optically tracked pointing stylus. The synthetic registration data were generated from the surface of an ellipsoid. The optically tracked data were collected from three plastic replicas of human radii and registered to isosurface models of the radii computed from CT scans. Noise covariances for the data points were computed by considering the pose of the tracked stylus, the positions of the individual fiducial markers on the stylus coordinate reference frame, and the calibrated position of the stylus tip; these quantities and an estimate of the fiducial localization covariance of the tracking system were used as inputs to a previously published algorithm for estimating the covariance of TRE for point-based (fiducial) registration. Registration simulations were performed using a modified version of the iterated closest point algorithm and the resulting RMS TREs were compared to the values predicted by our analytic equation.
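The registration-simulation procedure can be illustrated for the simpler point-based case: draw heteroscedastic noise per fiducial, solve the rigid least-squares alignment (Kabsch's method), and accumulate the error at a target. This toy omits the shape-matching (ICP) aspect of the paper, and the fiducial layout and covariances are illustrative assumptions:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, s, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def rms_tre(fiducials, target, covs, n_trials=500, seed=0):
    """Monte-Carlo RMS target registration error for point-based rigid
    registration with heteroscedastic zero-mean Gaussian noise."""
    rng = np.random.default_rng(seed)
    sq_err = 0.0
    for _ in range(n_trials):
        noisy = np.array([p + rng.multivariate_normal(np.zeros(3), C)
                          for p, C in zip(fiducials, covs)])
        R, t = kabsch(noisy, fiducials)        # register measured -> model
        sq_err += np.sum((R @ target + t - target)**2)
    return np.sqrt(sq_err / n_trials)

fiducials = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
target = np.array([50.0, 50.0, 50.0])
covs = [0.25 * np.eye(3), 0.25 * np.eye(3),    # per-point (heteroscedastic)
        1.00 * np.eye(3), 1.00 * np.eye(3)]    # noise covariances, mm^2
tre = rms_tre(fiducials, target, covs)
tre_noisier = rms_tre(fiducials, target, [4.0 * C for C in covs])
print(round(tre, 2), round(tre_noisier, 2))
```

Such Monte-Carlo estimates are the kind of simulated RMS TRE values an analytic prediction equation would be validated against.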
Registration of liver images to minimally invasive intraoperative surface and subsurface data
Yifei Wu, D. Caleb Rucker, Rebekah H. Conley, et al.
Laparoscopic liver resection is increasingly being performed with results comparable to open cases while incurring less trauma and reducing recovery time. The tradeoff is increased difficulty due to limited visibility and restricted freedom of movement. Image-guided surgical navigation systems have the potential to help localize anatomical features to improve procedural safety and achieve better surgical resection outcomes. Previous research has demonstrated that intraoperative surface data can be used to drive a finite element tissue mechanics organ model such that high-resolution preoperative scans are registered and visualized in the context of the current surgical pose. In this paper we investigate using sparse data, as imposed by laparoscopic limitations, to drive a registration model. Non-contact laparoscopically acquired surface swabbing and mock-ultrasound subsurface data were used within a nonrigid registration methodology to align mock deformed intraoperative surface data to the corresponding preoperative liver model derived from pre-operative image segmentations. The mock testing setup used to validate the potential of this approach consisted of a tissue-mimicking liver phantom with a realistic abdomen-port patient configuration. Experimental results demonstrate target registration errors (TRE) on the order of 5 mm using only surface swab data, while use of only subsurface data yielded errors on the order of 6 mm. Registrations using a combination of both datasets achieved TRE on the order of 2.5 mm, a sizeable improvement over either dataset alone.
Keynote and Bench to Bedside
icon_mobile_dropdown
Engineering therapeutic processes: from research to commodity
Three of the most important forces driving medical care are patient specificity, treatment specificity, and the move from discovery to design. Engineers, while trained in specificity, efficiency, and design, are often not trained in either biology or medical processes. Yet they are increasingly critical to medical care. For example, modern medical imaging at US hospitals generates 1 exabyte (10^18 bytes) of data per year, clearly beyond unassisted human analysis. It is not merely desirable to involve engineers in the acquisition, storage, and analysis of this data; it is essential. While in the past we have nibbled around the edges of medical care, it is time, and perhaps past time, to insert ourselves more squarely into medical processes, making them more efficient, more specific, and more robust. This requires engineers who understand biology and physicians who are willing to step away from classic medical thinking to try new approaches. But once an idea is proven in a laboratory, it must move into use and then into common practice. This requires additional engineering to make the process robust to noisy data and imprecise practices, as well as workflow analysis to get the new technique into operating and treatment rooms. True innovation and true translation will require physicians, engineers, other medical stakeholders, and even corporate involvement to take a new, important idea and move it not just to a patient but to all patients.
Integration of intraoperative stereovision imaging for brain shift visualization during image-guided cranial procedures
Timothy J. Schaewe, Xiaoyao Fan, Songbai Ji, et al.
Dartmouth and Medtronic Navigation have established an academic-industrial partnership to develop, validate, and evaluate a multi-modality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. A stereovision system has been developed and optimized for intraoperative use through integration with a surgical microscope and an image-guided surgery system. The microscope optics and stereovision CCD sensors are localized relative to the surgical field using optical tracking and can efficiently acquire stereo image pairs from which a localized 3D profile of the exposed surface is reconstructed. This paper reports the first demonstration of intraoperative acquisition, reconstruction and visualization of 3D stereovision surface data in the context of an industry-standard image-guided surgery system. The integrated system is capable of computing and presenting a stereovision-based update of the exposed cortical surface in less than one minute. Alternative methods for visualization of high-resolution, texture-mapped stereovision surface data are also investigated with the objective of determining the technical feasibility of direct incorporation of intraoperative stereo imaging into future iterations of Medtronic’s navigation platform.
Stereoscopic augmented reality using ultrasound volume rendering for laparoscopic surgery in children
In laparoscopic surgery, live video provides visualization of the exposed organ surfaces in the surgical field, but is unable to show internal structures beneath those surfaces. Laparoscopic ultrasound is often used to visualize the internal structures, but its use is limited to intermittent confirmation because of the need for an extra hand to maneuver the ultrasound probe. Other limitations of ultrasound are the difficulty of interpretation and the need for an extra port. The size of the ultrasound transducer may also be too large for use in small children. In this paper, we report on an augmented reality (AR) visualization system that features continuous hands-free volumetric ultrasound scanning of the surgical anatomy and video imaging from a stereoscopic laparoscope. The acquisition of volumetric ultrasound images is realized by precisely controlling a back-and-forth movement of an ultrasound transducer mounted on a linear slider, and the ultrasound volume is refreshed several times per minute. In the envisioned use scenario, this scanner will sit outside of the body and could even be integrated into the operating table. Overlaying the maximum intensity projection (MIP) of the ultrasound volume on the laparoscopic stereo video through geometric transformations yields an AR visualization system particularly suitable for children, because ultrasound is radiation-free and provides higher-quality images in small patients. The proposed AR representation promises to be better than one using ultrasound slice data.
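The MIP used for the overlay is simply a per-ray maximum through the volume; a minimal sketch (the tiny volume is illustrative):

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """MIP: each output pixel keeps the brightest voxel along the ray."""
    return volume.max(axis=axis)

vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0            # one bright reflector inside the volume
mip = maximum_intensity_projection(vol)
print(mip)                    # 3x3 image with 7.0 at the centre
```

Because bright reflectors survive the projection regardless of depth, the MIP gives a single 2-D image of the subsurface anatomy suitable for overlay on the stereo video.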
Robotics and Tracking
icon_mobile_dropdown
Localization accuracy of sphere fiducials in computed tomography images
Jan-Philipp Kobler, Jesus Díaz Díaz, J. Michael Fitzpatrick, et al.
In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are also used as landmarks during intervention planning. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. It is therefore desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms, equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms, and two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that the achievable localization accuracy in CBCT image data is generally significantly higher than in MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross-correlation for localization, and interpolation of the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable for the optimization of future microstereotactic frame prototypes as well as the operative workflow.
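Template matching by cross-correlation, the best-performing localization approach in the study, can be sketched at pixel-level precision in 2-D (the study additionally interpolates the images to reach sub-voxel accuracy, and works on 3-D sphere templates; the brute-force search and the synthetic image below are illustrative):

```python
import numpy as np

def ncc_localize(image, template):
    """Brute-force normalised cross-correlation: returns the top-left
    offset where the template best matches the image."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i+th, j:j+tw]
            wc = w - w.mean()
            denom = np.sqrt((wc**2).sum() * (t**2).sum())
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Embed a small disc "fiducial" in a noisy image and recover its offset.
rng = np.random.default_rng(0)
img = 0.1 * rng.standard_normal((40, 40))
yy, xx = np.mgrid[-3:4, -3:4]
disc = (xx**2 + yy**2 <= 9).astype(float)
img[12:19, 20:27] += disc
print(ncc_localize(img, disc))    # best offset, expected (12, 20)
```

In the study, applying this kind of matching to images interpolated by a factor of sixteen is what pushes the FLE down to roughly 40 μm.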
On the accuracy of a video-based drill-guidance solution for orthopedic and trauma surgery: preliminary results
Jessica Magaraggia, Gerhard Kleinszig, Wei Wei, et al.
Over the last years, several methods have been proposed to guide the physician during reduction and fixation of bone fractures. Available solutions often use bulky instrumentation inside the operating room (OR). These usually consist of a stereo camera, placed outside the operative field, and optical markers directly attached both to the patient and to the surgical instrumentation held by the surgeon. Recently proposed techniques try to reduce the required additional instrumentation as well as the radiation exposure to both patient and physician. In this paper, we present the adaptation and the first implementation of our recently proposed video camera-based solution for screw fixation guidance. Based on the simulations conducted in our previous work, we mounted a small camera on a drill in order to recover its tip position and axis orientation w.r.t. our custom-made drill sleeve with attached markers. Since drill-position accuracy is critical, we thoroughly evaluated the accuracy of our implementation. We used an optical tracking system for ground truth data collection. For this purpose, we built a custom plate reference system and attached reflective markers to both the instrument and the plate. Free drilling was then performed 19 times. The position of the drill axis was continuously recovered using both our video camera solution and the tracking system for comparison. The recorded data covered targeting, perforation of the surface bone by the drill bit, and bone drilling. The orientation of the instrument axis and the position of the instrument tip were recovered with an accuracy of 1.60° ± 1.22° and 2.03 ± 1.36 mm, respectively.
In vivo reproducibility of robotic probe placement for an integrated US-CT image-guided radiation therapy system
Muyinatu A. Lediju Bell, H. Tutkun Sen, Iulian Iordachita, et al.
Radiation therapy is used to treat cancer by delivering high-dose radiation to a pre-defined target volume. Ultrasound (US) has the potential to provide real-time, image-guidance of radiation therapy to identify when a target moves outside of the treatment volume (e.g. due to breathing), but the associated probe-induced tissue deformation causes local anatomical deviations from the treatment plan. If the US probe is placed to achieve similar tissue deformations in the CT images required for treatment planning, its presence causes streak artifacts that will interfere with treatment planning calculations. To overcome these challenges, we propose robot-assisted placement of a real ultrasound probe, followed by probe removal and replacement with a geometrically-identical, CT-compatible model probe. This work is the first to investigate in vivo deformation reproducibility with the proposed approach. A dog's prostate, liver, and pancreas were each implanted with three 2.38-mm spherical metallic markers, and the US probe was placed to visualize the implanted markers in each organ. The real and model probes were automatically removed and returned to the same position (i.e. position control), and CT images were acquired with each probe placement. The model probe was also removed and returned with the same normal force measured with the real US probe (i.e. force control). Marker positions in CT images were analyzed to determine reproducibility, and a corollary reproducibility study was performed on ex vivo tissue. In vivo results indicate that tissue deformations with the real probe were repeatable under position control for the prostate, liver, and pancreas, with median 3D reproducibility of 0.3 mm, 0.3 mm, and 1.6 mm, respectively, compared to 0.6 mm for the ex vivo tissue. 
For the prostate, the mean 3D tissue displacement errors between the real and model probes were 0.2 mm under position control and 0.6 mm under force control, which are both within acceptable radiotherapy treatment margins. The 3D displacement errors between the real and model probes were less acceptable for the liver and pancreas (4.1-6.1 mm), and force control maintained poorer reproducibility than position control.
Workflow assessment of 3T MRI-guided transperineal targeted prostate biopsy using a robotic needle guidance
Sang-Eun Song, Kemal Tuncali, Junichi Tokuda, et al.
Magnetic resonance imaging (MRI) guided transperineal targeted prostate biopsy has become a valuable instrument for the detection of prostate cancer in patients with continuing suspicion for aggressive cancer after transrectal ultrasound (TRUS) guided biopsy. The MRI-guided procedures are performed using mechanical targeting devices or templates, which suffer from limitations of spatial sampling resolution and/or manual in-bore adjustments. To overcome these limitations, we developed and clinically deployed an MRI-compatible, piezoceramic-motor actuated needle guidance device, Smart Template, which allows automated needle guidance with high targeting resolution for use in a wide closed-bore 3-Tesla MRI scanner. One of the main limitations of the MRI-guided procedure is the lengthy procedure time compared to conventional TRUS-guided procedures. In order to optimize the procedure, we assessed the workflow of 30 MRI-guided biopsy procedures using the Smart Template with a focus on procedure time. An average of 3.4 (range: 2–6) targets were selected preprocedurally per procedure, and 2.2 ± 0.8 biopsies were performed for each target with an average of 1.9 ± 0.7 insertion attempts per biopsy. The average technical preparation time was 14 ± 7 min and the in-MRI patient preparation time was 42 ± 7 min. After 21 ± 7 min of initial imaging, 64 ± 12 min of biopsy was performed, yielding an average of 10 ± 2 min per tissue sample. The total procedure time occupying the MRI suite was 138 ± 16 min. No noticeable tendency in the length of any time segment was observed over the 30 clinical cases.
A user-friendly automated port placement planning system for laparoscopic robotic surgery
Luis G. Torres, Hamidreza Azimian, Andinet Enquobahrie
Laparoscopic surgery is a minimally invasive surgical approach in which surgical instruments are passed through ports placed at small incisions. This approach can benefit patients by reducing recovery times and scarring. Surgeons have gained greater dexterity, accuracy, and vision through the adoption of robotic surgical systems. However, in some cases a preselected set of ports cannot be accommodated by the robot: the robot’s arms may collide during the procedure, or the surgical targets may not be reachable through the selected ports. In such cases, the surgeon must either make more incisions for additional ports or abandon the laparoscopic approach entirely. To address this, we are building an easy-to-use system which, given a surgical task and preoperative medical images of the patient, will recommend a suitable port placement plan for the robotic surgery. This work makes two main contributions: 1) a high-level user interface that assists the surgeon in operating the complicated underlying planning algorithm; and 2) an interface that assists the surgical team in implementing the recommended plan in the operating room. We believe that such an automated port placement system will reduce setup time for robotic surgery and reduce the morbidity to patients caused by unsuitable surgical port placement.
Preliminary testing of a compact bone-attached robot for otologic surgery
Neal P. Dillon, Ramya Balachandran, Antoine Motte dit Falisse, et al.
Otologic surgery often involves a mastoidectomy procedure, in which part of the temporal bone is milled away in order to visualize critical structures embedded in the bone and safely access the middle and inner ear. We propose to automate this portion of the surgery using a compact, bone-attached milling robot. A high level of accuracy is required to avoid damage to vital anatomy along the surgical path, most notably the facial nerve, making this procedure well-suited for robotic intervention. In this study, several of the design considerations are discussed and a robot design and prototype are presented. The prototype is a 4 degrees-of-freedom robot, similar to a four-axis milling machine, that mounts to the patient's skull. A positioning frame, containing fiducial markers and attachment points for the robot, is rigidly attached to the skull of the patient, and a CT scan is acquired. The target bone volume is manually segmented in the CT by the surgeon and automatically converted to a milling path and robot trajectory. The robot is then attached to the positioning frame and is used to drill the desired volume. The accuracy of the entire system (image processing, planning, robot) was evaluated at several critical locations within or near the target bone volume, with a mean free-space accuracy of 0.50 mm or less at all points. A milling test in a phantom material was then performed to evaluate the surgical workflow. The resulting milled volume did not violate any critical structures.
Simulation and Modeling
icon_mobile_dropdown
Breast deformation modelling: comparison of methods to obtain a patient specific unloaded configuration
Björn Eiben, Vasileios Vavourakis, John H. Hipwell, et al.
In biomechanical simulations of the human breast, the analysed geometry is often reconstructed from in vivo medical imaging procedures. For example in dynamic contrast enhanced magnetic resonance imaging, the acquired geometry of the patient's breast when lying in the prone position represents a deformed configuration that is pre-stressed by typical in vivo conditions and gravity. Thus, physically realistic simulations require consideration of this loading and, hence, establishing the undeformed configuration is an important task for accurate and reliable biomechanical modelling of the breast. We compare three different numerical approaches to recover the unloaded configuration from the loaded geometry given patient-specific biomechanical models built from prone and supine MR images. The algorithms compared are: (i) the simple inversion of gravity without the consideration of pre-stresses, (ii) an inverse finite deformation approach and (iii) a fixed point type iterative approach which uses only forward simulations. It is shown that the iterative and the inverse approach produce similar zero-gravity estimates, whereas the simple inversion of gravity is only appropriate for small or highly constrained deformations.
Intraoperative measurement of indenter-induced brain deformation: a feasibility study
Songbai Ji, Xiaoyao Fan, David W. Roberts, et al.
Accurate measurement of soft tissue material properties is critical for characterizing its biomechanical behaviors but can be challenging, especially for the human brain in vivo. In this study, we investigated the feasibility of inducing and detecting cortical surface deformation intraoperatively for patients undergoing open skull neurosurgeries. A custom disk-shaped indenter made of high-density tungsten (diameter of 15 mm with a thickness of 6 mm) was used to induce deformation on the brain cortical surface immediately after dural opening. Before and after placing the indenter, sequences (typically 250 frames at 15 frames-per-second, or ~17 seconds) of high-resolution stereo image pairs were acquired to capture the harmonic motion of the exposed cortical surface due to blood pressure pulsation and respiration. For each sequence with the first left image serving as a baseline, an optical-flow motion-tracking algorithm was used to detect in-sequence cortical surface deformation. The resulting displacements of the exposed features within the craniotomy were spatially averaged to identify the temporal frames corresponding to motion peak magnitudes. Corresponding image pairs were then selected to reconstruct full-field three-dimensional (3D) cortical surfaces before and after indentation, respectively, from which full 3D displacement fields were obtained by registering their projection images. With one clinical patient case, we illustrate the feasibility of the technique in detecting indenter-induced cortical surface deformation in order to allow subsequent processing to determine material properties of the brain in vivo.
Virtual estimates of fastening strength for pedicle screw implantation procedures
Traditional 2D images provide limited use for accurate planning of spine interventions, mainly due to the complex 3D anatomy of the spine and close proximity of nerve bundles and vascular structures that must be avoided during the procedure. Our previously developed clinician-friendly platform for spine surgery planning takes advantage of 3D pre-operative images to enable oblique reformatting and 3D rendering of individual or multiple vertebrae, interactive templating, and placement of virtual pedicle implants. Here we extend the capabilities of the planning platform and demonstrate how the virtual templating approach not only assists with the selection of the optimal implant size and trajectory, but can also be augmented to provide surrogate estimates of the fastening strength of the implanted pedicle screws based on implant dimension and bone mineral density of the displaced bone substrate. According to failure theories, each screw withstands a maximum holding power that is directly proportional to the screw diameter (D), the length of the in-bone segment of the screw (L), and the density (i.e., bone mineral density) of the pedicle body. In this application, voxel intensity is used as a surrogate measure of the bone mineral density (BMD) of the pedicle body segment displaced by the screw. We conducted an initial assessment of the developed platform using retrospective pre- and post-operative clinical 3D CT data from four patients who underwent spine surgery, consisting of a total of 26 pedicle screws implanted in the lumbar spine. The Fastening Strength of the planned implants was directly assessed by estimating the intensity-area product across the pedicle volume displaced by the virtually implanted screw. For post-operative assessment, each vertebra was registered to its homologous counterpart in the pre-operative image using an intensity-based rigid registration followed by manual adjustment.
Following registration, the Fastening Strength was computed for each displaced bone segment. According to our preliminary clinical study, a comparison between Fastening Strength, displaced bone volume and mean voxel intensity showed similar results (p < 0.1) between the virtually templated plans and the post-operative outcome following the traditional clinical approach. This study has demonstrated the feasibility of the platform in providing estimates of the pedicle screw fastening strength via virtual implantation, given the intrinsic vertebral geometry and bone mineral density, enabling the selection of the optimal implant dimension and trajectory for improved strength.
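The surrogate described above, an intensity-volume product over the bone displaced by the virtual screw, can be sketched in a few lines. The function name and toy geometry below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fastening_strength_surrogate(ct, screw_mask, voxel_volume_mm3):
    """Surrogate fastening strength: sum of CT intensities (a BMD surrogate)
    over the voxels displaced by the virtually implanted screw, scaled by
    the physical voxel volume."""
    return ct[screw_mask].sum() * voxel_volume_mm3

# toy case: uniform "pedicle" intensity of 200 and a 16-voxel screw volume
ct = np.full((10, 10, 10), 200.0)
mask = np.zeros_like(ct, dtype=bool)
mask[3:7, 4:6, 4:6] = True                       # hypothetical displaced voxels
fs = fastening_strength_surrogate(ct, mask, voxel_volume_mm3=0.5)
```

In this sketch a longer or wider screw (more displaced voxels) and denser bone (higher intensities) both increase the surrogate, mirroring the stated proportionality to D, L and BMD.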
A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK)
Ren Hui Gong, Brad Jenkins, Raymond W. Sze, et al.
The skills required for obtaining informative x-ray fluoroscopy images are currently acquired while trainees provide clinical care. As a consequence, trainees and patients are exposed to higher doses of radiation. Use of simulation has the potential to reduce this radiation exposure by enabling trainees to improve their skills in a safe environment prior to treating patients. We describe a low cost, high fidelity, fluoroscopy simulation system. Our system enables operators to practice their skills using the clinical device and simulated x-rays of a virtual patient. The patient is represented using a set of temporal Computed Tomography (CT) images, corresponding to the underlying dynamic processes. Simulated x-ray images, digitally reconstructed radiographs (DRRs), are generated from the CTs using ray-casting with customizable machine specific imaging parameters. To establish the spatial relationship between the CT and the fluoroscopy device, the CT is virtually attached to a patient phantom and a web camera is used to track the phantom’s pose. The camera is mounted on the fluoroscope’s intensifier and the relationship between it and the x-ray source is obtained via calibration. To control image acquisition, the operator moves the fluoroscope as in normal operation mode. Control of zoom, collimation and image save is done using a keypad mounted alongside the device’s control panel. Implementation is based on the Image-Guided Surgery Toolkit (IGSTK) and uses the graphics processing unit (GPU) for accelerated image generation. Our system was evaluated by 11 clinicians and was found to be sufficiently realistic for training purposes.
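The core of DRR generation is a line integral of attenuation per detector pixel. The sketch below uses parallel rays and the Beer-Lambert law for brevity; the simulator described above casts perspective rays with customizable, machine-specific parameters on the GPU, so treat this as a conceptual illustration only:

```python
import numpy as np

def simple_drr(mu_volume, spacing_mm, axis=0, i0=1.0):
    """Orthographic DRR: integrate linear attenuation (1/mm) along parallel
    rays through the volume and apply the Beer-Lambert law."""
    path_integral = mu_volume.sum(axis=axis) * spacing_mm
    return i0 * np.exp(-path_integral)

# toy CT: a bone-like cube inside air; rays through the cube see 16 mm of
# mu = 0.02 / mm and are attenuated by exp(-0.32)
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 0.02
drr = simple_drr(vol, spacing_mm=1.0)
```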
Cochlear implant simulator for surgical technique analysis
Rebecca L. Turok, Robert F. Labadie, George B. Wanna, et al.
Cochlear Implant (CI) surgery is a procedure in which an electrode array is inserted into the cochlea. The electrode array is used to stimulate auditory nerve fibers and restore hearing for people with severe to profound hearing loss. The primary goals when placing the electrode array are to fully insert the array into the cochlea while minimizing trauma to the cochlea. Studying the relationship between surgical outcome and various surgical techniques has been difficult since trauma and electrode placement are generally unknown without histology. Our group has created a CI placement simulator that combines an interactive 3D visualization environment with a haptic-feedback-enabled controller. Surgical techniques and patient anatomy can be varied between simulations so that outcomes can be studied under varied conditions. With this system, we envision that through numerous trials we will be able to statistically analyze how outcomes relate to surgical techniques. As a first test of this system, in this work, we have designed an experiment in which we compare the spatial distribution of forces imparted to the cochlea in the array insertion procedure when using two different but commonly used surgical techniques for cochlear access, called round window and cochleostomy access. Our results suggest that CIs implanted using round window access may cause less trauma to deeper intracochlear structures than cochleostomy techniques. This result is of interest because it challenges traditional thinking in the otological community but might offer an explanation for recent anecdotal evidence that suggests that round window access techniques lead to better outcomes.
Pelvic Procedures
icon_mobile_dropdown
Fast prostate segmentation for brachytherapy based on joint fusion of images and labels
Saman Nouranian, Mahdi Ramezani, S. Sara Mahdavi, et al.
Brachytherapy, one of the treatment methods for prostate cancer, is performed by implanting radioactive seeds inside the gland. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate which are segmented in order to plan the appropriate seed placement. The segmentation process is usually performed either manually or semi-automatically and is associated with subjective errors because the prostate visibility is limited in ultrasound images. The current segmentation process also limits the possibility of intra-operative delineation of the prostate to perform real-time dosimetry. In this paper, we propose a computationally inexpensive and fully automatic segmentation approach that takes advantage of previously segmented images to form a joint space of images and their segmentations. We utilize a joint Independent Component Analysis method to generate a model which is further employed to produce a probability map of the target segmentation. We evaluate this approach on the transrectal ultrasound volume images of 60 patients using a leave-one-out cross-validation approach. The results are compared with the manually segmented prostate contours that were used by clinicians to plan brachytherapy procedures. We show that the proposed approach is fast with comparable accuracy and precision to those found in previous studies on TRUS segmentation.
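As a rough illustration of the joint image-label idea (not the paper's exact jICA formulation), one can concatenate each training image with its segmentation, decompose the joint matrix with ICA, fit the mixing weights of a new image using only the image half, and synthesize the label half as a probability-like map. All data and dimensions below are synthetic assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

# synthetic "joint space": each row is one case's image and label concatenated
rng = np.random.default_rng(0)
n_cases, n_pix = 20, 64
labels = (rng.random((n_cases, n_pix)) > 0.5).astype(float)
images = labels * 2.0 + rng.normal(0.0, 0.1, (n_cases, n_pix))
X = np.hstack([images, labels])

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
ica.fit(X)
A_img = ica.mixing_[:n_pix]        # mixing rows for the image half
A_lab = ica.mixing_[n_pix:]        # mixing rows for the label half

# fit source weights to a new image using the image half only, then
# synthesize the label half as a probability-like map
new_img = images[0]
w, *_ = np.linalg.lstsq(A_img, new_img - ica.mean_[:n_pix], rcond=None)
prob_map = A_lab @ w + ica.mean_[n_pix:]
```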
Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation
Tharindu De Silva, Derek W. Cool, Cesare Romagnoli, et al.
In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities
Peter R. Martin, Derek W. Cool, Cesare Romagnoli, et al.
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician’s desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P ≥ 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P ≥ 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
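The reported probabilities can be reproduced in spirit with a Monte Carlo draw: sample needle placements from an isotropic Gaussian whose total 3D RMS magnitude is 3.5 mm and count hits. The spherical tumor here is a simplifying assumption; the study used registered 3D tumor surfaces:

```python
import numpy as np

def sampling_probability(aim_offset_mm, tumor_radius_mm,
                         rms_error_mm=3.5, n_trials=100_000, seed=1):
    """Monte Carlo probability that one biopsy needle lands inside a
    spherical tumor, under isotropic Gaussian delivery error whose total
    3D RMS magnitude is rms_error_mm."""
    rng = np.random.default_rng(seed)
    sigma = rms_error_mm / np.sqrt(3.0)        # per-axis standard deviation
    hits = rng.normal(0.0, sigma, (n_trials, 3)) + aim_offset_mm
    return float(np.mean(np.linalg.norm(hits, axis=1) <= tumor_radius_mm))

# aiming at the centroid: small tumors are often missed, large ones rarely
p_small = sampling_probability(np.zeros(3), tumor_radius_mm=2.0)
p_large = sampling_probability(np.zeros(3), tumor_radius_mm=8.0)
```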
Distinguishing benign confounding treatment changes from residual prostate cancer on MRI following laser ablation
G. Litjens, H. Huisman, R. Elliott, et al.
Laser interstitial thermotherapy (LITT) is a relatively new focal therapy technique for the ablation of localized prostate cancer. However, very little is known about the specific effects of LITT within the ablation zone and the surrounding normal tissue regions. For instance, it is important to be able to assess the extent of residual cancer within the prostate following LITT, which may be masked by thermally induced benign necrotic changes. Fortunately LITT is MRI compatible and hence this allows for quantitatively assessing LITT induced changes via multi-parametric MRI. Of course definite validation of any LITT induced changes on MRI requires confirmation via histopathology. The aim of this study was to quantitatively assess and distinguish the imaging characteristics of prostate cancer and benign confounding treatment changes following LITT on 3 Tesla multi-parametric MRI by carefully mapping the treatment related changes from the ex vivo surgically resected histopathologic specimens onto the pre-operative in vivo imaging. A better understanding of the imaging characteristics of residual disease and successfully ablated tissue might lead to improved treatment monitoring and as such patient prognosis. A unique clinical trial at the Radboud University Medical Center, in which 3 patients underwent a prostatectomy after LITT treatment, yielded ex-vivo histopathologic specimens along with pre- and post-LITT MRI. Using this data we (1) identified the computer extracted MRI signatures associated with treatment effects including benign necrotic changes and residual disease and (2) subsequently evaluated the computer extracted MRI features previously identified in distinguishing LITT induced changes in the ablated area relative to the residual disease. Towards this end first a pathologist annotated the ablated area and the residual disease on the ex-vivo histology and then we transferred the annotations to the post-LITT MRI using semi-automatic elastic registration.
The pre- and post-LITT MRI were subsequently registered and computer-derived multi-parametric MRI features extracted to determine differences in feature values between residual disease and successfully ablated tissue to assess treatment response. A scoring metric allowed us to identify those specific computer-extracted MRI features that maximally and differentially expressed between the ablated regions and the residual cancer, on a voxel-by-voxel basis. Finally, we used a Fuzzy C-Means algorithm to assess the discriminatory power of these selected features. Our results show that specific computer-extracted features from multi-parametric MRI differentially express within the ablated and residual cancer regions, as evidenced by our ability to, on a voxel-by-voxel basis, classify tissue as residual disease. Additionally, we show that change of feature values between pre- and post-LITT MRI may be useful as a quantitative marker for treatment response (T2-weighted texture and DCE MRI features showed largest differences between residual disease and successfully ablated tissue). Finally, a clustering approach to separate treatment effects and residual disease incorporating both (1) and (2) yielded a maximum area under the ROC curve of 0.97 on a voxel basis across 3 studies.
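Fuzzy C-Means, used above to assess the discriminatory power of the selected features, is compact enough to sketch directly in NumPy. The two-feature synthetic data below stand in for voxel-wise MRI features of ablated versus residual tissue; this is a generic FCM, not the authors' pipeline:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means for voxel-wise features (X: n_voxels x n_feats).
    Returns soft memberships U (n_voxels x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))      # random soft memberships
    for _ in range(n_iter):
        W = U ** m                                  # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# two well-separated feature distributions (e.g. ablated vs residual voxels)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(4.0, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X)
hard = U.argmax(axis=1)          # hard labels from soft memberships
```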
MRI-guided prostate focal laser ablation therapy using a mechatronic needle guidance system
Jeremy Cepek, Uri Lindner, Sangeet Ghai, et al.
Focal therapy of localized prostate cancer is receiving increased attention due to its potential for providing effective cancer control in select patients with minimal treatment-related side effects. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is an attractive modality for such an approach. In FLA therapy, accurate placement of laser fibers is critical to ensuring that the full target volume is ablated. In practice, error in needle placement is invariably present due to pre- to intra-procedure image registration error, needle deflection, prostate motion, and variability in interventionalist skill. In addition, some of these sources of error are difficult to control, since the available workspace and patient positions are restricted within a clinical MRI bore. In an attempt to take full advantage of the utility of intraprocedure MRI, while minimizing error in needle placement, we developed an MRI-compatible mechatronic system for guiding needles to the prostate for FLA therapy. The system has been used to place interstitial catheters for MRI-guided FLA therapy in eight subjects in an ongoing Phase I/II clinical trial. Data from these cases has provided quantification of the level of uncertainty in needle placement error. To relate needle placement error to clinical outcome, we developed a model for predicting the probability of achieving complete focal target ablation for a family of parameterized treatment plans. Results from this work have enabled the specification of evidence-based selection criteria for the maximum target size that can be confidently ablated using this technique, and quantify the benefit that may be gained with improvements in needle placement accuracy.
EM-navigated catheter placement for gynecologic brachytherapy: an accuracy study
Alireza Mehrtash, Antonio Damato, Guillaume Pernelle, et al.
Gynecologic malignancies, including cervical, endometrial, ovarian, vaginal and vulvar cancers, cause significant mortality in women worldwide. The standard care for many primary and recurrent gynecologic cancers consists of chemoradiation followed by brachytherapy. In high dose rate (HDR) brachytherapy, intracavitary applicators and/or interstitial needles are placed directly inside the cancerous tissue so as to provide catheters to deliver high doses of radiation. Although technology for the navigation of catheters and needles is well developed for procedures such as prostate biopsy, brain biopsy, and cardiac ablation, it is notably lacking for gynecologic HDR brachytherapy. Using a benchtop study that closely mimics the clinical interstitial gynecologic brachytherapy procedure, we developed a method for evaluating the accuracy of image-guided catheter placement. Future bedside translation of this technology offers the potential benefit of maximizing tumor coverage during catheter placement while avoiding damage to the adjacent organs, for example the bladder, rectum and bowel. In the study, two independent experiments were performed on a phantom model to evaluate the targeting accuracy of an electromagnetic (EM) tracking system. The procedure was carried out using a laptop computer (2.1 GHz Intel Core i7, 8 GB RAM, Windows 7 64-bit), an EM Aurora tracking system with a 1.3 mm diameter 6 DOF sensor, and 6F (2 mm) brachytherapy catheters inserted through a Syed-Neblett applicator. The 3D Slicer and PLUS open source software were used to develop the system. The mean targeting error was less than 2.9 mm, which is comparable to the targeting errors of commercial clinical navigation systems.
Ultrasound Image Guidance: Joint Session with Conferences 9036 and 9040
icon_mobile_dropdown
In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates
J. Kishimoto, A. Fenster, N. Chen, et al.
Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with the slope not significantly different from 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other showed no significant difference (paired t-test, p = 0.44).
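The statistical comparisons described (correlation and slope of volume change versus removed CSF, and a paired t-test against a second modality) follow a standard recipe; the numbers below are made up solely to show the computation:

```python
import numpy as np
from scipy import stats

# hypothetical paired measurements (cm^3): 3D US ventricle-volume change
# across an intervention vs. the reported volume of CSF removed
vol_change = np.array([12.1, 8.4, 15.0, 6.7, 10.2])
csf_removed = np.array([12.5, 8.0, 14.2, 7.1, 10.8])

r = np.corrcoef(vol_change, csf_removed)[0, 1]             # correlation
slope, intercept = np.polyfit(csf_removed, vol_change, 1)  # regression slope
t_stat, p_val = stats.ttest_rel(vol_change, csf_removed)   # paired t-test
```

A slope near 1 with high correlation indicates the imaging measurement tracks the removed volume; a large paired-test p-value indicates no systematic offset between the two paired measurements.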
Visualizing positional uncertainty in freehand 3D ultrasound
Houssem-Eddine Gueziri, Michael J. McGuffin, Catherine Laporte
The freehand 3D ultrasound technique relies on position sensors attached to the probe to register the location of each image to a 3D space. However, the imprecision of the position sensors reduces the reliability of estimated image locations. In this paper, we propose a novel method to compute the positional uncertainty of an image plane. First, we use rigid body point-based registration to compute the error produced by each pixel of the image during the tracking. The Target Registration Error (TRE) is used to compute the covariance matrix of errors at each pixel position. This covariance matrix is then decomposed into 3D orientation error components along the x, y and z directions. Considering a volume around the image, we introduce the Image Plane Crossing Probability (IPCP) to determine the probability that the plane passes through each voxel. The result is a point cloud probability around the image plane, where each voxel contains the crossing probability and the contribution of each direction of the error. Finally, a simple volume rendering technique is used to visualize the uncertainty of the plane position. The results are validated in two steps. The first step is a Monte Carlo simulation to validate the estimate of the TRE covariance for the tracking errors. The second step simulates TRE errors on a plane and validates the associated positional uncertainty.
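The Monte Carlo validation step can be sketched as repeated rigid point-based registrations with simulated fiducial localization error, accumulating the target's displacement into a covariance estimate. The fiducial layout and FLE magnitude below are arbitrary assumptions:

```python
import numpy as np

def tre_covariance(fiducials, target, fle_sigma=0.5, n_trials=2000, seed=0):
    """Monte Carlo estimate of the TRE covariance at a target position:
    perturb the fiducials with Gaussian FLE, re-run rigid point-based
    registration (Kabsch), and accumulate the target's displacement."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        noisy = fiducials + rng.normal(0.0, fle_sigma, fiducials.shape)
        mu_a, mu_b = noisy.mean(0), fiducials.mean(0)
        H = (noisy - mu_a).T @ (fiducials - mu_b)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                      # best rotation noisy -> true
        t = mu_b - R @ mu_a
        errs.append(R @ target + t - target)    # displacement of the target
    return np.cov(np.array(errs).T)

fids = np.array([[0.0, 0, 0], [100.0, 0, 0], [0, 100.0, 0], [0, 0, 100.0]])
cov = tre_covariance(fids, target=np.array([50.0, 50.0, 0.0]))
```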
Synthetic aperture imaging in ultrasound calibration
Golafsoun Ameri, John S. H. Baxter, A. Jonathan McLeod, et al.
Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
Cardiac Procedures
icon_mobile_dropdown
Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions
We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost-function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame-rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
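The efficient 3D-to-2D perspective projection at the heart of such a method reduces to a pinhole camera model. The intrinsics below (focal length, principal point) are hypothetical, not the calibrated XRF geometry:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Pinhole perspective projection of 3D model points (e.g. TEE-probe
    features) into 2D pixel coordinates, given intrinsics K and pose (R, t)."""
    cam = (R @ points_3d.T).T + t        # rigid transform into camera frame
    uvw = (K @ cam.T).T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

# hypothetical intrinsics: 1000 px focal length, principal point at (256, 256)
K = np.array([[1000.0, 0.0, 256.0],
              [0.0, 1000.0, 256.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])   # probe feature points
uv = project_points(pts, K, np.eye(3), np.array([0.0, 0.0, 800.0]))
```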
Toward standardized mapping for left atrial analysis and cardiac ablation guidance
In catheter-based cardiac ablation, the pulmonary vein ostia are important landmarks for guiding the ablation procedure, and for this reason, have been the focus of many studies quantifying their size, structure, and variability. Analysis of pulmonary vein structure, however, has been limited by the lack of a standardized reference space for population based studies. Standardized maps are important tools for characterizing anatomic variability across subjects with the goal of separating normal inter-subject variability from abnormal variability associated with disease. In this work, we describe a novel technique for computing flat maps of left atrial anatomy in a standardized space. A flat map of left atrial anatomy is created by casting a single ray through the volume and systematically rotating the camera viewpoint to obtain the entire field of view. The technique is validated by assessing preservation of relative surface areas and distances between the original 3D geometry and the flat map geometry. The proposed methodology is demonstrated on 10 subjects, whose maps are subsequently combined to form a probabilistic map of anatomic location for each of the pulmonary vein ostia and the boundary of the left atrial appendage. The probabilistic map demonstrates that the locations of the inferior ostia have higher variability than the superior ostia and that the variability of the left atrial appendage is similar to that of the superior pulmonary veins. This technique could also have potential application in mapping electrophysiology data, radio-frequency ablation burns, or treatment planning in cardiac ablation therapy.
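A simplified analogue of the single-viewpoint flat-mapping idea is an equirectangular chart: each surface point's azimuth and elevation, as seen from an interior center point, become its 2D coordinates. The paper's technique systematically rotates the camera viewpoint and validates area and distance preservation, which this sketch does not attempt:

```python
import numpy as np

def flat_map_coords(points, center):
    """Equirectangular chart: longitude (azimuth) and latitude (elevation)
    of each surface point as seen from a single interior viewpoint."""
    v = points - center
    lon = np.arctan2(v[:, 1], v[:, 0])                     # azimuth
    lat = np.arcsin(v[:, 2] / np.linalg.norm(v, axis=1))   # elevation
    return np.stack([lon, lat], axis=1)

# toy "atrial surface": three points on a unit sphere around the origin
pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
uv = flat_map_coords(pts, np.zeros(3))
```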
Intraoperative measurements on the mitral apparatus using optical tracking: a feasibility study
Sandy Engelhardt, Raffaele De Simone, Diana Wald, et al.
Mitral valve reconstruction is a widespread surgical method to repair incompetent mitral valves. During reconstructive surgery the judgement of mitral valve geometry and the subvalvular apparatus is mandatory in order to choose the appropriate repair strategy. To date, intraoperative analysis of the mitral valve is merely based on visual assessment and inaccurate sizer devices, which do not allow for any accurate and standardized measurement of the complex three-dimensional anatomy. We propose a new intraoperative computer-assisted method for mitral valve measurements using a pointing instrument together with an optical tracking system. Sixteen anatomical points were defined on the mitral apparatus. The feasibility and the reproducibility of the measurements have been tested on a rapid prototyping (RP) heart model and a freshly excised porcine heart. Four heart surgeons repeated the measurements three times on each heart. Morphologically important distances between the measured points are calculated. We achieved a mean inter-expert variability of 2.28 ± 1.13 mm for the 3D-printed heart and 2.45 ± 0.75 mm for the porcine heart. The overall time to perform a complete measurement is 1-2 minutes, which makes the method viable for virtual annuloplasty during an intervention.
Ultrasound based mitral valve annulus tracking for off-pump beating heart mitral valve repair
Feng P. Li, Martin Rajchl, John Moore, et al.
Mitral regurgitation (MR) occurs when the mitral valve cannot close properly during systole. The NeoChord© tool aims to repair MR by implanting artificial chordae tendineae on flail leaflets inside the beating heart, without a cardiopulmonary bypass. Image guidance is crucial for such a procedure due to the lack of direct vision of the targets or instruments. While this procedure is currently guided solely by transesophageal echocardiography (TEE), our previous work has demonstrated that guidance safety and efficiency can be significantly improved by employing augmented virtuality to provide virtual presentation of mitral valve annulus (MVA) and tools integrated with real time ultrasound image data. However, real-time mitral annulus tracking remains a challenge. In this paper, we describe an image-based approach to rapidly track MVA points on 2D/biplane TEE images. This approach is composed of two components: an image-based phasing component identifying images at optimal cardiac phases for tracking, and a registration component updating the coordinates of MVA points. Preliminary validation has been performed on porcine data with an average difference between manually and automatically identified MVA points of 2.5mm. Using a parallelized implementation, this approach is able to track the mitral valve at up to 10 images per second.
Patient-specific left atrial wall-thickness measurement and visualization for radiofrequency ablation
Jiro Inoue, Allan C. Skanes, James A. White, et al.
INTRODUCTION: For radiofrequency (RF) catheter ablation of the left atrium, safe and effective dosing of RF energy requires transmural left atrium ablation without injury to extra-cardiac structures. The thickness of the left atrial wall may be a key parameter in determining the appropriate amount of energy to deliver. While left atrial wall-thickness is known to exhibit inter- and intra-patient variation, this is not taken into account in the current clinical workflow. Our goal is to develop a tool for presenting patient-specific left atrial thickness information to the clinician in order to assist in the determination of the proper RF energy dose. METHODS: We use an interactive segmentation method with manual correction to segment the left atrial blood pool and heart wall from contrast-enhanced cardiac CT images. We then create a mesh from the segmented blood pool and determine the wall thickness, on a per-vertex basis, orthogonal to the mesh surface. The thickness measurement is visualized by assigning colors to the vertices of the blood pool mesh. We applied our method to 5 contrast-enhanced cardiac CT images. RESULTS: Left atrial wall-thickness measurements were generally consistent with published thickness ranges. Variations were found to exist between patients, and between regions within each patient. CONCLUSION: It is possible to visually determine areas of thick vs. thin heart wall with high resolution in a patient-specific manner.
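The per-vertex thickness measurement described above — marching outward from a blood-pool surface vertex along its normal until leaving the segmented wall — could be sketched as below. This is an illustrative NumPy reimplementation under simplifying assumptions (ray marching through a binary wall mask on an axis-aligned grid); the function and parameter names are ours, not the authors'.

```python
import numpy as np

def wall_thickness(vertices, normals, wall_mask, spacing, step=0.1, max_mm=10.0):
    """For each inner-surface vertex (mm coordinates), march outward along
    the vertex normal through the binary wall mask and return the traversed
    distance (mm) as the local wall thickness."""
    thickness = np.zeros(len(vertices))
    for i, (v, n) in enumerate(zip(vertices, normals)):
        n = n / np.linalg.norm(n)
        d = 0.0
        while d < max_mm:
            p = v + (d + step) * n                    # candidate point in mm
            idx = np.round(p / spacing).astype(int)   # mm -> voxel index
            if (idx < 0).any() or (idx >= wall_mask.shape).any() \
                    or not wall_mask[tuple(idx)]:
                break
            d += step
        thickness[i] = d
    return thickness
```

The resulting per-vertex `thickness` array could then be mapped through a colormap to produce the color-coded blood-pool mesh the abstract describes.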
Mapping cardiac fiber orientations from high-resolution DTI to high-frequency 3D ultrasound
Xulei Qin, Silun Wang, Ming Shen, et al.
The orientation of cardiac fibers affects the anatomical, mechanical, and electrophysiological properties of the heart. Although echocardiography is the most common imaging modality in clinical cardiac examination, it can only provide the cardiac geometry or motion information without cardiac fiber orientations. If the patient’s cardiac fiber orientations can be mapped to his/her echocardiography images in clinical examinations, it may provide quantitative measures for diagnosis, personalized modeling, and image-guided cardiac therapies. Therefore, this project addresses the feasibility of mapping personalized cardiac fiber orientations to three-dimensional (3D) ultrasound image volumes. First, the geometry of the heart extracted from the MRI is translated to 3D ultrasound by rigid and deformable registration. Deformation fields between both geometries from MRI and ultrasound are obtained after registration. Three different deformable registration methods were utilized for the MRI-ultrasound registration. Finally, the cardiac fiber orientations imaged by DTI are mapped to ultrasound volumes based on the extracted deformation fields. Moreover, this study also demonstrated the ability to simulate electricity activations during the cardiac resynchronization therapy (CRT) process. The proposed method has been validated in two rat hearts and three canine hearts. After MRI/ultrasound image registration, the Dice similarity scores were more than 90% and the corresponding target errors were less than 0.25 mm. This proposed approach can provide cardiac fiber orientations to ultrasound images and can have a variety of potential applications in cardiac imaging.
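Mapping orientation vectors through a deformation field is commonly done by reorienting them with the local Jacobian of the mapping. The following is a generic sketch of that step — not the authors' implementation — assuming a dense displacement field defined on the same grid as the fiber vectors.

```python
import numpy as np

def map_fibers(fibers, disp, spacing):
    """Reorient unit fiber vectors through a deformation x -> x + u(x).

    fibers, disp: arrays of shape (X, Y, Z, 3) on the same grid.
    spacing: grid spacing per axis, e.g. (dx, dy, dz).
    The local Jacobian J = I + grad(u) is estimated by finite differences;
    each fiber is mapped as v' = J v and renormalized."""
    J = np.zeros(fibers.shape + (3,))
    for k in range(3):
        gx, gy, gz = np.gradient(disp[..., k], *spacing)
        J[..., k, 0], J[..., k, 1], J[..., k, 2] = gx, gy, gz
    J += np.eye(3)                                # J = I + grad(u)
    out = np.einsum('...ij,...j->...i', J, fibers)
    return out / np.linalg.norm(out, axis=-1, keepdims=True)
```

For diffusion tensors (rather than single vectors), one would instead use a finite-strain or preservation-of-principal-directions reorientation, but the Jacobian estimate is the same.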
Poster Session
Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration
Kaioqiong Sun, Jayaram K. Udupa, Dewey Odhner, et al.
This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.
Towards enabling ultrasound guidance in cervical cancer high-dose-rate brachytherapy
Adrian Wong, Samira Sojoudi, Marc Gaudet, et al.
MRI and Computed Tomography (CT) are used in image-based solutions for guiding High Dose Rate (HDR) brachytherapy treatment of cervical cancer. MRI is costly and CT exposes the patients to ionizing radiation. Ultrasound, on the other hand, is affordable and safe. The long-term goal of our work is to enable the use of multiparametric ultrasound imaging in image-guided HDR for cervical cancer. In this paper, we report the development of enabling technology for ultrasound guidance and tissue typing. We report a system to obtain the 3D freehand transabdominal ultrasound RF signals and B-mode images of the uterus, and a method for registration of ultrasound to MRI. MRI and 3D ultrasound images of the female pelvis were registered by contouring the uterus in the two modalities, creating a surface model, followed by rigid and B-spline deformable registration. The resulting transformation was used to map the location of the tumor from the T2-weighted MRI to ultrasound images and to determine cancerous and normal areas in ultrasound. B-mode images show a contrast for cancer vs. normal tissue. Our study shows the potential and the challenges of ultrasound imaging in guiding cervical cancer treatments.
Detection of tooth fractures in CBCT images using attention index estimation
The attention index (𝜑) is a number from zero to one that indicates the likelihood that a fracture is present inside a selected tooth. The higher the 𝜑 value, the greater the need for attention in the visual examination. The method developed for estimating 𝜑 extracts a connected component with image properties similar to those of a typical tooth fracture; in cone-beam computed tomography (CBCT) images, a fracture appears as a dark canyon inside the tooth. To start the visual examination process, the method provides a plane across the geometric center of the suspicious fracture component that maximizes the number of pixels from that component inside the plane. During visual examination, the user (doctor) can change the plane's orientation and location by manipulating the mouse over graphical elements that represent the plane on a 3-D rendition of the tooth, while the corresponding image of the plane is shown at its side. The visual examination aims at confirming or disproving the fracture-detection event. We have designed and implemented these algorithms using the image foresting transform methodology.
Updating a preoperative surface model with information from real-time tracked 2D ultrasound using a Poisson surface reconstruction algorithm
In this work, we propose a method for intraoperative reconstruction of a left atrial surface model for the application of cardiac ablation therapy. In this approach, the intraoperative point cloud is acquired by a tracked 2D freehand intra-cardiac echocardiography device and is registered to and merged with a preoperative, high-resolution left atrial surface model built from computed tomography data. For the surface reconstruction, we introduce a novel method to estimate the normal vectors of the point cloud from the preoperative left atrial model, as required by the Poisson surface reconstruction algorithm. In the current work, the algorithm is evaluated using a preoperative surface model from patient computed tomography data and simulated intraoperative ultrasound data. Factors such as intraoperative deformation of the left atrium, the proportion of the left atrial surface sampled by the ultrasound, sampling resolution, sampling noise, and registration error were considered through a series of simulation experiments.
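One simple way to estimate normals for intraoperative points from a registered preoperative model is a nearest-vertex lookup: each ultrasound point inherits the normal of the closest vertex on the prior surface. This is an illustrative sketch only — the paper's actual estimator may differ.

```python
import numpy as np

def estimate_normals_from_prior(points, model_vertices, model_normals):
    """Assign each intraoperative point the unit normal of the nearest
    vertex on the (already registered) preoperative surface model.
    Brute-force nearest neighbour; a k-d tree would be used at scale."""
    d2 = ((points[:, None, :] - model_vertices[None, :, :]) ** 2).sum(-1)
    n = model_normals[d2.argmin(1)]
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```

The oriented point cloud (points plus estimated normals) is exactly the input that Poisson surface reconstruction requires.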
Hand-eye calibration using dual quaternions in medical environment
Danilo Briese, Christine Niebler, Georg Rose
Nowadays medical interventions are often supported by localization systems using different measurement tools (MT). This requires registering the MT coordinate system to the world coordinate system used by the medical device. Hand-eye calibration is a well-known method from robotics to estimate the transformation between the gripper of a robot (hand) and an MT (eye) rigidly attached to the robot. Using a calibration tool (e.g. a checkerboard) one can obtain the hand-eye transformation from known relative movements of the robot and the data from the MT. The approach can also be used for an MT located elsewhere, using markers on the device; the positions of the markers need not be known, since they remain rigid during the motions. Based on prior work using dual quaternions to represent transformations in SE(3), we not only took into account movements between immediately neighbouring positions Pi and Pj, but combined all p positions conducted during the calibration to gain p(p-1)/2 submotions, without increasing the number of positions. We took into account the unity constraint for dual quaternions, since only unit dual quaternions represent rigid motions in space. We performed simulations that show the advantage of our algorithm, and additionally gained experimental data that supported the outcome of the simulations. We can outline that our approach estimates the hand-eye transformation more accurately than the standard algorithms.
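For context, a conventional solution of the hand-eye equation A_i X = X B_i (rotation via a Kabsch fit of the motions' rotation axes, then translation by linear least squares) can be sketched as below. Note this is the classical rotation-then-translation approach, not the paper's dual-quaternion formulation, and it assumes non-degenerate motion rotations (angles well away from 0 and π).

```python
import numpy as np

def rodrigues(axis, angle):
    """Axis-angle -> 3x3 rotation matrix."""
    a = np.asarray(axis, float); a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def log_rot(R):
    """Rotation matrix -> rotation vector (axis * angle); angle in (0, pi)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / np.linalg.norm(axis)

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for the 4x4 hand-eye transform X."""
    alphas = np.array([log_rot(A[:3, :3]) for A in As])
    betas = np.array([log_rot(B[:3, :3]) for B in Bs])
    H = betas.T @ alphas                      # Kabsch: rotate betas onto alphas
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    # translation: (R_A - I) t = R t_B - t_A, stacked over all motions
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4); X[:3, :3] = R; X[:3, 3] = t
    return X
```

At least two motions with non-parallel rotation axes are required for a unique solution, which is why the number of usable submotions matters in the paper's formulation.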
Distribution of guidance models for cardiac resynchronization therapy in the setting of multi-center clinical trials
Martin Rajchl, Kamyar Abhari, John Stirrat, et al.
Multi-center trials provide the unique ability to investigate novel techniques across a range of geographical sites with sufficient statistical power, with the inclusion of multiple operators determining feasibility under a wider array of clinical environments and workflows. For this purpose, we introduce a new means of distributing pre-procedural cardiac models for image-guided interventions across a large-scale multi-center trial. In this method, a single core facility is responsible for image processing, employing a novel web-based interface for model visualization and distribution. The requirements for such an interface, being WebGL-based, are minimal and well within the realms of accessibility for participating centers. We then demonstrate the accuracy of our approach using a single-center pacemaker lead implantation trial with generic planning models.
Open framework for management and processing of multi-modality and multidimensional imaging data for analysis and modelling muscular function
David García Juan, Bénédicte M. A. Delattre, Sara Trombella, et al.
Musculoskeletal disorders (MSD) are becoming a major economic burden on healthcare in developed countries with aging populations. Classical methods like biopsy or EMG used in clinical practice for muscle assessment are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project, a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of lower limb muscles at rest and during exercise. These data can be further extended to 5D by adding metabolic data obtained from PET images. Our data structure and corresponding image processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.
Innovation in aortoiliac stenting: an in vitro comparison
E. Groot Jebbink, P. C. J. M. Goverde, J. A. van Oostayen, et al.
Aortoiliac occlusive disease (AIOD) may cause disabling claudication due to progression of atherosclerotic plaque. Bypass surgery to treat AIOD has unsurpassed patency results, with 5-year patency rates up to 86%, at the expense of high complication rates (local and systemic morbidity rates of 6% and 16%). Therefore, less invasive endovascular treatment of AIOD with stents in both iliac limbs is the first choice in many cases, however with limited results (average 5-year patency: 71%, range: 63-82%). Changes in blood flow due to an altered geometry of the bifurcation are likely to be one of the contributing factors. The aim of this study is to compare the geometry and hemodynamics of various aortoiliac stent configurations in vitro. Transparent vessel phantoms mimicking the anatomy of the aortoiliac bifurcation are used to accommodate the stent configurations. Bare metal kissing (BMK) stents, kissing covered (KC) stents, and the Covered Endovascular Reconstruction of the Aortic Bifurcation (CERAB) configuration are investigated. The models are placed inside a flow rig capable of simulating physiologically relevant flow in the infrarenal area. Dye injection reveals flow disturbances near the neobifurcation of both the BMK and KC stents. At the radial mismatch areas of the KC stents, recirculation zones are observed. With the CERAB configuration no flow reversal or large disturbances are observed. In conclusion, dye injection reveals none of the significant flow disturbances with the new CERAB configuration that are seen with the KC and BMK stents.
Preliminary study of rib articulated model based on dynamic fluoroscopy images
Pierre-Frederic Villard, Pierre Escamilla, Erwan Kerrien, et al.
We present in this paper a preliminary study of rib motion tracking during Interventional Radiology (IR) fluoroscopy-guided procedures. It consists in providing a physician with moving three-dimensional (3D) rib models projected onto the fluoroscopy plane during a treatment. The strategy is to help quickly recognize the target and the no-go areas, i.e. the tumor and the organs to avoid. The method consists in i) elaborating a kinematic model of each rib from a preoperative computerized tomography (CT) scan, ii) processing the on-line fluoroscopy image, and iii) optimizing the parameters of the kinematic law such that the transformed 3D rib projected onto the medical image plane fits well with the previously processed image. The results show visually good rib tracking, quantitatively validated by a periodic motion as well as good synchronism between ribs.
Solving for free-hand and real-time 3D ultrasound calibration with anisotropic orthogonal Procrustes analysis
Elvis C. S. Chen, A. Jonathan McLeod, Uditha L. Jayarathne, et al.
Real-time 3D ultrasound is an emerging imaging modality, offering full volumetric view of the anatomy without ionizing radiation. A spatial tracking system facilitates its integration with image-guided interventions, but such integration requires an accurate calibration between the spatial tracker and the ultrasound volume. In this paper, a rapid calibration technique for real-time 3D ultrasound is presented, comprising a plane-based calibration phantom, an algorithm for automatic fiducial extraction from ultrasound volumes, and a numerical solution for deriving calibration parameters involving anisotropic scaling. Using a magnetic tracking system and a commercial transesophageal echocardiogram real-time 3D ultrasound probe, this technique achieved a mean Target Registration Error of 2.9 mm in a laboratory setting.
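An anisotropic orthogonal Procrustes problem Y ≈ R·diag(s)·X + t (rotation R, per-axis scales s, translation t) can be solved by alternating a Kabsch rotation step with a closed-form per-axis scale update. The following is a generic alternating-least-squares sketch, not the paper's numerical solution; names and the iteration scheme are ours.

```python
import numpy as np

def aop(X, Y, iters=200):
    """Anisotropic orthogonal Procrustes: find rotation R, per-axis scales
    s, and translation t with Y ~= (s * X) @ R.T + t, by alternating a
    Kabsch rotation step with a closed-form scale update."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    s = np.ones(3)
    for _ in range(iters):
        H = (s * Xc).T @ Yc                    # Kabsch on the scaled source
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        Z = Yc @ R                             # rows are R^T y_i
        s = (Z * Xc).sum(0) / (Xc * Xc).sum(0) # per-axis least-squares scale
    t = Y.mean(0) - R @ (s * X.mean(0))
    return R, s, t
```

The anisotropic scales absorb the differing voxel dimensions of the ultrasound volume, which an isotropic (similarity) Procrustes fit cannot represent.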
Motion and deformation compensation for freehand prostate biopsies
Siavash Khallaghi, Saman Nouranian, Samira Sojoudi, et al.
In this paper, we present a registration pipeline to compensate for prostate motion and deformation during targeted freehand prostate biopsies. We perform 2D-3D registration by reconstructing a thin-volume around the real-time 2D ultrasound imaging plane. Constrained sum of squared differences (SSD) and gradient descent optimization are used to rigidly align the moving volume to the fixed thin-volume. Subsequently, B-spline deformable registration is performed to compensate for remaining non-linear deformations. SSD and the zero-bounded limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) optimizer are used to find the optimum B-spline parameters. Registration results are validated on five prostate biopsy patients. Initial experiments suggest thin-volume-to-volume registration to be more effective than slice-to-volume registration. Also, a consistent improvement of at least 2 mm in Target Registration Error (TRE) is achieved following the deformable registration.
SimITK: model driven engineering for medical imaging
Melissa Trezise, David Gobbi, James Cordy, et al.
The Insight Segmentation and Registration Toolkit (ITK) is a highly utilized open-source medical imaging library providing chiefly the functionality to register, segment, and filter medical images. Although extremely powerful, ITK has a steep learning curve for users with little or no background in programming. It was for this reason that SimITK was developed. SimITK wraps ITK into the model driven engineering environment Simulink, a part of the Matlab development suite. The first released version of SimITK was a proof of concept, demonstrating that ITK could be wrapped successfully in Simulink. In this paper a new version of SimITK is presented in which ITK classes are wrapped using a fully automated process. In addition, SimITK has transitioned to support ITK version 4, in order to remain current with the ITK project. SimITK includes thirty-seven image filters, twelve optimizers, and nineteen transform classes from ITK version 4, which are successfully wrapped and tested, and can be quickly and easily combined to perform medical imaging tasks. These classes were chosen to represent a broad range of usability, and to allow for greater flexibility when creating registration pipelines. SimITK has the potential to reduce the learning curve for ITK and allow the user to focus on developing workflows and algorithms. A release of SimITK along with tutorials and videos is available at www.simitkvtk.com.
Automatic labeling and segmentation of vertebrae in CT images
Labeling and segmentation of the spinal column from CT images is a pre-processing step for a range of image-guided interventions. State-of-the-art techniques have focused either on image feature extraction or template matching for labeling of the vertebrae, followed by segmentation of each vertebra. Recently, statistical multi-object models have been introduced to extract common statistical characteristics among several anatomies. In particular, we have created models for segmentation of the lumbar spine which are robust, accurate, and computationally tractable. In this paper, we construct a statistical multi-vertebrae pose+shape model and utilize it in a novel framework for labeling and segmentation of the vertebrae in CT images. We validate our technique in terms of accuracy of the labeling and segmentation of CT images acquired from 56 subjects. The method correctly labels all vertebrae in 70% of patients and is only one level off for the remaining 30%. The mean distance error achieved for the segmentation is 2.1 ± 0.7 mm.
Design and development of an ultrasound calibration phantom and system
Alexis Cheng, Martin K. Ackerman, Gregory S. Chirikjian, et al.
Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy-to-use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate the sum of squared differences between corresponding points, for several combinations of motion generation and filtering methods. The best performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
Computational modeling and analysis for left ventricle motion using CT/Echo image fusion
Ji-Yeon Kim, Nahyup Kang, Hyoung-Euk Lee, et al.
In order to diagnose heart diseases such as myocardial infarction, 2D strain from speckle tracking echocardiography (STE) or tagged MRI is often used. However, out-of-plane strain measurement using STE or tagged MRI is inaccurate. Therefore, whole-organ strain, analyzed by simulation of a 3D cardiac model, can be applied in clinical diagnosis. To simulate cardiac contraction over a cycle, cardiac physical properties should be reflected in the cardiac model. The myocardial wall of the left ventricle is represented as a transversely orthotropic hyperelastic material, with the fiber orientation varying sequentially from the epicardial surface, through about 0° at the midwall, to the endocardial surface. A time-varying elastance model is simulated to contract the myocardial fibers, and physiological intraventricular systolic pressure curves are employed for the cardiac dynamics simulation over a cycle. An exact description of the cardiac motion must also be acquired so that essential boundary conditions for the cardiac simulation are obtained effectively. Real-time cardiac motion can be acquired using echocardiography, and an exact 3D model of the cardiac geometry can be reconstructed from 3D CT data. In this research, image fusion of CT and echocardiography is employed in order to consider patient-specific left ventricle movement. Finally, longitudinal strain from speckle tracking echocardiography, which is known to fit actual left ventricle deformation relatively well, is used to verify the results.
Colonoscope navigation system using colonoscope tracking method based on line registration
This paper presents a new colonoscope navigation system. CT colonography is utilized for colon diagnosis based on CT images. If polyps are found during CT colonography, colonoscopic polypectomy can be performed to remove them. While performing a colonoscopic examination, a physician controls the colonoscope based on his/her experience. Inexperienced physicians may cause complications, such as colon perforation, during colonoscopic examinations. To reduce complications, a system for navigating the colonoscope during these examinations is necessary. We propose a colonoscope navigation system with a new colonoscope tracking method. This method obtains a colon centerline from a CT volume of a patient. A curved line (colonoscope line) representing the shape of the colonoscope inserted into the colon is obtained by using electromagnetic sensors. A coordinate system registration process that employs the ICP algorithm is performed to register the CT and sensor coordinate systems. The colon centerline and colonoscope line are then registered using a line registration method. The position of the colonoscope tip in the colon is obtained from the line registration result. Our colonoscope navigation system displays virtual colonoscopic views generated from the CT volumes, whose viewpoint is the point on the centerline that corresponds to the colonoscope tip. Experimental results using a colon phantom showed that the proposed colonoscope tracking method can track the colonoscope tip with small tracking errors.
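The coordinate-system registration step uses the ICP algorithm; a minimal point-based ICP (brute-force nearest-neighbour correspondences plus a Kabsch rigid fit, iterated) can be sketched as below. This is an illustrative sketch of plain ICP only, not the authors' line-registration method, and it assumes a reasonable initial alignment.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R p_i + t - q_i|| over pairs."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

def icp(src, dst, iters=50):
    """Iterative closest point: register sensed points (src), e.g. along
    the colonoscope line, to CT centerline points (dst)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest-neighbour correspondences
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = kabsch(src, dst[d2.argmin(1)])
    return R, t
```

Registering two curves this way is only well-posed when the curves have enough shape variation; a straight segment would leave a sliding ambiguity along its length, which is one motivation for a dedicated line registration method.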
Intraoperative imaging of cortical perfusion by time-resolved thermography using cold bolus approach
Julia Hollmach, Christian Schnabel, Nico Hoffmann, et al.
During the past decade, thermographic cameras with high thermal and temporal resolution of up to 30 mK and 50 Hz, respectively, have been developed. These camera systems can be used to reveal thermal variations and heterogeneities of tissue and blood. Thus, they provide a fast, sensitive, noninvasive, and label-free means to investigate blood perfusion and to detect perfusion disorders. Therefore, time-resolved thermography is evaluated and tested for intraoperative imaging of the cerebral cortex during neurosurgery. The motivation of this study is the intraoperative evaluation of cortical perfusion by observing the temporal temperature curve of the cortex during and after the intravenous application of a cold bolus. The temperature curve caused by a cold bolus is influenced by thermodilution, depending on the temperature difference to the patient's circulation and the pattern of mixing with the patient's blood. In this initial study, a flow phantom was used in order to determine the temperature variations of cold boli under stable conditions in a vascular system. The typical temperature profile of cold water passing by can be approximated by a bi-Gaussian function involving a set of four parameters. These parameters can be used to assess the cold bolus, since they provide information about its intensity, duration, and arrival time. The findings from the flow phantom can be applied to thermographic measurements of the human cortex. The results demonstrate that time-resolved thermographic imaging is a suitable method to detect cold boli not only in a flow phantom but also at the human cortex.
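A four-parameter bi-Gaussian (amplitude, arrival time, and separate widths before and after the peak) can be fitted to a measured temperature-drop curve with an off-the-shelf least-squares routine. This sketch assumes SciPy is available; the exact parameterization the authors use may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_gaussian(t, a, t0, s1, s2):
    """Asymmetric Gaussian: width s1 before the peak at t0, s2 after."""
    s = np.where(t < t0, s1, s2)
    return a * np.exp(-(t - t0) ** 2 / (2 * s ** 2))

def fit_bolus(t, temp_drop, p0=(1.0, 10.0, 1.0, 2.0)):
    """Fit the four bi-Gaussian parameters (amplitude, arrival time, rise
    and decay widths) to a measured temperature-drop curve; p0 is a rough
    initial guess in the curve's own units."""
    popt, _ = curve_fit(bi_gaussian, t, temp_drop, p0=p0)
    return popt
```

The fitted amplitude, arrival time, and widths map directly onto the bolus intensity, arrival time, and duration mentioned in the abstract.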
Registration-based filtering: An acceptable tool for noise reduction in left ventricular dynamic rotational angiography images?
Jean-Yves Wielandts, Stijn De Buck, Joris Ector, et al.
VT ablations could benefit from dynamic 3D (4D) left ventricle (LV) visualization as a road-map for anatomy-guided procedures. We developed a registration-based method that combines information from several cardiac phases to filter out noise and artifacts in low-dose 3D Rotational Angiography (3DRA) images. This also enables generation of accurate multi-phase surface models by semi-automatic segmentation (SAS). The method uses B-spline non-rigid inter-phase registration (IPR) and subsequent averaging of the registered 3DRA images of 4 cardiac phases, acquired with a slow atrial pacing protocol, and was validated on data from 5 porcine experiments. IPR parameter settings were optimized against manual delineations of the LVs using a composed similarity score (Q), dependent on the DICE coefficient, RMS distance (RMSD), Hausdorff distance (HD), and the percentages of inter-surface distances ≤3 mm and ≤4 mm; the latter are clinically acceptable error cut-off values. Validation was performed after SAS for varying voxel intensity thresholds (ISO), by comparing models with and without prior use of IPR. Distances to the manual delineations at optimal ISO were reduced to ≤3 mm for 95.6±2.7% and to ≤4 mm for 97.1±2.0% of model surfaces. Improved quality was proven by a significant mean Q-increase irrespective of ISO (7.6% at optimal ISO; 95% CI 4.6-10.5%, p<0.0001). Quality improvement was more pronounced at suboptimal ISO values. Significant (p<0.0001) differences were also noted in HD (-20.5%; 95% CI -12.1% to -29.0%), RMSD (-28.3%; 95% CI -21.7% to -35.0%), and DICE (1.7%; 95% CI 0.9% to 2.6%). Generating 4D LV models proved feasible, with sufficient accuracy for clinical applications, opening the perspective of more accurate overlay and guidance during ablation in locations with high degrees of movement.
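Of the similarity terms entering the composed score Q, the DICE coefficient is the simplest to state precisely: for two binary segmentations A and B it is 2|A∩B|/(|A|+|B|). A minimal NumPy version, for illustration:

```python
import numpy as np

def dice(a, b):
    """DICE coefficient of two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for identical non-empty masks, 0.0 for disjoint ones."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

The surface-distance terms (RMSD, HD, percentage of distances below a cut-off) are computed analogously from the distances between corresponding model and reference surfaces.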
Dimensional accuracy of 3D printed vertebra
Kent Ogden, Nathaniel Ordway, Dalanda Diallo, et al.
3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications, as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular, the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer via an additive process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and by measurements made on the 3D rendered vertebra.
A tool for intraoperative visualization of registration results
Franklin King, Andras Lasso, Csaba Pinter, et al.
PURPOSE: Validation of image registration algorithms is frequently accomplished by visual inspection of the resulting linear or deformable transformation, due to the lack of ground truth information. Visualization of the transformations produced by image registration algorithms during image-guided interventions allows a clinician to evaluate the accuracy of the resulting transformation. Software packages that perform visualization of transformations exist, but are not part of a clinically usable software application. We present a tool that visualizes both linear and deformable transformations and is integrated into an open-source software application framework suited for intraoperative use and general evaluation of registration algorithms. METHODS: A choice of six different modes is available for visualization of a transform. Glyph visualization mode uses oriented and scaled glyphs, such as arrows, to represent the displacement field in 3D, whereas glyph slice visualization mode creates arrows that can be seen as a 2D vector field. Grid visualization mode creates deformed grids shown in 3D, whereas grid slice visualization mode creates a series of 2D grids. Block visualization mode creates a deformed bounding box of the warped volume. Finally, contour visualization mode creates isosurfaces and isolines that visualize the magnitude of displacement across a volume. The application 3D Slicer was chosen as the platform for the transform visualizer tool. 3D Slicer is a comprehensive open-source application framework developed for medical image computing and used for intra-operative registration. RESULTS: The transform visualizer tool fulfilled the requirements for quick evaluation of intraoperative image registrations. Visualizations were generated in 3D Slicer with little computation time on realistic datasets. It is freely available as an extension for 3D Slicer.
CONCLUSION: A tool for the visualization of displacement fields was created and integrated into 3D Slicer, facilitating the validation of image registration algorithms within a comprehensive application framework suited for intraoperative use.
Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures
Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate localization of the targets on pre-operative MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best result for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes, using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods studied, Diffeomorphic Demons produces the largest spreads and centroids that are, in general, the furthest apart from the others. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant, across different algorithms or across different atlases.
Design of a tracked ultrasound calibration phantom made of LEGO bricks
Ryan Walsh, Marie Soehl, Adam Rankin, et al.
PURPOSE: Spatial calibration of tracked ultrasound systems is commonly performed using precisely fabricated phantoms. Machined or 3D-printed phantoms are relatively expensive and not easily available, and the possibilities for modifying them are very limited. Our goal was to find a method to construct a calibration phantom from affordable, widely available components that can be built in a short time, can be easily modified, and provides accuracy comparable to existing solutions. METHODS: We designed an N-wire calibration phantom made of LEGO® bricks. To affirm the phantom’s reproducibility and build time, ten builds were done by first-time users. The phantoms were used for tracked ultrasound calibration by an experienced user. The success of each user’s build was determined by the lowest root mean square (RMS) wire reprojection error of three calibrations. The accuracy and variance of the calibrations were evaluated for various tracked ultrasound probes. The proposed model was compared to two of the currently available phantom models for both electromagnetic and optical tracking. RESULTS: The phantom was successfully built by all ten first-time users in an average time of 18.8 minutes. It cost approximately $10 CAD for the required LEGO® bricks and averaged 0.69 mm calibration reproducibility error for ultrasound calibrations. It is one third the cost of similar 3D printed phantoms and takes much less time to build. The proposed phantom’s image reprojections were 0.13 mm more erroneous than those of the highest performing current phantom model, and the average standard deviation of multiple 3D image reprojections differed by 0.05 mm between the phantoms. CONCLUSION: The phantom could be built in less time and at one third the cost of similar 3D printed models, and was found to be capable of producing calibrations equivalent to those of 3D printed phantoms.
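The build quality above is scored by RMS wire reprojection error. As a minimal sketch of that scoring step, assuming hypothetical reprojected and true wire-intersection coordinates (the actual phantom geometry and point data are not given in the abstract):

```python
import math

def rms_error(measured, expected):
    """Root-mean-square distance between paired 3D points (mm)."""
    assert len(measured) == len(expected)
    sq = [sum((m - e) ** 2 for m, e in zip(p, q))
          for p, q in zip(measured, expected)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical reprojected vs. true N-wire intersection points (mm)
measured = [(10.2, 5.1, 0.0), (20.0, 5.0, 0.3), (29.8, 4.9, -0.2)]
expected = [(10.0, 5.0, 0.0), (20.0, 5.0, 0.0), (30.0, 5.0, 0.0)]
print(round(rms_error(measured, expected), 3))  # → 0.277
```

The lowest of three such errors per build was used to judge each first-time user's phantom.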
SPECT-US image fusion and clinical applications
Johann Hummel, Marcus Kaar, Rainer Hoffmann, et al.
Because scintigraphic images lack anatomical information, single photon emission computed tomography (SPECT) and positron emission tomography (PET) systems are physically combined with CT scanners to compensate for this drawback. In our work, we present a method in which the CT is replaced by a 3D ultrasound (US) device. Because a mechanical linkage is not possible in this case, we use an additional optical tracking system (OTS) for spatial correlation of the SPECT or PET information and the US. To enable image fusion between the functional SPECT and the anatomical US, we first calibrate the SPECT by means of the optical tracking system. This is done by imaging a phantom with SPECT and scanning the surface of the phantom using a calibrated stylus of the OTS. Applying an iterative closest point (ICP) algorithm yields the transformation between the optical coordinate system and the SPECT coordinate system. When a patient undergoes a SPECT scan, a 3D US image is taken immediately after the scan. Since the scan head of the US is also tracked by the OTS, the transformation between US and SPECT can be calculated straightforwardly. For clinical intervention, the patient is again imaged with US, and a 3D/3D registration between the two US volumes allows the functional SPECT information to be transformed to the current US image in real time. We found a mean distance of 2.3 mm between the point cloud of the optical stylus and the segmented surface of the phantom, with a maximum distance of 6.9 mm. The 3D/3D registration between the two US images was accomplished with an error of 2.1 mm.
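The fusion chain described above amounts to composing rigid transforms obtained from the two calibrations. A minimal sketch of that composition, using hypothetical 4×4 transforms with identity rotations (the actual calibration matrices are not reported in the abstract):

```python
def matmul(A, B):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def invert_rigid(T):
    """Invert a rigid 4x4 transform: rotation transposed, translation -R^T t."""
    R = [[T[j][i] for j in range(3)] for i in range(3)]  # transpose of rotation
    t = [T[i][3] for i in range(3)]
    ti = [-sum(R[i][k] * t[k] for k in range(3)) for i in range(3)]
    return [R[0] + [ti[0]], R[1] + [ti[1]], R[2] + [ti[2]], [0, 0, 0, 1]]

# Hypothetical calibration results (identity rotations, translations in mm):
T_spect_to_ots = [[1, 0, 0, 12.0], [0, 1, 0, -3.0], [0, 0, 1, 40.0], [0, 0, 0, 1]]
T_us_to_ots    = [[1, 0, 0,  2.0], [0, 1, 0,  5.0], [0, 0, 1, 38.0], [0, 0, 0, 1]]

# SPECT -> US: go SPECT -> OTS, then OTS -> US (inverse of US -> OTS).
T_spect_to_us = matmul(invert_rigid(T_us_to_ots), T_spect_to_ots)
print(T_spect_to_us[0][3], T_spect_to_us[1][3], T_spect_to_us[2][3])  # → 10.0 -8.0 2.0
```

The same composition pattern applies again at intervention time, when the 3D/3D US registration is chained in front of the stored SPECT-to-US transform.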
Dual-projection 3D-2D registration for surgical guidance: preclinical evaluation of performance and minimum angular separation
An algorithm for 3D-2D registration of CT and x-ray projections has been developed using dual projection views to provide 3D localization with accuracy exceeding that of conventional tracking systems. The registration framework employs a normalized gradient information (NGI) similarity metric and the covariance matrix adaptation evolution strategy (CMA-ES) to solve for the patient pose in 6 degrees of freedom. Registration performance was evaluated in anthropomorphic head and chest phantoms, as well as a human torso cadaver, using C-arm projection views acquired at angular separations (Δθ) ranging from 0° to 178°. Registration accuracy was assessed in terms of target registration error (TRE) and compared to that of an electromagnetic tracker. Studies evaluated the influence of C-arm magnification, x-ray dose, and preoperative CT slice thickness on registration accuracy, as well as the minimum angular separation required to achieve TRE of ~2 mm. The results indicate that Δθ as small as 10–20° is adequate to achieve TRE <2 mm with 95% confidence, comparable or superior to commercial trackers. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from the conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration. The studies support potential application to percutaneous spine procedures and intracranial neurosurgery.
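The paper's NGI metric follows its own formulation; as a loosely related illustration of gradient-based similarity, the sketch below computes normalized cross-correlation between horizontal-gradient images of two toy 2D arrays (all data hypothetical; this is a stand-in for, not a reproduction of, the NGI metric):

```python
def grad_x(img):
    """Forward-difference horizontal gradient of a 2D list image."""
    return [[row[j + 1] - row[j] for j in range(len(row) - 1)] for row in img]

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized 2D arrays."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = (sum((x - ma) ** 2 for x in fa) *
           sum((y - mb) ** 2 for y in fb)) ** 0.5
    return num / den if den else 0.0

# Two toy images with the same edge structure but different intensity scale:
fixed  = [[0, 1, 3, 3], [0, 2, 4, 4], [0, 1, 3, 3]]
moving = [[0, 2, 6, 6], [0, 4, 8, 8], [0, 2, 6, 6]]
print(round(ncc(grad_x(fixed), grad_x(moving)), 3))  # → 1.0
```

Gradient-domain metrics like this are insensitive to overall intensity scaling, which is one reason they suit CT-to-fluoroscopy registration where absolute intensities differ between modalities.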
Feasibility of a touch-free user interface for ultrasound snapshot-guided nephrostomy
Simon Kotwicz Herniczek, Andras Lasso, Tamas Ungi, et al.
PURPOSE: Clinicians are often required to interact with visualization software during image-guided medical interventions, but sterility requirements forbid the use of traditional keyboard and mouse devices. In this study we attempt to determine the feasibility of using a touch-free interface in a real time procedure by creating a full gesture-based guidance module for ultrasound snapshot-guided percutaneous nephrostomy. METHODS: The workflow for this procedure required a gesture to select between two options, a “back” and “next” gesture, a “reset” gesture, and a way to mark a point on an image. Using an orientation sensor mounted on the hand as input device, gesture recognition software was developed based on hand orientation changes. Five operators were recruited to train the developed gesture recognition software. The participants performed each gesture ten times and placed three points on predefined target positions. They also performed tasks unrelated to the sought-after gestures to evaluate the specificity of the gesture recognition. The orientation sensor measurements and the position of the marked points were recorded. The recorded data sets were used to establish threshold values and optimize the gesture recognition algorithm. RESULTS: For the “back”, “reset” and “select option” gesture, a 100% recognition accuracy was achieved. For the “next” gesture, a 92% recognition accuracy was obtained. With the optimized gesture recognition software no misclassified gestures were observed when testing the individual gestures or when performing actions unrelated to the sought-after gestures. The mean point placement error was 0.55 mm with a standard deviation of 0.30 mm. The mean placement time was 4.8 seconds. CONCLUSION: The system that was developed is promising and demonstrates potential for touch-free interfaces in the operating room.
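A threshold-based classifier of the kind trained here could, in simplified form, look at peak hand-roll deflection and return-to-neutral. The sketch below is hypothetical; the actual gesture set, sensor axes, and threshold values in the study are not specified at this level of detail:

```python
# Hypothetical thresholds (degrees) that would be learned from training data.
THRESHOLDS = {"next": 35.0, "back": -35.0}

def classify(roll_series):
    """Classify a hand-roll time series (degrees) as 'next', 'back', or None.

    A gesture fires only if the peak deflection exceeds the threshold AND
    the hand returns near neutral, so incidental motion is ignored.
    """
    peak = max(roll_series, key=abs)
    settled = abs(roll_series[-1]) < 10.0
    if settled and peak > THRESHOLDS["next"]:
        return "next"
    if settled and peak < THRESHOLDS["back"]:
        return "back"
    return None

print(classify([0, 15, 42, 50, 30, 8, 2]))   # deliberate roll right
print(classify([0, -5, -12, -9, -3, 0]))     # incidental motion
```

The settle-back condition is one simple way to get the specificity the study reports, where unrelated hand actions did not trigger gestures.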
Active shape models with optimised texture features for radiotherapy
K. Cheng, D. Montgomery, F. Yang, et al.
There is now considerable interest in radiation oncology in the use of shape models of anatomy to improve target delineation and to assess anatomical disparity at the time of radiotherapy. In this paper a texture-based active shape model (ASM) is presented for automatic delineation of the gross tumor volume (GTV), containing the prostate, on computed tomography (CT) images of prostate cancer patients. The model was trained on two-dimensional (2D) contours identified by a radiation oncologist on sequential CT image slices. A three-dimensional (3D) GTV shape was constructed from these and iteratively aligned using Procrustes analysis. The shape deformation variance was then learnt using the ASM approach. In a novel development of this approach, a profile feature was selected from pre-computed texture features by minimizing the Mahalanobis distance to obtain the most distinct feature for each landmark. The interior of the GTV was modelled using quantile histograms to initialize the shape model on new cases. From an archive of 42 contoured CT scans, 32 cases were randomly selected for training the model and 10 cases for evaluating performance. The gold standard was defined by the radiation oncologist. The shape model achieved an overall Dice coefficient of 0.81 across all test cases. Performance increased (mean Dice coefficient 0.87) when the volume of the new case was similar to the mean shape of the model. With further work the approach has the potential to be used in real-time delineation of target volumes and to improve segmentation accuracy.
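The reported Dice coefficient compares the automatic and gold-standard volumes. A minimal sketch of that overlap measure on hypothetical voxel sets:

```python
def dice(a, b):
    """Dice coefficient between two voxel label sets: 2|A∩B| / (|A|+|B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical 10x10x10-voxel segmentations offset by 2 voxels along one axis.
auto   = {(i, j, k) for i in range(10) for j in range(10) for k in range(10)}
manual = {(i, j, k) for i in range(2, 12) for j in range(10) for k in range(10)}
print(dice(auto, manual))  # → 0.8
```

A Dice value of 1.0 means perfect overlap; the study's 0.81 overall (0.87 for near-mean-shape cases) is typical of CT prostate delineation.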
Heuristic estimation of electromagnetically tracked catheter shape for image-guided vascular procedures
Fuad N. Mefleh, G. Hamilton Baker, David M. Kwartowitz
In our previous work we presented a novel image-guided surgery (IGS) system, Kit for Navigation by Image Focused Exploration (KNIFE).1,2 KNIFE has been demonstrated to be effective in guiding mock clinical procedures, with the tip of an electromagnetically tracked catheter overlaid onto a pre-captured bi-plane fluoroscopic loop. Representation of the catheter in KNIFE differs greatly from what is captured by the fluoroscope, due to distortions and other properties of fluoroscopic images. When imaged by a fluoroscope, catheters can be visualized due to the inclusion of radiopaque materials (i.e. Bi, Ba, W) in the polymer blend.3 In KNIFE, however, catheter location is determined using a single tracking seed located in the catheter tip, represented as a single point overlaid on pre-captured fluoroscopic images. To bridge the gap in catheter representation between KNIFE and traditional methods, we constructed a catheter with five tracking seeds positioned along its distal 70 mm. We investigated the use of four spline interpolation methods for estimation of true catheter shape and assessed the error in their estimates. In this work we present a method for the evaluation of interpolation algorithms with respect to catheter shape determination.
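The abstract does not identify the four spline methods; as one representative candidate, the sketch below runs a Catmull-Rom spline (an interpolating cubic) through five hypothetical seed positions along a gently curving catheter tip:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    return tuple(
        0.5 * (2 * b + (-a + c) * t + (2*a - 5*b + 4*c - d) * t**2
               + (-a + 3*b - 3*c + d) * t**3)
        for a, b, c, d in zip(p0, p1, p2, p3))

def interpolate(seeds, samples_per_segment=10):
    """Densely sample a curve through the tracked seed positions."""
    pts = [seeds[0]] + list(seeds) + [seeds[-1]]  # clamp the endpoints
    curve = []
    for i in range(len(seeds) - 1):
        for s in range(samples_per_segment):
            curve.append(catmull_rom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3],
                                     s / samples_per_segment))
    curve.append(seeds[-1])
    return curve

# Five hypothetical seed positions (mm) along the distal 70 mm of a catheter.
seeds = [(0, 0, 0), (17, 2, 0), (34, 7, 1), (50, 15, 2), (65, 26, 4)]
curve = interpolate(seeds)
print(len(curve), curve[0], curve[-1])
```

Catmull-Rom is interpolating (the curve passes through every seed), which matters here because the tracked seeds are the only ground-truth samples of the catheter's pose.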
A dimensionless dynamic contrast enhanced MRI parameter for intra-prostatic tumour target volume delineation: initial comparison with histology
W. Thomas Hrinivich, Eli Gibson, Mena Gaed, et al.
Purpose: T2 weighted and diffusion weighted magnetic resonance imaging (MRI) show promise in isolating prostate tumours. Dynamic contrast enhanced (DCE)-MRI has also been employed as a component in multi-parametric tumour detection schemes. Model-based parameters such as Ktrans are conventionally used to characterize DCE images and require arterial contrast agent (CR) concentration. A robust parameter map that does not depend on arterial input may be more useful for target volume delineation. We present a dimensionless parameter (Wio) that characterizes CR wash-in and washout rates without requiring arterial CR concentration. Wio is compared to Ktrans in terms of ability to discriminate cancer in the prostate, as demonstrated via comparison with histology. Methods: Three subjects underwent DCE-MRI using gadolinium contrast and 7 s imaging temporal resolution. A pathologist identified cancer on whole-mount histology specimens, and slides were deformably registered to MR images. The ability of Wio maps to discriminate cancer was determined through receiver operating characteristic curve (ROC) analysis. Results: There is a trend that Wio shows greater area under the ROC curve (AUC) than Ktrans with median AUC values of 0.74 and 0.69 respectively, but the difference was not statistically significant based on a Wilcoxon signed-rank test (p = 0.13). Conclusions: Preliminary results indicate that Wio shows potential as a tool for Ktrans QA, showing similar ability to discriminate cancer in the prostate as Ktrans without requiring arterial CR concentration.
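The ROC analysis reduces to ranking per-voxel parameter values inside versus outside cancer. A minimal AUC sketch via the Mann-Whitney statistic, with hypothetical per-voxel values:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical per-voxel parameter values inside vs. outside tumour regions.
tumour = [0.90, 0.80, 0.75, 0.60, 0.55]
benign = [0.70, 0.50, 0.40, 0.35, 0.20]
print(auc(tumour, benign))  # → 0.92
```

AUC of 0.5 is chance-level discrimination and 1.0 is perfect; the study's median values (0.74 for Wio, 0.69 for Ktrans) sit in the modest-discrimination range typical of single-parameter prostate MRI markers.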
3D non-rigid surface-based MR-TRUS registration for image-guided prostate biopsy
Yue Sun, Wu Qiu, Cesare Romagnoli, et al.
Two dimensional (2D) transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for definitive diagnosis of prostate cancer (PCa). However, due to the lack of image contrast of prostate tumors needed to clearly visualize early-stage PCa, prostate biopsy often results in false negatives, requiring repeat biopsies. Magnetic Resonance Imaging (MRI) has been considered to be a promising imaging modality for noninvasive identification of PCa, since it can provide high sensitivity and specificity for the detection of early stage PCa. Our main objective is to develop and validate a registration method of 3D MR-TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. Our registration method first makes use of an initial rigid registration of 3D MR images to 3D TRUS images using 6 manually placed, approximately corresponding landmarks in each image. Following the manual initialization, the prostate surfaces are segmented from the 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline (TPS) algorithm. The registration accuracy was evaluated on 4 patient image sets by measuring the target registration error (TRE) of manually identified corresponding intrinsic fiducials (calcifications and/or cysts) in the prostates. Experimental results show that the proposed method yielded an overall mean TRE of 2.05 mm, which compares favorably with the clinical requirement of an error less than 2.5 mm.
A new CT prostate segmentation for CT-based HDR brachytherapy
Xiaofeng Yang, Peter Rossi, Tomi Ogunleye, et al.
High-dose-rate (HDR) brachytherapy has become a popular treatment modality for localized prostate cancer. Prostate HDR treatment involves placing 10 to 20 catheters (needles) into the prostate gland, and then delivering radiation dose to the cancerous regions through these catheters. These catheters are often inserted with transrectal ultrasound (TRUS) guidance and the HDR treatment plan is based on the CT images. The main challenge for CT-based HDR planning is to accurately segment prostate volume in CT images due to the poor soft tissue contrast and additional artifacts introduced by the catheters. To overcome these limitations, we propose a novel approach to segment the prostate in CT images through TRUS-CT deformable registration based on the catheter locations. In this approach, the HDR catheters are reconstructed from the intra-operative TRUS and planning CT images, and then used as landmarks for the TRUS-CT image registration. The prostate contour generated from the TRUS images captured during the ultrasound-guided HDR procedure was used to segment the prostate on the CT images through deformable registration. We conducted two studies. A prostate-phantom study demonstrated a submillimeter accuracy of our method. A pilot study of 5 prostate-cancer patients was conducted to further test its clinical feasibility. All patients had 3 gold markers implanted in the prostate that were used to evaluate the registration accuracy, as well as previous diagnostic MR images that were used as the gold standard to assess the prostate segmentation. For the 5 patients, the mean gold-marker displacement was 1.2 mm; the prostate volume difference between our approach and the MRI was 7.2%, and the Dice volume overlap was over 91%. Our proposed method could improve prostate delineation, enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcome.
Identifying MRI markers to evaluate early treatment-related changes post-laser ablation for cancer pain management
Pallavi Tiwari, Shabbar Danish, Anant Madabhushi
Laser interstitial thermal therapy (LITT) has recently emerged as a new treatment modality for cancer pain management that targets the cingulum (pain center in the brain), and has shown promise over radio-frequency (RF) based ablation, which is reported to provide only temporary relief. One of the major advantages enjoyed by LITT is its compatibility with magnetic resonance imaging (MRI), allowing for high resolution in vivo imaging to be used in LITT procedures. Since laser ablation for pain management is currently exploratory and is only performed at a few centers worldwide, its short- and long-term effects on the cingulum are currently unknown. Traditionally, treatment effects are evaluated by monitoring changes in the volume of the ablation zone post-treatment. However, this is sub-optimal since it involves evaluating a single global parameter (volume) to detect changes between pre- and post-treatment MRI. Additionally, qualitative observations of LITT-related changes on multi-parametric MRI (MP-MRI) do not specifically address differentiation between the appearance of treatment-related changes (edema, necrosis) and recurrence of the disease (pain recurrence). In this work, we explore the utility of computer-extracted texture descriptors on MP-MRI to capture early treatment-related changes on a per-voxel basis by extracting quantitative relationships that may allow for an in-depth understanding of tissue response to LITT on MRI, capturing subtle changes that may not be appreciable in the original MR intensities. The second objective of this work is to investigate the efficacy of different MRI protocols in accurately capturing treatment-related changes within and outside the ablation zone post-LITT. A retrospective cohort of studies comprising pre- and 24-hour post-LITT 3 Tesla T1-weighted (T1w), T2w, T2-GRE, and T2-FLAIR acquisitions was considered.
Our scheme involved (1) inter-protocol as well as inter-acquisition affine registration of pre- and post-LITT MRI, (2) quantitation of MRI parameters by correcting for intensity drift in order to examine tissue-specific response, and (3) quantification of MRI maps via texture and intensity features to evaluate changes in MR markers pre- and post-LITT. A total of 78 texture features, comprising non-steerable and steerable gradient and second order statistical features, were extracted from pre- and post-LITT MP-MRI on a per-voxel basis. Quantitative, voxel-wise comparison of the changes in MRI texture features between pre- and post-LITT MRI indicates that (a) steerable and non-steerable gradient texture features were highly sensitive as well as specific in predicting subtle micro-architectural changes within and around the ablation zone pre- and post-LITT, (b) FLAIR was identified as the most sensitive MRI protocol in identifying early treatment changes, yielding a normalized percentage change of 360% within the ablation zone relative to its pre-LITT value, and (c) GRE was identified as the most sensitive MRI protocol in quantifying changes outside the ablation zone post-LITT. Our preliminary results thus indicate great potential for non-invasive computerized MRI features in determining localized micro-architectural focal treatment-related changes post-LITT.
Development and evaluation of optical needle depth sensor for percutaneous diagnosis and therapies
Keryn Palmer, David Alelyunas, Connor McCann, et al.
Current methods of needle insertion during percutaneous CT and MRI guided procedures lack precision in needle depth sensing. The depth of the needle insertion is currently monitored through depth markers drawn on the needle and later confirmed by intra-procedural imaging; until this confirmation, the physicians’ judgment that the target is reached is solely based on the depth markers, which are not always clearly visible. We have therefore designed an optical sensing device which provides continuous feedback of needle insertion depth and degree of rotation throughout insertion. An optical mouse sensor was used in conjunction with a microcontroller board, Arduino Due, to acquire needle position information. The device is designed to be attached to a needle guidance robot developed for MRI-guided prostate biopsy in order to aid the manual insertion. An LCD screen and three LEDs were employed with the Arduino Due to form a hand-held device displaying needle depth and rotation. Accuracy of the device was tested to evaluate the impact of insertion speed and rotation. Unlike single dimensional needle depth sensing developed by other researchers, this two dimensional sensing device can also detect the rotation around the needle axis. The combination of depth and rotation sensing would be greatly beneficial for the needle steering approaches that require both depth and rotation information. Our preliminary results indicate that this sensing device can be useful in detecting needle motion when using an appropriate speed and range of motion.
Image to physical space registration of supine breast MRI for image guided breast surgery
Rebekah H. Conley, Ingrid M. Meszoely, Thomas S. Pheiffer, et al.
Breast conservation therapy (BCT) is a desirable option for many women diagnosed with early stage breast cancer and involves a lumpectomy followed by radiotherapy. However, approximately 50% of eligible women will elect for mastectomy over BCT despite equal survival benefit (provided margins of excised tissue are cancer free) due to uncertainty in outcome with regards to complete excision of cancerous cells, risk of local recurrence, and cosmesis. Determining surgical margins intraoperatively is difficult and achieving negative margins is not as robust as it needs to be, resulting in high re-operation rates and often mastectomy. Magnetic resonance images (MRI) can provide detailed information about tumor margin extents, however diagnostic images are acquired in a fundamentally different patient presentation than that used in surgery. Therefore, the high quality diagnostic MRIs taken in the prone position with pendant breast are not optimal for use in surgical planning/guidance due to the drastic shape change between preoperative images and the common supine surgical position. This work proposes to investigate the value of supine MRI in an effort to localize tumors intraoperatively using image-guidance. Mock intraoperative setups (realistic patient positioning in non-sterile environment) and preoperative imaging data were collected from a patient scheduled for a lumpectomy. The mock intraoperative data included a tracked laser range scan of the patient's breast surface, tracked center points of MR visible fiducials on the patient's breast, and tracked B-mode ultrasound and strain images. The preoperative data included a supine MRI with visible fiducial markers. Fiducial markers localized in the MRI were rigidly registered to their mock intraoperative counterparts using an optically tracked stylus. The root mean square (RMS) fiducial registration error using the tracked markers was 3.4 mm. Following registration, the average closest point distance between the MR generated surface nodes and the LRS point cloud was 1.76±0.502 mm.
A global CT to US registration of the lumbar spine
Simrin Nagpal, Ilker Hacihaliloglu, Tamas Ungi, et al.
During percutaneous lumbar spine needle interventions, alignment of the preoperative computed tomography (CT) with intraoperative ultrasound (US) can augment anatomical visualization for the clinician. We propose an approach to rigidly align CT and US data of the lumbar spine. The approach involves an intensity-based volume registration step, followed by a surface segmentation and a point-based registration of the entire lumbar spine volume. A clinical feasibility study resulted in mean registration error of approximately 3 mm between CT and US data.
Rigid point registration circuits
In 1996 Freeborough proposed a method for estimating target registration error (TRE) in the absence of ground truth. In his approach, a circuit of registrations is performed using the same registration method on multiple views of the same object: 1 to 2, 2 to 3, ..., N_c − 1 to N_c, and N_c to 1, where N_c is at least three and the last registration completes the circuit. Any difference between the original and final positions, which we call the “Circuit TRE” or TRE_c, indicates that at least one step in the registration method suffers from TRE. To estimate the mean single-step error, Freeborough proposed the formula True TRE = k × TRE_c, and suggested that k = 1/√N_c. Multiple articles have employed Freeborough’s approach to estimate the accuracy of intensity-based registration methods with various values of k, but no theoretical analysis of the expected accuracy of such estimates has been attempted for any registration method. As a first step in this direction, the current work provides such an analysis for the method of rigid point registration, also known as fiducial registration. The analysis, which is validated via computer simulation, reveals that for point registration Freeborough’s formula greatly underestimates TRE. The simulations further reveal that, to an excellent approximation, True TRE = k × √TRE_c, where k depends not only on the number of points but also on their configuration. We investigate the usefulness of this formula as a means to estimate true TRE. We find that it is less reliable than a standard formula published in 1998.
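Freeborough's circuit can be simulated directly. The sketch below performs a three-view circuit of 2D rigid point registrations under Gaussian fiducial localization noise; it is a simplified stand-in for the paper's 3D simulations, and all coordinates and noise levels are hypothetical:

```python
import math
import random

def register_2d(src, dst):
    """Least-squares rigid registration of 2D point sets (Procrustes).

    Returns a function that maps a point through the fitted transform.
    """
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]  # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]  # target centroid
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        num += ax * by - ay * bx   # cross terms -> sin of rotation
        den += ax * bx + ay * by   # dot terms   -> cos of rotation
    th = math.atan2(num, den)
    c, s = math.cos(th), math.sin(th)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

random.seed(1)
fiducials = [(0, 0), (40, 0), (0, 40), (40, 40)]  # hypothetical markers (mm)
target = (20, 60)                                  # point of interest (mm)

# Three noisy "views": the same fiducials with localization noise added.
views = [[(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
          for x, y in fiducials] for _ in range(3)]

# Circuit: view0 -> view1 -> view2 -> view0; track where the target lands.
p = target
for i in range(3):
    p = register_2d(views[i], views[(i + 1) % 3])(p)

tre_circuit = math.dist(p, target)
print(round(tre_circuit, 3))
```

Repeating this over many noise draws and comparing the circuit TRE to the known single-step TRE is the basic experiment underlying the k × √TRE_c observation.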
Needle localization using a moving stylet/catheter in ultrasound-guided regional anesthesia: a feasibility study
Despite the wide range and long history of ultrasound guided needle insertions, an unresolved issue in many cases is clear needle visibility. A well-known ad hoc technique to detect the needle is to move the stylet and look for changes in the needle appearance. We present a new method to automatically locate a moving stylet/catheter within a stationary cannula using motion detection. We then use this information to detect the needle trajectory and the tip. The differences between the current frame and the previous frame are detected and localized, to minimize the influence of global tissue motions. A polynomial fit based on the detected needle axis determines the estimated stylet shaft trajectory, and the extent of the differences along the needle axis represents the tip. Over a few periodic movements of the stylet, including its full insertion into the cannula to the tip, a combination of polynomial fits determines the needle trajectory and the last detected point represents the needle tip. Experiments are conducted in water bath and bovine muscle tissue for several stylet/catheter materials. Results show that a plastic stylet has the best needle shaft and tip localization accuracy in the water bath with RMSE = 0.16 mm and RMSE = 0.51 mm, respectively. In the bovine tissue, the needle tip was best localized with the plastic catheter with RMSE = 0.33 mm. The stylet tip localization was most accurate with the steel stylet, with RMSE = 2.81 mm, and the shaft was best localized with the plastic catheter, with RMSE = 0.32 mm.
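Frame differencing of the kind used here can be sketched as follows, with two toy grayscale frames in which a bright stylet advances by one pixel (the threshold and pixel data are hypothetical):

```python
def moving_pixels(prev, curr, thresh=10):
    """Return (row, col) coordinates where two frames differ beyond thresh."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, v in enumerate(row)
            if abs(v - prev[r][c]) > thresh]

# Two hypothetical 4x6 B-mode frames: the stylet (bright) advances one column.
prev = [[0, 0, 0, 0, 0, 0],
        [200, 200, 200, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]
curr = [[0, 0, 0, 0, 0, 0],
        [200, 200, 200, 200, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0]]
diff = moving_pixels(prev, curr)
tip = max(diff, key=lambda rc: rc[1])  # farthest-advanced changed pixel
print(diff, tip)  # → [(1, 3)] (1, 3)
```

In the full method, many such per-frame-pair detections are pooled over the periodic stylet motion and fitted with polynomials to recover the shaft trajectory and tip.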
Tracked ultrasound calibration studies with a phantom made of LEGO bricks
Marie Soehl, Ryan Walsh, Adam Rankin, et al.
In this study, spatial calibration of tracked ultrasound was compared by using a calibration phantom made of LEGO® bricks and two 3-D printed N-wire phantoms. METHODS: The accuracy and variance of calibrations were compared under a variety of operating conditions. Twenty trials were performed using an electromagnetic tracking device with a linear probe, and three trials were performed using varied probes, varied tracking devices, and the three aforementioned phantoms. The accuracy and variance of spatial calibrations, found through the standard deviation and error of the 3-D image reprojection, were used to compare the calibrations produced from the phantoms. RESULTS: This study found no significant difference between the measured variables of the calibrations. The average standard deviation of multiple 3-D image reprojections with the highest performing printed phantom and those from the phantom made of LEGO® bricks differed by 0.05 mm, and the error of the reprojections differed by 0.13 mm. CONCLUSION: Given that the phantom made of LEGO® bricks is significantly less expensive, more readily available, and more easily modified than precision-machined N-wire phantoms, it promises to be a viable calibration tool, especially for quick laboratory research and proof-of-concept implementations of tracked ultrasound navigation.
Mapping surgical fields by moving a laser-scanning multimodal scope attached to a robot arm
Yuanzheng Gong, Timothy D. Soper, Vivian W. Hou, et al.
Endoscopic visualization in brain tumor removal is challenging because tumor tissue is often visually indistinguishable from healthy tissue. Fluorescence imaging can improve tumor delineation, though this impairs reflectance-based visualization of gross anatomical features. To accurately navigate and resect tumors, we created an ultrathin/flexible, scanning fiber endoscope (SFE) that acquires reflectance and fluorescence wide-field images at high-resolution. Furthermore, our miniature imaging system is affixed to a robotic arm providing programmable motion of SFE, from which we generate multimodal surface maps of the surgical field.

To test this system, synthetic phantoms of debulked brain tumor are fabricated, with spots of fluorescence representing residual tumor. Three-dimensional (3D) surface maps of this surgical field are produced by moving the SFE over the phantom during concurrent reflectance and fluorescence imaging (30 Hz video). SIFT-based feature matching between reflectance images is implemented to select a subset of key frames, which are reconstructed in 3D by bundle adjustment. The resultant reconstruction yields a multimodal 3D map of the tumor region that can improve visualization and robotic path planning.

Efficiency of creating these maps is important as they are generated multiple times during tumor margin clean-up. By using pre-programmed vector motions of the robot arm holding the SFE, the computer vision algorithms are optimized for efficiency by reducing search times. Preliminary results indicate that the time for creating these 3D multimodal maps of the surgical field can be reduced to one third by using known trajectories of the surgical robot moving the image-guided tool.
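Key-frame selection by feature overlap can be sketched as below; the toy `overlap` function is a hypothetical stand-in for the SIFT inlier ratio between frames, which the real system computes from matched features:

```python
def overlap(i, j):
    """Toy stand-in for the SIFT feature-overlap ratio between frames i, j."""
    return max(0.0, 1.0 - 0.15 * abs(i - j))

def select_keyframes(n_frames, min_overlap=0.5):
    """Keep a new key frame whenever overlap with the last one drops too low."""
    keys = [0]
    for i in range(1, n_frames):
        if overlap(keys[-1], i) < min_overlap:
            keys.append(i)
    return keys

print(select_keyframes(12))  # → [0, 4, 8]
```

Pre-programmed robot trajectories make the inter-frame overlap predictable, which is what lets the search windows in the matching step shrink and the map-building time drop to roughly one third.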