Proceedings Volume 9786

Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling

Robert J. Webster III, Ziv R. Yaniv

Volume Details

Date Published: 3 August 2016
Contents: 13 Sessions, 100 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2016
Volume Number: 9786

Table of Contents

  • Front Matter: Volume 9786
  • Cardiac Procedures
  • Segmentation and 2D and 3D Registration
  • Spine and Percutaneous Procedures
  • Ultrasound Image Guidance: Joint Session with Conferences 9786 and 9790
  • Registration
  • Robotic Systems and Treatment Planning
  • Tissue Deformation and Motion
  • Intraoperative Imaging and Visualization
  • Endoscopy/Laparoscopy
  • Keynote and New Robotic Applications
  • Prostate Procedures
  • Poster Session
Front Matter: Volume 9786
Front Matter: Volume 9786
This PDF file contains the front matter associated with SPIE Proceedings Volume 9786, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Cardiac Procedures
Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound
Adam Rankin, John Moore, Daniel Bainbridge, et al.
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional manual steps to define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.
Cognitive tools pipeline for assistance of mitral valve surgery
Nicolai Schoch, Patrick Philipp, Tobias Weller, et al.
For cardiac surgeons, mitral valve reconstruction (MVR) surgery is a highly demanding procedure, in which an artificial annuloplasty ring is implanted onto the mitral valve annulus to re-enable the valve's proper closing functionality. For a successful operation, the surgeon has to keep track of a variety of relevant factors, such as the patient's medical history, valve geometry, and tissue properties of the surgical target, and, based on these, deduce the type and size of the best-suited ring prosthesis according to practical surgical experience. With this work, we aim to support the surgeon in selecting this ring prosthesis by means of a comprehensive information processing pipeline. It gathers all available patient-specific information and mines this data according to 'surgical rules' that represent published MVR expert knowledge and recommended best practices, in order to suggest a set of potentially suitable annuloplasty rings. Subsequently, these rings are employed in biomechanical MVR simulation scenarios, which simulate the behavior of the patient-specific mitral valve subjected to the respective virtual ring implantation. We present the implementation of our deductive system for MVR ring selection and how it is integrated into a cognitive data processing pipeline architecture, built around Linked Data principles in order to facilitate holistic processing of heterogeneous medical data. Using MVR surgery as an example, we demonstrate the ease of use and applicability of our development. We expect this holistic information processing approach to substantially support patient-specific decision making in MVR surgery.
Dynamic tracking of prosthetic valve motion and deformation from bi-plane x-ray views: feasibility study
Charles R. Hatt, Martin Wagner, Amish N. Raval, et al.
Transcatheter aortic valve replacement (TAVR) requires navigation and deployment of a prosthetic valve within the aortic annulus under fluoroscopic guidance. To support improved device visualization in this procedure, this study investigates the feasibility of frame-by-frame 3D reconstruction of a moving and expanding prosthetic valve structure from simultaneous bi-plane x-ray views. In the proposed method, a dynamic 3D model of the valve is used in a 2D/3D registration framework to obtain a reconstruction of the valve. For each frame, valve model parameters describing position, orientation, expansion state, and deformation are iteratively adjusted until forward projections of the model match both bi-plane views. Simulated bi-plane imaging of a valve at different signal-difference-to-noise ratio (SDNR) levels was performed to test the approach. 20 image sequences with 50 frames of valve deployment were simulated at each SDNR. The simulation achieved a target registration error (TRE) of the estimated valve model of 0.93 ± 2.6 mm (mean ± S.D.) for the lowest SDNR of 2. For higher SDNRs (5 to 50) a TRE of 0.04 mm ± 0.23 mm was achieved. A tabletop phantom study was then conducted using a TAVR valve. The dynamic 3D model was constructed from high resolution CT scans and a simple expansion model. TRE was 1.22 ± 0.35 mm for expansion states varying from undeployed to fully deployed, and for moderate amounts of inter-frame motion. Results indicate that it is feasible to use bi-plane imaging to recover the 3D structure of deformable catheter devices.
Classification of calcium in intravascular OCT images for the purpose of intervention planning
Ronny Shalev, Hiram G. Bezerra, Soumya Ray, et al.
The presence of extensive calcification is a primary concern when planning and implementing a vascular percutaneous intervention such as stenting. If the balloon does not expand, the interventionalist must blindly apply high balloon pressure, use an atherectomy device, or abort the procedure. As part of a project to determine the ability of Intravascular Optical Coherence Tomography (IVOCT) to aid intervention planning, we developed a method for automatic classification of calcium in coronary IVOCT images. We developed an approach where plaque texture is modeled by the joint probability distribution of a bank of filter responses where the filter bank was chosen to reflect the qualitative characteristics of the calcium. This distribution is represented by the frequency histogram of filter response cluster centers. The trained algorithm was evaluated on independent ex-vivo image data accurately labeled using registered 3D microscopic cryo-image data which was used as ground truth. In this study, regions for extraction of sub-images (SI's) were selected by experts to include calcium, fibrous, or lipid tissues. We manually optimized algorithm parameters such as choice of filter bank, size of the dictionary, etc. Splitting samples into training and testing data, we achieved 5-fold cross validation calcium classification with F1 score of 93.7±2.7% with recall of ≥89% and a precision of ≥97% in this scenario with admittedly selective data. The automated algorithm performed in close-to-real-time (2.6 seconds per frame) suggesting possible on-line use. This promising preliminary study indicates that computational IVOCT might automatically identify calcium in IVOCT coronary artery images.
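The texture-descriptor pipeline described above, filter-bank responses clustered into a dictionary with each region represented by the frequency histogram of its pixels' nearest cluster centers, can be sketched in a few NumPy functions. This is an illustrative reconstruction, not the authors' implementation: the tiny derivative-of-Gaussian bank, the dictionary size, and the k-means settings are all placeholder choices.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def filter_bank(size=9, sigma=2.0):
    """A tiny illustrative bank: Gaussian, two first derivatives, Laplacian-like."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    bank = [g, xx * g, yy * g, (xx**2 + yy**2 - 2 * sigma**2) * g]
    return [k / np.abs(k).sum() for k in bank]

def respond(img, kern):
    """Valid-mode 2D correlation of img with one kernel."""
    win = sliding_window_view(img, kern.shape)
    return (win * kern).sum(axis=(-2, -1))

def kmeans(X, k, iters=10, seed=0):
    """Minimal k-means to build the texton dictionary from response vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def texton_histogram(img, bank, centers):
    """Describe a sub-image by the frequency histogram of its pixels'
    nearest dictionary entries."""
    maps = np.stack([respond(img, f) for f in bank], axis=-1)
    X = maps.reshape(-1, maps.shape[-1])
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

A classifier would then be trained on these normalized histograms from labeled calcium/fibrous/lipid sub-images, as in the cross-validation experiment reported above.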
Fusion of CTA and XA data using 3D centerline registration for plaque visualization during coronary intervention
Coronary Artery Disease (CAD) results in the buildup of plaque below the intima layer inside the vessel wall of the coronary arteries causing narrowing of the vessel and obstructing blood flow. Percutaneous coronary intervention (PCI) is usually done to enlarge the vessel lumen and regain back normal flow of blood to the heart. During PCI, X-ray imaging is done to assist guide wire movement through the vessels to the area of stenosis. While X-ray imaging allows for good lumen visualization, information on plaque type is unavailable. Also due to the projection nature of the X-ray imaging, additional drawbacks such as foreshortening and overlap of vessels limit the efficacy of the cardiac intervention. Reconstruction of 3D vessel geometry from biplane X-ray acquisitions helps to overcome some of these projection drawbacks. However, the plaque type information remains an issue. In contrast, imaging using computed tomography angiography (CTA) can provide us with information on both lumen and plaque type and allows us to generate a complete 3D coronary vessel tree unaffected by the foreshortening and overlap problems of the X-ray imaging. In this paper, we combine x-ray biplane images with CT angiography to visualize three plaque types (dense calcium, fibrous fatty and necrotic core) on x-ray images. 3D registration using three different registration methods is done between coronary centerlines available from x-ray images and from the CTA volume along with 3D plaque information available from CTA. We compare the different registration methods and evaluate their performance based on 3D root mean squared errors. Two methods are used to project this 3D information onto 2D plane of the x-ray biplane images. Validation of our approach is performed using artificial biplane x-ray datasets.
Segmentation and 2D and 3D Registration
Random walk based segmentation for the prostate on 3D transrectal ultrasound images
Ling Ma, Rongrong Guo, Zhiqiang Tian, et al.
This paper proposes a new semi-automatic segmentation method for the prostate on 3D transrectal ultrasound (TRUS) images by combining region and classification information. We use a random walk algorithm to express the region information efficiently and flexibly because it can avoid segmentation leakage and shrinking bias. We further use a decision tree as the classifier to distinguish the prostate from non-prostate tissue because of its fast speed and superior performance, especially for a binary classification problem. Our segmentation algorithm is initialized with the user roughly marking prostate and non-prostate points on the mid-gland slice, which are fitted to an ellipse to obtain more points. Based on these fitted seed points, we run the random walk algorithm to segment the prostate on the mid-gland slice. The segmented contour and the information from the decision tree classification are combined to determine the initial seed points for the other slices. The random walk algorithm is then used to segment the prostate on the adjacent slice. We propagate the process until all slices are segmented. The segmentation method was tested on 32 3D TRUS images. Manual segmentation by a radiologist served as the gold standard for validation. The experimental results show that the proposed method achieved a Dice similarity coefficient of 91.37±0.05%. The segmentation method can be applied to 3D ultrasound-guided prostate biopsy and other applications.
Resection planning for robotic acoustic neuroma surgery
Kepra L. McBrayer, George B. Wanna, Benoit M. Dawant, et al.
Acoustic neuroma surgery is a procedure in which a benign mass is removed from the Internal Auditory Canal (IAC). Currently this surgical procedure requires manual drilling of the temporal bone followed by exposure and removal of the acoustic neuroma. This procedure is physically and mentally taxing to the surgeon. Our group is working to develop an Acoustic Neuroma Surgery Robot (ANSR) to perform the initial drilling procedure. Planning the ANSR's drilling region using pre-operative CT requires expertise and around 35 minutes' time. We propose an approach for automatically producing a resection plan for the ANSR that would avoid damage to sensitive ear structures and require minimal editing by the surgeon. We first compute an atlas-based segmentation of the mastoid section of the temporal bone, refine it based on the position of anatomical landmarks, and apply a safety margin to the result to produce the automatic resection plan. In experiments with CTs from 9 subjects, our automated process resulted in a resection plan that was verified to be safe in every case. Approximately 2 minutes were required in each case for the surgeon to verify and edit the plan to permit functional access to the IAC. We measured a mean Dice coefficient of 0.99 and surface error of 0.08 mm between the final and automatically proposed plans. These preliminary results indicate that our approach is a viable method for resection planning for the ANSR and drastically reduces the surgeon's planning effort.
Fat segmentation on chest CT images via fuzzy models
Yubing Tong, Jayaram K. Udupa, Caiyun Wu, et al.
Quantification of fat throughout the body is vital for the study of many diseases. In the thorax, it is important for lung transplant candidates since obesity and being underweight are contraindications to lung transplantation given their associations with increased mortality. Common approaches for thoracic fat segmentation are all interactive in nature, requiring significant manual effort to draw the interfaces between fat and muscle, with low efficiency and questionable repeatability. The goal of this paper is to explore a practical way to segment the subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) components of chest fat based on a recently developed body-wide automatic anatomy recognition (AAR) methodology. The AAR approach involves 3 main steps: building a fuzzy anatomy model of the body region involving all its major representative objects, recognizing objects in any given test image, and delineating the objects. We made several modifications to these steps to develop an effective solution for delineating the SAT/VAT components of fat. Two new objects, SatIn and VatIn, representing the interfaces of the SAT and VAT regions with other tissues, are defined rather than directly using the SAT and VAT components as objects for constructing the models. A hierarchical arrangement of these new and other reference objects is built to facilitate their recognition in hierarchical order. Subsequently, accurate delineations of the SAT/VAT components are derived from these objects. Unenhanced CT images from 40 lung transplant candidates were utilized in experimentally evaluating this new strategy. The mean object location error achieved was about 2 voxels, and delineation errors in terms of false-positive and false-negative volume fractions were, respectively, 0.07 and 0.1 for SAT and 0.04 and 0.2 for VAT.
Automatic masking for robust 3D-2D image registration in image-guided spine surgery
M. D. Ketcha, T. De Silva, A. Uneri, et al.
During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
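The effect of masking on the similarity metric can be seen with a toy example: normalized cross-correlation computed over all pixels is degraded by content mismatch (such as a tool in the radiograph), while excluding the mismatched region from the metric restores the score. This is a sketch of the 2D "projection masking" idea in isolation, not the 3D-2D registration pipeline itself; the images and mask are synthetic.

```python
import numpy as np

def masked_ncc(fixed, moving, mask):
    """Normalized cross-correlation restricted to pixels where mask is True."""
    a = fixed[mask].astype(float)
    b = moving[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(1)
drr = rng.random((64, 64))              # stand-in for the projected CT
radiograph = drr.copy()
radiograph[20:40, 20:40] = 0.0          # simulated tool occlusion
tool_mask = np.ones(drr.shape, bool)
tool_mask[20:40, 20:40] = False         # exclude the occluded region from the metric

full = masked_ncc(drr, radiograph, np.ones(drr.shape, bool))
masked = masked_ncc(drr, radiograph, tool_mask)
```

Here `masked` is essentially 1.0 while `full` is strictly lower: down-weighting the problematic region keeps the metric, and hence the optimizer, anchored to reliable anatomy.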
Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery
Yoshito Otake, Matthieu Esnault, Robert Grupp, et al.
The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula, and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six-degrees-of-freedom (6DOF) parameter set; 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).
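The gradient correlation similarity metric driving the optimizer, the mean of the normalized cross-correlations of the horizontal and vertical image gradients, is compact enough to sketch on its own. The CMA-ES optimizer and DRR generation are omitted; this is an illustrative NumPy version, not the authors' code.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 0 else 0.0

def gradient_correlation(drr, fluoro):
    """Mean NCC of the two gradient components; peaks when edges align."""
    g0_a, g1_a = np.gradient(drr)
    g0_b, g1_b = np.gradient(fluoro)
    return 0.5 * (ncc(g0_a, g0_b) + ncc(g1_a, g1_b))
```

Because it compares gradients rather than raw intensities, the metric is largely insensitive to low-frequency intensity differences between the DRR and the measured fluoroscopic image.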
Fast generation of digitally reconstructed radiograph through an efficient preprocessing of ray attenuation values
Soheil Ghafurian, Dimitris N. Metaxas, Virak Tan, et al.
Digitally reconstructed radiographs (DRRs) are simulations of radiographic images produced through a perspective projection of a three-dimensional (3D) image (volume) onto a two-dimensional (2D) image plane. The traditional method for the generation of DRRs, namely ray casting, is computationally intensive and accounts for most of the solution time in 3D/2D medical image registration frameworks, where a large number of DRRs is required. A few alternative methods for faster DRR generation have been proposed, the most successful of which are based on the idea of pre-calculating the attenuation values of possible rays. Despite achieving good quality, these methods support a limited range of motion for the volume and entail long pre-calculation times. In this paper, we propose a new preprocessing procedure and data structure for the calculation of the ray attenuation values. This method supports all possible volume positions with practically small memory requirements, in addition to reducing the complexity of the problem from O(n³) to O(n²). In our experiments, we generated DRRs of high quality in 63 milliseconds, with a preprocessing time of 99.48 seconds and a memory size of 7.45 megabytes.
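For reference, the baseline ray-casting approach that this paper accelerates integrates attenuation values along each source-to-detector-pixel ray. A deliberately naive NumPy sketch (nearest-neighbour sampling, fixed step count, hypothetical geometry; the paper's contribution replaces this per-ray sampling with precomputed attenuation sums):

```python
import numpy as np

def drr_raycast(vol, source, det_origin, det_u, det_v, nu, nv, n_samples=128):
    """Perspective DRR: integrate vol along rays from source to detector pixels.
    vol is indexed in the same (x, y, z) coordinates as the geometry vectors."""
    img = np.zeros((nv, nu))
    t = np.linspace(0.0, 1.0, n_samples)
    shape = np.array(vol.shape)
    for j in range(nv):
        for i in range(nu):
            pix = det_origin + i * det_u + j * det_v
            pts = source[None, :] + t[:, None] * (pix - source)[None, :]
            ijk = np.round(pts).astype(int)            # nearest-neighbour sampling
            inside = np.all((ijk >= 0) & (ijk < shape), axis=1)
            vals = vol[ijk[inside, 0], ijk[inside, 1], ijk[inside, 2]]
            # approximate the line integral: sum of samples * (ray length / samples)
            img[j, i] = vals.sum() * np.linalg.norm(pix - source) / n_samples
    return img
```

The per-pixel loop over sample points is exactly the cost that dominates intensity-based 3D/2D registration and that precomputation schemes aim to remove.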
Spine and Percutaneous Procedures
Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry
Recently, compressed-sensing-based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
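The triangulation step, recovering the 3D needle position from corresponding points in two views, is standard multi-view geometry. A linear (DLT) two-view triangulation sketch, assuming known 3x4 projection matrices for the two C-arm poses; the matrices, intrinsics, and needle-tip coordinates below are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: 3D point from two 3x4 projection matrices
    and its pixel coordinates x1, x2 in the corresponding views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

With noise-free correspondences the reconstruction is exact; in practice the epipolar matching step above supplies the correspondences, and noise makes the least-squares null vector an estimate.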
Automatic geometric rectification for patient registration in image-guided spinal surgery
Yunliang Cai, Jonathan D. Olson, Xiaoyao Fan, et al.
Accurate and efficient patient registration is crucial for the success of image-guidance in open spinal surgery. Recently, we have established the feasibility of using intraoperative stereovision (iSV) to perform patient registration with respect to preoperative CT (pCT) in human subjects undergoing spinal surgery. Although a desired accuracy was achieved, the method required manual segmentation and placement of feature points on reconstructed iSV and pCT surfaces. In this study, we present an improved registration pipeline to eliminate these manual operations. Specifically, automatic geometric rectification was performed on spines extracted from pCT and iSV into pose-invariant shapes using a nonlinear principal component analysis (NLPCA). Rectified spines were obtained by projecting the reconstructed 3D surfaces into an anatomically determined orientation. Two-dimensional projection images were then created with image intensity values encoding feature "height" in the dorsal-ventral direction. Registration between the 2D depth maps yielded an initial point-wise correspondence between the 3D surfaces. A refined registration was achieved using an iterative closest point (ICP) algorithm. The technique was successfully applied to two explanted and one live porcine spines. The computational cost of the registration pipeline was less than 1 min, with an average target registration error (TRE) less than 2.2 mm in the laminae area. These results suggest the potential for the pose-invariant, rectification-based registration technique for clinical application in human subjects in the future.
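The final refinement stage above uses iterative closest point (ICP) registration. A minimal point-cloud ICP with closed-form (Kabsch/SVD) rigid updates, for illustration only: the paper applies ICP after the depth-map correspondence has already provided a good initialization, which is what makes the nearest-neighbour matching reliable.

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Align src to dst: alternate nearest-neighbour matching and rigid fitting."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None] - dst[None]) ** 2).sum(-1)
        match = dst[np.argmin(d2, axis=1)]       # closest dst point for each src point
        R, t = best_rigid(cur, match)
        cur = cur @ R.T + t
    R, t = best_rigid(src, cur)                  # net transform from original src
    return R, t, cur
```

The target registration error reported above is then the residual distance at anatomical targets after applying the recovered transform.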
Real-time self-calibration of a tracked augmented reality display
Zachary Baum, Andras Lasso, Tamas Ungi, et al.
PURPOSE: Augmented reality systems have been proposed for image-guided needle interventions but they have not become widely used in clinical practice due to restrictions such as limited portability, low display refresh rates, and tedious calibration procedures. We propose a handheld tablet-based self-calibrating image overlay system.

METHODS: A modular handheld augmented reality viewbox was constructed from a tablet computer and a semi-transparent mirror. A consistent and precise self-calibration method, without the use of any temporary markers, was designed to achieve an accurate calibration of the system. Markers attached to the viewbox and patient are simultaneously tracked using an optical pose tracker to report the position of the patient with respect to a displayed image plane that is visualized in real-time. The software was built using the open-source 3D Slicer application platform's SlicerIGT extension and the PLUS toolkit.

RESULTS: The accuracy of the image overlay with image-guided needle interventions yielded a mean absolute position error of 0.99 mm (95th percentile 1.93 mm) in-plane of the overlay and a mean absolute position error of 0.61 mm (95th percentile 1.19 mm) out-of-plane. This accuracy is clinically acceptable for tool guidance during various procedures, such as musculoskeletal injections.

CONCLUSION: A self-calibration method was developed and evaluated for a tracked augmented reality display. The results show potential for the use of handheld image overlays in clinical studies with image-guided needle interventions.
Clinical workflow for spinal curvature measurement with portable ultrasound
Reza Tabanfar, Christina Yan, Michael Kempston, et al.
PURPOSE: Spinal curvature monitoring is essential in making treatment decisions in scoliosis. Monitoring entails radiographic examinations; however, repeated ionizing radiation exposure has been shown to increase cancer risk. Ultrasound does not emit ionizing radiation and is safer for spinal curvature monitoring. We investigated a clinical sonography protocol and the challenges associated with position-tracked ultrasound in spinal curvature measurement in scoliosis. METHODS: Transverse processes were landmarked along each vertebra using tracked ultrasound snapshots. The transverse process angle was used to determine the orientation of each vertebra. We tested our methodology on five patients in a local pediatric scoliosis clinic, comparing ultrasound to radiographic curvature measurements. RESULTS: Despite strong correlation between radiographic and ultrasound curvature angles in phantom studies, we encountered new challenges in the clinical setting. Our main challenge was differentiating transverse processes from ribs and other structures during landmarking. We observed up to 13° angle variability for a single vertebra and a 9.85° ± 10.81° difference between ultrasound and radiographic Cobb angles for thoracic curvatures. Additionally, we were unable to visualize anatomical landmarks in the lumbar region, where soft tissue depth was 25–35 mm. In volunteers with large Cobb angles (greater than 40° thoracic and 60° lumbar), we observed spinal protrusions resulting in incomplete probe-skin contact and partial ultrasound images not suitable for landmarking. CONCLUSION: Spinal curvature measurement using tracked ultrasound is viable on phantom spine models. In the clinic, new challenges were encountered which must be resolved before a universal sonography protocol can be developed.
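The geometric core of the protocol, using the transverse-process landmark pair to define each vertebra's orientation and taking the largest difference between vertebral angles as the curvature angle, is simple to state in code. A sketch in the coronal plane with hypothetical landmark coordinates; the clinical protocol obtains these points from tracked ultrasound snapshots.

```python
import numpy as np

def vertebra_angle(left, right):
    """Orientation (degrees) of the line through the left and right
    transverse-process landmarks of one vertebra, in the coronal plane."""
    d = np.asarray(right, float) - np.asarray(left, float)
    return float(np.degrees(np.arctan2(d[1], d[0])))

def cobb_angle(landmark_pairs):
    """Cobb-style curvature angle: difference between the most-tilted vertebrae."""
    angles = [vertebra_angle(l, r) for l, r in landmark_pairs]
    return max(angles) - min(angles)

# hypothetical (x, y) landmark pairs for three vertebrae, in mm
pairs = [((0, 40), (30, 40)),    # neutral vertebra
         ((0, 20), (30, 25)),    # tilted one way
         ((0, 0), (30, -5))]     # tilted the other way
```

The per-vertebra angle variability reported above propagates directly into this difference, which is why landmark ambiguity (ribs versus transverse processes) dominates the clinical error.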
MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery
S. Reaungamornrat, T. De Silva, A. Uneri, et al.
Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration.

Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons.

Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine.

Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.
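The modality independent neighborhood descriptor (MIND) at the heart of the similarity term can be sketched for the 2D case: each pixel is described by smoothed patch distances to its axial neighbours, normalised so the descriptor depends only on the local self-similarity pattern rather than absolute intensities. This is a simplified illustration (box smoothing instead of a Gaussian, a 4-neighbour search region, an assumed radius), not the paper's implementation.

```python
import numpy as np

def mind_descriptor(img, radius=1):
    """Per-pixel self-similarity descriptor over the 4 axial neighbours."""
    def boxsmooth(a, r):
        # mean filter over a (2r+1)^2 neighbourhood, edge-padded
        k = 2 * r + 1
        out = np.zeros_like(a)
        pad = np.pad(a, r, mode='edge')
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / k**2

    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    # patch distance = smoothed squared difference between img and its shifted copy
    dists = np.stack([boxsmooth((img - np.roll(img, s, axis=(0, 1))) ** 2, radius)
                      for s in shifts], axis=-1)
    var = dists.mean(axis=-1, keepdims=True) + 1e-12   # local variance estimate
    mind = np.exp(-dists / var)
    return mind / mind.max(axis=-1, keepdims=True)
```

Because the descriptor is a ratio of local patch distances, it is invariant to affine intensity changes, which is what makes a simple sum-of-differences (or robust Huber) comparison of MIND descriptors usable across MR and CT.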
Ultrasound Image Guidance: Joint Session with Conferences 9786 and 9790
Automatic detection of a hand-held needle in ultrasound via phased-based analysis of the tremor motion
Parmida Beigi, Septimiu E. Salcudean, Robert Rohling, et al.
This paper presents an automatic localization method for a standard hand-held needle in ultrasound based on temporal motion analysis of spatially decomposed data. Subtle displacement arising from tremor motion has a periodic pattern which is usually imperceptible in the intensity image but may convey information in the phase image. Our method aims to detect such periodic motion of a hand-held needle and distinguish it from intrinsic tissue motion, using a technique inspired by video magnification. Complex steerable pyramids allow specific design of the wavelets' orientations according to the insertion angle as well as measurement of the local phase. We therefore use steerable pairs of even and odd Gabor wavelets to decompose the ultrasound B-mode sequence into various spatial frequency bands. Variations of the local phase measurements in the spatially decomposed input data are then temporally analyzed using a finite impulse response bandpass filter to detect regions with a tremor motion pattern. Results obtained from different pyramid levels are then combined and thresholded to generate the binary mask input for the Hough transform, which determines an estimate of the direction angle and discards some of the outliers. Polynomial fitting is used at the final stage to remove any remaining outliers and improve the trajectory detection. The detected needle is finally added back to the input sequence as an overlay of a cloud of points. We demonstrate the efficiency of our approach in detecting the needle using subtle tremor motion in an agar phantom and in in-vivo porcine cases where intrinsic motion is also present. The localization accuracy was calculated by comparison to expert manual segmentation, and is presented as (mean, standard deviation, root-mean-square error): (0.93°, 1.26°, 0.87°) for the trajectory and (1.53 mm, 1.02 mm, 1.82 mm) for the tip.
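The temporal-analysis stage can be illustrated in isolation: after spatial decomposition, each pixel's time series is band-pass filtered around the physiological tremor band and the in-band energy separates tremor-driven pixels from slower intrinsic motion. A sketch using a windowed-sinc FIR filter applied to raw intensities rather than the local phase signal the paper uses; the 7-12 Hz band and tap count are assumed values.

```python
import numpy as np

def fir_bandpass(lo, hi, fs, ntaps=51):
    """Windowed-sinc FIR band-pass kernel (difference of two Hamming-windowed
    low-pass filters with cutoffs hi and lo, sample rate fs)."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    def lowpass(fc):
        h = np.sinc(2 * fc / fs * n) * (2 * fc / fs)
        return h * np.hamming(ntaps)
    return lowpass(hi) - lowpass(lo)

def tremor_energy(seq, fs, band=(7.0, 12.0)):
    """Per-pixel energy of the temporal signal inside the tremor band.
    seq: (T, H, W) image sequence."""
    h = fir_bandpass(band[0], band[1], fs)
    sig = seq - seq.mean(axis=0)                  # remove static background
    filt = np.apply_along_axis(lambda s: np.convolve(s, h, mode='same'), 0, sig)
    return (filt ** 2).sum(axis=0)
```

Thresholding this energy map would produce the binary mask that feeds the Hough transform in the pipeline above.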
Ultrasound to video registration using a bi-plane transrectal probe with photoacoustic markers
Alexis Cheng, Hyun Jae Kang, Haichong K. Zhang, et al.
Modern surgical scenarios typically provide surgeons with additional information through fusion of video and other imaging modalities. To provide this information, the tools and devices used in surgery must be registered together with interventional guidance equipment and surgical navigation systems. In this work, we focus explicitly on registering ultrasound with a stereo camera system using photoacoustic markers. Previous work has shown that photoacoustic markers can be used in this registration task to achieve target registration errors lower than those of currently available systems. Photoacoustic markers are defined as a set of non-collinear laser spots projected onto some surface. They can be simultaneously visualized by a stereo camera system and an ultrasound transducer because of the photoacoustic effect.

In more recent work, the three-dimensional ultrasound volume was replaced by images from a single pose of a convex array transducer. The feasibility of this approach was demonstrated, but the accuracy was lacking due to the physical limitations of the convex array transducer. In this work, we propose the use of a bi-plane transrectal ultrasound transducer. The main advantage of using this type of transducer is that the ultrasound elements are no longer restricted to a single plane. While this development would be limited to prostate applications, liver and kidney applications are also feasible if a suitable transducer is built. This work is demonstrated in two experiments, one without photoacoustic sources and one with. The resulting target registration errors for these experiments were 1.07 mm ± 0.35 mm and 1.27 mm ± 0.47 mm, respectively, both of which are better than those of currently available navigation systems.
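With marker coordinates available in both frames, the ultrasound-to-camera registration reduces to a least-squares rigid fit between corresponding 3-D point sets. Below is a minimal sketch using the SVD-based method of Arun et al.; the marker coordinates and the ground-truth transform are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst = src @ R.T + t,
    computed with the SVD method of Arun et al."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical photoacoustic marker coordinates: four non-collinear spots.
rng = np.random.default_rng(1)
markers_us = rng.uniform(-20.0, 20.0, (4, 3))        # ultrasound frame (mm)
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 10.0])
markers_cam = markers_us @ R_true.T + t_true         # stereo-camera frame

R, t = rigid_register(markers_us, markers_cam)
residual = np.linalg.norm(markers_us @ R.T + t - markers_cam, axis=1).max()
```

With noise-free correspondences the fit is exact; with real photoacoustic spot detections, the residual becomes the target registration error reported above.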
Classification of prostate cancer grade using temporal ultrasound: in vivo feasibility study
Sahar Ghavidel, Farhad Imani, Siavash Khallaghi, et al.
Temporal ultrasound has been shown to have high classification accuracy in differentiating cancer from benign tissue. In this paper, we extend the temporal ultrasound method to classify lower grade Prostate Cancer (PCa) from all other grades. We use a group of nine patients with mostly lower grade PCa, where cancerous regions are also limited. A critical challenge is to train a classifier with limited aggressive cancerous tissue compared to low grade cancerous tissue. To resolve the problem of imbalanced data, we use the Synthetic Minority Oversampling Technique (SMOTE) to generate synthetic samples for the minority class. We calculate spectral features of temporal ultrasound data and perform feature selection using Random Forests. In a leave-one-patient-out cross-validation strategy, an area under the receiver operating characteristic curve (AUC) of 0.74 is achieved with overall sensitivity and specificity of 70%. Using an unsupervised learning approach prior to the proposed method improves sensitivity and AUC to 80% and 0.79, respectively. These results show promise for classifying lower and higher grade PCa with limited cancerous training samples using temporal ultrasound.
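The imbalanced-data step can be illustrated with a from-scratch SMOTE sketch: each synthetic sample interpolates between a minority-class sample and one of its k nearest minority-class neighbours. The feature matrix here is random stand-in data, not the spectral features from the study.

```python
import numpy as np

def smote(X_minority, n_synthetic, k=3, seed=None):
    """Generate synthetic minority samples by interpolating each chosen
    sample toward one of its k nearest minority-class neighbours (SMOTE)."""
    rng = np.random.default_rng(seed)
    n = len(X_minority)
    # Pairwise distances within the minority class.
    d = np.linalg.norm(X_minority[:, None] - X_minority[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a sample is not its own neighbour
    nbrs = np.argsort(d, axis=1)[:, :k]
    out = np.empty((n_synthetic, X_minority.shape[1]))
    for s in range(n_synthetic):
        i = rng.integers(n)
        j = nbrs[i, rng.integers(k)]
        gap = rng.random()                   # interpolation fraction in [0, 1)
        out[s] = X_minority[i] + gap * (X_minority[j] - X_minority[i])
    return out

rng = np.random.default_rng(0)
X_min = rng.normal(0.0, 1.0, (10, 5))   # stand-in features of the rare class
X_syn = smote(X_min, n_synthetic=40, k=3, seed=1)
```

Because each synthetic point is a convex combination of two minority samples, the oversampled set stays inside the minority-class feature range.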
Registration
icon_mobile_dropdown
Deformable registration of x-ray to MRI for post-implant dosimetry in prostate brachytherapy
Seyoun Park, Danny Y. Song, Junghoon Lee
Post-implant dosimetric assessment in prostate brachytherapy is typically performed using CT as the standard imaging modality. However, poor soft tissue contrast in CT causes significant variability in target contouring, resulting in incorrect dose calculations for organs of interest. A CT-MR fusion-based approach has been advocated, taking advantage of the complementary capabilities of CT (seed identification) and MRI (soft tissue visibility), and has been shown to provide more accurate dosimetry calculations. However, seed segmentation in CT requires manual review, and the accuracy is limited by the reconstructed voxel resolution. In addition, CT delivers a considerable amount of radiation to the patient. In this paper, we propose an X-ray and MRI based post-implant dosimetry approach. Implanted seeds are localized using three X-ray images by solving a combinatorial optimization problem, and the identified seeds are registered to MR images by an intensity-based points-to-volume registration. We pre-process the MR images using geometric and Gaussian filtering. To accommodate potential soft tissue deformation, our registration is performed in two steps, an initial affine transformation and local deformable registration. An evolutionary optimizer in conjunction with a points-to-volume similarity metric is used for the affine registration. Local prostate deformation and seed migration are then adjusted by the deformable registration step with external and internal force constraints. We tested our algorithm on six patient data sets, achieving a registration error of 1.2 ± 0.8 mm in < 30 s. Our proposed approach has the potential to be a fast and cost-effective solution for post-implant dosimetry, with accuracy equivalent to the CT-MR fusion-based approach.
Evaluation of a μCT-based electro-anatomical cochlear implant model
Cochlear implants (CIs) are considered standard treatment for patients who experience sensory-based hearing loss. Although these devices have been remarkably successful at restoring hearing, it is rare to achieve natural fidelity, and many patients experience poor outcomes. Previous studies have shown that outcomes can be improved when optimizing CI processor settings using an estimation of the CI's neural activation patterns found by detecting the distance between the CI electrodes and the nerves they stimulate in pre- and post-implantation CT images. We call this method Image-Guided CI Programming (IGCIP). More comprehensive electro-anatomical models (EAMs) might better estimate neural activation patterns than using a distance-based estimate, potentially leading to selecting further optimized CI settings. Our goal in this study is to investigate whether μCT-based EAMs can accurately estimate neural stimulation patterns. For this purpose, we have constructed EAMs of N=9 specimens. We analyzed the sensitivity of our model to design parameters such as field-of-view, resolution, and tissue resistivity. Our results show that our model is stable to parameter changes. To evaluate the utility of patient-specific modeling, we quantify the difference in estimated neural activation patterns across specimens for identically located electrodes. The average computed coefficient of variation (COV) across specimens is 0.186, suggesting patient-specific models are necessary and that the accuracy of a generic model would be insufficient. Our results suggest that development of in vivo patient-specific EAMs could lead to better methods for selecting CI settings, which would ultimately lead to better hearing outcomes with CIs.
Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy
Tai H. Dou, Yugang Min, John Neylon, et al.
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with the specific algorithm.

In this paper, we investigated a parameter optimization strategy for Optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The metric for registration error for a given parameter set was computed using landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time in the parameter optimization process, a GPU based 3D dense optical-flow algorithm was employed for registering the lung volumes.

Numerical analyses on the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets.

Results showed that the proposed method efficiently estimated the optimum parameters for optical-flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
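The optimization loop can be sketched as follows. The expensive mTRE evaluation (running the GPU optical-flow DIR and measuring landmark error) is replaced here by a cheap synthetic non-convex surrogate, and the cooling schedule and proposal distribution are illustrative assumptions, not the paper's FSA-AMC settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mtre(params):
    """Surrogate for the landmark-based mean TRE of a parameter set.
    A real evaluation would run the optical-flow DIR; a synthetic
    non-convex function stands in for that expensive step."""
    x = np.asarray(params, float)
    opt = np.array([0.7, 2.0])                  # assumed optimal parameters
    return float(np.sum((x - opt) ** 2) + 0.3 * np.sum(np.sin(4 * x) ** 2))

def fast_sa(f, x0, lo, hi, n_iter=2000, T0=1.0):
    """Fast simulated annealing: slowly decaying temperature, Gaussian
    proposals whose spread shrinks with the temperature."""
    x = np.array(x0, float)
    fx = f(x)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        T = T0 / (1 + 0.01 * k)
        cand = np.clip(x + rng.normal(0.0, 0.5 * np.sqrt(T), x.size), lo, hi)
        fc = f(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x.copy(), fx
    return best, fbest

best, err = fast_sa(mtre, x0=[0.0, 0.0], lo=-3.0, hi=3.0)
```

Tracking the best-so-far iterate guarantees the returned error never exceeds the starting error, even if late high-temperature moves wander uphill.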
Automatic pose correction for image-guided nonhuman primate brain surgery planning
Soheil Ghafurian, Antong Chen, Catherine Hines, et al.
Intracranial delivery of recombinant DNA and neurochemical analysis in the nonhuman primate (NHP) requires precise targeting of various brain structures via imaging-derived coordinates in stereotactic surgeries. To attain targeting precision, the surgical planning needs to be done on preoperative three-dimensional (3D) CT and/or MR images, in which the animal's head is fixed in a pose identical to the pose during the stereotactic surgery. The matching of the image to the pose in the stereotactic frame can be done manually by detecting key anatomical landmarks on the 3D MR and CT images, such as the ear canals and the ear bar zero position. This is not only time intensive but also prone to error due to the varying initial poses in the images, which affect both the landmark detection and the rotation estimation. We have introduced a fast, reproducible, and semi-automatic method to detect the stereotactic coordinate system in the image and correct the pose. The method begins with a rigid registration of the subject images to an atlas and proceeds to detect the anatomical landmarks through a sequence of optimization, deformable, and multimodal registration algorithms. The results showed precision similar to manual pose correction (maximum difference of 1.71 in average in-plane rotation).
Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction
Seyoun Park, Adam Robinson, Harry Quon, et al.
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
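The intensity-correction step rests on histogram matching: mapping CBCT intensities so that their cumulative distribution matches that of the planning CT. The sketch below applies the matching globally for brevity, whereas the paper performs it per local region; the image data are random stand-ins.

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their empirical CDF matches the reference
    CDF (the paper applies this per local region; shown globally here)."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)    # invert the reference CDF
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
cbct = rng.normal(120.0, 40.0, (64, 64))   # stand-in "CBCT" slice, shifted scale
ct = rng.normal(50.0, 25.0, (64, 64))      # stand-in planning-CT reference
corrected = match_histogram(cbct, ct)
```

After the mapping, the corrected slice has (approximately) the reference's intensity statistics, which is what lets a mono-modal DIR similarity metric work across the CT/CBCT pair.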
Robotic Systems and Treatment Planning
icon_mobile_dropdown
Towards disparity joint upsampling for robust stereoscopic endoscopic scene reconstruction in robotic prostatectomy
Xiongbiao Luo, A. Jonathan McLeod, Uditha L. Jayarathne, et al.
Three-dimensional (3-D) scene reconstruction from stereoscopic binocular laparoscopic videos is an effective way to expand the limited surgical field and augment visualization of the structure of the organ being operated on in minimally invasive surgery. However, currently available reconstruction approaches are limited by image noise, occlusions, and textureless or blurred structures. In particular, an endoscope inside the body has only a limited light source, resulting in illumination non-uniformities in the visualized field. These limitations unavoidably deteriorate the stereo image quality and hence lead to low-resolution and inaccurate disparity maps, resulting in blurred edge structures in the 3-D scene reconstruction. This paper proposes an improved stereo correspondence framework that integrates cost-volume filtering with joint upsampling for robust disparity estimation. Joint bilateral upsampling, joint geodesic upsampling, and tree filtering upsampling were compared to enhance the disparity accuracy. The experimental results demonstrate that joint upsampling provides an effective way to boost the disparity estimation and hence to improve the 3-D reconstruction of the surgical endoscopic scene. Moreover, joint bilateral upsampling generally outperforms the other two upsampling methods in disparity estimation.
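Of the three upsampling schemes compared, the joint bilateral variant is the simplest to sketch: each high-resolution disparity is a weighted mean of nearby low-resolution disparities, with weights combining spatial distance and similarity of the high-resolution guide intensities. This is an illustrative from-scratch version (geodesic and tree-filtering upsampling replace the weight term), not the authors' implementation.

```python
import numpy as np

def joint_bilateral_upsample(disp_lo, guide_hi, factor, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res disparity map guided by the high-res image: each
    output disparity is a weighted average of nearby low-res disparities,
    weighted by spatial distance and guide-intensity similarity."""
    H, W = guide_hi.shape
    h, w = disp_lo.shape
    out = np.zeros((H, W))
    r = 2  # neighbourhood radius in low-res pixels
    for y in range(H):
        for x in range(W):
            yl, xl = y / factor, x / factor          # position on low-res grid
            wsum = vsum = 0.0
            for j in range(max(0, int(yl) - r), min(h, int(yl) + r + 1)):
                for i in range(max(0, int(xl) - r), min(w, int(xl) + r + 1)):
                    ds = ((j - yl) ** 2 + (i - xl) ** 2) / (2 * sigma_s ** 2)
                    g = guide_hi[min(H - 1, int(j * factor)),
                                 min(W - 1, int(i * factor))]
                    dr = (guide_hi[y, x] - g) ** 2 / (2 * sigma_r ** 2)
                    wgt = np.exp(-ds - dr)
                    wsum += wgt
                    vsum += wgt * disp_lo[j, i]
            out[y, x] = vsum / wsum
    return out

# Toy example: a sharp intensity edge in the guide keeps the upsampled
# disparity edge sharp instead of blurring it across the boundary.
guide = np.zeros((16, 16)); guide[:, 8:] = 1.0
disp = np.zeros((8, 8)); disp[:, 4:] = 10.0
up = joint_bilateral_upsample(disp, guide, factor=2)
```

Pixels on opposite sides of the guide edge draw their disparities from different low-resolution neighbours, which is precisely the edge-preserving behaviour exploited for sharper scene reconstruction.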
Endoscopes and robots for tight surgical spaces: use of precurved elastic elements to enhance curvature
Andria A. Remirez, Robert J. Webster III
Many applications in medicine require flexible surgical manipulators and endoscopes capable of reaching tight curvatures. The maximum curvature these devices can achieve is often restricted either by a strain limit, or by a maximum actuation force that the device's components can tolerate without risking mechanical failure. In this paper we propose the use of precurvature to "bias" the workspace of the device in one direction. Combined with axial shaft rotation, biasing increases the size of the device's workspace, enabling it to reach tighter curvatures than a comparable device without biasing can achieve, while still being able to fully straighten. To illustrate this effect, we describe several example prototype devices which use flexible nitinol strips that can be pushed and pulled to generate bending. We provide a statics model that relates the manipulator curvature to actuation force, and validate it experimentally.
Disposable patient-mounted geared robot for image-guided needle insertion
Charles Watkins, Takahisa Kato, Nobuhiko Hata
Patient-mounted robotic needle guidance is an emerging method of needle insertion in percutaneous ablation therapies. During needle insertion, patient-mounted robots can account for patient body movement, unlike gantry or floor mounted devices, and still increase the accuracy and precision of needle placement. Patient-mounted robots, however, require repeated sterilization, which is often a difficult process with complex devices; overcoming this challenge is therefore key to the success of a patient-mounted robot. To eliminate the need for repeated sterilization, we have developed a disposable patient-mounted robot with two rings as a kinematic structure: an angled upper ring both rotates and revolves about the lower ring. Using this structure, the robot has a clinically suitable range of needle insertion angles with a remote center of motion. To achieve disposability, our structure applies a disposable gear transmission component which detachably interfaces with non-disposable driving motors. With a manually driven prototype of the gear trains, we assessed whether the kinematic structure of the two rings could be operated using only input pinions located outside the kinematic structure. Our tests confirmed that the input pinions were able to rotate both the upper and lower rings independently. We also confirmed a linear rotation-transmission relationship through the gear trains and found that the rotation transmission between the pinions and the two rings was within 3% of the designed value. Our robot introduces a novel approach to patient-mounted robots, and has the potential to enable sterile and accurate needle guidance in percutaneous ablation therapies.
Comparison of portable and conventional ultrasound imaging in spinal curvature measurement
Christina Yan, Reza Tabanfar, Michael Kempston, et al.
PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks, but bones have reduced visibility in ultrasound imaging and high quality ultrasound machines are often expensive and not portable. In this work, we investigate the image quality and measurement accuracy of a low cost and portable ultrasound machine in comparison to a standard ultrasound machine in scoliosis monitoring.

METHODS: Two different kinds of ultrasound machines were tested on three human subjects, using the same position tracker and software. Spinal curves were measured in the same reference coordinate system using both ultrasound machines. Lines were defined by connecting two symmetric landmarks identified on the left and right transverse process of the same vertebrae, and spinal curvature was defined as the transverse process angle between two such lines, projected on the coronal plane.

RESULTS: Three healthy volunteers were scanned with both ultrasound configurations. Three experienced observers localized transverse processes as skeletal landmarks and obtained transverse process angles in images from both ultrasound machines. The mean difference per transverse process angle measured was 3.0° ± 2.1°. 94% of transverse processes visualized in the Sonix Touch were also visible in the Telemed. Inter-observer error was 4.5° in the Telemed and 4.3° in the Sonix Touch.

CONCLUSION: Price, convenience, and accessibility suggest that the Telemed is a viable alternative for scoliosis monitoring; however, further improvements in measurement protocol and image noise reduction are needed before the Telemed can be used in the clinical setting.
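The angle computation described in the METHODS section, connecting left and right transverse process landmarks, projecting onto the coronal plane, and measuring the angle between two such lines, can be sketched directly. The coronal plane is assumed here to be the x-z plane and the landmark coordinates are hypothetical.

```python
import numpy as np

def tp_angle(left1, right1, left2, right2):
    """Angle (degrees) between two transverse-process lines after
    projection onto the coronal plane (taken here as the x-z plane,
    with y the anterior direction)."""
    v1 = np.asarray(right1, float) - np.asarray(left1, float)
    v2 = np.asarray(right2, float) - np.asarray(left2, float)
    v1[1] = v2[1] = 0.0                      # drop the anterior component
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# Hypothetical landmark positions (mm) in the tracker's reference frame.
angle = tp_angle([-30, 5, 0], [30, 8, 0], [-30, 4, 50], [30, 6, 40])
```

Because both machines report landmarks in the same tracked reference frame, the same computation applies to either configuration and the angles can be compared directly.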
Image-guided preoperative prediction of pyramidal tract side effect in deep brain stimulation
C. Baumgarten, Y. Zhao, P. Sauleau, et al.
Deep brain stimulation of the medial globus pallidus is a surgical procedure for treating patients suffering from Parkinson's disease. Its therapeutic effect may be limited by the presence of pyramidal tract side effect (PTSE). PTSE is a contraction time-locked to the stimulation, occurring when the current spread reaches the motor fibers of the pyramidal tract within the internal capsule. The lack of a predictive model of side effects leads the neurologist to secure optimal electrode placement by iterative clinical testing on an awake patient during the surgical procedure. The objective of this study was to propose a preoperative predictive model of PTSE. A machine-learning-based method called PyMAN (Pyramidal tract side effect Model based on Artificial Neural network), which accounts for the stimulation current, the 3D electrode coordinates, and the angle of the trajectory, was designed to predict the occurrence of PTSE. Ten patients implanted in the medial globus pallidus were tested by a clinician to create a labeled dataset of the stimulation parameters that trigger PTSE. The kappa index value between the data predicted by PyMAN and the labeled data was 0.78. Further evaluation studies are desirable to confirm whether PyMAN could be a reliable tool for assisting the surgeon in preventing PTSE during preoperative planning.
Rapid virtual stenting for intracranial aneurysms
The rupture of intracranial aneurysms (IAs) is the most severe form of stroke, with high rates of mortality and disability. One of the primary treatments is to use a stent or flow diverter to divert the blood flow away from the IA in a minimally invasive manner. To optimize such treatments, it is desirable to provide an automatic tool for virtual stenting before actual implantation. In this paper, we propose a novel method, called ball-sweeping, for rapid virtual stenting. Our method sweeps a maximum inscribed sphere through the aneurysmal region of the vessel and directly generates a stent surface touching the vessel wall, without needing to iteratively grow a deformable stent surface. The resulting stent mesh has guaranteed smoothness and variable pore density to achieve enhanced occlusion performance. Compared to existing methods, our technique is computationally much more efficient.
Tissue Deformation and Motion
icon_mobile_dropdown
Surface driven biomechanical breast image registration
Björn Eiben, Vasileios Vavourakis, John H. Hipwell, et al.
Biomechanical modelling enables large deformation simulations of breast tissues under different loading conditions to be performed. Such simulations can be utilised to transform prone Magnetic Resonance (MR) images into a different patient position, such as upright or supine. We present a novel integration of biomechanical modelling with a surface registration algorithm which optimises the unknown material parameters of a biomechanical model and performs a subsequent regularised surface alignment. This allows deformations induced by effects other than gravity, such as those due to contact of the breast and MR coil, to be reversed. Correction displacements are applied to the biomechanical model enabling transformation of the original pre-surgical images to the corresponding target position.

The algorithm is evaluated for the prone-to-supine case using prone MR images and the skin outline of supine Computed Tomography (CT) scans for three patients. A mean target registration error (TRE) of 10.9 mm for internal structures is achieved. For the prone-to-upright scenario, an optical 3D surface scan of one patient is used as a registration target, and the nipple distances after alignment between the transformed MRI and the surface are 10.1 mm and 6.3 mm, respectively.
Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions
John Neylon, Katelyn Hasse, Ke Sheng, et al.
Breast radiation therapy is typically delivered to the patient in either the supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will enable the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids in both the design of the breast immobilization device and a control module for the device during everyday positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU-based deformable model is then generated for the breast. A mass-spring-damper approach is then employed for the deformable model, with the springs modeled to represent hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high-resolution nature. The subject-specific elasticity is then estimated from a CT scan in the prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer-designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution, and the reproducibility of the desired tumor location.
Accuracy of lesion boundary tracking in navigated breast tumor excision
Emily Heffernan, Tamas Ungi, Thomas Vaughan, et al.
PURPOSE: An electromagnetic navigation system for tumor excision in breast conserving surgery has recently been developed. Preoperatively, a hooked needle is positioned in the tumor and the tumor boundaries are defined in the needle coordinate system. The needle is tracked electromagnetically throughout the procedure to localize the tumor. However, the needle may move and the tissue may deform, leading to errors in maintaining a correct excision boundary. It is imperative to quantify these errors so the surgeon can choose an appropriate resection margin.

METHODS: A commercial breast biopsy phantom with several inclusions was used. Location and shape of a lesion before and after mechanical deformation were determined using 3D ultrasound volumes. Tumor location and shape were estimated from initial contours and tracking data. The difference in estimated and actual location and shape of the lesion after deformation was quantified using the Hausdorff distance. Data collection and analysis were done using our 3D Slicer software application and PLUS toolkit.

RESULTS: The deformation of the breast resulted in 3.72 mm (STD 0.67 mm) average boundary displacement for an isoelastic lesion and 3.88 mm (STD 0.43 mm) for a hyperelastic lesion. The difference between the actual and estimated tracked tumor boundary was 0.88 mm (STD 0.20 mm) for the isoelastic and 1.78 mm (STD 0.18 mm) for the hyperelastic lesion.

CONCLUSION: The average lesion boundary tracking error was below 2 mm, which is clinically acceptable. We suspect that stiffness of the phantom tissue affected the error measurements. Results will be validated in patient studies.
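The boundary comparison in the METHODS section uses the Hausdorff distance between two point sets. A minimal sketch on synthetic lesion boundaries (a sphere and the same sphere translated 2 mm, not the phantom data) is:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3-D point sets."""
    d = np.linalg.norm(A[:, None] - B[None, :], axis=-1)   # all pairwise distances
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Hypothetical lesion boundaries: a 10 mm sphere and the same sphere
# displaced 2 mm, sampled on identical spherical grids.
phi = np.linspace(0.0, np.pi, 20)
theta = np.linspace(0.0, 2.0 * np.pi, 40)
P, T = np.meshgrid(phi, theta)
sphere = 10.0 * np.stack([np.sin(P) * np.cos(T),
                          np.sin(P) * np.sin(T),
                          np.cos(P)], axis=-1).reshape(-1, 3)
shifted = sphere + np.array([2.0, 0.0, 0.0])
hd = hausdorff(sphere, shifted)
```

For a pure translation, the Hausdorff distance approaches the shift magnitude as the sampling density increases, which is why it is a natural measure of worst-case boundary displacement.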
Diaphragm motion characterization using chest motion data for biomechanics-based lung tumor tracking during EBRT
Elham Karami, Stewart Gaede, Ting-Yim Lee, et al.
Despite recent advances in image-guided interventions, lung cancer External Beam Radiation Therapy (EBRT) is still very challenging due to respiration induced tumor motion. Among various proposed methods of tumor motion compensation, real-time tumor tracking is known to be one of the most effective solutions as it allows for maximum normal tissue sparing, less overall radiation exposure and a shorter treatment session. As such, we propose a biomechanics-based real-time tumor tracking method for effective lung cancer radiotherapy. In the proposed algorithm, the required boundary conditions for the lung Finite Element model, including diaphragm motion, are obtained using the chest surface motion as a surrogate signal. The primary objective of this paper is to demonstrate the feasibility of developing a function which is capable of inputting the chest surface motion data and outputting the diaphragm motion in real-time. For this purpose, after quantifying the diaphragm motion with a Principal Component Analysis (PCA) model, the relationship between the PCA parameters of diaphragm motion and the chest motion data was obtained through Partial Least Squares Regression (PLSR). Preliminary results obtained in this study indicate that the PCA coefficients representing the diaphragm motion can be obtained through chest surface motion tracking with high accuracy.
Determination of surgical variables for a brain shift correction pipeline using an Android application
Rohan Vijayan, Rebekah H. Conley, Reid C. Thompson, et al.
Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict “brain shift” based on preoperatively determined surgical variables (such as head orientation), and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been acquiring the surgical variables by the neurosurgeon prior to surgery. In order to simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient’s head, determine expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs for the biomechanical models of the preoperative computing phase for the brain shift correction pipeline. The accuracy of the application’s exported data was determined by comparing it to data acquired from the physical execution of the surgeon’s plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of patient’s head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.
Non-rigid point set registration of curves: registration of the superficial vessel centerlines of the brain
Filipe M. M. Marreiros, Chunliang Wang, Sandro Rossitti, et al.
In this study we present a non-rigid point set registration for 3D curves (composed by 3D set of points). The method was evaluated in the task of registration of 3D superficial vessels of the brain where it was used to match vessel centerline points. It consists of a combination of the Coherent Point Drift (CPD) and the Thin-Plate Spline (TPS) semilandmarks. The CPD is used to perform the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points.

For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation where a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space, by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added.

The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
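The TPS machinery used both to simulate the bulging/sinking deformation (T1) and to define the correspondence-based deformation (T2) can be sketched as a 3-D thin-plate-spline interpolator; the control points and centerline points below are random stand-ins, not the MRA data.

```python
import numpy as np

def tps_3d(ctrl_src, ctrl_dst, pts):
    """3-D thin-plate-spline warp interpolating ctrl_src -> ctrl_dst,
    evaluated at pts. Uses the 3-D kernel U(r) = r plus an affine part."""
    n = len(ctrl_src)
    K = np.linalg.norm(ctrl_src[:, None] - ctrl_src[None, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), ctrl_src])
    # Standard TPS block system: [K P; P^T 0] [w; a] = [ctrl_dst; 0]
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = ctrl_dst
    coef = np.linalg.solve(A, b)
    w, a = coef[:n], coef[n:]
    Kp = np.linalg.norm(pts[:, None] - ctrl_src[None, :], axis=-1)
    return Kp @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a

rng = np.random.default_rng(0)
ctrl_src = rng.uniform(0.0, 100.0, (8, 3))             # control points (mm)
ctrl_dst = ctrl_src + rng.normal(0.0, 2.0, (8, 3))     # simulated bulge/sink
centerline = rng.uniform(0.0, 100.0, (200, 3))         # vessel centerline points
warped = tps_3d(ctrl_src, ctrl_dst, centerline)
```

Because the spline interpolates the control points exactly, driving it with matched CPD/semilandmark correspondences yields a T2 that agrees with the correspondences while smoothly extending the deformation to the rest of the centerline.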
A novel craniotomy simulation system for evaluation of stereo-pair reconstruction fidelity and tracking
Brain shift compensation using computer modeling strategies is an important research area in the field of image-guided neurosurgery (IGNS). One important source of sparse data available during surgery to drive these frameworks is deformation tracking of the visible cortical surface. Possible methods to measure intra-operative cortical displacement include laser range scanners (LRS), which typically complicate the clinical workflow, and reconstruction of cortical surfaces from stereo pairs acquired with the operating microscopes. In this work, we propose and demonstrate a craniotomy simulation device that permits simulation of realistic cortical displacements, designed to validate the proposed intra-operative cortical shift measurement systems. The device permits 3D deformations of a mock cortical surface which consists of a membrane made of a Dragon Skin® high performance silicone rubber on which vascular patterns are drawn. We then use this device to validate our stereo pair-based surface reconstruction system by comparing landmark positions and displacements measured with our system to those positions and displacements as measured by a stylus tracked by a commercial optical system. Our results show a 1 mm average difference in localization error and a 1.2 mm average difference in displacement measurement. These results suggest that our stereo-pair technique is accurate enough for estimating intra-operative displacements in near real-time without affecting the surgical workflow.
Intraoperative Imaging and Visualization
icon_mobile_dropdown
Biplane reconstruction and visualization of virtual endoscopic and fluoroscopic views for interventional device navigation
Martin G. Wagner, Charles M. Strother, Sebastian Schafer, et al.
Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system, as well as navigating based on the 2D projection images, can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position, using a Kalman filter to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass pipe view to further improve spatial orientation.
The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
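The epipolar reconstruction of corresponding 2D points from two views can be illustrated with a generic linear (DLT) triangulation; this is the common textbook formulation, not the paper's implementation, and the biplane projection matrices and image points below are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a biplane pair.
    P1, P2: 3x4 projection matrices; x1, x2: corresponding 2D image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null-space vector = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical biplane geometry: a frontal view and a lateral (90 deg) view.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
P2 = np.hstack([R, np.array([[0.], [0.], [5.]])])
X_true = np.array([1., 2., 3.])
x1 = P1 @ np.append(X_true, 1.); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

Applied to every corresponding centerline point pair, this yields the 3D device path used for the virtual views.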
Visual feedback mounted on surgical tool: proof of concept
K. Carter, T. Vaughan, M. Holden, et al.
PURPOSE: When using surgical navigation systems in the operating room, feedback is typically displayed on a computer monitor. The surgeon’s attention is usually focused on the tool and the surgical site, so the display is typically out of the direct line of sight. The purpose is to develop a visual feedback device mounted on an electromagnetically tracked electrosurgical cauterizer which will provide navigation information for the surgeon in their field of view. METHODS: A study was conducted to determine the usefulness of the visual feedback as an adjunct to the navigation system currently in use. Subjects were asked to follow tumor contours with the tracked cauterizer using the 3D navigation screen with the mounted visual feedback, and using the 3D navigation screen alone. The movements of the cauterizer were recorded. RESULTS: The study showed a significant decrease in the subjects’ distance from the tumor margin, a significant increase in the subjects' confidence to avoid cutting the tumor and a statistically significant reduction in the subjects' perception of the need to look at the screen when using the visual feedback device compared to without. DISCUSSION: The LED feedback device helped the subjects feel confident in their ability to identify safe margins and minimize the amount of healthy tissue removed in the tumor resection. CONCLUSION: Good potential for the visual LED feedback has been shown. With additional training, this approach promises to lead to improved resection technique, with fewer cuts into the tumor and less healthy tissue removed.
CT thermometry for cone-beam CT guided ablation
Zachary DeStefano, Nadine Abi-Jaoudeh, Ming Li, et al.
Monitoring temperature during a cone-beam CT (CBCT) guided ablation procedure is important for prevention of over-treatment and under-treatment. In order to accomplish ideal temperature monitoring, a thermometry map must be generated. Previously, this was attempted using CBCT scans of a pig shoulder undergoing ablation [1]. We extend this work by using CBCT scans of real patients and incorporating more processing steps. We register the scans before comparing them, due to the movement and deformation of organs. We then automatically locate the needle tip and the ablation zone. Because of image noise and artifacts, we employ a robust change metric: it takes windows around each pixel and uses an equation inspired by time-delay analysis to calculate the error between windows, under the assumption that there is an ideal spatial offset. Once the change map is generated, we correlate change data with measured temperature data at key points in the region, which allows us to transform the change map into a thermal map. This thermal map then provides an estimate of the size and temperature of the ablation zone. We evaluated our procedure on a data set of 12 patients who underwent a total of 24 ablation procedures. We were able to generate reasonable thermal maps with varying degrees of accuracy; the average error ranged from 2.7 to 16.2 degrees Celsius. In addition to providing estimates of the size of the ablation zone for surgical guidance, 3D visualizations of the ablation zone and needle are also produced.
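The windowed, offset-tolerant change metric described above can be sketched as follows. This is a deliberate simplification (mean squared difference with an offset search) of the paper's time-delay-inspired equation, with toy window sizes:

```python
import numpy as np

def window_change(img_a, img_b, r, c, half=3, max_shift=2):
    """Change score at pixel (r, c): the smallest mean squared difference
    between the window around (r, c) in img_a and spatially offset windows
    in img_b, so residual misregistration of up to max_shift pixels is not
    mistaken for a temperature-induced intensity change."""
    wa = img_a[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    best = np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            wb = img_b[r + dr - half:r + dr + half + 1,
                       c + dc - half:c + dc + half + 1].astype(float)
            best = min(best, float(np.mean((wa - wb) ** 2)))
    return best

rng = np.random.default_rng(0)
img = rng.random((32, 32))
same_score = window_change(img, img, 16, 16)                   # identical
shift_score = window_change(img, np.roll(img, 1, 0), 16, 16)   # 1 px shift
change_score = window_change(img, 0.0 * img, 16, 16)           # real change
```

A small shift scores near zero because the search finds the ideal offset, while a genuine intensity change scores high.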
A computational model for estimating tumor margins in complementary tactile and 3D ultrasound images
Arefin Shamsil, Abelardo Escoto, Michael D. Naish, et al.
Conventional surgical methods are effective for treating lung tumors; however, they impose high trauma and pain on patients. Minimally invasive surgery is a safer alternative, as smaller incisions are required to reach the lung; however, it is challenging due to inadequate intraoperative tumor localization. To address this issue, a mechatronic palpation device was developed that incorporates tactile and ultrasound sensors capable of acquiring surface and cross-sectional images of palpated tissue. Initial work focused on tactile image segmentation and fusion of position-tracked tactile images, resulting in a reconstruction of the palpated surface to compute the spatial locations of underlying tumors. This paper presents a computational model capable of analyzing orthogonally-paired tactile and ultrasound images to compute the surface circumference and depth margins of a tumor. The framework also integrates an error compensation technique and an algebraic model to align all of the image pairs and to estimate the tumor depths within the tracked thickness of a palpated tissue. For validation, an ex vivo experimental study was conducted involving the complete palpation of 11 porcine liver tissues injected with iodine-agar tumors of varying sizes and shapes. The resulting tactile and ultrasound images were then processed using the proposed model to compute the tumor margins, which were compared to fluoroscopy-based physical measurements. The results show a good negative correlation (r = −0.783, p = 0.004) for the tumor surface margins and a good positive correlation (r = 0.743, p = 0.009) for the tumor depth margins.
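The reported r values are Pearson correlation coefficients between computed and physically measured margins. A minimal numpy computation (p-values are omitted here, as they require a t-distribution; the toy data is illustrative):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements, e.g.
    model-computed tumor margins vs. fluoroscopy-based physical margins."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Perfectly anti-correlated toy data gives r = -1.
r_toy = pearson_r([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```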
Freehand 3D-US reconstruction with robust visual tracking with application to ultrasound-augmented laparoscopy
Uditha L. Jayarathne, Elvis C. S. Chen, John Moore, et al.
Three dimensional reconstruction of ultrasound volumes from tracked 2D images is a cost-effective intraoperative 3D imaging strategy. The reconstructed volumes can be visualized in situ if the probe is tracked with respect to the laparoscopic camera during laparoscopic thoraco-abdominal interventions. To this end, efficient, image-based intrinsic pose tracking methods are preferred over well-established extrinsic tracking methods, primarily due to cost effectiveness and reduced workflow overheads. However, the potential of these intrinsic tracking methods as a means of tracking in freehand 3D ultrasound reconstruction has not been investigated to date. In this paper, we demonstrate that a recently proposed image-based, robust pose tracking method can be used to achieve high-quality reconstructions. By imaging a tissue-mimicking phantom with structures resembling anatomical targets, we demonstrate that the 3D US volumes resulting from this method are geometrically accurate. Both qualitative and quantitative results of the experiments are presented.
Endoscopy/Laparoscopy
icon_mobile_dropdown
Superpixel-based structure classification for laparoscopic surgery
Sebastian Bodenstedt, Jochen Görtler, Martin Wagner, et al.
Minimally invasive interventions offer multiple benefits for patients, but also entail drawbacks for the surgeon. The goal of context-aware assistance systems is to alleviate some of these difficulties. Localizing and identifying anatomical structures, malignant tissue and surgical instruments through endoscopic image analysis is paramount for an assistance system, making online measurements and augmented reality visualizations possible. Furthermore, such information can be used to assess the progress of an intervention, thereby allowing for context-aware assistance. In this work, we present an approach for such an analysis. First, a given laparoscopic image is divided into groups of connected pixels, so-called superpixels, using the SEEDS algorithm. The content of a given superpixel is then described using information regarding its color and texture. Using a Random Forest classifier, we determine the class label of each superpixel. We evaluated our approach on a publicly available dataset for laparoscopic instrument detection and achieved a DICE score of 0.69.
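The DICE score used for evaluation is computed directly from binary masks; a small sketch with toy masks (not the paper's evaluation code):

```python
import numpy as np

def dice(pred, truth):
    """DICE overlap between a predicted binary mask and the ground truth:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy masks: |A| = |B| = 2 and |A intersect B| = 1, so DICE = 0.5.
a = np.array([[1, 1], [0, 0]])
b = np.array([[0, 1], [1, 0]])
d = dice(a, b)
```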
Tissue classification for laparoscopic image understanding based on multispectral texture analysis
Yan Zhang, Sebastian J. Wirkert, Justin Iszatt, et al.
Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
Endoscopic feature tracking for augmented-reality assisted prosthesis selection in mitral valve repair
Sandy Engelhardt, Silvio Kolb, Raffaele De Simone, et al.
Mitral valve annuloplasty describes a surgical procedure in which an artificial prosthesis is sutured onto the anatomical structure of the mitral annulus to re-establish the valve's functionality. Choosing an appropriate commercially available ring size and shape is a difficult decision the surgeon has to make intraoperatively according to his experience. In our augmented-reality framework, digitalized ring models are superimposed onto endoscopic image streams without using any additional hardware. To place the ring model at the proper position within the endoscopic image plane, a pose estimation is performed that depends on the localization of sutures placed by the surgeon around the leaflet origins and punctured through the stiffer structure of the annulus. In this work, the tissue penetration points are tracked by the real-time-capable Lucas-Kanade optical flow algorithm. The accuracy and robustness of this tracking algorithm is investigated with respect to whether outliers influence the subsequent pose estimation. Our results suggest that optical flow is very stable for a variety of different endoscopic scenes and that tracking errors do not affect the position of the superimposed virtual objects in the scene, making this approach a viable candidate for annuloplasty augmented-reality-enhanced decision support.
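The Lucas-Kanade step can be sketched for a single window as a least-squares solve on image gradients. This is the textbook single-level formulation, not the paper's tracker, and the synthetic blob below is purely illustrative:

```python
import numpy as np

def lk_flow(img0, img1):
    """Single-window Lucas-Kanade step: least-squares solve of
    Ix*u + Iy*v = -It over the whole patch."""
    Ix = np.gradient(img0, axis=1)      # horizontal image gradient
    Iy = np.gradient(img0, axis=0)      # vertical image gradient
    It = img1 - img0                    # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v

# Smooth synthetic blob translated by 0.3 px horizontally; LK recovers it.
y, x = np.mgrid[0:32, 0:32]
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 16.0) ** 2) / 30.0)
u, v = lk_flow(blob(16.0), blob(16.3))
```

Practical trackers add image pyramids and iterative refinement to handle larger displacements.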
Method for endobronchial video parsing
Endoscopic examination of the lungs during bronchoscopy produces a considerable amount of endobronchial video. A physician uses the video stream as a guide to navigate the airway tree for various purposes such as general airway examinations, collecting tissue samples, or administering disease treatment. Aside from its intraoperative utility, the recorded video provides high-resolution detail of the airway mucosal surfaces and a record of the endoscopic procedure. Unfortunately, due to a lack of robust automatic video-analysis methods to summarize this immense data source, it is essentially discarded after the procedure. To address this problem, we present a fully-automatic method for parsing endobronchial video for the purpose of summarization. Endoscopic-shot segmentation is first performed to parse the video sequence into structurally similar groups according to a geometric model. Bronchoscope-motion analysis then identifies motion sequences performed during bronchoscopy and extracts relevant information. Finally, representative key frames are selected based on the derived motion information to present a drastically reduced summary of the processed video. The potential of our method is demonstrated on four endobronchial video sequences from both phantom and human data. Preliminary tests show that, on average, our method reduces the number of frames required to represent an input video sequence by approximately 96% and consistently selects salient key frames appropriately distributed throughout the video sequence, enabling quick and accurate post-operative review of the endoscopic examination.
Uncalibrated stereo rectification and disparity range stabilization: a comparison of different feature detectors
Xiongbiao Luo, Uditha L. Jayarathne, A. Jonathan McLeod, et al.
This paper studies uncalibrated stereo rectification and stable disparity range determination for surgical scene three-dimensional (3-D) reconstruction. Stereoscopic endoscope calibration sometimes is not available and also increases the complexity of the operating-room environment. Stereo from uncalibrated endoscopic cameras is an alternative for reconstructing the surgical field visualized by binocular endoscopes within the body. Uncalibrated rectification is usually performed on the basis of a number of matched feature points (semi-dense correspondence) between the left and right images of stereo pairs. After uncalibrated rectification, the corresponding feature points can be used to determine the proper disparity range, which helps to improve the reconstruction accuracy and reduce the computational time of disparity map estimation. Therefore, the matching accuracy and robustness of feature point descriptors are important to surgical field 3-D reconstruction. This work compares four feature detectors: (1) scale invariant feature transform (SIFT), (2) speeded up robust features (SURF), (3) affine scale invariant feature transform (ASIFT), and (4) gauge speeded up robust features (GSURF), with applications to uncalibrated rectification and stable disparity range determination. We performed our experiments on surgical endoscopic video images that were collected during robotic prostatectomy. The experimental results demonstrate that ASIFT outperforms the other feature detectors in uncalibrated stereo rectification and also provides a stable disparity range for surgical scene reconstruction.
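One simple way to derive a stable disparity range from matched features after rectification is to take robust percentile bounds on the horizontal coordinate differences; this sketch is our own illustration (the percentile cutoffs are an assumption, not taken from the paper):

```python
import numpy as np

def disparity_range(x_left, x_right, lo=2.0, hi=98.0):
    """Disparity search range from matched feature x-coordinates of a
    rectified stereo pair, using percentiles to reject gross mismatches."""
    d = np.asarray(x_left, float) - np.asarray(x_right, float)
    return float(np.percentile(d, lo)), float(np.percentile(d, hi))

# 200 matches with true disparities in [10, 20] px plus one gross outlier;
# the percentile bounds stay inside the inlier range.
rng = np.random.default_rng(1)
xl = rng.uniform(100.0, 500.0, 200)
xr = xl - rng.uniform(10.0, 20.0, 200)
xr[0] = xl[0] - 300.0                  # simulated mismatch
lo_d, hi_d = disparity_range(xl, xr)
```

Restricting the stereo matcher to this range reduces both the search time and the chance of spurious matches.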
Position-based adjustment of landmark-based correspondence finding in electromagnetic sensor-based colonoscope tracking method
Masahiro Oda, Hiroaki Kondo, Takayuki Kitasaka, et al.
This paper presents detailed evaluations of a colonoscope tracking method in order to clarify the relationships between the landmark correspondence parameters used in the method and the tracking errors. Based on these evaluations, we implement a colonoscope tracking method whose correspondence-finding conditions are adjusted for each position in the colon. An electromagnetic sensor-based colonoscope tracking method has previously been proposed. This method performs a landmark-based coarse correspondence finding and a length-based fine correspondence finding to locate the colonoscope tip. The landmark-based coarse correspondence finding finds corresponding landmark pairs by using distance thresholds. These distance thresholds affect the tracking errors, but the relationship between the two had not been clarified. In this paper, we investigate the relationship between the distance threshold and the tracking errors by measuring tracking errors at 52 points in a colon phantom. Based on the measurement results, we adjust the distance thresholds for each position in the colon phantom. The experimental results showed that small distance threshold values yielded smaller tracking errors; however, tracking with a small distance threshold was unstable, producing large tracking errors in some colon segments.
Keynote and New Robotic Applications
icon_mobile_dropdown
Toward automated cochlear implant insertion using tubular manipulators
Josephine Granna, Thomas S. Rau, Thien-Dang Nguyen, et al.
During manual cochlear implant electrode insertion, the surgeon risks damaging the intracochlear fine structure, as the electrode array is inserted blindly through a small opening in the cochlea with little force feedback. This paper addresses a novel concept for cochlear electrode insertion using tubular manipulators to reduce the risk of causing trauma during insertion and to automate the insertion process.

We propose a tubular manipulator incorporated into the electrode array, composed of an inner wire within a tube, both elastic and helically shaped. It is our vision to use this manipulator to actuate the initially straight electrode array during insertion into the cochlea by actuating the wire and tube, i.e., through translation and slight axial rotation. In this paper, we evaluate the geometry of the human cochlea in 22 patient datasets in order to derive design requirements for the manipulator. We propose an optimization algorithm to automatically determine the tube set parameters (curvature, torsion, diameter, length) for an ideal final position within the cochlea. To prove our concept, we demonstrate that insertion can be realized in a follow-the-leader fashion for 19 out of 22 cochleas. This is possible with only 4 different tube/wire sets.
Increasing safety of a robotic system for inner ear surgery using probabilistic error modeling near vital anatomy
Neal P. Dillon, Michael A. Siebold, Jason E. Mitchell, et al.
Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.
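In the simplest one-dimensional, isotropic case, a probability-based safety margin reduces to scaling the error standard deviation by a normal quantile. The paper's model is spatially varying and anisotropic, so the following is only an illustration of the underlying idea, with assumed example numbers:

```python
from statistics import NormalDist

def safety_margin(sigma_mm, p_preserve):
    """Margin thickness (mm) such that, under zero-mean Gaussian positional
    error with standard deviation sigma_mm along the approach direction, the
    cutter stays clear of the structure with probability p_preserve."""
    return sigma_mm * NormalDist().inv_cdf(p_preserve)

# 0.5 mm error SD with a 99.9% preservation requirement -> ~1.55 mm margin.
m = safety_margin(0.5, 0.999)
```

A uniform-thickness margin corresponds to assuming one global sigma; the variable-thickness margins in the paper arise when sigma varies over the anatomy.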
Prostate Procedures
icon_mobile_dropdown
A comparison of needle tip localization accuracy using 2D and 3D trans-rectal ultrasound for high-dose-rate prostate cancer brachytherapy treatment planning
W. Thomas Hrinivich, Douglas A. Hoover, Kathleen Surry, et al.
Background: High-dose-rate brachytherapy (HDR-BT) is a prostate cancer treatment option involving the insertion of hollow needles into the gland through the perineum to deliver a radioactive source. Conventional needle imaging involves indexing a trans-rectal ultrasound (TRUS) probe in the superior/inferior (S/I) direction, using the axial transducer to produce an image set for organ segmentation. These images have limited resolution in the needle insertion direction (S/I), so the sagittal transducer is used to identify needle tips, requiring a manual registration with the axial view. This registration introduces a source of uncertainty in the final segmentations and subsequent treatment plan. Our lab has developed a device enabling 3D-TRUS guided insertions with high S/I spatial resolution, eliminating the need to align axial and sagittal views.

Purpose: To compare HDR-BT needle tip localization accuracy between 2D and 3D-TRUS.

Methods: 5 prostate cancer patients underwent conventional 2D TRUS guided HDR-BT, during which 3D images were also acquired for post-operative registration and segmentation. Needle end-length measurements were taken, providing a gold standard for insertion depths.

Results: 73 needles were analyzed from all 5 patients. Needle tip position differences between imaging techniques were largest in the S/I direction, with mean±SD of -2.5±4.0 mm. End-length measurements indicated that 3D TRUS provided a statistically significantly lower mean±SD insertion depth error of -0.2±3.4 mm, versus 2.3±3.7 mm with 2D guidance (p < .001).

Conclusions: 3D TRUS may provide more accurate HDR-BT needle localization than conventional 2D TRUS guidance for the majority of HDR-BT needles.
An MRI guided system for prostate laser ablation with treatment planning and multi-planar temperature monitoring
Sheng Xu, Harsh Agarwal, Marcelino Bernardo, et al.
Prostate cancer is often overtreated with standard treatment options, which impacts the patients’ quality of life. Laser ablation has emerged as a new approach to treat prostate cancer while sparing the healthy tissue around the tumor. Since laser ablation has a small treatment zone with high temperature, accurate image guidance and treatment planning are necessary to enable full ablation of the tumor. Intraoperative temperature monitoring is also desirable to protect critical structures from damage during laser ablation. In response to these problems, we developed a navigation platform and integrated it with a clinical MRI scanner and a side-firing laser ablation device. The system allows imaging, image guidance, treatment planning and temperature monitoring to be carried out on the same platform. Temperature-sensing phantoms were developed to demonstrate the concept of iterative treatment planning and intraoperative temperature monitoring. Retrospective patient studies were also conducted to show the clinical feasibility of the system.
How does prostate biopsy guidance error impact pathologic cancer risk assessment?
Peter R. Martin, Mena Gaed, José A. Gómez, et al.
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21–47% false negative rate of clinical 2D TRUS-guided sextant biopsy, but still has a substantial false negative rate. This could be improved via biopsy needle target optimization, accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As an initial step toward the broader goal of optimized prostate biopsy targeting, in this study we elucidated the impact of biopsy needle delivery error on the probability of obtaining a tumor sample, and on the core involvement. These are both important parameters to patient risk stratification and the decision for active surveillance vs. definitive therapy. We addressed these questions for cancer of all grades, and separately for high grade (≥ Gleason 4+3) cancer. We used expert-contoured gold-standard prostatectomy histology to simulate targeted biopsies using an isotropic Gaussian needle delivery error from 1 to 6 mm, and investigated the amount of cancer obtained in each biopsy core as determined by histology. Needle delivery error resulted in variability in core involvement that could influence treatment decisions; the presence or absence of cancer in 1/3 or more of each needle core can be attributed to a needle delivery error of 4 mm. However, our data showed that by making multiple biopsy attempts at selected tumor foci, we may increase the probability of correctly characterizing the extent and grade of the cancer.
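The style of analysis described (isotropic Gaussian needle delivery error, multiple attempts at a focus) can be sketched with Monte Carlo sampling against an idealized spherical focus, a deliberate simplification of the histology-derived tumor shapes used in the study:

```python
import numpy as np

def hit_probability(radius_mm, sigma_mm, attempts=1, n=200_000, seed=0):
    """Monte Carlo estimate of the probability that at least one of
    `attempts` biopsy cores, each aimed at the focus center but perturbed by
    isotropic Gaussian delivery error (SD sigma_mm per axis), lands inside a
    spherical tumor focus of the given radius."""
    rng = np.random.default_rng(seed)
    err = rng.normal(0.0, sigma_mm, size=(n, attempts, 3))
    inside = np.linalg.norm(err, axis=2) <= radius_mm
    return float(inside.any(axis=1).mean())

# With 4 mm delivery error and a 5 mm-radius focus, a single attempt samples
# the tumor only about a third of the time; a second attempt helps markedly.
p1 = hit_probability(5.0, 4.0, attempts=1)
p2 = hit_probability(5.0, 4.0, attempts=2)
```

This mirrors the paper's observation that repeated attempts at selected foci raise the probability of correctly characterizing the cancer.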
Impact of region contouring variability on image-based focal therapy evaluation
Eli Gibson, Ian A. Donaldson, Taimur T. Shah, et al.
Motivation: Focal therapy is an emerging low-morbidity treatment option for low-intermediate risk prostate cancer; however, challenges remain in accurately delivering treatment to specified targets and determining treatment success. Registered multi-parametric magnetic resonance imaging (MPMRI) acquired before and after treatment can support focal therapy evaluation and optimization; however, contouring variability, when defining the prostate, the clinical target volume (CTV) and the ablation region in images, reduces the precision of quantitative image-based focal therapy evaluation metrics. To inform the interpretation and clarify the limitations of such metrics, we investigated inter-observer contouring variability and its impact on four metrics.

Methods: Pre-therapy and 2-week-post-therapy standard-of-care MPMRI were acquired from 5 focal cryotherapy patients. Two clinicians independently contoured, on each slice, the prostate (pre- and post-treatment) and the dominant index lesion CTV (pre-treatment) in the T2-weighted MRI, and the ablated region (post-treatment) in the dynamic-contrast- enhanced MRI. For each combination of clinician contours, post-treatment images were registered to pre-treatment images using a 3D biomechanical-model-based registration of prostate surfaces, and four metrics were computed: the proportion of the target tissue region that was ablated and the target:ablated region volume ratio for each of two targets (the CTV and an expanded planning target volume). Variance components analysis was used to measure the contribution of each type of contour to the variance in the therapy evaluation metrics.

Conclusions: 14–23% of evaluation metric variance was attributable to contouring variability (including 6–12% from ablation region contouring); reducing this variability could improve the precision of focal therapy evaluation metrics.
Poster Session
icon_mobile_dropdown
Structure Sensor for mobile markerless augmented reality
T. Kilgus, R. Bux, A. M. Franz, et al.
3D visualization of anatomical data is an integral part of diagnostics and treatment in many medical disciplines, such as radiology, surgery and forensic medicine. To enable intuitive interaction with the data, we recently proposed a new concept for on-patient visualization of medical data which involves rendering of subsurface structures on a mobile display that can be moved along the human body. The data fusion is achieved with a range imaging device attached to the display. The range data is used to register static 3D medical imaging data with the patient body based on a surface matching algorithm. However, our previous prototype was based on the Microsoft Kinect camera and thus required a cable connection to acquire color and depth data. The contribution of this paper is two-fold. Firstly, we replace the Kinect with the Structure Sensor - a novel cable-free range imaging device - to improve handling and user experience, and show that the resulting accuracy (target registration error: 4.8±1.5 mm) is comparable to that achieved with the Kinect. Secondly, a new approach to visualizing complex 3D anatomy based on this device, as well as 3D printed models of anatomical surfaces, is presented. We demonstrate that our concept can be applied to in vivo data and to a 3D printed skull from a forensic case. Our new device is the next step towards clinical integration and shows that the concept can be applied not only during autopsy but also for presenting forensic data to laypeople in court or in medical education.
Visual design and verification tool for collision-free dexterous patient specific neurosurgical instruments
Maggie Hess, Kyle Eastwood, Bence Linder, et al.
PURPOSE: In many minimally invasive neurosurgical procedures, the surgical workspace is a small tortuous cavity that is accessed using straight, rigid instruments with limited dexterity. Specifically considering neuroendoscopy, it is often challenging for surgeons, using standard instruments, to reach multiple surgical targets from a single incision. To address this problem, continuum tools are under development to create highly dexterous minimally invasive instruments. However, this design process is not trivial, and therefore, a user-friendly design platform capable of easily incorporating surgeon input is needed.

METHODS: We propose a method that uses simulation and visual verification to design continuum tools that are patient and procedure specific. Our software module utilizes pre-operative scans and virtual three-dimensional (3D) patient models to intuitively aid instrument design. The user specifies basic tool parameters, and the parameterized tools and trocar are modeled within the virtual patient. By selecting and dragging the instrument models, the tools are instantly reshaped and repositioned. The tool geometry and surgical entry points are then returned as outputs to undergo optimization. We have completed an initial validation of the software by comparing a simulation of a physical instrument’s reachability to the corresponding virtual design.

RESULTS AND CONCLUSION: The software was assessed qualitatively by two neurosurgeons, who designed tools for an intraventricular endoscopic procedure. Further, validation experiments comparing the design of a virtual instrument to a physical tool demonstrate that the software module functions correctly. Thus, our platform permits user-friendly, application-specific design of continuum instruments. These instruments will give surgeons much more flexibility in developing future minimally invasive procedures.
A web-based computer aided system for liver surgery planning: initial implementation on RayPlus
Ming Luo, Rong Yuan, Zhi Sun, et al.
At present, computer-aided systems for liver surgery design and risk evaluation are widely used in clinical practice all over the world. However, most are local applications that run on high-performance workstations, and the images have to be processed offline. Compared with local applications, a web-based system is accessible anywhere and on a wide range of devices, regardless of processing power or operating system. RayPlus (http://rayplus.life.hust.edu.cn), a B/S platform for medical image processing, was developed to give a jump start on web-based medical image processing. In this paper, we implement a computer-aided system for liver surgery planning on the architecture of RayPlus. The system applies a series of processing steps to CT images, including filtering, segmentation, visualization and analysis. Each step is packaged into an executable program and runs on the server side. CT images in DICOM format are processed step by step, enabling interactive modeling in the browser with zero installation and server-side computing. The system supports users in semi-automatically segmenting the liver, intrahepatic vessels and tumors from the pre-processed images. Then, surface and volume models are built to analyze the vessel structure and the relative positions of adjacent organs. The results show that the initial implementation meets its first-order objectives satisfactorily and provides an accurate 3D delineation of the liver anatomy. Vessel labeling and resection simulation are planned as future additions. The system is available on the Internet at the link mentioned above, and an open username is offered for testing.
Kinect based real-time position calibration for nasal endoscopic surgical navigation system
Jingfan Fan, Jian Yang, Yakui Chu, et al.
Unanticipated, reactive motion of the patient during skull-base tumor resection forces recalibration of the nasal endoscopic tracking system. To keep the calibration current with patient movement, this paper develops a Kinect-based real-time positional calibration method for a nasal endoscopic surgical navigation system. In this method, a Kinect scanner acquires a point-cloud volumetric reconstruction of the patient's head during surgery. A convex-hull-based registration algorithm then aligns the real-time image of the patient's head with a model built from the CT scans performed during preoperative preparation, dynamically recalibrating the tracking system whenever movement is detected. Experimental results confirmed the robustness of the proposed method, with a total tracking error within 1 mm even under relatively violent motion. These results show that tracking accuracy can be maintained stably and that calibration of the tracking system can be expedited under strong interference, demonstrating suitability for a wide range of surgical applications.
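The detect-and-realign loop the abstract describes can be sketched with a least-squares rigid (Kabsch) alignment. This is a simplified stand-in for the paper's convex-hull-based registration — it assumes known point correspondences — and the 1 mm tolerance merely mirrors the reported tracking error:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst.

    Simplified stand-in for the paper's convex-hull-based registration:
    both inputs are N x 3 point arrays with known correspondence."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def needs_recalibration(src, dst, tol_mm=1.0):
    """Flag patient movement when the residual misalignment after the
    best rigid fit exceeds the tolerance (illustrative threshold)."""
    R, t = rigid_align(src, dst)
    residual = np.linalg.norm((R @ src.T).T + t - dst, axis=1)
    return residual.mean() > tol_mm
```

A purely rigid displacement of the head is absorbed by the transform and leaves a near-zero residual; non-rigid disturbance (or bad data) trips the recalibration flag.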
An improved robust hand-eye calibration for endoscopy navigation system
Wei He, Kumsok Kang, Yanfang Li, et al.
Endoscopy is widely used in clinical practice, and a surgical navigation system is an extremely important way to enhance the safety of endoscopy. The key to improving the accuracy of the navigation system is to solve the positional relationship between the camera and the tracking marker precisely. The problem can be solved by the hand-eye calibration method based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain incomplete motion samples. Those motions make the algorithm unstable and inaccurate. An improved selection rule for sample motions is proposed in this paper to improve the stability and accuracy of methods based on dual quaternions. A motion filter discards the incomplete motion samples, and a high-precision, robust result is finally achieved. The experimental results show that the accuracy and stability of camera registration are effectively improved by selecting sample motion data automatically.
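A minimal version of such a motion filter might drop relative motions whose rotation angle is too small, since near-pure translations are degenerate for dual-quaternion hand-eye solvers. The threshold below is illustrative, not the paper's rule:

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def filter_motions(motions, min_angle=0.1):
    """Keep only sample motions that are informative for hand-eye
    calibration: `motions` is a list of (R, t) relative motions, and
    motions with a near-zero rotation angle are discarded as
    degenerate. The min_angle threshold is illustrative."""
    return [(R, t) for R, t in motions if rotation_angle(R) >= min_angle]
```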
Towards robust specularity detection and inpainting in cardiac images
Samar M. Alsaleh, Angelica I. Aviles, Pilar Sobrevilla, et al.
Computer-assisted cardiac surgeries have seen major advances throughout the years and are gaining popularity over conventional cardiac procedures as they offer many benefits to both patients and surgeons. One obvious advantage is that they enable surgeons to perform delicate tasks on the heart while it is still beating, avoiding the risks associated with cardiac arrest. Consequently, the surgical system needs to accurately compensate for the physiological motion of the heart, which is a very challenging task in medical robotics since there exist different sources of disturbance. One of these is the bright light reflections, known as specular highlights, that appear on the glossy surface of the heart and partially occlude the field of view. This work focuses on developing a robust approach that accurately detects and removes those highlights to reduce their disturbance to the surgeon and to the motion compensation algorithm. As a first step, we exploit both color attributes and a fuzzy edge detector to identify specular regions in each acquired image frame. These two techniques together act as restricted thresholding and are able to accurately identify specular regions. Then, in order to eliminate the specularity artifact and give the surgeon a better perception of the heart, the second part of our solution corrects the detected regions using inpainting to propagate and smooth the results. Our experimental results, carried out on realistic datasets, reveal how efficient and precise the proposed solution is, and demonstrate its robustness and real-time performance.
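The two-part pipeline can be sketched as a color-attribute threshold followed by diffusion-style inpainting. This omits the fuzzy edge detector and uses a deliberately crude fill, so it is an illustration of the idea rather than the authors' method; all thresholds are assumptions:

```python
import numpy as np

def detect_specular(rgb, v_thresh=0.85, s_thresh=0.15):
    """Mask pixels that are very bright and weakly saturated — the
    color signature of specular highlights. `rgb` is H x W x 3 in
    [0, 1]; thresholds are illustrative."""
    v = rgb.max(axis=2)                                   # HSV value
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-6), 0)
    return (v > v_thresh) & (s < s_thresh)

def inpaint(gray, mask, iters=50):
    """Diffusion-style inpainting: repeatedly replace masked pixels
    with the mean of their 4-neighbours (wrap-around via np.roll, so
    the mask should stay away from image borders)."""
    img = gray.copy()
    img[mask] = img[~mask].mean()          # crude initialisation
    for _ in range(iters):
        up    = np.roll(img, -1, 0); down  = np.roll(img, 1, 0)
        left  = np.roll(img, -1, 1); right = np.roll(img, 1, 1)
        img[mask] = 0.25 * (up + down + left + right)[mask]
    return img
```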
Real-time mosaicing of fetoscopic videos using SIFT
Pankaj Daga, François Chadebecq, Dzhoshkun I. Shakir, et al.
Fetoscopic laser photo-coagulation of the placental vascular anastomoses remains the most effective therapy for twin-to-twin transfusion syndrome (TTTS) in monochorionic twin pregnancies. However, to ensure the success of the intervention, complete photo-coagulation of all anastomoses is needed. This is made difficult by the limited field of view of the fetoscopic video guidance, which hinders the surgeon's ability to locate all the anastomoses. A potential solution to this problem is to expand the field of view of the placental surface by creating a mosaic from overlapping fetoscopic images. This mosaic can then be used for anastomoses localization and spatial orientation during surgery. However, this requires accurate and fast algorithms that can operate within the real-time constraints of fetal surgery. In this work, we present an image mosaicing framework that leverages the parallelism of modern GPUs and can process clinical fetoscopic images in real-time. Initial qualitative results on ex-vivo placental images indicate that the proposed framework can generate clinically useful mosaics from fetoscopic videos in real-time.
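Once pairwise frame-to-frame homographies are available (obtained via GPU feature matching in the paper), building the mosaic reduces to chaining them into a common reference frame. A sketch assuming the pairwise transforms are given:

```python
import numpy as np

def compose_to_reference(pairwise_H):
    """Chain pairwise homographies H_i (mapping frame i -> frame i+1)
    into transforms that map every frame into the first frame's
    coordinates, so all frames can be warped into one mosaic."""
    H_to_ref = [np.eye(3)]
    for H in pairwise_H:
        # frame k+1 -> ref = (frame k -> ref) o (frame k+1 -> frame k)
        H_to_ref.append(H_to_ref[-1] @ np.linalg.inv(H))
    # normalise so the bottom-right entry is 1
    return [H / H[2, 2] for H in H_to_ref]
```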
Multiple video sequences synchronization during minimally invasive surgery
Abdelkrim Belhaoua, Johan Moreau, Alexandre Krebs, et al.
Hybrid operating rooms are an important development in the medical ecosystem. They make it possible to integrate, in the same procedure, the advantages of radiological imaging and surgical tools. However, one of the challenges faced by clinical engineers is to support the connectivity and interoperability of medical-electrical point-of-care devices. A system that could enable plug-and-play connectivity and interoperability for medical devices would improve patient safety, save hospitals time and money, and provide data for electronic medical records. In this paper, we propose a hardware platform dedicated to collecting and synchronizing, in real time, multiple videos captured from medical equipment. The final objective is to integrate augmented reality technology into an operating room (OR) in order to assist the surgeon during a minimally invasive operation. To the best of our knowledge, there is no prior work dealing with hardware-based video synchronization for augmented reality applications in the OR. Hardware synchronization methods can embed a temporal value, a so-called timestamp, into each sequence on-the-fly and require no post-processing, but they require specialized hardware; the design of our hardware, however, is simple and generic. This approach was adopted and implemented in this work, and its performance is evaluated by comparison to state-of-the-art methods.
Visualization framework for colonoscopy videos
Saad Nadeem, Arie Kaufman
We present an interactive visualization framework for annotating, tagging, and comparing colonoscopy videos in an easy and intuitive way, where these annotations can then be used for semi-automatic report generation at the end of the procedure. Currently, approximately 14 million colonoscopies are performed every year in the US, and our tool helps deal with this deluge of colonoscopy videos more effectively. The annotations and tags can later be used for report generation for electronic medical records and for comparison at the individual as well as the group level. We also present important use cases and medical expert feedback for our visualization framework.
HPC enabled real-time remote processing of laparoscopic surgery
Zahra Ronaghi, Karan Sapra, Ryan Izard, et al.
Laparoscopic surgery is a minimally invasive surgical technique. The benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures. One particular laparoscopic system is the da Vinci Si robotic surgical system. Its video streams generate approximately 360 megabytes of data per second. Processing this large stream of data in real time on a bedside PC (a single- or dual-node setup) has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second.

We have implemented and compared the performance of compression, segmentation, and registration algorithms on Clemson's Palmetto supercomputer using dual NVIDIA K40 GPUs per node. Our computing framework will also provide reliability through replication of computation. We securely transfer the files to remote HPC clusters using an OpenFlow-based network service, Steroid OpenFlow Service (SOS), which can increase the performance of large data transfers over long-distance, high-bandwidth networks. As a result, using a high-speed OpenFlow-based network to access computing clusters with GPUs will improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
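As a sanity check on the numbers quoted in the abstract, the per-frame budget follows directly from the stream rate and frame rate:

```python
# Per-frame budget implied by a ~360 MB/s stream at 30 fps: the remote
# round-trip (network transfer + compute) must complete within one
# frame interval. Values are the ones quoted in the abstract.
stream_mb_per_s = 360.0
fps = 30

frame_mb = stream_mb_per_s / fps   # ~12 MB per frame (abstract: 11.9 MB)
budget_ms = 1000.0 / fps           # ~33.3 ms round-trip per frame
```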
Content-based retrieval in videos from laparoscopic surgery
Klaus Schoeffmann, Christian Beecks, Mathias Lux, et al.
In the field of medical endoscopy, more and more surgeons are changing over to recording and storing videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and for follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon saw during the operation, and can describe the situation inside the patient much more precisely than an operation report would. Recorded endoscopic videos can also be used for training young surgeons, and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.
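At query time, retrieval with a fixed-length descriptor and a metric reduces to a nearest-neighbour search over the segments' descriptors. The Euclidean distance below stands in for the paper's signature-specific metric, and any fixed-length feature vector works here:

```python
import numpy as np

def retrieve(query, segment_features):
    """Return the index of the video segment whose feature vector is
    closest to the query image's features. Euclidean distance is an
    illustrative stand-in for the signature metric."""
    dists = [np.linalg.norm(query - f) for f in segment_features]
    return int(np.argmin(dists))
```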
Cost-effective surgical registration using consumer depth cameras
The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 ± 0.01 mm. Using the Kinect this error was 1.24 ± 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 ± 0.03 mm and 1.74 ± 0.06 mm but the system nonetheless performed within acceptable bounds.
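The refinement stage named in the abstract can be sketched as textbook point-to-point ICP, evaluated with the paper's nearest-neighbour RMS criterion. The alignment heuristics that initialise the real framework, and the KD-tree acceleration a practical implementation would use, are omitted:

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate nearest-neighbour matching
    with least-squares rigid alignment (Kabsch). Returns the composed
    rotation and translation mapping src onto dst."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (a KD-tree in practice)
        nn = dst[np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
        c_s, c_d = cur.mean(0), nn.mean(0)
        H = (cur - c_s).T @ (nn - c_d)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_d - R @ c_s
        cur = (R @ cur.T).T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

def rms_error(a, b):
    """Nearest-neighbour RMS error, the paper's evaluation criterion."""
    d2 = ((a[:, None] - b[None]) ** 2).sum(-1).min(1)
    return float(np.sqrt(d2.mean()))
```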
Exploring the effects of dimensionality reduction in deep networks for force estimation in robotic-assisted surgery
Angelica I. Aviles, Samar Alsaleh, Pilar Sobrevilla, et al.
The robotic-assisted surgery approach overcomes the limitations of traditional laparoscopic and open surgeries. However, one of its major limitations is the lack of force feedback. Since there is no direct interaction between the surgeon and the tissue, there is no way of knowing how much force the surgeon is applying, which can result in irreversible injuries. The use of force sensors is not practical since they impose various constraints. Thus, we use a neuro-visual approach to estimate the applied forces, in which 3D shape recovery together with the geometry of motion is used as input to a deep network based on an LSTM-RNN architecture. When deep networks are used in real time, pre-processing of the data is a key factor in reducing complexity and improving network performance. A common pre-processing step is dimensionality reduction, which attempts to eliminate redundant and insignificant information by selecting a subset of relevant features for model construction. In this work, we show the effects of dimensionality reduction in a real-time application: estimating the applied force in robotic-assisted surgery. The results demonstrate positive effects of dimensionality reduction on deep networks, including faster training, improved network performance, and prevention of overfitting. We also show a significant accuracy improvement, ranging from about 33% to 86%, over existing approaches to force estimation.
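As one concrete example of such a pre-processing step, PCA projects the input features onto a lower-dimensional subspace before they reach the network. This is a generic illustration of dimensionality reduction, not necessarily the specific reduction evaluated in the paper:

```python
import numpy as np

def pca_reduce(X, k):
    """Project a feature matrix X (samples x features) onto its top-k
    principal components via SVD of the centred data. The reduced
    features would then feed the downstream network."""
    Xc = X - X.mean(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```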
Current sensing for navigated electrosurgery: proof of concept
K. Carter, A. Lasso, T. Ungi, et al.
PURPOSE: Tracked power tools are routinely used in computer-assisted intervention and surgical systems. To properly perform temporal and spatial monitoring of the tracked tool with the navigation system, it is important to know when the tool, such as an electrosurgical cauterizer, is activated during surgery. We have developed a general-purpose current sensor that can be added to tracked surgical devices in order to inform the surgeon and the navigation system when the tool is activated. METHODS: Two non-invasive AC current sensors, two peak-detector circuits, one voltage-comparator circuit, and a microcontroller were used to detect when an electrosurgical cauterizer is powered on and to differentiate between the cut and coagulation modes. The system was tested by cauterizing various substances at varied power ratings. RESULTS: By comparing the ratio of amplitudes as well as the frequencies of the signals, the current sensing system is able to differentiate between on/off and cut/coagulation, as well as to detect when cauterizing tissue. DISCUSSION: The current sensing system is able to detect when the cauterizer is powered on and can differentiate between monopolar cut and coagulation modes. CONCLUSION: This system shows promise for detecting when the cauterizer is powered on, and in the future it could be integrated with a navigation system in order to easily monitor the electrosurgical tool temporally.
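The decision logic can be caricatured on a simulated current waveform: low amplitude means the tool is off, and a duty-cycle test separates a continuous (cut-like) waveform from a bursty (coagulation-like) one. The thresholds and the duty-cycle test are illustrative software stand-ins, not the paper's analog circuit logic:

```python
import numpy as np

def classify_mode(signal, on_thresh=0.1):
    """Toy classifier for electrosurgical activity from a sensed
    current waveform: 'off' when amplitude is low, otherwise 'cut'
    for a continuous sinusoid vs. 'coag' for a duty-cycled burst.
    All thresholds are illustrative."""
    amp = np.abs(signal).max()
    if amp < on_thresh:
        return "off"
    # fraction of samples near full amplitude: high for a continuous
    # sine, low for a bursty coagulation-style waveform
    duty = np.mean(np.abs(signal) > 0.5 * amp)
    return "cut" if duty > 0.4 else "coag"
```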
Characterization of a phantom setup for breast conserving cancer surgery
Jacob T. Chadwell, Rebekah H. Conley, Jarrod A. Collins, et al.
The purpose of this work is to develop an anatomically and mechanically representative breast phantom for the validation of breast conserving surgical therapies, specifically, in this case, image guided surgeries. Using three patients scheduled for lumpectomy and four healthy volunteers in mock surgical presentations, the magnitude, direction, and location of breast deformations was analyzed. A phantom setup was then designed to approximate such deformations in a mock surgical environment. Specifically, commercially available and custom-built polyvinyl alcohol (PVA) phantoms were used to mimic breast tissue during surgery. A custom designed deformation apparatus was then created to reproduce deformations seen in typical clinical setups of the pre- and intra-operative breast geometry. Quantitative analysis of the human subjects yielded a positive correlation between breast volume and amount of breast deformation. Phantom results reflected similar behavior with the custom-built PVA phantom outperforming the commercial phantom.
Image-guided intracranial cannula placement for awake in vivo microdialysis in nonhuman primates
Antong Chen, Ashleigh Bone, Catherine D. G. Hines, et al.
Intracranial microdialysis is used for sampling neurochemicals and large peptides, along with their metabolites, from the interstitial fluid (ISF) of the brain. The ability to perform this in nonhuman primates (NHP), e.g., rhesus macaques, could improve the prediction of pharmacokinetic (PK) and pharmacodynamic (PD) action of drugs in humans. However, microdialysis in rhesus brains is not as routinely performed as in rodents. One challenge is that precise intracranial probe placement in NHP brains is difficult due to the richness of the anatomical structure and the variability in the size and shape of brains across animals. Also, repeatable and reproducible ISF sampling from the same animal is highly desirable when combined with cognitive behaviors or other longitudinal study end points. Toward that end, we have developed a semi-automatic, flexible neurosurgical method employing MR and CT imaging to (a) derive coordinates for permanent guide cannula placement in mid-brain structures and (b) fabricate a customized recording chamber to implant above the skull for enclosing and safeguarding access to the cannula for repeated experiments. In order to place the intracranial guide cannula in each subject, the entry points in the skull and the depth in the brain were derived using co-registered images acquired from MR and CT scans. The anterior/posterior (A/P) and medial/lateral (M/L) rotation in the pose of the animal was corrected in the 3D image to appropriately represent the pose used in the stereotactic frame. An array of implanted fiducial markers was used to transform stereotactic coordinates to the images. The recording chamber was custom fabricated using computer-aided design (CAD), such that it would fit the contours of the individual skull with minimum error. The chamber also helped in guiding the cannula through the entry points down a trajectory into the depth of the brain.
We have validated our method in four animals and our results indicate average placement error of cannula to be 1.20 ± 0.68 mm of the targeted positions. The approach employed here for derivation of the coordinates, surgical implantation and post implant validation is built using traditional access to surgical and imaging methods without the necessity of intra-operative imaging. The validation of our method lends support to its wider application in most nonhuman primate laboratories with onsite MR and CT imaging capabilities.
Patch-based label fusion for automatic multi-atlas-based prostate segmentation in MR images
Xiaofeng Yang, Ashesh B. Jani, Peter J. Rossi, et al.
In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images, which utilizes patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. A local patch-based atlas fusion is performed using voxel weighting based on anatomical signature. This segmentation technique was validated with a clinical study of 13 patients and its accuracy was assessed using the physicians' manual segmentations (gold standard). Dice volumetric overlapping was used to quantify the difference between the automatic and manual segmentation. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy with manual segmentations.
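The core of patch-based label fusion is a similarity-weighted vote: each atlas patch votes for its label with a weight that decays with patch dissimilarity. A generic formulation for a binary label, with an illustrative bandwidth `h` (the paper additionally weights voxels by an anatomical signature, which is not reproduced here):

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Non-local patch-based label fusion for one voxel: weight each
    atlas patch by exp(-||target - atlas||^2 / h), take the weighted
    label average, and threshold at 0.5. Bandwidth h is illustrative."""
    w = np.array([np.exp(-((target_patch - p) ** 2).sum() / h)
                  for p in atlas_patches])
    vote = (w * atlas_labels).sum() / (w.sum() + 1e-12)
    return 1 if vote >= 0.5 else 0
```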
Phantom-based ground-truth generation for cerebral vessel segmentation and pulsatile deformation analysis
Daniel Schetelig, Dennis Säring, Till Illies, et al.
Hemodynamic and mechanical factors of the vascular system are assumed to play a major role in understanding, e.g., initiation, growth and rupture of cerebral aneurysms. Among those factors, cardiac cycle-related pulsatile motion and deformation of cerebral vessels currently attract much interest. However, imaging of those effects requires high spatial and temporal resolution and remains challenging, and so does the analysis of the acquired images: flow velocity changes and contrast media inflow cause vessel intensity variations in related temporally resolved computed tomography and magnetic resonance angiography data over the cardiac cycle and impede application of intensity threshold-based segmentation and subsequent motion analysis. In this work, a flow phantom for generation of ground-truth images for evaluation of appropriate segmentation and motion analysis algorithms is developed. The acquired ground-truth data is used to illustrate the interplay between intensity fluctuations and (erroneous) motion quantification by standard threshold-based segmentation, and an adaptive threshold-based segmentation approach is proposed that alleviates respective issues. The results of the phantom study are further demonstrated to be transferable to patient data.
A general approach to liver lesion segmentation in CT images
Li Cao, Jayaram K. Udupa, Dewey Odhner, et al.
Lesion segmentation has remained a challenge in different body regions. Generalizability is lacking in published methods as variability in results is common, even for a given organ and modality, such that it becomes difficult to establish standardized methods of disease quantification and reporting. This paper makes an attempt at a generalizable method based on classifying lesions along with their background into groups using clinically used visual attributes. Using an Iterative Relative Fuzzy Connectedness (IRFC) delineation engine, the ideas are implemented for the task of liver lesion segmentation in computed tomography (CT) images. For lesion groups with the same background properties, a few subjects are chosen as the training set to obtain the optimal IRFC parameters for the background tissue components. For lesion groups with similar foreground properties, optimal foreground parameters for IRFC are set as the median intensity value of the training lesion subset. To segment liver lesions belonging to a certain group, the devised method requires manual loading of the corresponding parameters, and correct setting of the foreground and background seeds. The segmentation is then completed in seconds. Segmentation accuracy and repeatability with respect to seed specification are evaluated. Accuracy is assessed by the assignment of a delineation quality score (DQS) to each case. Inter-operator repeatability is assessed by the difference between segmentations carried out independently by two operators. Experiments on 80 liver lesion cases show that the proposed method achieves a mean DQS score of 4.03 and inter-operator repeatability of 92.3%.
A comparison study of atlas-based 3D cardiac MRI segmentation: global versus global and local transformations
Aditya Daryanani, Shusil Dangi, Yehuda Kfir Ben-Zikri, et al.
Magnetic Resonance Imaging (MRI) is a standard-of-care imaging modality for cardiac function assessment and guidance of cardiac interventions thanks to its high image quality and lack of exposure to ionizing radiation. Cardiac health parameters such as left ventricular volume, ejection fraction, myocardial mass, thickness, and strain can be assessed by segmenting the heart from cardiac MRI images. Furthermore, the segmented pre-operative anatomical heart models can be used to precisely identify regions of interest to be treated during minimally invasive therapy. Hence, the use of accurate and computationally efficient segmentation techniques is critical, especially for intra-procedural guidance applications that rely on the peri-operative segmentation of subject-specific datasets without delaying the procedure workflow. Atlas-based segmentation incorporates prior knowledge of the anatomy of interest from expertly annotated image datasets. Typically, the ground truth atlas label is propagated to a test image using a combination of global and local registration. The high computational cost of non-rigid registration motivated us to obtain an initial segmentation using global transformations based on an atlas of the left ventricle from a population of patient MRI images and refine it using a well-developed technique based on graph cuts. Here we quantitatively compare the segmentations obtained from the global and global plus local atlases and refined using graph cut-based techniques with the expert segmentations according to several similarity metrics, including the Dice correlation coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
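The overlap metrics named above are standard; for binary masks they reduce to a few lines (distance-based metrics such as Hausdorff distance additionally need the mask boundaries and are omitted here):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard (intersection-over-union) coefficient: |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    return inter / float(np.logical_or(a, b).sum())
```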
Surface mesh to voxel data registration for patient-specific anatomical modeling
Júlia E. E. de Oliveira, Paul Giessler, András Keszei, et al.
Virtual Physiological Human (VPH) models are frequently used for training, planning, and performing medical procedures. The Regional Anaesthesia Simulator and Assistant (RASimAs) project has the goal of increasing the application and effectiveness of regional anaesthesia (RA) by combining a simulator of ultrasound-guided and electrical nerve-stimulated RA procedures with a subject-specific assistance system through an integration of image processing, physiological models, subject-specific data, and virtual reality. Individualized models enrich the virtual training tools for learning and improving RA skills. Therefore, we suggest patient-specific VPH models that are composed by registering general mesh-based models with patient voxel data-based recordings. Specifically, we have focused on the pelvic region to support the femoral nerve block. The processing pipeline is composed of different freely available toolboxes such as MATLAB, the Simulation Open Framework Architecture (SOFA), and MeshLab. The approach of Gilles is applied for mesh-to-voxel registration. Personalized VPH models include anatomical as well as mechanical properties of the tissues. Two commercial VPH models (Zygote and Anatomium) were used together with 34 MRI data sets. Results are presented for the skin surface and pelvic bones. Future work will extend the registration procedure to cope with all model tissues (i.e., skin, muscle, bone, vessel, nerve, fascia) in a one-step procedure and to extrapolate the personalized models to body regions outside the captured field of view.
Estimation of line-based target registration error
We present a novel method for estimating target registration error (TRE) in point-to-line registration. We develop a spatial stiffness model of the registration problem and derive the stiffness matrix of the model which leads to an analytic expression for predicting the root-mean-square (RMS) TRE. Under the assumption of isotropic localization noise, we show that the stiffness matrix for line-based registration is equal to the difference of the stiffness matrices for fiducial registration and surface-based registration. The expression for TRE is validated in the context of freehand ultrasound calibration performed using a tracked line fiducial as a calibration phantom. Measurements taken during calibration of a tracked linear ultrasound probe were used in simulations to assess TRE of point-to-line registration and the results were compared to the values predicted by the analytic expression. The difference between predicted and simulated RMS TRE magnitude for targets near the centroid of the registration points was less than 5% of the simulated magnitude when using more than 6 registration points. The difference between predicted and simulated RMS TRE magnitude for targets over the entire ultrasound image was almost always less than 10% of the simulated magnitude when using more than 10 registration points. TRE magnitude was minimized near the centroid of the registration points and the isocontours of TRE were elliptic in shape.
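The residual that point-to-line registration minimises is the perpendicular offset from each point to its corresponding line. As a small illustration of that building block (the spatial-stiffness TRE model itself is not reproduced here):

```python
import numpy as np

def point_to_line_residual(p, line_point, line_dir):
    """Perpendicular residual vector from point p to the infinite line
    through `line_point` with direction `line_dir` — the quantity a
    point-to-line registration drives toward zero."""
    d = line_dir / np.linalg.norm(line_dir)
    v = p - line_point
    return v - (v @ d) * d      # remove the along-line component
```

The RMS TRE prediction in the paper is then a statement about how such residuals, under isotropic localization noise, propagate to a target point; the abstract's observation that TRE is minimized near the centroid of the registration points matches the behaviour of fiducial-based registration.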
A MRI-CT prostate registration using sparse representation technique
Xiaofeng Yang, Ashesh B. Jani, Peter J. Rossi, et al.
Purpose: To develop a new MRI-CT prostate registration using patch-based deformation prediction framework to improve MRI-guided prostate radiotherapy by incorporating multiparametric MRI into planning CT images.

Methods: The main contribution is to estimate the deformation between prostate MRI and CT images in a patch-wise fashion using the sparse representation technique. We assume that two image patches should follow the same deformation if their patch-wise appearance patterns are similar. Specifically, there are two stages in our proposed framework, i.e., the training stage and the application stage. In the training stage, each prostate MR image is carefully registered to the corresponding CT image, and all training MR and CT images are carefully registered to a selected CT template. Thus, we obtain a dense deformation field for each training MR and CT image. In the application stage, to register a new subject MR image with the same subject's CT image, we first select a small number of key points at distinctive regions of the subject CT image. For each key point in the subject CT image, we extract the image patch centered at that key point. We then adaptively construct a coupled dictionary for the key point, where each atom in the dictionary consists of an image patch and the respective deformation obtained from the training pair-wise MRI-CT images. Next, the subject image patch is sparsely represented as a linear combination of training image patches in the dictionary, and we apply the same sparse coefficients to the respective deformations in the dictionary to predict the deformation for the subject MR image patch. After repeating the same procedure for each subject CT key point, we use B-splines to interpolate a dense deformation field, which is used as the initialization to allow the registration algorithm to estimate the remaining small deformations from MRI to CT.
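The coefficient-transfer step at the heart of the application stage can be sketched as follows. Plain least squares stands in for the sparse coding solver, and all array shapes and names are illustrative:

```python
import numpy as np

def predict_deformation(patch, dict_patches, dict_deformations):
    """Represent the subject patch as a linear combination of the
    dictionary's appearance atoms, then transfer the same coefficients
    to the paired deformation atoms. Least squares stands in for the
    paper's sparse coding step."""
    A = np.stack(dict_patches, axis=1)            # features x atoms
    coeff, *_ = np.linalg.lstsq(A, patch, rcond=None)
    return np.stack(dict_deformations, axis=1) @ coeff
```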

Results: Our MRI-CT registration technique was validated with a clinical study of 10 patients. The accuracy of our approach was assessed using identified landmarks in both the MRI and CT images. Our proposed registration was compared with the current free-form deformation (FFD)-based registration method. The accuracy of the proposed method was significantly higher than that of the commonly used FFD-based registration utilizing normalized mutual information (NMI).

Conclusions: We have developed a new prostate MR-CT registration approach based on a patch-deformation dictionary, demonstrated its clinical feasibility, and validated its accuracy with identified landmarks. The proposed registration method may provide an accurate and robust means of estimating prostate-gland deformation between MRI and CT scans, and is therefore well suited for MR-targeted, CT-based prostate radiotherapy.
Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery
Soyoung Chung, Joojin Kim, Helen Hong
During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images with 3D photographic images is difficult because regions around the eyes and mouth are affected by facial expressions, and because registration is slow on the dense point clouds of the surfaces. We therefore propose a framework for the fusion of facial CBCT images and 3D photographs using skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for registration with the 3D photographic surface, the skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and orientation of the CBCT skin surface and the 3D photographic surface, point-based registration is performed using four corresponding landmarks located around the mouth. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weighted surface registration is performed within a narrow band of the 3D photographic surface.
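The rough-alignment stage (point-based registration of four landmarks, recovering scale and orientation) can be sketched with a standard Umeyama similarity fit. This is a generic illustration under stated assumptions, not the authors' code:

```python
import numpy as np

def landmark_similarity_transform(src, dst):
    """Scale + rotation + translation aligning src landmarks to dst (Umeyama).

    src, dst : (n, 3) corresponding landmark coordinates.
    Returns (scale, R, t) such that dst ~= scale * R @ src + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance and its SVD give the optimal rotation.
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])   # guard against reflection
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t
```

With four well-spread landmarks this closed-form fit gives the coarse pose that the subsequent Gaussian-weighted surface registration then refines.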
Image updating for brain deformation compensation in tumor resection
Xiaoyao Fan, Songbai Ji, Jonathan D. Olson, et al.
Preoperative magnetic resonance images (pMR) are typically used for intraoperative guidance in image-guided neurosurgery, the accuracy of which can be significantly compromised by brain deformation. Biomechanical finite element models (FEM) have been developed to estimate whole-brain deformation and produce model-updated MR (uMR) that compensates for brain deformation at different surgical stages. Early stages of surgery, such as after craniotomy and after dural opening, have been well studied, whereas later stages after tumor resection begins remain challenging. In this paper, we present a method to simulate tumor resection by incorporating data from intraoperative stereovision (iSV). The amount of tissue resection was estimated from iSV using a "trial-and-error" approach, and the cortical shift was measured from iSV through a surface registration method using projected images and an optical flow (OF) motion tracking algorithm. The measured displacements were employed to drive the biomechanical brain deformation model, and the estimated whole-brain deformation was subsequently used to deform pMR and produce uMR. We illustrate the method using one patient example. The results show that the uMR aligned well with iSV and the overall misfit between model estimates and measured displacements was 1.46 mm. The overall computational time was ~5 min, including iSV image acquisition after resection, surface registration, modeling, and image warping, with minimal interruption to the surgical flow. Furthermore, we compare uMR against intraoperative MR (iMR) that was acquired following iSV acquisition.
A fully automatic image-to-world registration method for image-guided procedure with intraoperative imaging updates
Senhu Li, David Sarment
Image-guided procedures with intraoperative imaging updates have made a big impact on minimally invasive surgery. A compact, mobile CT imaging device combined with a currently available commercial image-guided navigation system is a legitimate and cost-efficient solution for a typical operating room setup. However, the process of manual fiducial-based registration between image and physical spaces (image-to-world) is troublesome for surgeons during the procedure; it causes frequent procedure interruptions and is the main source of registration error. In this study, we developed a novel method to eliminate the manual registration process. Instead of using a probe to manually localize the fiducials during surgery, a tracking plate with known fiducial positions relative to the reference coordinates was designed and fabricated by 3D printing. The workflow and feasibility of this method were studied through a phantom experiment.
Optimal atlas construction through hierarchical image registration
George J. Grevera, Jayaram K. Udupa, Dewey Odhner, et al.
Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Other criteria besides gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows for the addition of new subjects to the atlas, allowing it to evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis) to separate it into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region and by comparing it to a number of traditional methods, using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation for rigid registration. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
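The minimum-spanning-tree construction over a pairwise image-dissimilarity matrix can be sketched with Prim's algorithm. This is illustrative only, not the authors' software; the function name and the use of a dense dissimilarity matrix are assumptions:

```python
import numpy as np

def minimum_spanning_tree(dissimilarity):
    """Prim's algorithm on a pairwise image-dissimilarity matrix.

    dissimilarity : (n, n) symmetric matrix, e.g. mean squared difference
                    between every pair of registered subject images.
    Returns a list of (i, j) edges forming the MST linking all subjects.
    """
    n = len(dissimilarity)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Cheapest edge crossing the cut between tree and non-tree nodes.
        best = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: dissimilarity[e[0], e[1]])
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

Cutting this tree at its heaviest edges is one natural way to split an evolving atlas into the "sub-atlases" the abstract mentions.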
Single slice US-MRI registration for neurosurgical MRI-guided US
Utsav Pardasani, John S. H. Baxter, Terry M. Peters, et al.
Image-based ultrasound to magnetic resonance image (US-MRI) registration can be an invaluable tool in image-guided neuronavigation systems. State-of-the-art commercial and research systems utilize image-based registration to assist in functions such as brain-shift correction, image fusion, and probe calibration.

Since traditional US-MRI registration techniques use reconstructed US volumes or a series of tracked US slices, the functionality of this approach can be compromised by the limitations of optical or magnetic tracking systems in the neurosurgical operating room. These drawbacks include ergonomic issues, line-of-sight/magnetic interference, and maintenance of the sterile field. For those seeking a US vendor-agnostic system, these issues are compounded by the challenge of instrumenting the probe without permanent modification and of calibrating the probe face to the tracking tool.

To address these challenges, this paper explores the feasibility of real-time US-MRI volume registration in a small virtual craniotomy site using a single slice. We employ the Linear Correlation of Linear Combination (LC2) similarity metric in its patch-based form on data from MNI's Brain Images for Tumour Evaluation (BITE) dataset, implemented as a PyCUDA-enabled Python module in Slicer. By retaining the original orientation information, we are able to improve upon the initial poses using this approach. To further address the challenge of US-MRI registration, we also present the BOXLC2 metric, which demonstrates a speed improvement over LC2 while retaining similar accuracy in this context.
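A patch-based LC2 evaluation can be sketched as an ordinary least-squares fit of the US patch against MRI intensity, MRI gradient magnitude, and a constant offset. This simplified CPU version omits the weighting across patches and the PyCUDA machinery of the paper; `patch_lc2` is a hypothetical name:

```python
import numpy as np

def patch_lc2(us_patch, mri_patch, mri_grad_patch, eps=1e-12):
    """LC2 for one patch: how well the US intensities are explained as a
    linear combination of MRI intensity, MRI gradient, and an offset."""
    y = us_patch.ravel().astype(float)
    M = np.column_stack([mri_patch.ravel(),
                         mri_grad_patch.ravel(),
                         np.ones(y.size)])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    var_y = y.var()
    if var_y < eps:                 # homogeneous patch carries no information
        return 0.0
    # Coefficient of determination of the local linear fit.
    return 1.0 - ((y - M @ coef) ** 2).mean() / var_y
```

The full metric aggregates these per-patch values, variance-weighted, over the overlap region; BOXLC2, as described, trades some of that bookkeeping for speed.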
Rapidly-steered single-element ultrasound for real-time volumetric imaging and guidance
Mark Stauber, Craig Western, Roman Solek, et al.
Volumetric ultrasound (US) imaging has the potential to provide real-time anatomical imaging with high soft-tissue contrast in a variety of diagnostic and therapeutic guidance applications. However, existing volumetric US machines utilize "wobbling" linear phased array or matrix phased array transducers, which are costly to manufacture and necessitate bulky external processing units. To drastically reduce cost, improve portability, and reduce footprint, we propose a rapidly-steered single-element volumetric US imaging system. In this paper we explore the feasibility of this system with a proof-of-concept single-element volumetric US imaging device. The device uses a multi-directional raster-scan technique to generate a series of two-dimensional (2D) slices that were reconstructed into three-dimensional (3D) volumes. At 15 cm depth, 90° lateral field of view (FOV), and 20° elevation FOV, the device produced 20-slice volumes at a rate of 0.8 Hz. Imaging performance was evaluated using an US phantom. Spatial resolution was 2.0 mm, 4.7 mm, and 5.0 mm in the axial, lateral, and elevational directions at 7.5 cm. Relative motion of phantom targets was automatically tracked within US volumes with a mean error of -0.3±0.3 mm, -0.3±0.3 mm, and -0.1±0.5 mm in the axial, lateral, and elevational directions, respectively. The device exhibited a mean spatial distortion error of 0.3±0.9 mm, 0.4±0.7 mm, and -0.3±1.9 mm in the axial, lateral, and elevational directions. With a production cost near $1000, the performance characteristics of the proposed system make it an ideal candidate for diagnostic and image-guided therapy applications where form factor and low cost are paramount.
Investigation of permanent magnets in low-cost position tracking
Ryan Anderson, Andras Lasso, Keyvan Hashtrudi-Zaad, et al.
PURPOSE: Low cost portable ultrasound systems could see improved utility if similarly low cost portable trackers were developed. Permanent-magnet-based tracking systems potentially offer adequate tracking accuracy in a small workspace suitable for ultrasound image reconstruction. In this study, simple permanent magnet tracking techniques are investigated to determine their feasibility for use in an ultrasound tracking system. METHODS: Permanent magnet tracking requires finding the position input to a field model that minimizes the error between the measured field and the field expected from the model. A simulator was developed in MATLAB to determine the effect of sources of error in permanent magnet tracking systems. Insights from the simulations were used to develop a calibration and tracking experiment to determine the accuracy of a simple, low-cost permanent magnet tracking system. RESULTS: Simulation and experimental results show permanent-magnet-based tracking to be highly sensitive to errors in sensor measurements, calibration, and experimental setup. The field strength of a permanent magnet falls with the cube of distance, which leads to very poor signal-to-noise ratios at distances above 20 cm. Small errors in experimental setup also led to high tracking error. CONCLUSION: Permanent magnet tracking was found to be less accurate than is clinically useful and highly sensitive to errors in sensors and experimental setup. Sensor and calibration limitations make simple permanent magnet tracking systems a poor choice given the present state of sensor technology.
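The field model underlying such a tracker is commonly the point-dipole approximation, which makes the cube-law falloff explicit. Below is a generic sketch (not the authors' MATLAB simulator); the function name is an assumption:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(moment, sensor_pos, magnet_pos):
    """Magnetic flux density (T) of a point dipole at a sensor position.

    moment     : (3,) dipole moment vector (A*m^2)
    sensor_pos : (3,) sensor location (m)
    magnet_pos : (3,) magnet location (m)
    """
    r = np.asarray(sensor_pos, float) - np.asarray(magnet_pos, float)
    d = np.linalg.norm(r)
    rhat = r / d
    m = np.asarray(moment, float)
    # Standard dipole formula; note the 1/d**3 term driving the SNR falloff.
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / d**3
```

Doubling the sensor-magnet distance reduces the field by a factor of eight, which is why the paper reports poor signal-to-noise ratios beyond 20 cm; tracking then amounts to least-squares fitting the magnet pose so this model matches the sensor readings.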
Image-guided endobronchial ultrasound
William E. Higgins, Xiaonan Zang, Ronnarit Cheirsilp, et al.
Endobronchial ultrasound (EBUS) is now recommended as a standard procedure for in vivo verification of extraluminal diagnostic sites during cancer-staging bronchoscopy. Yet, physicians vary considerably in their skills at using EBUS effectively. Regarding existing bronchoscopy guidance systems, studies have shown their effectiveness in the lung-cancer management process. With such a system, a patient's X-ray computed tomography (CT) scan is used to plan a procedure to regions of interest (ROIs). This plan is then used during follow-on guided bronchoscopy. Recent clinical guidelines for lung cancer, however, also dictate using positron emission tomography (PET) imaging for identifying suspicious ROIs and aiding in the cancer-staging process. While researchers have attempted to use guided bronchoscopy systems in tandem with PET imaging and EBUS, no true EBUS-centric guidance system exists. We now propose a full multimodal image-based methodology for guiding EBUS. The complete methodology involves two components: 1) a procedure planning protocol that gives bronchoscope movements appropriate for live EBUS positioning; and 2) a guidance strategy and associated system graphical user interface (GUI) designed for image-guided EBUS. We present results demonstrating the operation of the system.
A motorized ultrasound system for MRI-ultrasound fusion guided prostatectomy
Reza Seifabadi, Sheng Xu, Peter Pinto, et al.
Purpose: This study presents MoTRUS, a motorized transrectal ultrasound system that enables remote navigation of a transrectal ultrasound (TRUS) probe during da Vinci-assisted prostatectomy. MoTRUS not only provides a stable platform for the ultrasound probe, but also allows the physician to navigate it remotely while seated at the da Vinci console. This study also presents a phantom feasibility study of intraoperative MRI-US image fusion, which brings preoperative MR images into the operating room for better visualization of the gland, its boundaries, nerves, etc. Method: A two degree-of-freedom probe holder was developed to insert and rotate a bi-plane transrectal ultrasound transducer. A custom joystick enables remote navigation of MoTRUS. Safety features were included to avoid inadvertent risks to the patient. Custom software was developed to fuse preoperative MR images with intraoperative ultrasound images acquired by MoTRUS. Results: Remote TRUS probe navigation was evaluated on a patient during prostatectomy after obtaining the required consent. It took 10 min to set up the system in the OR. MoTRUS provided imaging capability similar to manual probe handling, with the added benefits of remote navigation and stable imaging. No complications were observed. Image fusion was evaluated on a commercial prostate phantom using electromagnetic tracking. Conclusions: Motorized navigation of the TRUS probe during prostatectomy is safe and feasible. Remote navigation gives the physician more precise and easier control of the ultrasound image while removing the burden of manual probe manipulation. Image fusion improved visualization of the prostate and its boundaries in a phantom study.
Visualization of hepatic arteries with 3D ultrasound during intra-arterial therapies
Maxime Gérard, An Tang, Anaïs Badoual, et al.
Liver cancer represents the second most common cause of cancer-related mortality worldwide. The prognosis is poor, with an overall mortality of 95%. Moreover, most hepatic tumors are unresectable due to their advanced stage at discovery or poor underlying liver function. Tumor embolization by intra-arterial approaches is the current standard of care for advanced cases of hepatocellular carcinoma. These therapies rely on the fact that the blood supply of primary hepatic tumors is predominantly arterial. Feedback on blood flow velocities in the hepatic arteries is crucial to ensure maximal treatment efficacy on the targeted masses. Based on these velocities, the intra-arterial injection rate is modulated for optimal infusion of the chemotherapeutic drugs into the tumorous tissue. While Doppler ultrasound is a well-documented technique for the assessment of blood flow, 3D visualization of vascular anatomy with ultrasound remains challenging. In this paper we present an image-guidance pipeline that enables the localization of the hepatic arterial branches within a 3D ultrasound image of the liver. A diagnostic magnetic resonance angiography (MRA) is first processed to automatically segment the hepatic arteries. A non-rigid registration method is then applied between the portal phase of the MRA volume and a 3D ultrasound volume to enable visualization of the 3D mesh of the hepatic arteries in the Doppler images. To evaluate the performance of the proposed workflow, we present initial results from porcine models and patient images.
3D shape tracking of minimally invasive medical instruments using optical frequency domain reflectometry
Francois Parent, Koushik Kanti Mandal, Sebastien Loranger, et al.
We propose here a new alternative for real-time device tracking during minimally invasive interventions: a truly distributed strain sensor based on optical frequency domain reflectometry (OFDR) in optical fibers. The guidance of minimally invasive medical instruments such as needles or catheters (e.g., by adding a piezoelectric coating) has been the focus of extensive research in the past decades. Real-time tracking of instruments in medical interventions facilitates image guidance and helps the user reach a pre-localized target more precisely. Image-guided systems using ultrasound imaging and shape sensors based on fiber Bragg grating (FBG)-embedded optical fibers can provide feedback to the user in order to reach the targeted areas with even more precision. However, ultrasound imaging with electromagnetic tracking cannot be used in the magnetic resonance imaging (MRI) suite, while FBG-based shape sensors provide only discrete values of the instrument position, requiring approximations to evaluate its global shape. This is why a truly distributed strain sensor based on OFDR could enhance tracking accuracy. In both cases, since the strain is proportional to the local curvature of the fiber, a strain sensor can provide the three-dimensional shape of medical instruments by simply inserting fibers inside the devices. To faithfully follow the shape of the needle in the tracking frame, three fibers glued in a specific geometry are used, providing three degrees of freedom along the fiber. Near real-time tracking of medical instruments is thus obtained, offering clear advantages for clinical monitoring in remotely controlled catheter or needle guidance. We present results demonstrating the promising aspects of this approach as well as the limitations of the OFDR technique.
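With three fibers at known angular positions around the instrument axis, the strains at one sensing point determine the local curvature and bend direction. A hedged sketch of that inversion follows; the function name and the common-mode axial-strain term are assumptions, not details from the paper:

```python
import numpy as np

def curvature_from_strains(strains, fiber_angles, radius):
    """Recover bend curvature (1/m) and bend direction (rad) at one sensing
    point from the strains of three fibers placed around the instrument axis.

    Model: eps_i = radius * kappa * cos(theta_i - phi) + eps_axial,
    where eps_axial is a common-mode (temperature/tension) term.
    """
    th = np.asarray(fiber_angles, float)
    # eps_i = a*cos(theta_i) + b*sin(theta_i) + c, linear in (a, b, c).
    A = np.column_stack([np.cos(th), np.sin(th), np.ones_like(th)])
    a, b, _ = np.linalg.solve(A, np.asarray(strains, float))
    kappa = np.hypot(a, b) / radius
    phi = np.arctan2(b, a)
    return kappa, phi
```

Integrating curvature and bend direction along the fiber (e.g., with Frenet-Serret updates) then yields the 3D instrument shape; the distributed OFDR readout supplies these strain triplets densely along the length.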
Measurement of electromagnetic tracking error in a navigated breast surgery setup
Vinyas Harish, Aidan Baksh, Tamas Ungi, et al.
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup.

METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth.

RESULTS: Our system is quick to set up and can be rapidly deployed; the process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery unit, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree.

CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error defines a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
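Given a time-synchronized pair of poses for the same pointer, the positional and rotational errors reported above can be computed as below. This is a generic sketch; the actual system also involves a calibration chain between the two tracker coordinate frames:

```python
import numpy as np

def tracking_errors(T_em, T_opt):
    """Positional and rotational error between a 4x4 EM pose and the
    ground-truth optical pose of the same pointer (same coordinate frame).

    Returns (positional error in the pose units, rotational error in deg).
    """
    # Positional error: Euclidean distance between the two origins.
    dp = np.linalg.norm(T_em[:3, 3] - T_opt[:3, 3])
    # Rotational error: angle of the relative rotation between the poses.
    R_rel = T_em[:3, :3].T @ T_opt[:3, :3]
    angle = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1) / 2, -1, 1)))
    return dp, angle
```

Evaluating these two numbers over a grid of pointer positions is what lets the system paint a "safe working area" map around the field generator.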
Image-guided navigation surgery for pelvic malignancies using electromagnetic tracking
Jasper Nijkamp, Koert Kuhlmann, Jan-Jakob Sonke, et al.
The purpose of this study was to implement and evaluate a surgical navigation system for pelvic malignancies.

For tracking, an NDI Aurora tabletop field generator and in-house-developed navigation software were used. For patient tracking, three EM-sensor stickers were used: one on the back and two on the superior iliac spines. During surgery, a trackable pointer was used. One day before surgery, a CT scan was acquired with the stickers in place and marked. From the CT scan, the EM sensors, tumor, and normal structures were segmented. During surgery, accuracy was independently checked by pointing at the aorta bifurcation and the common iliac artery bifurcations. Subsequently, the system was used to localize the ureters and the tumor.

Seven patients were included: three rectal tumors with lymph-node involvement, three lymph-node recurrences, and one rectal recurrence. The average external marker registration accuracy was 0.75 cm RMSE (range 0.31-1.58 cm). The average distance between the pointer and the arterial bifurcations was 1.55 cm (1 SD = 0.63 cm). We were able to localize and confirm the location of all ureters. Twelve of thirteen lymph nodes were localized and removed. All tumors were removed radically. In all cases the surgeons indicated that the system aided in better anatomical insight and faster localization of malignant tissue and ureters. In 2/7 cases surgeons indicated that radical resection was only possible with navigation.

The navigation accuracy was limited due to the use of skin markers. Nevertheless, preliminary results indicated potential clinical benefit due to better utilization of pre-treatment 3D imaging information.
Feasibility of tracked electrodes for use in epilepsy surgery
David Holmes III, Benjamin Brinkmann, Dennis Hanson, et al.
Subdural electrode recording is commonly used to evaluate intractable epilepsy. In order to accurately record the electrical activity responsible for seizures, electrodes must be positioned precisely near targets of interest, often indicated preoperatively through imaging studies. To achieve accurate placement, a large craniotomy is used to expose the brain surface. With the intent of limiting the size and improving the location of the craniotomy for electrode placement, we examined magnetic tracking for localization of electrode strips. Commercially available electrode strips were attached to specialized magnetic tracking sensors developed by Medtronic plc. In a rigid phantom, we evaluated the strips to determine the accuracy of electrode placement on targets. We further conducted an animal study to evaluate the impact of magnetic field interference during data collection. The measured distance between the physical fiducial and the lead coil of the electrode strip was 1.32 ± 1.03 mm in the phantom experiments. The tracking system induces a very strong signal in the electrodes in the Very Low Frequency band (3 kHz to 30 kHz), an International Telecommunication Union (ITU) designated frequency band. The results of the animal experiment demonstrated the feasibility of both tracking and data collection.
4D cone-beam CT imaging for guidance in radiation therapy: setup verification by use of implanted fiducial markers
Peng Jin, Niek van Wieringen, Maarten C. C. M. Hulshof, et al.
The use of 4D cone-beam computed tomography (CBCT) and fiducial markers for guidance during radiation therapy of mobile tumors is challenging due to the trade-off between image quality, imaging dose, and scanning time. We aimed to investigate the visibility of markers and the feasibility of marker-based 4D registration and manual respiration-induced marker motion quantification for different CBCT acquisition settings. A dynamic thorax phantom and a patient with implanted gold markers were included. For both the phantom and patient, the peak-to-peak amplitude of marker motion in the cranial-caudal direction ranged from 5.3 to 14.0 mm, which did not affect the marker visibility and the associated marker-based registration feasibility. While using a medium field of view (FOV) and the same total imaging dose as is applied for 3D CBCT scanning in our clinic, it was feasible to attain an improved marker visibility by reducing the imaging dose per projection and increasing the number of projection images. For a small FOV with a shorter rotation arc but similar total imaging dose, streak artifacts were reduced due to using a smaller sampling angle. Additionally, the use of a small FOV allowed reducing total imaging dose and scanning time (~2.5 min) without losing the marker visibility. In conclusion, by using 4D CBCT with identical or lower imaging dose and a reduced gantry speed, it is feasible to attain sufficient marker visibility for marker-based 4D setup verification. Moreover, regardless of the settings, manual marker motion quantification can achieve a high accuracy with the error <1.2 mm.
Effects of voxelization on dose volume histogram accuracy
Kyle Sunderland, Csaba Pinter, Andras Lasso, et al.
PURPOSE: In radiotherapy treatment planning systems, structures of interest such as targets and organs at risk are stored as 2D contours on evenly spaced planes. In order to be used in various algorithms, contours must be converted into binary labelmap volumes using voxelization. The voxelization process results in lost information, which has little effect on the volume of large structures but has significant impact on small structures, which contain few voxels. Volume differences for segmented structures affect metrics such as dose volume histograms (DVH), which are used for treatment planning. Our goal is to evaluate the impact of voxelization on segmented structures, as well as how factors such as voxel size affect metrics such as the DVH.

METHODS: We create a series of implicit functions, which represent simulated structures. These structures are sampled at varying resolutions, and compared to labelmaps with high sub-millimeter resolutions. We generate DVH and evaluate voxelization error for the same structures at different resolutions by calculating the agreement acceptance percentage between the DVH.

RESULTS: We implemented tools for analysis as modules in the SlicerRT toolkit based on the 3D Slicer platform. We found large DVH variations from the baseline for small structures and for structures located in regions with a high dose gradient, potentially leading to the creation of suboptimal treatment plans.

CONCLUSION: This work demonstrates that labelmap and dose volume voxel size is an important factor in DVH accuracy, which must be accounted for in order to ensure the development of accurate treatment plans.
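A cumulative DVH from a dose grid and a binary labelmap can be sketched as follows (illustrative only; clinical systems bin, interpolate, and handle partial voxels more carefully, and the voxelization of the contours is precisely what this paper shows to matter):

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative DVH: % of a structure's voxels receiving >= each dose level.

    dose : dose grid (any shape), mask : boolean labelmap of the structure.
    Returns (dose_edges, volume_percent) arrays.
    """
    d = np.asarray(dose, float)[np.asarray(mask, bool)]
    edges = np.arange(0.0, d.max() + 2 * bin_width, bin_width)
    # Fraction of structure voxels at or above each dose threshold.
    volume_pct = np.asarray([(d >= e).mean() * 100.0 for e in edges])
    return edges, volume_pct
```

Because every voxel carries the same weight, a one-voxel change in a small structure's labelmap shifts the whole curve noticeably, which is the sensitivity the study quantifies.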
Partition-based acquisition model for speeding up navigated beta-probe surface imaging
Although gross total resection in low-grade glioma surgery leads to a better patient outcome, in-vivo control of resection borders remains challenging. For this purpose, navigated beta-probe systems combined with 18F-based radiotracers, relying on activity distribution surface estimation, have been proposed to generate reconstructed images. Their clinical relevance has been outlined by early studies in which intraoperative functional information is leveraged, albeit with low spatial resolution in the reconstruction. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix modeling the physics of radiation detection. Yet they require high computational power for efficient intraoperative use. To address this problem, we propose a new acquisition model called the Partition Model (PM), built on an existing model in which the coefficients of the matrix are taken from a look-up table (LUT). Our model is based upon the division of the LUT into averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets in which tumors and peri-tumoral tissues were simulated, and compared it with the off-the-shelf LUT model and the raw method. The acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B) but were difficult to use in real time. Both acquisition models reached the same detection performance against the references (0.8 mean AUC and 0.77 mean NCC), where PM slightly improves the mean tumor contrast, up to 10.1:1 vs. 9.9:1 with the LUT model, and, more importantly, reduces the mean computation time by 7.5%. Our model offers a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
Stent enhancement in digital x-ray fluoroscopy using an adaptive feature enhancement filter
Yuhao Jiang, Josey Zachary
Fluoroscopic images are characterized by low contrast and high noise; simply lowering the radiation dose renders them unreadable. Feature enhancement filters can reduce patient dose by acquiring images at low dose settings and then digitally restoring them to their original quality. In this study, a stent contrast enhancement filter is developed to selectively improve the contrast of the stent contour without dramatically boosting image noise, including quantum noise and clinical background noise. Gabor directional filter banks are implemented to detect the edges and orientations of the stent, using a high orientation resolution of 9°. To optimize the use of the information obtained from the Gabor filters, a computerized Monte Carlo simulation followed by an ROC study is used to find the best nonlinear operator. The next stage of the filtering process extracts symmetrical parts of the stent, using global and local symmetry measures. The information gathered from these two filter stages is used to generate a stent contour map, which is then scaled and added back to the original image to obtain a contrast-enhanced stent image. We also apply a spatio-temporal channelized Hotelling observer model and other numerical measures to characterize the response of the filters and contour map, in order to optimize parameter selection for image quality. The results are compared to those from an adaptive unsharp masking filter developed previously. It is shown that the stent enhancement filter can effectively improve stent detection and differentiation in interventional fluoroscopy.
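The directional filter bank can be sketched with real-valued Gabor kernels stepped every 9°, matching the orientation resolution quoted above. This is a generic construction, not the authors' filter design; the kernel size, sigma, and wavelength are assumed values:

```python
import numpy as np

def gabor_kernel(theta, ksize=21, sigma=3.0, wavelength=6.0):
    """Real-valued Gabor kernel tuned to edges at orientation theta (rad)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier wave runs along theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# One kernel every 9 degrees covers 0-180 degrees with 20 orientations.
bank = [gabor_kernel(np.radians(a)) for a in range(0, 180, 9)]
```

Convolving the image with each kernel and taking the per-pixel maximum response (and its argmax orientation) yields the edge/orientation maps that feed the later symmetry and contour-map stages.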
Evaluation of left ventricular scar identification from contrast enhanced magnetic resonance imaging for guidance of ventricular catheter ablation therapy
M. E. Rettmann, H. I. Lehmann, S. B. Johnson, et al.
Patients with ventricular arrhythmias typically exhibit myocardial scarring, which is believed to be an important anatomic substrate for reentrant circuits, thereby making these regions a key target in catheter ablation therapy. In ablation therapy, a catheter is guided into the left ventricle and radiofrequency energy is delivered into the tissue to interrupt arrhythmic electrical pathways. Low bipolar voltage regions are typically localized during the procedure through point-by-point construction of an electroanatomic map by sampling the endocardial surface with the ablation catheter, and are used as a surrogate for myocardial scar. This process is time consuming, requires significant skill, and has the potential to miss low voltage sites. This has led to efforts to quantify myocardial scar preoperatively using delayed, contrast-enhanced MRI. In this paper, we evaluate the utility of left ventricular scar identification from delayed contrast enhanced magnetic resonance imaging for guidance of catheter ablation of ventricular arrhythmias. Myocardial infarcts were created in three canines, followed by a delayed, contrast-enhanced MRI scan and electroanatomic mapping. The left ventricle and myocardial scar are segmented from preoperative MRI images, and sampled points from the procedural electroanatomical map are registered to the segmented endocardial surface. Sampled points with low bipolar voltage visually align with the segmented scar regions. This work demonstrates the potential utility of using preoperative delayed, enhanced MRI to identify myocardial scarring for guidance of ventricular catheter ablation therapy.
Interactive visualization for scar transmurality in cardiac resynchronization therapy
Sabrina Reiml, Daniel Toth, Maria Panayiotou, et al.
Heart failure is a serious disease affecting about 23 million people worldwide. Cardiac resynchronization therapy is used to treat patients suffering from symptomatic heart failure; however, 30% to 50% of patients have limited clinical benefit. One of the main causes is suboptimal placement of the left ventricular lead. Pacing in areas of myocardial scar correlates with poor clinical outcomes. Therefore, precise knowledge of the individual patient’s scar characteristics is critical for delivering tailored treatments capable of improving response rates. Current research methods for scar assessment either map information to an alternative, non-anatomical coordinate system, or they use the image coordinate system but lose critical information about scar extent and distribution. This paper proposes two interactive methods for visualizing relevant scar information: a 2-D slice-based approach with a scar mask overlaid on a 16-segment heart model, and a 3-D layered mesh visualization that allows physicians to scroll through layers of scar from endocardium to epicardium. These complementary methods enable physicians to evaluate scar location and transmurality during planning and guidance. Six physicians evaluated the proposed system by identifying target regions for lead placement; with the proposed methods, more target regions could be identified.
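For the 2-D slice-based view, scar samples must be binned into the 16-segment heart model. The following is a minimal sketch of one way such a mapping could work, assuming the standard AHA layout (six basal, six mid, and four apical segments, partitioned by circumferential angle); the helper names and the per-segment scar-burden summary are illustrative, not the paper's implementation.

```python
import numpy as np

def aha16_segment(level, angle_deg):
    """Map a mid-wall sample to a 16-segment model index (1..16).
    level: 0=basal, 1=mid, 2=apical; angle measured circumferentially.
    Segment numbering follows the standard AHA convention (assumption)."""
    if level == 0:                                  # basal: 6 x 60 deg
        return 1 + int(angle_deg % 360 // 60)
    if level == 1:                                  # mid: 6 x 60 deg
        return 7 + int(angle_deg % 360 // 60)
    return 13 + int(angle_deg % 360 // 90)          # apical: 4 x 90 deg

def segment_scar_burden(levels, angles, scar_mask):
    """Fraction of scar-positive samples per segment (index 1..16 used)."""
    burden, counts = np.zeros(17), np.zeros(17)
    for lv, an, s in zip(levels, angles, scar_mask):
        seg = aha16_segment(lv, an)
        counts[seg] += 1
        burden[seg] += s
    return np.divide(burden, counts,
                     out=np.zeros_like(burden), where=counts > 0)
```

A per-segment burden vector like this is what gets rendered as the scar mask over the 16-segment bull's-eye display.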
A robust automated left ventricle region of interest localization technique using a cardiac cine MRI atlas
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis, or incomplete visualization or identification of the surgical targets when employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short axis left ventricle slice images, with a small training set generated from less than 10% of the population. This approach is based on histogram of oriented gradients features weighted by local intensities to first identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. Using feature vector correlation techniques, this region is then matched to the homologous region of the training dataset that best corresponds to the test image. Lastly, the optimal left ventricle region of interest of the test image is identified based on the known ground truth segmentations associated with the training dataset deemed closest to the test image.
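The feature-matching step can be sketched compactly: an intensity-weighted orientation histogram serves as a frame descriptor, and the training image whose descriptor correlates best with the test descriptor is selected. This is a simplified global-histogram stand-in, assuming a plain magnitude-times-intensity weighting; the paper's actual HOG configuration (cell layout, weighting scheme) is not specified here.

```python
import numpy as np

def intensity_weighted_hog(image, n_bins=9):
    """Global gradient-orientation histogram with magnitudes scaled by
    local intensity (weighting scheme is an illustrative assumption)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy) * image          # intensity weighting
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def best_matching_atlas(test_feat, atlas_feats):
    """Index of the training descriptor with the highest correlation."""
    scores = [np.corrcoef(test_feat, f)[0, 1] for f in atlas_feats]
    return int(np.argmax(scores))
```

In the full method, the ground truth segmentation attached to the winning training image then seeds the final left ventricle region of interest.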
The proposed approach was tested on a population of 100 patient datasets and validated against ground truth regions of interest manually annotated by experts. The tool successfully identified a mask around the LV and RV, and furthermore the minimal region of interest that fully enclosed the left ventricle, in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 ± 0.09.
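The two validation metrics reported above can be written down directly: an overlap score between binary ROI masks, and a mean absolute nearest-point contour distance normalized by the ground-truth radius. The sketch below uses the Dice coefficient for the overlap, which is an assumption since the abstract does not name the overlap measure.

```python
import numpy as np

def dice_overlap(a, b):
    """Dice coefficient between two binary ROI masks (overlap measure
    assumed; the abstract does not name the exact metric)."""
    a, b = a.astype(bool), b.astype(bool)
    s = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / s if s else 1.0

def normalized_mad(contour_a, contour_b, radius):
    """Mean absolute nearest-point distance from contour_a (N x 2) to
    contour_b (M x 2), normalized by the ground-truth radius."""
    d2 = ((contour_a[:, None, :] - contour_b[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean() / radius
```

Normalizing by the ground-truth radius makes the distance error comparable across hearts of different sizes, matching the 0.20 ± 0.09 figure's unitless form.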
Classification of coronary artery tissues using optical coherence tomography imaging in Kawasaki disease
Atefeh Abdolmanafi, Arpan Suravi Prasad, Luc Duong, et al.
Intravascular imaging modalities such as Optical Coherence Tomography (OCT) now allow improved diagnosis, treatment, follow-up, and even prevention of coronary artery disease in adults. OCT has recently been used in children following Kawasaki disease (KD), the most prevalent acquired coronary artery disease of childhood, which can have devastating complications. The assessment of coronary artery layers with OCT and early detection of coronary sequelae secondary to KD is a promising tool for preventing myocardial infarction in this population. More importantly, OCT is promising for tissue quantification of the inner vessel wall, including neointimal luminal myofibroblast proliferation, calcification, and fibrous scar deposits. The goal of this study is to classify the coronary artery layers in OCT imaging obtained from a series of KD patients. Our approach develops a robust Random Forest classifier, built on the idea of randomly selecting a subset of features at each node, using second- and higher-order statistical texture analysis, which estimates the gray-level spatial distribution of images by specifying the local features of each pixel and extracting statistics from their distribution. The average classification accuracies for intima and media are 76.36% and 73.72%, respectively. A random forest classifier with texture analysis shows promise for classifying coronary artery tissue.
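The second-order texture analysis referred to above is typically realized with gray-level co-occurrence matrix (GLCM) statistics. The following is a minimal numpy sketch of a GLCM for a single pixel offset and three Haralick-style features (contrast, energy, homogeneity) that could feed a random forest; the offset, quantization level, and feature choice are illustrative assumptions, and the classifier itself is omitted.

```python
import numpy as np

def glcm(patch, dx=1, dy=0, levels=16):
    """Normalized gray-level co-occurrence matrix for one pixel offset.
    Assumes patch intensities lie in [0, 1]; offset/levels are
    illustrative choices."""
    q = np.minimum((patch * levels).astype(int), levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                             # offset neighbours
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def texture_features(patch):
    """Haralick-style contrast, energy, and homogeneity from the GLCM."""
    P = glcm(patch)
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])
```

Feature vectors of this kind, computed per pixel neighbourhood, are what a random forest with per-node feature subsampling would consume to label intima and media.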