Proceedings Volume 10135

Medical Imaging 2017: Image-Guided Procedures, Robotic Interventions, and Modeling

Robert J. Webster III, Baowei Fei

Volume Details

Date Published: 7 June 2017
Contents: 14 Sessions, 101 Papers, 48 Presentations
Conference: SPIE Medical Imaging 2017
Volume Number: 10135

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10135
  • Modeling Tissue Deformation
  • Registration
  • Neurosurgical Procedures
  • Spine Interventions
  • Cochlear Implantation
  • Keynote and Percutaneous Procedures
  • Optical Sensing
  • Novel Robots and Robotic Procedures
  • Cardiac Procedures
  • Joint Session with Conferences 10135 and 10139: Ultrasound Image Guidance
  • Anatomical Measurement and Respiratory Tracking
  • Segmentation
  • Poster Session
Front Matter: Volume 10135
Front Matter: Volume 10135
This PDF file contains the front matter associated with SPIE Proceedings Volume 10135, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Modeling Tissue Deformation
Towards quantitative quasi-static elastography with a gravity-induced deformation source
Rebekah H. Griesenauer, Jared A. Weis, Lori R. Arlinghaus, et al.
Biomechanical breast models have been employed for applications in image registration and analysis, breast augmentation simulation, and for surgical and biopsy guidance. Accurate characterization of the stress-strain relationships of tissue within the breast can improve the accuracy of biomechanical models that attempt to simulate breast movements. Reported stiffness values for adipose, glandular, and cancerous tissue types vary greatly. Variations in reported stiffness properties are mainly due to differences in testing methodologies and assumptions, measurement errors, and natural inter-patient differences in tissue elasticity. Therefore, patient-specific, in vivo determination of breast tissue properties is ideal for these procedural applications. Many in vivo elastography methods are not quantitative and/or do not measure material properties under deformation conditions representative of the procedure being simulated in the model. In this study, we developed an elasticity estimation method that is performed using deformations representative of supine therapeutic procedures. Material properties were reconstructed by iteratively fitting two anatomical images acquired before and after tissue stimulation. The proposed method is workflow-friendly, quantitative, and uses a non-contact, gravity-induced deformation source. We tested this material property optimization procedure in a healthy volunteer and in simulation. In simulation, we show that the algorithm can reconstruct properties with errors below 1% for adipose and 5.6% for glandular tissue regardless of the starting stiffness values used as initial guesses. In clinical data, reconstruction errors are higher (3.6% and 24.2%) due to increased noise in the system. In a clinical context, the elastography method shows promise for use in biomechanical model-assisted supine procedures.
Validation of model-based brain shift correction in neurosurgery via intraoperative magnetic resonance imaging: preliminary results
Ma Luo, Sarah F. Frisken, Jared A. Weis, et al.
The quality of brain tumor resection surgery depends on the spatial agreement between the preoperative image and intraoperative anatomy. However, brain shift compromises this alignment. Currently, the clinical standard for monitoring brain shift is intraoperative magnetic resonance (iMR) imaging. While iMR provides a better understanding of brain shift, its cost and encumbrance are considerations for medical centers. Hence, we are developing a model-based method that can serve as a complementary technology to address brain shift in standard resections, with resource-intensive cases referred to iMR facilities. Our strategy constructs a deformation ‘atlas’ containing potential deformation solutions derived from a biomechanical model that accounts for variables such as cerebrospinal fluid drainage and mannitol effects. Volumetric deformation is estimated with an inverse approach that determines the optimal combination of ‘atlas’ solutions to best match measured surface deformation. The preoperative image is then updated based on the computed deformation field. This study is the latest development in validating our methodology against iMR. Briefly, preoperative and intraoperative MR images of 2 patients were acquired. Homologous surface points were selected on preoperative and intraoperative scans as measurements of surface deformation and used to drive the inverse problem. To assess model accuracy, the subsurface shift of targets between preoperative and intraoperative states was measured and compared to the model prediction. Considering subsurface shift above 3 mm, the proposed strategy provides an average shift correction of 59% across the 2 cases. While further improvements in both the model and the ability to validate with iMR are desired, the reported results are encouraging.
Mapping 3D breast lesions from full-field digital mammograms using subject-specific finite element models
Patient-specific finite element (FE) models of the breast have received increasing attention due to their potential for fusing images from different modalities. During Magnetic Resonance Imaging (MRI) to X-ray mammography registration, the FE model is compressed to mimic the mammographic acquisition. Subsequently, suspicious lesions in the MRI volume can be projected into the 2D mammographic space. However, most registration algorithms do not provide the reverse mapping, preventing recovery of the 3D geometrical information of lesions localized in the mammograms. In this work we introduce a fast method to localize the 3D position of a lesion within the MRI, using both cranio-caudal (CC) and medio-lateral oblique (MLO) mammographic projections, by indexing the tetrahedral elements of the biomechanical model with a uniform grid. For each marked lesion in the Full-Field Digital Mammogram (FFDM), the X-ray path from source to marker is calculated. Barycentric coordinates are computed in the tetrahedra traversed by the ray. The list of elements and coordinates allows two curves to be localized within the MRI, and the closest point between the two curves is taken as the 3D position of the lesion. The registration errors obtained in the mammographic space are 9.89 ± 3.72 mm in the CC and 8.04 ± 4.68 mm in the MLO projection, and the error in the 3D MRI space is 10.29 ± 3.99 mm. The uniform grid itself is computed in 0.1 to 0.7 seconds, and the average time to compute the 3D location of a lesion is about 8 ms.
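The final step described above, taking the closest point between the two back-projected curves, reduces (for straight ray segments) to the classic closest-point computation between two 3D lines. A minimal NumPy sketch, with the curves idealized as infinite lines and all names hypothetical, not the authors' code:

```python
import numpy as np

def closest_point_between_lines(p0, u, q0, v):
    """Midpoint of the shortest segment between two 3D lines.
    Lines are p0 + s*u and q0 + t*v (u, v need not be unit length)."""
    p0, u, q0, v = map(np.asarray, (p0, u, q0, v))
    w0 = p0 - q0
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # lines (nearly) parallel
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    pc, qc = p0 + s * u, q0 + t * v
    # midpoint and residual gap; the gap can serve as a localization sanity check
    return 0.5 * (pc + qc), np.linalg.norm(pc - qc)
```

Returning the residual gap alongside the midpoint is useful in practice: a large gap between the two rays signals an inconsistent lesion annotation between the CC and MLO views.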
A biomechanical approach for in vivo diaphragm muscle motion prediction during normal respiration
Brett Coelho, Elham Karami, Seyyed M. H. Haddad, et al.
Lung cancer is one of the leading causes of cancer death in men and women. External Beam Radiation Therapy (EBRT) is a commonly used primary treatment for the condition. A major challenge with such treatments is delivering a sufficient radiation dose to the lung tumor while ensuring that surrounding healthy lung parenchyma receives only a minimal dose. This can be achieved by coupling EBRT with respiratory computer models that predict the tumor location as a function of phase during the breathing cycle. Diaphragm muscle contraction is responsible for a large portion of lung tumor motion during normal breathing, especially when tumors are in the lower lobes; accurately modelling the diaphragm is therefore paramount for lung tumor motion prediction. The goal of this research is to develop a biomechanical model of the diaphragm, including its active and passive response, using detailed geometric, biomechanical, and anatomical information that mimics diaphragmatic behaviour in a patient-specific manner. For this purpose, a Finite Element Model (FEM) of the diaphragm was developed to predict the in vivo motion of the diaphragm, paving the way for computer-assisted lung cancer tumor tracking in EBRT. Preliminary results obtained from the proposed model are promising and indicate that it can serve as a plausible tool for effective lung cancer EBRT to improve patient care.
Modeling patterns of anatomical deformations in prostate patients undergoing radiation therapy with an endorectal balloon
Eliott Brion, Christian Richter, Benoit Macq, et al.
External beam radiation therapy (EBRT) treats cancer by delivering daily fractions of radiation to a target volume. For prostate cancer, the target undergoes day-to-day variations in position, volume, and shape. For stereotactic photon and for proton EBRT, endorectal balloons (ERBs) can be used to limit these variations. To date, patterns of non-rigid variation for patients with an ERB have not been modeled. We extracted and modeled the patient-specific patterns of variation using regularly acquired CT images, non-rigid point cloud registration, and principal component analysis (PCA). For each patient, a non-rigid point-set registration method called Coherent Point Drift (CPD) was used to automatically generate landmark correspondences between all target shapes. To ensure accurate registrations, we tested and validated CPD by identifying parameter values leading to the smallest registration errors (surface matching error 0.13±0.09 mm). PCA demonstrated that 88±3.2% of the target motion could be explained using only 4 principal modes. The most dominant component of target motion is a squeezing and stretching in the anterior-posterior and superior-inferior directions. A PCA model of daily landmark displacements, generated using 6 to 10 CT scans, explained the target motion well for the CT scans not included in the model (modeling error decreased from 1.83±0.8 mm for 6 CT scans to 1.6±0.7 mm for 10 CT scans). The PCA modeling error was smaller than the naive approximation by the mean shape (approximation error 2.66±0.59 mm). Future work will investigate the use of the PCA model to improve the accuracy of EBRT techniques that are highly susceptible to anatomical variations, such as proton therapy.
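The PCA step described in the abstract can be sketched roughly as follows (hypothetical function names, not the authors' code): stack the corresponded landmark sets from the daily CT scans as rows, center them, and take the leading right singular vectors as motion modes, reporting the fraction of variance the modes capture.

```python
import numpy as np

def pca_motion_model(shapes, n_modes=4):
    """shapes: (n_days, n_landmarks*3) corresponded landmark coordinates.
    Returns the mean shape, the top n_modes modes, and the fraction of
    variance they explain."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean                               # daily displacements from mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2 / (len(X) - 1)                 # variance per principal mode
    explained = var[:n_modes].sum() / var.sum()
    return mean, Vt[:n_modes], explained
```

A new daily shape can then be approximated by projecting its displacement from the mean onto the retained modes, which is the sense in which the model "explains" motion in held-out scans.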
Registration
Panorama imaging for image-to-physical registration of narrow drill holes inside spongy bones
Jan Bergmeier, Jacob Friedemann Fast, Tobias Ortmaier, et al.
Image-to-physical registration based on volumetric data like computed tomography on the one side and intraoperative endoscopic images on the other is an important method for various surgical applications. In this contribution, we present methods to generate panoramic views from endoscopic recordings for image-to-physical registration of narrow drill holes inside spongy bone. One core application is the registration of drill poses inside the mastoid during minimally invasive cochlear implantation. Besides the development of image processing software for registration, we investigated a miniaturized optical system achieving 360° radial imaging in one shot by extending a conventional, small, rigid rod-lens endoscope. A reflective cone geometry is used to deflect radially incoming light rays into the endoscope optics; a cone mirror is therefore mounted in front of a conventional 0° endoscope. Furthermore, panoramic images of inner drill hole surfaces in artificial bone material are created. Prior to drilling, cone beam computed tomography data is acquired from this artificial bone and simulated endoscopic views are generated from these data. A qualitative and quantitative image comparison of the resulting views in terms of image-to-image registration is performed. First results show that downsizing of panoramic optics to a diameter of 3 mm is possible. Conventional rigid rod-lens endoscopes can be extended to produce suitable panoramic one-shot image data. Using unrolling and stitching methods, images of the inner drill hole surface similar to computed tomography image data of the same surface were created. Registration was performed on ten perturbations of the search space and resulted in target registration errors of (0.487 ± 0.438) mm at the entry point and (0.957 ± 0.948) mm at the exit, as well as an angular error of (1.763 ± 1.536)°. The results show the suitability of this image data for image-to-image registration. Analysis of the error components in different directions reveals a strong influence of the pattern structure: higher diversity results in smaller errors.
Fundamental limits of image registration performance: effects of image noise and resolution in CT-guided interventions
M. D. Ketcha, T. de Silva, R. Han, et al.
Purpose: In image-guided procedures, image acquisition is often performed primarily for the task of geometrically registering information from another image dataset, rather than detection / visualization of a particular feature. While the ability to detect a particular feature in an image has been studied extensively with respect to image quality characteristics (noise, resolution) and is an ongoing, active area of research, comparatively little has been accomplished to relate such image quality characteristics to registration performance.

Methods: To establish such a framework, we derived Cramer-Rao lower bounds (CRLB) for registration accuracy, revealing the underlying dependencies on image variance and gradient strength. The CRLB was analyzed as a function of image quality factors (in particular, dose) for various similarity metrics and compared to registration accuracy using CT images of an anthropomorphic head phantom at various simulated dose levels. Performance was evaluated in terms of root mean square error (RMSE) of the registration parameters.

Results: Analysis of the CRLB shows two primary dependencies: 1) noise variance (related to dose); and 2) sum of squared image gradients (related to spatial resolution and image content). Comparison of the measured RMSE to the CRLB showed that, for the best registration method, the RMSE achieved the CRLB to within an efficiency factor of 0.21, and optimal estimators followed the predicted inverse proportionality between registration performance and radiation dose.

Conclusions: Analysis of the CRLB for image registration is an important step toward understanding and evaluating an intraoperative imaging system with respect to a registration task. While the CRLB is optimistic in absolute performance, it reveals a basis for relating the performance of registration estimators as a function of noise content and may be used to guide acquisition parameter selection (e.g., dose) for purposes of intraoperative registration.
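For intuition, the scalar form of such a bound for a 1-D translation estimate under additive white Gaussian noise is var(t̂) ≥ σ²/Σf′², which exhibits exactly the two dependencies the abstract names: noise variance in the numerator and squared gradient content in the denominator. A small illustrative sketch (not the authors' derivation or code):

```python
import numpy as np

def crlb_shift_variance(signal, noise_var):
    """Cramer-Rao lower bound on the variance of a 1-D shift estimate for a
    signal observed in additive white Gaussian noise: sigma^2 / sum(f'^2)."""
    grad = np.gradient(np.asarray(signal, dtype=float))
    return noise_var / np.sum(grad ** 2)
```

Since noise variance in CT scales roughly inversely with dose, the bound makes the abstract's inverse proportionality between registration variance and dose explicit, and doubling signal contrast (hence gradients) cuts the bound by a factor of four.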
Which point-line registration?
Based on the Iterative Closest Point (ICP) framework, we present a generalized solution for the registration between homologous points and lines. The transformation we seek comprises an anisotropic scaling, followed by rotation and translation. This algorithm is demonstrated using the Perspective-n-Point (PnP) problem where lines form a bundle, and the Non-Perspective-n-Point (NPnP) problem where each line potentially has its own origin. We also prove that one existing NPnP solution is, in fact, equivalent to ICP, and that a second PnP solution differs from ICP only in the iteratively estimated translation. Applications for these types of registration include ultrasound calibration, kinematics tracking under fluoroscopic video, and camera pose estimation. Simulation results suggest this ICP algorithm compares favorably to existing PnP and NPnP algorithms, and has an extremely compact formulation.
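As a sketch of the general idea (rigid case only; the anisotropic scaling the paper also estimates is omitted, and all names are hypothetical), point-to-line ICP alternates between projecting the transformed points onto their corresponding lines and solving an orthogonal Procrustes problem against those projections:

```python
import numpy as np

def register_points_to_lines(pts, origins, dirs, n_iter=500):
    """ICP-style rigid registration of points to corresponding 3D lines.
    Line i is origins[i] + t*dirs[i] (dirs unit length). Returns R, t."""
    pts = np.asarray(pts, float)
    origins = np.asarray(origins, float)
    dirs = np.asarray(dirs, float)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = pts @ R.T + t
        # closest point on each line to the currently transformed point
        proj = origins + ((moved - origins) * dirs).sum(1, keepdims=True) * dirs
        # orthogonal Procrustes between the original points and projections
        mp, mq = pts.mean(0), proj.mean(0)
        H = (pts - mp).T @ (proj - mq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mq - R @ mp
    return R, t
```

Each iteration cannot increase the sum of squared point-to-line distances, which is the usual ICP monotonicity argument; convergence to the global optimum is only expected for moderate initial misalignments.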
Deformable 3D-2D registration for guiding K-wire placement in pelvic trauma surgery
J. Goerres, M. Jacobson, A. Uneri, et al.
Pelvic Kirschner wire (K-wire) insertion is a challenging surgical task requiring interpretation of complex 3D anatomical shape from 2D projections (fluoroscopy) and delivery of device trajectories within fairly narrow bone corridors in proximity to adjacent nerves and vessels. Over long trajectories (~10-25 cm), K-wires tend to curve (deform), making conventional rigid navigation inaccurate at the tip location. A system is presented that provides accurate 3D localization and guidance of rigid or deformable surgical devices (“components” – e.g., K-wires) based on 3D-2D registration. The patient is registered to a preoperative CT image by virtually projecting digitally reconstructed radiographs (DRRs) and matching to two or more intraoperative x-ray projections. The K-wire is localized using an analogous procedure matching DRRs of a deformably parametrized model for the device component (deformable known-component registration, or dKC-Reg). A cadaver study was performed in which a K-wire trajectory was delivered in the pelvis. The system demonstrated target registration error (TRE) of 2.1 ± 0.3 mm in location of the K-wire tip (median ± interquartile range, IQR) and 0.8 ± 1.4º in orientation at the tip (median ± IQR), providing functionality analogous to surgical tracking / navigation using imaging systems already in the surgical arsenal without reliance on a surgical tracker. The method offers quantitative 3D guidance using images (e.g., inlet / outlet views) already acquired in the standard of care, potentially extending the advantages of navigation to broader utilization in trauma surgery to improve surgical precision and safety.
3D/2D image registration method for joint motion analysis using low-quality images from mini C-arm machines
Soheil Ghafurian, Ilker Hacihaliloglu, Dimitris N. Metaxas, et al.
A 3D kinematic measurement of joint movement is crucial for orthopedic surgery assessment and diagnosis. It is usually obtained through frame-by-frame registration of a 3D bone volume to a fluoroscopy video of the joint movement. The high cost of high-quality fluoroscopy imaging systems has hindered the access of many labs to this application, while the more affordable, low-dose alternative, the mini C-arm, is not commonly used for it due to low image quality. In this paper, we introduce a novel method for kinematic analysis of joint movement using the mini C-arm. In this method the bone of interest is recovered and isolated from the rest of the image using a non-rigid registration of an atlas to each frame. The 3D/2D registration is then performed using the weighted histogram of image gradients as an image feature. In our experiments, the registration error was 0.89 mm and 2.36° for the human C2 vertebra. While the precision still lags behind that of a high-quality fluoroscopy machine, it is a good starting point for facilitating the use of mini C-arms for motion analysis, making this application available to lower-budget environments. Moreover, the registration was highly resistant to the initial distance from the true registration, converging to the answer from anywhere within ±90° of it.
Investigation of 3D histograms of oriented gradients for image-based registration of CT with interventional CBCT
Barbara Trimborn, Ivo Wolf, Denis Abu-Sammour, et al.
Image registration of preprocedural contrast-enhanced CT to intraprocedural cone-beam computed tomography (CBCT) can provide additional information for interventional liver oncology procedures such as transcatheter arterial chemoembolisation (TACE). In this paper, a novel similarity metric for gradient-based image registration is proposed. The metric relies on the patch-based computation of histograms of oriented gradients (HOG), which form the basis of a feature descriptor. The metric was implemented in a framework for rigid 3D-3D registration of pre-interventional CT with intra-interventional CBCT data obtained during the workflow of a TACE. To evaluate the performance of the new metric, the capture range was estimated based on the calculation of the mean target registration error and compared to results obtained with a normalized cross-correlation metric. The results show that 3D HOG feature descriptors are suitable as an image-similarity metric and that the novel metric can compete with established methods in terms of registration accuracy.
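The descriptor idea can be illustrated in 2D (the paper works in 3D): divide the image into patches, accumulate gradient magnitudes into unsigned-orientation histograms, and compare the concatenated, normalized histograms between images. A simplified sketch with hypothetical names, not the paper's implementation:

```python
import numpy as np

def hog_descriptor(img, n_bins=8, patch=8):
    """Patch-wise histogram of oriented gradients (2-D toy version of a 3-D
    descriptor): per patch, gradient magnitudes are accumulated into
    unsigned-orientation bins and the histogram is L2-normalized."""
    img = np.asarray(img, float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi            # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    desc = []
    for r in range(0, img.shape[0] - patch + 1, patch):
        for c in range(0, img.shape[1] - patch + 1, patch):
            h = np.bincount(bins[r:r + patch, c:c + patch].ravel(),
                            weights=mag[r:r + patch, c:c + patch].ravel(),
                            minlength=n_bins)
            desc.append(h / (np.linalg.norm(h) + 1e-9))
    return np.concatenate(desc)

def hog_similarity(img_a, img_b):
    """Cosine similarity between HOG descriptors, usable as a
    registration similarity metric."""
    a, b = hog_descriptor(img_a), hog_descriptor(img_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Inside a rigid registration loop, one image stays fixed and the similarity is evaluated for each candidate pose of the moving image, exactly where normalized cross-correlation would otherwise sit.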
Neurosurgical Procedures
Toward real-time tumor margin identification in image-guided robotic brain tumor resection
Danying Hu, Yang Jiang, Evgenii Belykh, et al.
For patients with malignant brain tumors (glioblastomas), safe maximal resection of the tumor is critical for an increased survival rate. However, complete resection of the cancer is hard to achieve due to the invasive nature of these tumors, whose margins blur from frank tumor into more normal brain tissue that single cells or clusters of malignant cells may nevertheless have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative contrast-enhanced visual information of the tumor in neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resections of such tumors. Real-time tumor margin identification in NIR imaging could help both surgeons and patients by reducing the operation time and space required by other imaging modalities such as intraoperative MRI, and it has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed for identifying tumor boundaries in an ex vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. The results demonstrated the feasibility of detecting brain tumor margins in quasi-real time and the potential to yield improved precision in brain tumor resection, or even robotic interventions, in the future.
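The data term of the Chan-Vese model can be sketched as follows (the curvature/length regularization of the full level-set formulation is omitted here for brevity, and all names are illustrative, not the authors' code): alternately update the two region means and reassign each pixel to the region whose mean it is closer to.

```python
import numpy as np

def chan_vese_two_phase(img, n_iter=20):
    """Piecewise-constant two-phase segmentation in the spirit of the
    Chan-Vese model (curvature regularization omitted): alternately update
    the region means c1, c2 and reassign pixels to the nearer mean."""
    img = np.asarray(img, float)
    mask = img > img.mean()                     # initial contour
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new, mask):
            break                               # converged
        mask = new
    return mask
```

The full model adds a contour-length penalty that smooths boundaries in noisy fluorescence images, which is the property the paper relies on; this sketch shows only the fitting term.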
Real-time phase recognition in novel needle-based intervention: a multi-operator feasibility study
Sébastien Muller, Fabien Despinoy, Daniel Bratbak, et al.
Purpose. One of the goals of new navigation systems in the operating room and in outpatient clinics is to support the surgeon's decision making while minimizing the additional load on surrounding health personnel. To do so, the system needs to rely on context-awareness, providing the surgeon with the most relevant visualization at all times. Such a system could also provide support for surgical training. The objective of this work is to assess the feasibility of automatic surgical phase recognition using tracking data from a novel instrument for injections and biopsies. Methods. An injection into the sphenopalatine ganglion, planned with MRI and CT images, is carried out using optical tracking of the instrument. In the context of a feasibility study, the intervention was performed by 5 operators, five times each, on a specially designed phantom. The coordinate information is processed into 7 features characterizing the intervention. Three classifiers, a Hidden Markov Model (HMM), a Support Vector Machine (SVM), and a combination of the two (SVM+HMM), are trained on manually annotated data and cross-validated for intra- and inter-operator variability. Standard test metrics are used to compare the performance of each classifier. Results. HMM alone and SVM alone are comparable classifiers, but feeding the output of the SVM into an HMM results in significantly better classifications: accuracy of 97.8%, sensitivity of 93.1% and specificity of 98.4%. Conclusion. The use of trajectory information can provide robust real-time recognition of surgical phases for needle-based interventions.
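One common way to realize an SVM+HMM combination (sketched here with hypothetical names; not necessarily the authors' implementation) is to treat per-frame SVM class scores as HMM emission likelihoods and decode the phase sequence with Viterbi, which suppresses transient misclassifications that violate the phase order:

```python
import numpy as np

def viterbi_smooth(log_emissions, log_trans, log_prior):
    """Viterbi decoding over per-frame classifier scores: the SVM supplies
    frame-wise log class likelihoods (T x K); the HMM transition matrix
    enforces the temporal phase order."""
    T, K = log_emissions.shape
    delta = log_prior + log_emissions[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # scores[from, to]
        back[t] = scores.argmax(0)                 # best predecessor per state
        delta = scores.max(0) + log_emissions[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):                  # backtrace
        path[t - 1] = back[t, path[t]]
    return path
```

With a left-to-right transition matrix (phases cannot be revisited), a single noisy frame that the frame-wise classifier labels as a later phase gets overruled by the surrounding context.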
Development of a mechanics-based model of brain deformations during intracerebral hemorrhage evacuation
Saramati Narasimhan, Jared A. Weis, Isuru S. Godage, et al.
Intracerebral hemorrhages (ICHs) occur in 24 out of 100,000 people annually and have high morbidity and mortality rates. The standard treatment is conservative. We hypothesize that a patient-specific mechanical model, coupled with a robotic steerable needle used to aspirate the hematoma, would result in a minimally invasive approach to ICH management that will improve outcomes. As a preliminary study, three realizations of a tissue aspiration framework are explored within the context of a biphasic finite element model based on Biot's consolidation theory. Short-term transient effects were neglected in favor of a steady state formulation. The Galerkin Method of Weighted Residuals was used to solve the coupled partial differential equations using linear basis functions, under assumptions of plane strain and homogeneous isotropic properties. All aspiration models began with the application of aspiration pressure sink(s), followed by calculation of pressures and displacements and the use of von Mises stresses within a tissue failure criterion. With respect to aspiration strategies, one model employs an element-deletion strategy followed by aspiration redeployment on the remaining grid, while the other approaches use principles of superposition on a fixed grid. While the element-deletion approach had some intuitive appeal, without incorporating a dynamic grid strategy it evolved into a less realistic result. The superposition strategy overcame this, but would require empirical investigations to determine the optimum distribution of aspiration sinks to match material removal. While each modeling framework demonstrated some promise, the superposition method's ease of computation, ability to incorporate the surgical plan, and better similarity to existing empirical observational data make it favorable.
The introduction of capillary structures in 4D simulated vascular tree for ART 3.5D algorithm further validation
Beatrice Barra, Sara El Hadji, Elena De Momi, et al.
Several neurosurgical procedures, such as arteriovenous malformation (AVM) treatment, aneurysm embolization, and StereoElectroEncephaloGraphy (SEEG), require accurate reconstruction of the cerebral vascular tree, as well as the classification of arteries and veins, in order to increase the safety of the intervention. Segmentation of arteries and veins from 4D CT perfusion scans has already been proposed in different studies. Nonetheless, such procedures require long acquisition protocols, and the radiation dose given to the patient is not negligible. Hence, space is open for approaches attempting to recover the dynamic information from standard Contrast Enhanced Cone Beam Computed Tomography (CE-CBCT) scans. The algorithm proposed by our team, called ART 3.5D, is a novel algorithm based on the postprocessing of both the angiogram and the raw data of a standard Digital Subtraction Angiography from a CBCT (DSA-CBCT), allowing artery and vein segmentation and labeling without requiring any additional radiation exposure for the patient or lowering the resolution. In addition, while previous versions of the algorithm considered only the distinction between arteries and veins, here the simulation and identification of the capillary phase is introduced, providing further information useful for more precise vasculature segmentation.
Integration of sparse electrophysiological measurements with preoperative MRI using 3D surface estimation in deep brain stimulation surgery
Andreas Husch, Peter Gemmar, Johan Thunberg, et al.
Intraoperative microelectrode recordings (MER) have been used for several decades to guide neurosurgeons during the implantation of Deep Brain Stimulation (DBS) electrodes, especially when targeting the subthalamic nucleus (STN) to suppress the symptoms of Parkinson’s Disease. The standard approach is to use an array of up to five MER electrodes in a fixed configuration. Interpretation of the recorded signals yields a spatially very sparse set of information about the morphology of the respective brain structures in the targeted area. However, no aid is currently available for surgeons to intraoperatively integrate this information with other data available on the patient’s individual morphology (e.g. MR imaging data used for surgical planning). This integration might allow surgeons to better determine the most probable position of the electrodes within the target structure during surgery. This paper suggests a method for reconstructing a surface patch from the sparse MER dataset, utilizing additional a priori knowledge about the geometrical configuration of the measurement electrodes. The conventional representation of MER measurements as intervals of target region/non-target region is transformed into an equivalent boundary set representation, allowing efficient point-based calculations. Subsequently, the problem is to integrate the resulting patch with a preoperative model of the target structure, which can be formulated as a registration problem minimizing a distance measure between the two surfaces. When restricting this registration procedure to translations, which is reasonable given certain geometric considerations, the problem can be solved globally by employing an exhaustive search with arbitrary precision in polynomial time. The proposed method is demonstrated using bilateral STN/substantia nigra segmentation data from preoperative MRIs of 17 patients with simulated MER electrode placement. Using simulated data of heavily perturbed electrodes and subsequent MER measurements, our optimization improved the electrode position to within 1 mm of the ground truth in 80.29% of the cases.
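The exhaustive translation-only search can be sketched as a brute-force grid search minimizing the mean nearest-neighbor distance between the MER-derived patch and the preoperative surface (illustrative only; the candidate grid, point sets, and names are hypothetical):

```python
import numpy as np

def best_translation(patch, target, grid):
    """Exhaustive search over candidate translations: pick the shift of the
    patch that minimizes mean nearest-neighbor distance to the target
    surface (brute-force NN, adequate for small point sets)."""
    patch = np.asarray(patch, float)
    target = np.asarray(target, float)
    best, best_cost = None, np.inf
    for t in grid:
        # pairwise distances between shifted patch points and target points
        d = np.linalg.norm((patch + t)[:, None, :] - target[None, :, :], axis=2)
        cost = d.min(axis=1).mean()
        if cost < best_cost:
            best, best_cost = np.asarray(t), cost
    return best, best_cost
```

Because every candidate on the grid is evaluated, the returned translation is the global optimum over that grid, matching the abstract's point that the restricted problem admits a global solution with arbitrary precision.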
Spine Interventions
Localization of the transverse processes in ultrasound for spinal curvature measurement
Shahrokh Kamali, Tamas Ungi, Andras Lasso, et al.
PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks such as the transverse processes, but because bones have reduced visibility in ultrasound imaging, skeletal landmarks are typically segmented manually, which is an exceedingly laborious process. We propose an automatic algorithm to segment and localize bony surfaces at the transverse processes in ultrasound for spinal curvature measurement in scoliosis.

METHODS: The algorithm uses a cascade of filters to remove low-intensity pixels, smooth the image, and detect bony edges. Candidate bony areas are classified by applying first-order differentiation. The average intensity below each area correlates with the presence of an acoustic shadow, and areas with a strong shadow are kept for bone segmentation. The segmented images are used to reconstruct a 3D volume representing the spinal structure around the transverse processes. RESULTS: A comparison between manual ground truth segmentation and the automatic algorithm in 50 images showed an average difference of 0.17 mm. The time to process all 1,938 images was about 37 s (0.0191 s/image), including reading the original sequence file.

CONCLUSION: Initial experiments showed the algorithm to be sufficiently accurate and fast for segmenting transverse processes in ultrasound for spinal curvature measurement. An extensive evaluation of the method is currently underway on images from a larger patient cohort, with multiple observers producing the ground truth segmentation.
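A 2-D, per-column sketch of the cascade described in the Methods (threshold, smooth, differentiate, shadow check); the thresholds and names are illustrative assumptions, not the paper's values:

```python
import numpy as np

def bone_surface_candidates(img, intensity_thresh=0.3, shadow_thresh=0.1):
    """Per-column sketch of the described cascade: suppress low intensities,
    smooth, take the strongest bright-to-dark edge as the candidate bone
    surface, and keep it only if the region below is dark (acoustic shadow)."""
    img = np.asarray(img, float)
    img = np.where(img < intensity_thresh, 0.0, img)    # remove weak speckle
    # light vertical smoothing with a 3-tap box filter
    sm = (np.roll(img, 1, 0) + img + np.roll(img, -1, 0)) / 3.0
    grad = np.diff(sm, axis=0)                 # first differentiation, rows
    rows = np.argmin(grad, axis=0)             # strongest downward edge per column
    surface = {}
    for col, r in enumerate(rows):
        below = img[r + 1:, col]
        if below.size and below.mean() < shadow_thresh:  # shadow present
            surface[col] = int(r)
    return surface
```

The shadow check is what distinguishes bone from other bright reflectors: soft-tissue interfaces are bright but do not cast the dark acoustic shadow that bone does.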
Toward dynamic lumbar puncture guidance based on single element synthetic tracked aperture ultrasound imaging
Haichong K. Zhang, Melissa Lin, Younsu Kim, et al.
Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid (CSF), a bodily fluid needed to diagnose central nervous system disorders. Most lumbar punctures are performed blindly, without imaging guidance. Because the target window is small, physicians can accurately palpate the appropriate space only about 30% of the time and perform a successful procedure after an average of three attempts. Although various imaging-based guidance systems have been developed to aid this procedure, they complicate it by introducing independent imaging modalities and requiring image-to-needle registration to guide the needle insertion. Here, we propose a simple and direct needle insertion platform utilizing a single ultrasound element within the needle for dynamic sensing and imaging. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle such as bone, but also visually locate structures by combining transducer location tracking with a back-projection-based tracked synthetic aperture beamforming algorithm. The concept was first validated through simulation, which revealed its tolerance to realistic error. An initial prototype of the single-element transducer was then built into a 14G needle and mounted on a holster equipped with a rotation tracking encoder. We experimentally evaluated the system using a metal wire phantom mimicking highly reflective bone structures and an actual spine bone phantom, with both controlled motion and freehand scanning. An ultrasound image corresponding to the model phantom structure was reconstructed using the beamforming algorithm, with improved resolution compared to imaging without beamforming. These results demonstrate that the proposed system has the potential to serve as an ultrasound imaging system for lumbar puncture procedures.
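The back-projection idea behind a tracked synthetic aperture reconstruction can be illustrated with a toy delay-and-sum sketch, assuming an idealized point target and perfectly known element positions; the grid, sampling parameters, and function name are not from the paper:

```python
import numpy as np

def backproject(ascans, elem_x, grid_x, grid_z, c=1540.0, fs=40e6):
    """Delay-and-sum back-projection of A-scans from a tracked
    single-element transducer onto a pixel grid."""
    image = np.zeros((len(grid_z), len(grid_x)))
    n = ascans.shape[1]
    for p, xe in enumerate(elem_x):
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                d = np.hypot(x - xe, z)            # element-to-pixel distance
                s = int(round(2 * d / c * fs))     # round-trip sample index
                if s < n:
                    image[iz, ix] += ascans[p, s]
    return image

# simulate echoes from one point target at (0 mm, 10 mm)
elem_x = np.linspace(-10e-3, 10e-3, 21)
ascans = np.zeros((21, 4096))
for p, xe in enumerate(elem_x):
    d = np.hypot(0.0 - xe, 10e-3)
    ascans[p, int(round(2 * d / 1540.0 * 40e6))] = 1.0

grid_x = np.linspace(-5e-3, 5e-3, 11)
grid_z = np.linspace(5e-3, 15e-3, 11)
img = backproject(ascans, elem_x, grid_x, grid_z)
```

Each tracked pose contributes its A-scan sample at the round-trip delay for every pixel, so echoes add coherently only at the true scatterer location.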
Identification and tracking of vertebrae in ultrasound using deep networks with unsupervised feature learning
Jorden Hetherington, Mehran Pesteie, Victoria A. Lessoway, et al.
Percutaneous needle insertion procedures on the spine often require proper identification of the vertebral level in order to effectively deliver anesthetic and analgesic agents and achieve an adequate block. For example, in obstetric epidurals, the target is the L3-L4 intervertebral space. The current clinical method involves “blind” identification of the vertebral level through manual palpation of the spine, which has only 30% accuracy. This implies the need for better anatomical identification prior to needle insertion. A system is proposed to identify the vertebrae, assign them to their respective levels, and track them in a standard sequence of ultrasound images acquired in the paramedian plane. Machine learning techniques are developed to identify discriminative features of the laminae. In particular, a deep network is trained to automatically learn the anatomical features of the lamina peaks and classify image patches for pixel-level classification. The chosen network utilizes multiple connected auto-encoders to learn the anatomy. Pre-processing with ultrasound bone enhancement techniques aids the pixel-level classification performance. Once the laminae are identified, vertebrae are assigned levels and tracked in sequential frames. Experimental results were evaluated against an expert sonographer. Based on data acquired from 15 subjects, vertebra identification with 95% sensitivity and 95% precision was achieved within each frame. Between pairs of subsequently analyzed frames, predicted vertebral level labels matched in 94% of cases, compared to matches of manually selected labels.
Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks
Ben Church, Andras Lasso, Christopher Schlenger, et al.
PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and low cost. The transverse processes are skeletal landmarks accessible by ultrasound and are sufficient for quantifying scoliosis, but they do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field, resulting from thin-plate spline interpolation of a landmark-based registration between the transverse processes localized in both the patient’s ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-plate spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient’s spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient’s spine. For validation, we used ground truth CT from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient’s bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization of the 3D deformation of the patient’s spine when compared to ground truth CT.
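The landmark-driven warp can be sketched with SciPy's thin-plate-spline interpolator, a plausible stand-in for the interpolation described above; the landmark data and function name are illustrative, not the authors' code:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_model(atlas_landmarks, patient_landmarks, model_vertices):
    """Fit a thin-plate-spline displacement field at landmark
    correspondences and apply it to every model vertex."""
    disp = patient_landmarks - atlas_landmarks
    field = RBFInterpolator(atlas_landmarks, disp,
                            kernel='thin_plate_spline')
    return model_vertices + field(model_vertices)

rng = np.random.default_rng(1)
atlas = rng.random((12, 3))                        # stand-in transverse processes
patient = atlas + rng.normal(0.0, 0.05, (12, 3))   # "scoliotic" displacement
warped = warp_model(atlas, patient, atlas)
```

Because the field interpolates the landmark displacements exactly, warping the atlas landmarks themselves reproduces the patient landmarks.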
Cochlear Implantation
icon_mobile_dropdown
Evaluation of a high-resolution patient-specific model of the electrically stimulated cochlea
Cochlear implants (CIs) are considered standard treatment for patients who experience sensorineural hearing loss. Although these devices have been remarkably successful at restoring hearing, it is rare to achieve natural fidelity, and many patients experience poor outcomes. Our group has developed the first image-guided CI programming (IGCIP) technique, in which the positions of the electrodes are found in CT images and used to estimate neural activation patterns, unique information that audiologists can use to define patient-specific processor settings. In our current system, neural activation is estimated using only the distance from each electrode to the neural activation sites. This approach might be less accurate than using a high-resolution electro-anatomical model (EAM) of the electrically stimulated cochlea to perform physics-based estimation of neural activation. In this work, we propose a patient-customized EAM approach in which the EAM is spatially and electrically adapted to a patient-specific configuration. Spatial adaptation is done through non-rigid registration of the model with the patient CT image. Electrical adaptation is done by adjusting tissue resistivity parameters so that the intra-cochlear voltage distributions predicted by the model best match those directly measured for the patient via their implant. We demonstrated our approach for N=7 patients and found a mean percent difference of 11% between direct and simulated measurements of voltage distributions. In addition, visual comparison shows that the simulated and measured voltage distributions are qualitatively in good agreement. This represents a crucial step toward developing and validating the first in vivo patient-specific cochlear EAMs.
A cochlear implant phantom for evaluating CT acquisition parameters
Srijata Chakravorti, Brian J. Bussey, Yiyuan Zhao, et al.
Cochlear Implants (CIs) are surgically implantable neural prosthetic devices used to treat profound hearing loss. Recent literature indicates that there is a correlation between the positioning of the electrode array within the cochlea and the ultimate hearing outcome of the patient, indicating that further studies aimed at better understanding the relationship between electrode position and outcomes could have significant implications for future surgical techniques, array design, and processor programming methods. Post-implantation high resolution CT imaging is the best modality for localizing electrodes and provides the resolution necessary to visually identify electrode position, albeit with an unknown degree of accuracy depending on image acquisition parameters, like the HU range of reconstruction, radiation dose, and resolution of the image. In this paper, we report on the development of a phantom that will both permit studying which CT acquisition parameters are best for accurately identifying electrode position and serve as a ground truth for evaluating how different electrode localization methods perform when using different CT scanners and acquisition parameters. We conclude based on our tests that image resolution and HU range of reconstruction strongly affect how accurately the true position of the electrode array can be found by both experts and automatic analysis techniques. The results presented in this paper demonstrate that our phantom is a versatile tool for assessing how CT acquisition parameters affect the localization of CIs.
An image guidance system for positioning robotic cochlear implant insertion tools
Trevor L. Bruns, Robert J. Webster III
Cochlear implants must be inserted carefully to avoid damaging the delicate anatomical structures of the inner ear. This has motivated several approaches to improve the safety and efficacy of electrode array insertion by automating the process with specialized robotic or manual insertion tools. When such tools are used, they must be positioned at the entry point to the cochlea and aligned with the desired entry vector. This paper presents an image guidance system capable of accurately positioning a cochlear implant insertion tool. An optical tracking system localizes the insertion tool in physical space, while a graphical user interface combines this with patient-specific anatomical data to provide error information to the surgeon in real time. Guided by this interface, novice users successfully aligned the tool with a mean accuracy of 0.31 mm.
Micro-stereotactic frame utilizing bone cement for individual fabrication: an initial investigation of its accuracy
Thomas S. Rau, G. Jakob Lexow, Denise Blume, et al.
A new method for template-guided cochlear implantation surgery is proposed, developed to create a minimally invasive access to the inner ear. A first design of the surgical template was drafted, built, and finally tested for accuracy. Bone cement is utilized for individual finalization of the micro-stereotactic frame, as this well-known and well-established material offers ease of use, high clinical acceptance, and both sterile and rapid handling. The new concept includes an alignment device, based on a passive hexapod with manually adjustable legs, for temporary fixation of the separate parts in the patient-specific pose until the bone cement has spread and finally cured. Additionally, a corresponding evaluation method was developed to determine the accuracy of the micro-stereotactic frame in initial experiments. In total, 18 samples of the surgical template were fabricated based on previously planned trajectories. The mean positioning error at the target point was 0.30 mm with a standard deviation of 0.25 mm.
Selecting electrode configurations for image-guided cochlear implant programming using template matching
Cochlear implants (CIs) are used to treat patients with severe-to-profound hearing loss. In surgery, an electrode array is implanted in the cochlea. After implantation, the CI processor is programmed by an audiologist. One factor that negatively impacts outcomes and can be addressed by programming is cross-electrode neural stimulation overlap (NSO). In the recent past, we have proposed a system to assist the audiologist in programming the CI that we call Image-Guided CI Programming (IGCIP). IGCIP permits using CT images to detect NSO and recommend which subset of electrodes should be active to avoid NSO. In an ongoing clinical study, we have shown that IGCIP leads to significant improvement in hearing outcomes. Most of the IGCIP steps are robustly automated but electrode configuration selection still sometimes requires expert intervention. With expertise, Distance-Vs-Frequency (DVF) curves, which are a way to visualize the spatial relationship learned from CT between the electrodes and the nerves they stimulate, can be used to select the electrode configuration. In this work, we propose an automated technique for electrode configuration selection. It relies on matching new patients’ DVF curves to a library of DVF curves for which electrode configurations are known. We compare this approach to one we have previously proposed. We show that, generally, our new method produces results that are as good as those obtained with our previous one while being generic and requiring fewer parameters.
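The abstract does not specify the matching criterion, but a minimal template-matching sketch could be a nearest-neighbour lookup over a library of DVF curves; the curve values and configuration names below are hypothetical:

```python
import numpy as np

def match_configuration(new_curve, library_curves, library_configs):
    """Nearest-neighbour lookup: reuse the electrode configuration of
    the library DVF curve closest (L2 distance) to the new patient's."""
    d = np.linalg.norm(np.asarray(library_curves) - np.asarray(new_curve),
                       axis=1)
    return library_configs[int(np.argmin(d))]

# hypothetical library of DVF curves with known good configurations
library = [[1.0, 1.2, 1.5], [0.4, 0.5, 0.9], [2.0, 2.4, 2.9]]
configs = ['config_A', 'config_B', 'config_C']
choice = match_configuration([0.5, 0.5, 1.0], library, configs)
```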
Keynote and Percutaneous Procedures
icon_mobile_dropdown
Toward integrated image guided liver surgery
W. R. Jarnagin, Amber L. Simpson, M. I. Miga
While clinical neurosurgery has benefited from the advent of frameless image guidance for over three decades, the translation of image-guided technologies to abdominal surgery, and more specifically liver resection, has been far more limited. Fundamentally, the workflow, complexity, and presentation have confounded development. With the first real translation efforts beginning at the turn of the millennium, the work in developing novel augmented technologies to enhance screening, planning, and surgery has come to realization for the field. In this paper, we review several examples from our own work that demonstrate the impact of image-guided procedure methods in six clinical studies, which speak to: (1) the accuracy of planning for liver resection, (2) enhanced surgical planning with portal vein embolization, (3) linking splenic volume changes to post-hepatectomy complications, (4) enhanced intraoperative localization of surgically occult lesions, (5) validation of deformation correction, and (6) a novel blinded study focused on the value of deformation correction. All six of these studies were conducted in human subjects and show the potential impact image-guided methodologies could make on liver resection procedures.
Enabling image fusion for a CT guided needle placement robot
Reza Seifabadi, Sheng Xu, Fereshteh Aalamifar, et al.
Purpose: This study presents the development and integration of hardware and software that enables ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having a real-time US image registered to a previously acquired intraoperative CT image provides more anatomical information during needle insertion, in order to target hard-to-see lesions or avoid critical structures invisible in CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot in order to hold and track an abdominal US transducer. This 4-degree-of-freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released by a press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm is calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the previously acquired CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 mm ± 4.14 mm and 1.74 mm ± 1.60 mm, respectively. The accuracy of the US image to arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.
Training with Perk Tutor improves ultrasound-guided in-plane needle insertion skill
Hillary Lia, Zsuzsanna Keri, Matthew S. Holden, et al.
PURPOSE: The open-source Perk Tutor training platform has been shown to improve trainee performance in interventions that require ultrasound guidance. Our goal was to determine whether the needle coordination of medical trainees can be improved by training with Perk Tutor compared to training with ultrasound only. METHODS: Twenty participants with no previous experience were randomized into two groups: the Perk Tutor group and the Control group. The Perk Tutor group had access to 3D visualization while the Control group used ultrasound only during training. Performance was measured and compared by Perk Tutor with regard to four needle coordination metrics. Neither group had access to 3D visualization during performance testing. RESULTS: The needle tracking measurements showed, for the Perk Tutor group, a lower average distance between the needle tip and the ultrasound plane (1.2 [0.9 – 2.8] mm vs 2.7 [2.3 – 4.0] mm; P = 0.023) and a lower maximum distance between the needle tip and the ultrasound plane (2.2 [1.9 – 3.2] mm vs 4.6 [3.9 – 6.2] mm; P = 0.013). There was no significant difference in average needle to ultrasound plane angle and maximum needle to ultrasound plane distance. All participants completed the procedure successfully. CONCLUSION: The Perk Tutor group had significantly reduced distance from the needle tip to the ultrasound plane. Training with Perk Tutor can improve trainees’ needle and ultrasound coordination.
Real-time MRI-guided needle intervention for cryoablation: a phantom study
Wenpeng Gao, Baichuan Jiang, Dan F. Kacher, et al.
MRI-guided needle intervention for cryoablation is a promising way to relieve pain and treat cancer. However, the limited size of the MRI bore makes it impossible for clinicians to perform the operation inside the bore. The patient has to be moved into the bore for scanning to verify the position of the needle tip, and out of the bore to adjust the needle trajectory. Real-time needle tracking displayed in MR images is important for clinicians to perform the operation more efficiently. In this paper, we instrumented the cryotherapy needle with an MRI-safe electromagnetic (EM) sensor and an optical sensor to measure the needle's position and orientation. To overcome the line-of-sight limitation of the optical sensor and the poor dynamic performance of the EM sensor, Kalman filter based data fusion was developed. Further, we developed a navigation system in the open-source software 3D Slicer to provide accurate visualization of the needle and the surrounding anatomy. A simulated needle intervention at the entry point was performed on a realistic spine phantom to quantify the accuracy of the navigation using retrospective analysis. Eleven needle insertion trials were performed independently. The target accuracy of navigation using only the EM sensor, only the optical sensor, and data fusion was 2.27 ± 1.60 mm, 4.11 ± 1.77 mm, and 1.91 ± 1.10 mm, respectively.
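The sensor fusion step can be illustrated with a deliberately simplified 1-D random-walk Kalman filter that skips the optical measurement whenever the line of sight is blocked; the noise values, occlusion pattern, and state model are assumptions, not the paper's filter:

```python
import numpy as np

class FusionKF:
    """1-D random-walk Kalman filter fusing two position sensors,
    an illustrative stand-in for EM + optical needle-tip tracking."""
    def __init__(self, q=1e-4):
        self.x, self.p, self.q = 0.0, 1.0, q
    def step(self, z, r):
        self.p += self.q                  # predict (random-walk model)
        if z is not None:                 # skip an occluded sensor
            k = self.p / (self.p + r)
            self.x += k * (z - self.x)
            self.p *= 1.0 - k
        return self.x

kf = FusionKF()
rng = np.random.default_rng(0)
truth = 5.0                               # static needle-tip position (mm)
for t in range(200):
    kf.step(truth + rng.normal(0.0, 0.5), r=0.25)    # EM: noisy, always on
    optical = None if t % 5 == 0 else truth + rng.normal(0.0, 0.1)
    kf.step(optical, r=0.01)                         # optical: sometimes occluded
```

The filter weights each sensor by its noise covariance, so the accurate optical readings dominate when visible while the EM sensor keeps the estimate alive during occlusions.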
Optical Sensing
icon_mobile_dropdown
Image-guided smart laser system for precision implantation of cells in cartilage
State-of-the-art treatments for joint diseases like osteoarthritis focus on articular cartilage repair/regeneration by stem cell implantation therapy. However, the technique is limited by a lack of precision in the physician’s imaging and cell deposition toolkit. We describe a novel combination of high-resolution, rapid-scan-rate optical coherence tomography (OCT) with a short-pulsed nanosecond thulium (Tm) laser for precise cell seeding in cartilage. The superior beam quality of thulium lasers and their 1940 nm operating wavelength offer high volumetric tissue removal rates and minimize the residual thermal footprint. OCT imaging enables targeted micro-well placement, precise cell deposition, and feature contrast. A bench-top system is constructed using a 15 W, 1940 nm, nanosecond-pulsed Tm fiber laser (500 μJ pulse energy, 100 ns pulse duration, 30 kHz repetition rate) for removing tissue, and a swept-source laser (1310 ± 70 nm, 100 kHz sweep rate) for OCT imaging, forming a combined Tm/OCT system: a “smart laser knife”. OCT assists the smart laser knife user in characterizing cartilage to inform micro-well placement. The Tm laser creates micro-wells (2.35 mm long, 1.5 mm wide, 300 μm deep) and micro-incisions (1 mm wide, 200 μm deep), while OCT image guidance assists and demonstrates this precision cutting and cell deposition with real-time feedback. To test the micro-well creation and cell deposition protocol, gelatin phantoms are constructed mimicking cartilage optical properties and physiological structure. Cell viability is then assessed to illustrate the efficacy of the hydrogel deposition. Automated OCT feedback is demonstrated for cutting procedures to avoid important surface/subsurface structures. The bench-top smart laser knife system described here offers a new image-guided approach to precise stem cell seeding that can enhance the efficacy of articular cartilage repair.
Feature tracking for automated volume of interest stabilization on 4D-OCT images
Max-Heinrich Laves, Andreas Schoob, Lüder A. Kahrs, et al.
A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented scheme for soft tissue motion provides the highest accuracy, with an error of under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables sub-epithelial tracking of microvessels for image guidance.
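A thin-slab MIP around a tracked VOI can be sketched as follows; the slab parameters and data are illustrative:

```python
import numpy as np

def thin_slab_mip(volume, center, half_width, axis=0):
    """Maximum intensity projection restricted to a thin slab centred
    on the tracked VOI, rather than projecting the full volume."""
    lo = max(center - half_width, 0)
    hi = min(center + half_width + 1, volume.shape[axis])
    slab = np.take(volume, np.arange(lo, hi), axis=axis)
    return slab.max(axis=axis)

vol = np.zeros((10, 4, 4))
vol[0, 0, 0] = 1.0          # bright voxel far from the tracked VOI
vol[5, 1, 1] = 0.5          # microvessel inside the VOI slab
mip = thin_slab_mip(vol, center=5, half_width=1)
```

Restricting the projection to the tracked slab keeps distant bright structures from obscuring the VOI, which is the stated motivation for thin-slab MIPs.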
Don't get burned: thermal monitoring of vessel sealing using a miniature infrared camera
Shan Lin, Loris Fichera, Mitchell J. Fulton, et al.
Miniature infrared cameras have recently come to market in a form factor that facilitates packaging in endoscopic or other minimally invasive surgical instruments. If absolute temperature measurements can be made with these cameras, they may be useful for non-contact monitoring of electrocautery-based vessel sealing, or other thermal surgical processes like thermal ablation of tumors. As a first step in evaluating the feasibility of optical medical thermometry with these new cameras, in this paper we explore how well thermal measurements can be made with them. These cameras measure the raw flux of incoming IR radiation, and we perform a calibration procedure to map their readings to absolute temperature values in the range between 40 and 150 °C. Furthermore, we propose and validate a method to estimate the spatial extent of heat spread created by a cautery tool based on the thermal images.
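A simple two-point (gain/offset) radiometric calibration illustrates the idea of mapping raw flux readings to absolute temperature; the actual calibration procedure in the paper is likely more involved, and the raw-count values below are hypothetical blackbody references:

```python
def two_point_calibration(raw_lo, t_lo, raw_hi, t_hi):
    """Gain/offset mapping from raw IR flux counts to temperature,
    fitted at two blackbody reference points."""
    gain = (t_hi - t_lo) / (raw_hi - raw_lo)
    offset = t_lo - gain * raw_lo
    return lambda raw: gain * raw + offset

# hypothetical blackbody references bracketing the 40-150 deg C range
cal = two_point_calibration(3100.0, 40.0, 6300.0, 150.0)
```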
Imaging with a single-element forward-looking steerable IVUS catheter using optical shape sensing (Conference Presentation)
Jovana Janjic, Frits Mastik, Merel Leistikow, et al.
Complex intravascular lesions, such as chronic total occlusions (CTOs), require forward-looking imaging. We propose to use a 25 MHz single-element transducer and an optical shape sensing (OSS) fiber integrated into a steerable catheter to achieve intravascular imaging in a forward-looking approach. A tissue-mimicking phantom with three hollow channels (3, 2, and 1 mm in diameter) and two steel spheres (1.5 mm in diameter) is used as the imaging target. Ultrasound data and OSS data are acquired simultaneously while steering and rotating an 8.5 F catheter with bidirectional tip flexion. The obtained ultrasound data are reconstructed in 3D space using the position and direction information from the OSS data. Afterwards, the sparsely sampled ultrasound data are projected onto a 2D plane and interpolated using normalized convolution (NC), which has previously been shown to perform well on irregularly sampled data [1]. The front surface of the phantom, together with the locations of two of the three channels and the two steel spheres, is successfully reconstructed. The ability to reconstruct different components and their locations in space is very important during CTO crossing. This type of information can aid the crossing procedure by providing insights about the best entry point, such as channel locations, and by helping to avoid highly calcified areas, which usually appear in ultrasound imaging as highly scattering regions.
Novel Robots and Robotic Procedures
icon_mobile_dropdown
Co-robotic ultrasound imaging: a cooperative force control approach
Rodolfo Finocchi, Fereshteh Aalamifar, Ting Yun Fang, et al.
Ultrasound (US) imaging remains one of the most commonly used imaging modalities in medical practice. However, due to the physical effort required to perform US imaging tasks, 63-91% of ultrasonographers develop musculoskeletal disorders throughout their careers. The goal of this work is to provide ultrasonographers with a system that facilitates and reduces strain in US image acquisition. To this end, we propose a system for admittance force robot control that uses the six-degree-of-freedom UR5 industrial robot. A six-axis force sensor is used to measure the forces and torques applied by the sonographer on the probe. As the sonographer pushes against the US probe, the robot complies with these forces, following the user's desired path. A one-axis load cell is used to measure contact forces between the patient and the probe in real time. When imaging, the robot augments the axial forces applied by the user, lessening the physical effort required. User studies showed an overall decrease in hand tremor while imaging at high forces, improvements in image stability, and a decrease in difficulty and strenuousness.
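The admittance behaviour can be sketched as a proportional force-to-velocity map with a deadband so that sensor noise does not move the robot; the gain and deadband values are illustrative, not the paper's controller:

```python
import numpy as np

def admittance_velocity(wrench, gain=0.02, deadband=0.5):
    """Map the measured hand wrench (forces in N, torques in Nm) to a
    commanded tool velocity twist; wrenches inside the deadband are
    ignored so the probe holds its pose when untouched."""
    w = np.asarray(wrench, dtype=float)
    if np.linalg.norm(w) < deadband:
        return np.zeros(6)
    return gain * w
```

As the sonographer pushes on the probe, the commanded velocity follows the applied wrench, which is the compliant "follow the user's hand" behaviour described above.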
Concentric agonist-antagonist robots for minimally invasive surgeries
Kaitlin Oliver-Butler, Zane H. Epps, Daniel Caleb Rucker
We present a novel continuum robot design concept, Concentric Agonist-Antagonist Robots (CAAR), that uses push-pull, agonist-antagonist action of a pair of concentric tubes. The CAAR tubes are designed to have noncentral, offset neutral axes, and they are fixed together at their distal ends. Axial base translations then induce bending in the device. A CAAR segment can be created by selectively cutting asymmetric notches into the profile of two stock tubes, which relocates the neutral bending plane away from the center of the inner lumen. Like conventional concentric-tube robots (CTRs) based on counter-rotating precurved tubes, a CAAR can be made at very small scales and contain a large, open lumen. In contrast with CTRs, the CAAR concept has no elastic stability issues, offers a larger range of motion, and has lower overall stiffness. Furthermore, by varying the position of the neutral axes along the length of each tube, arbitrary, variable curvature actuation modes can be achieved. Precurving the tubes can additionally increase the workspace of a single segment. A single two-tube assembly can be used to create 3 degree-of-freedom (DOF) robot segments, and multiple segments can be deployed concentrically. Both additive manufacturing and traditional machining of stock tubes can create and customize the geometry and performance of the CAAR. In this paper, we explore the CAAR concept, provide kinematic and static models, and experimentally evaluate the model with both a straight and a precurved CAAR. We conclude with a discussion of the significance and our plans for future work.
Robotically assisted ureteroscopy for kidney exploration
Hadi F. Talari, Reza Monfaredi, Emmanuel Wilson, et al.
Ureteroscopy is a minimally invasive procedure for diagnosis and treatment of urinary tract pathology. Ergonomic and visualization challenges as well as radiation exposure are limitations to conventional ureteroscopy. Therefore, we have developed a robotic system to “power drive” a flexible ureteroscope with 3D tip tracking and pre-operative image overlay. The proposed system was evaluated using a kidney phantom registered to pre-operative MR images. Initial experiments show the potential of the device to provide additional assistance, precision, and guidance during urology procedures.
Optimized positioning of autonomous surgical lamps
Jörn Teuber, Rene Weller, Ron Kikinis, et al.
We consider the problem of automatically finding optimal positions of surgical lamps throughout an entire surgical procedure, assuming that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of such robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, partly conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing lamp movement in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Due to the conflicting objectives, there is usually not a single optimal solution for such problems, but a set of solutions that forms a Pareto front. When our algorithm selects a solution from this set, it must additionally consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of parameter presets and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures not only the surgical site but also the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery, which is available for use for scientific purposes. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on only a small portion of it.
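Selecting from the Pareto set first requires extracting the non-dominated candidates, which can be sketched as follows; the objective values are hypothetical:

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated candidates when every objective
    (e.g. lamp obstruction, lamp travel) is minimized."""
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i, c in enumerate(costs):
        dominated = any(
            np.all(costs[j] <= c) and np.any(costs[j] < c)
            for j in range(len(costs)) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# candidate lamp placements: (obstruction cost, movement cost)
front = pareto_front([[1, 5], [2, 2], [5, 1], [3, 3], [2, 6]])
```

The meta-optimization's presets would then pick a single solution from this front according to the surgeon's preferred trade-off.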
Analysis of a concentric-tube robot design and feasibility for endoscopic deployment
Ryan Ponten, Caroline B. Black, Andrew J. Russ, et al.
An intraluminal endoscopic approach is desirable for most colonoscopic procedures and is growing in favor for other surgeries as tools are enhanced. Flexible robotic manipulators could further enhance the dexterity and precision of commercial endoscopic systems. In this paper, we explore the capabilities of concentric tube robots to work as tool manipulators at the tip of a colonoscope to perform endoscopic submucosal dissection (ESD) and endoscopic full-thickness resection (EFTR). We provide an overview of the kinematic modeling of these manipulators, a design of a prototype manipulator, and the transmission actuation system. Our analysis examines the workspace and stiffness of these manipulators when controlled at the tip of a colonoscope. We compare the results to reported surgical requirements and propose solutions for enhancing their effectiveness, including notching tubes with a larger Young's modulus. We also determine the resolution and accuracy of the actuation system.
Cardiac Procedures
icon_mobile_dropdown
Patient-specific pediatric silicone heart valve models based on 3D ultrasound
Anna Ilina, Andras Lasso, Matthew A. Jolley, et al.
PURPOSE: Patient-specific heart and valve models have shown promise as training and planning tools for heart surgery, but physically realistic valve models remain elusive. Available proprietary, simulation-focused heart valve models are generic adult mitral valves and do not allow for patient-specific modeling as may be needed for rare diseases such as congenitally abnormal valves. We propose creating silicone valve models from a 3D-printed plastic mold as a solution that can be adapted to any individual patient and heart valve at a fraction of the cost of direct 3D-printing using soft materials. METHODS: Leaflets of a pediatric mitral valve, a tricuspid valve in a patient with hypoplastic left heart syndrome, and a complete atrioventricular canal valve were segmented from ultrasound images. Custom software was developed to automatically generate molds for each valve based on the segmentation. These molds were 3D-printed and used to make silicone valve models. The models were designed with cylindrical rims of different sizes surrounding the leaflets, to show the outline of the valve and add rigidity. Pediatric cardiac surgeons practiced suturing on the models and evaluated them for use as surgical planning and training tools. RESULTS: Five out of six surgeons reported that the valve models would be very useful as training tools for cardiac surgery. In this first iteration of valve models, leaflets were felt to be unrealistically thick or stiff compared to real pediatric leaflets. A thin tube rim was preferred for valve flexibility. CONCLUSION: The valve models were well received and considered to be valuable and accessible tools for heart valve surgery training. Further improvements will be made based on surgeons' feedback.
Patient-specific indirectly 3D printed mitral valves for pre-operative surgical modelling
Olivia Ginty, John Moore, Wenyao Xia, et al.
Significant mitral valve regurgitation affects over 2% of the population. Over the past few decades, mitral valve (MV) repair has become the preferred treatment option, producing better patient outcomes than MV replacement, but requiring more expertise. Recently, 3D printing has been used to assist surgeons in planning optimal treatments for complex surgery, thus increasing the experience of surgeons and the success of MV repairs. However, while commercially available 3D printers are capable of printing soft, tissue-like material, they cannot replicate the demanding combination of echogenicity, physical flexibility and strength of the mitral valve. In this work, we propose the use of trans-esophageal echocardiography (TEE) 3D image data and inexpensive 3D printing technology to create patient-specific mitral valve models. Patient-specific 3D TEE images were segmented and used to generate a profile of the mitral valve leaflets. This profile was 3D printed and integrated into a mold to generate a silicone valve model that was placed in a dynamic heart phantom. Our primary goal is to use silicone models to assess different repair options prior to surgery, in the hope of optimizing patient outcomes. As a corollary, a database of patient-specific models can then be used as a trainer for new surgeons, using a beating heart simulator to assess success. The current work reports preliminary results, quantifying basic morphological properties. The models were assessed using 3D TEE images, as well as 2D and 3D Doppler images for comparison to the original patient TEE data.
Real-time catheter localization and visualization using three-dimensional echocardiography
Pawel Kozlowski, Raja Sekhar Bandaru, Jan D'hooge, et al.
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method called Delay and Standard Deviation (DASD) beamforming to 3D in order to enhance specular reflections. The beam-formed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue and the DASD beam-formed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
Integrating atlas and graph cut methods for right ventricle blood-pool segmentation from cardiac cine MRI
Segmentation of the right ventricle from cardiac MRI can be used to build pre-operative anatomical heart models to precisely identify regions of interest during minimally invasive therapy. Furthermore, many functional parameters of the right heart, such as right ventricular volume, ejection fraction, myocardial mass and thickness, can also be assessed from the segmented images. To obtain an accurate and computationally efficient segmentation of the right ventricle from cardiac cine MRI, we propose a segmentation algorithm formulated as an energy minimization problem in a graph. A shape prior, obtained by propagating labels from an average atlas using affine registration, is incorporated into the graph framework to overcome problems in ill-defined image regions. The optimal segmentation corresponding to the labeling with minimum energy configuration of the graph is obtained via graph-cuts and is iteratively refined to produce the final right ventricle blood pool segmentation. We quantitatively compare the segmentation results obtained from our algorithm to the provided gold-standard expert manual segmentation for 16 cine-MRI datasets available through the MICCAI 2012 Cardiac MR Right Ventricle Segmentation Challenge according to several similarity metrics, including Dice coefficient, Jaccard coefficient, Hausdorff distance, and mean absolute distance error.
Patient-specific atrium models for training and pre-procedure surgical planning
Justin Laing, John Moore, Daniel Bainbridge, et al.
Minimally invasive cardiac procedures requiring a trans-septal puncture such as atrial ablation and MitraClip® mitral valve repair are becoming increasingly common. These procedures are performed on the beating heart, and require clinicians to rely on image-guided techniques. For cases of complex or diseased anatomy, in which fluoroscopic and echocardiography images can be difficult to interpret, clinicians may benefit from patient-specific atrial models that can be used for training, surgical planning, and the validation of new devices and guidance techniques. Computed tomography (CT) images of a patient’s heart were segmented and used to generate geometric models to create a patient-specific atrial phantom. Using rapid prototyping, the geometric models were converted into physical representations and used to build a mold. The atria were then molded using tissue-mimicking materials and imaged using CT. The resulting images were segmented and used to generate a point cloud data set that could be registered to the original patient data. The absolute distance of the two point clouds was compared and evaluated to determine the model’s accuracy. The result when comparing the molded model point cloud to the original data set, resulted in a maximum Euclidean distance error of 4.5 mm, an average error of 0.5 mm and a standard deviation of 0.6 mm. Using our workflow for creating atrial models, potential complications, particularly for complex repairs, may be accounted for in pre-operative planning. The information gained by clinicians involved in planning and performing the procedure should lead to shorter procedural times and better outcomes for patients.
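A point-cloud accuracy evaluation of this kind reduces to a nearest-neighbour query from each molded-model point to the reference cloud. The sketch below is a minimal illustration with toy data, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(model_pts, reference_pts):
    """Nearest-neighbour Euclidean distance from each model point to the
    reference cloud, summarized as (max, mean, std)."""
    dists, _ = cKDTree(reference_pts).query(model_pts)
    return float(dists.max()), float(dists.mean()), float(dists.std())

# toy example: a reference line of points and a copy with one displaced point
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
model = reference + np.array([[0.0, 0.5, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
worst, mean_err, std_err = cloud_to_cloud_error(model, reference)
```

With registered clouds in millimetres, the three returned values correspond to the maximum, average, and standard deviation figures reported in the abstract.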
Joint Session with Conferences 10135 and 10139: Ultrasound Image Guidance
icon_mobile_dropdown
Intraoperative 3D ultrasound guidance system for permanent breast seed implantation
Justin Michael, Daniel Morton, Deidre Batchelar, et al.
Permanent breast seed implantation (PBSI) is a single-visit technique for accelerated partial breast irradiation that uses a template and needles to implant seeds of Pd-103 under 2D ultrasound (US) guidance. The short treatment time is advantageous given the widely hypothesized link between treatment burden and mastectomy use. However, limitations of 2D US contribute to high operator dependence and seed placement error that we aim to address by developing a 3D US guidance system. A 3D US scanner for PBSI and a mechanism for template localization have been developed and validated. The 3D US system mechatronically moves and tracks a 2D US transducer over a 5 cm translation and 60° tilt, reconstructing the 2D images into a 3D volume as they are acquired. Additionally, a localizing arm, tracked via encoded joints and mounted to the scanner, determines template position by localizing divots on a modified needle template. Volume reconstruction was validated using linear measurements of a grid phantom and volumetric measurements of two surgical cavity phantoms. Localizing arm measurement accuracy was established using a testing jig with divots at known positions. Imaging volume was rigidly registered to scanner geometry using a string phantom mounted to a test jig. Lastly, volunteer scans were conducted to demonstrate clinical applicability. Median linear and average volumetric measurements were within ±1.4% of nominal and ±4.1% of water displacement measurements, respectively. Median measurement accuracy of the localizing arm was 0.475 mm. Imaging volume target registration error was 0.458 mm. Volunteer scans produced clinical quality images.
Evaluation of an interactive ultrasound-based breast tumor contouring workflow
Aniqah T. Mair, Thomas A. Vaughan, Tamas Ungi, et al.
PURPOSE: Computer-navigated breast tumor excision using tracked ultrasound is a technique for performing lumpectomies in early-stage breast cancer. An interactive method is used to contour tumors intra-operatively for excision. We evaluated this method's effectiveness in contouring the entire tumor with minimal inclusion of healthy tissue in the excision volume. Additionally, we investigated the possibility of adding a safety margin to the contoured volume to ensure that the entire tumor is contained by the contour. METHODS: We conducted a study in which 10 participants contoured 5 tumors each using the intra-operative breast tumor contouring system. We analyzed their interactions with the system and their opinions of the contouring workflow. RESULTS: We found that only 0.19% of the tumor volume was not contained by the contour on average. The addition of a 0.4 mm safety margin to the final tumor contour guaranteed that the entire tumor would be contained. We also found a correlation between the amount of time spent on contour verification and excess healthy tissue included in the contour. Users' perceptions of how well they excluded excess healthy tissue strongly correlated with reality. CONCLUSIONS: This workflow ensures that only a small amount of tumor volume is not contained by the contour and gives the radiologist confidence that they have contained the entire tumor in their contour. With the addition of a safety margin to the resulting tumor contour, the tumor can be completely contained.
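Adding a fixed safety margin to a contoured volume amounts to a morphological dilation of the binary contour mask. The sketch below is a minimal illustration; the pixel spacing and margin values are invented for the example, and it is not the system described above.

```python
import numpy as np
from scipy import ndimage

def add_safety_margin(mask, margin_mm, pixel_mm):
    """Grow a binary contour mask by an isotropic safety margin,
    rounded to whole pixels, via morphological dilation."""
    iters = max(1, int(round(margin_mm / pixel_mm)))
    return ndimage.binary_dilation(mask, iterations=iters)

# toy contour: a single "tumor" pixel on a 0.4 mm grid, grown by a 0.4 mm margin
tumor = np.zeros((7, 7), dtype=bool)
tumor[3, 3] = True
with_margin = add_safety_margin(tumor, margin_mm=0.4, pixel_mm=0.4)
```

With the default connectivity, one iteration grows the single pixel into a 5-pixel cross, i.e. a one-pixel margin in each axis direction.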
Models of temporal enhanced ultrasound data for prostate cancer diagnosis: the impact of time-series order
Layan Nahlawi, Caroline Goncalves, Farhad Imani, et al.
Recent studies have shown the value of Temporal Enhanced Ultrasound (TeUS) imaging for tissue characterization in transrectal ultrasound-guided prostate biopsies. Here, we present results of experiments designed to study the impact of temporal order of the data in TeUS signals. We assess the impact of variations in temporal order on the ability to automatically distinguish benign prostate-tissue from malignant tissue. We have previously used Hidden Markov Models (HMMs) to model TeUS data, as HMMs capture temporal order in time series. In the work presented here, we use HMMs to model malignant and benign tissues; the models are trained and tested on TeUS signals while introducing variation to their temporal order. We first model the signals in their original temporal order, followed by modeling the same signals under various time rearrangements. We compare the performance of these models for tissue characterization. Our results show that models trained over the original order-preserving signals perform statistically significantly better for distinguishing between malignant and benign tissues, than those trained on rearranged signals. The performance degrades as the amount of temporal-variation increases. Specifically, accuracy of tissue characterization decreases from 85% using models trained on original signals to 62% using models trained and tested on signals that are completely temporally-rearranged. These results indicate the importance of order in characterization of tissue malignancy from TeUS data.
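The central claim above, that temporal order carries diagnostic information, can be illustrated without an HMM: a simple order-sensitive statistic such as lag-1 autocorrelation collapses once a series is rearranged. The sketch below uses a synthetic trace, not TeUS data, and is only an analogy to the paper's experiment.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation: one simple statistic an order-aware model
    (such as an HMM) can exploit but temporal shuffling destroys."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 200)
ordered = np.sin(t) + 0.1 * rng.standard_normal(t.size)  # smooth, ordered trace
shuffled = rng.permutation(ordered)                      # temporally rearranged
```

The ordered trace has strong lag-1 autocorrelation while the shuffled copy, with identical marginal statistics, has almost none, mirroring how classification accuracy degrades with increasing rearrangement.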
Anatomical Measurement and Respiratory Tracking
icon_mobile_dropdown
Interpolation of 3D slice volume data for 3D printing
Samuel Littley, Irina Voiculescu
Medical imaging from CT and MRI scans has become essential to clinicians for diagnosis, treatment planning and even prevention of a wide array of conditions. The presentation of image data volumes as 2D slice series provides some challenges with visualising internal structures. 3D reconstructions of organs and other tissue samples from data with low scan resolution lead to a ‘stepped’ appearance. This paper demonstrates how to improve 3D visualisation of features and automated preparation for 3D printing from such low resolution data, using novel techniques for morphing from one slice to the next. The boundary of the starting contour is grown until it matches the boundary of the ending contour by adapting a variant of the Fast Marching Method (FMM). Our spoke-based approach generates a scalar speed field for FMM by estimating distances to boundaries with line segments connecting the two boundaries. These can be regularly spaced radial spokes or spokes at radial extrema. We introduce clamped FMM by running the algorithm outwards from the smaller boundary and inwards from the larger boundary and combining the two runs to achieve FMM growth stability near the two region boundaries. Our method inserts a series of uniformly distributed intermediate contours between each pair of consecutive slices from the scan volume, thus creating smoother feature boundaries. Whilst hard to quantify, our overall results give clinicians an evidently improved tangible and tactile representation of the tissues that they can examine more easily and even handle.
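For intuition, a simpler relative of the clamped-FMM morphing described above is classic shape-based interpolation, which blends signed distance fields of consecutive binary slices. The sketch below uses this stand-in technique on synthetic slices; it is not the paper's spoke-based method.

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask):
    """Signed distance field: positive outside the region, negative inside."""
    return (ndimage.distance_transform_edt(~mask)
            - ndimage.distance_transform_edt(mask))

def intermediate_contour(mask_a, mask_b, t):
    """Intermediate binary slice at fraction t in [0, 1] by linearly
    blending signed distance fields (shape-based interpolation)."""
    sdf = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sdf < 0.0

# synthetic slices: a small and a large disk; the blend gives a mid-sized disk
yy, xx = np.mgrid[0:15, 0:15]
r = np.hypot(yy - 7, xx - 7)
small, large = r <= 2.0, r <= 5.0
middle = intermediate_contour(small, large, 0.5)
```

Inserting several such intermediate contours between consecutive scan slices smooths the ‘stepped’ appearance of low-resolution reconstructions.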
Optimization of real-time rigid registration motion compensation for prostate biopsies using 2D/3D ultrasound
Derek J. Gillies, Lori Gardi, Ren Zhao, et al.
During image-guided prostate biopsy, needles are targeted at suspicious tissues to obtain specimens that are later examined histologically for cancer. Patient motion causes inaccuracies when using MR-transrectal ultrasound (TRUS) image fusion approaches used to augment the conventional biopsy procedure. Motion compensation using a single, user-initiated correction can be performed to temporarily compensate for prostate motion, but a real-time continuous registration offers an improvement to clinical workflow by reducing user interaction and procedure time. An automatic motion compensation method, approaching the frame rate of a TRUS-guided system, has been developed for use during fusion-based prostate biopsy to improve image guidance. 2D and 3D TRUS images of a prostate phantom were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization, with user-initiated and continuous registration techniques. The user-initiated correction achieved computation times of 78 ± 35 ms, 74 ± 28 ms, and 113 ± 49 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.5 ± 0.5 mm, 1.5 ± 1.4 mm, and 1.5 ± 1.6°. The continuous correction performed significantly faster (p < 0.05) than the user-initiated method, with observed computation times of 31 ± 4 ms, 32 ± 4 ms, and 31 ± 6 ms for in-plane, out-of-plane, and roll motions, respectively, corresponding to errors of 0.2 ± 0.2 mm, 0.6 ± 0.5 mm, and 0.8 ± 0.4°.
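The registration loop described above, normalized cross-correlation optimized with Powell's derivative-free method, can be sketched for the translation-only case as follows. The images are synthetic; the authors' implementation also handles out-of-plane and roll motion and runs on real TRUS data.

```python
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_translation(fixed, moving):
    """Translation-only rigid registration maximizing NCC with Powell's method."""
    def cost(p):
        return -ncc(fixed, ndimage.shift(moving, p, order=1, mode='nearest'))
    return optimize.minimize(cost, x0=np.zeros(2), method='Powell').x

# synthetic "prostate" blob and a copy displaced by (3, -2) pixels
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32.0) ** 2 + (xx - 30.0) ** 2) / 50.0)
moving = ndimage.shift(fixed, (3.0, -2.0))
recovered = register_translation(fixed, moving)  # approximately (-3, 2)
```

Because both NCC and the shift interpolation are cheap, a loop of this shape can run continuously at tens of milliseconds per correction, which is the regime the abstract reports.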
Open-source software for collision detection in external beam radiation therapy
Vinith M. Suriyakumar, Renee Xu, Csaba Pinter, et al.
PURPOSE: Collision detection for external beam radiation therapy (RT) is important for eliminating the need for dry runs that aim to ensure patient safety. Commercial treatment planning systems (TPS) offer this feature but they are expensive and proprietary. Cobalt-60 RT machines are a viable solution to RT practice in low-budget scenarios. However, such clinics are hesitant to invest in these machines due to a lack of affordable treatment planning software. We propose the creation of an open-source room's eye view visualization module with automated collision detection as part of the development of an open-source TPS. METHODS: An openly accessible linac 3D geometry model is sliced into the different components of the treatment machine. The model's movements are based on the International Electrotechnical Commission standard. Automated collision detection is implemented between the treatment machine's components. RESULTS: The room's eye view module was built in C++ as part of SlicerRT, an RT research toolkit built on 3D Slicer. The module was tested using head and neck and prostate RT plans. These tests verified that the module accurately modeled the movements of the treatment machine and radiation beam. Automated collision detection was verified using tests where geometric parameters of the machine's components were changed, demonstrating accurate collision detection. CONCLUSION: Room's eye view visualization and automated collision detection are essential in a Cobalt-60 treatment planning system. Development of these features will advance the creation of an open-source TPS that will potentially help increase the feasibility of adopting Cobalt-60 RT.
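A common broad-phase building block for automated collision detection of this kind is an axis-aligned bounding-box test between machine components. The sketch below uses hypothetical gantry and couch extents; it is not the SlicerRT implementation, which works with the actual component geometry.

```python
def aabb_collide(box_a, box_b):
    """Broad-phase collision test between two axis-aligned bounding boxes,
    each given as ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Boxes intersect iff their extents overlap on all three axes."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

# hypothetical extents (cm) for a gantry head and two couch positions
gantry = ((-10, -10, 80), (10, 10, 100))
couch_clear = ((-25, -40, 0), (25, 40, 75))
couch_raised = ((-25, -40, 0), (25, 40, 85))
```

A broad-phase pass like this quickly rules out most component pairs; only pairs whose boxes overlap need an exact mesh-level test.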
Feature-based respiratory motion tracking in native fluoroscopic sequences for dynamic roadmaps during minimally invasive procedures in the thorax and abdomen
Martin G. Wagner, Paul F. Laeseke, Tilman Schubert, et al.
Fluoroscopic image guidance for minimally invasive procedures in the thorax and abdomen suffers from respiratory and cardiac motion, which can cause severe subtraction artifacts and inaccurate image guidance. This work proposes novel techniques for respiratory motion tracking in native fluoroscopic images as well as a model-based estimation of vessel deformation. This would allow compensation for respiratory motion during the procedure and therefore simplify the workflow for minimally invasive procedures such as liver embolization. The method first establishes dynamic motion models for both the contrast-enhanced vasculature and curvilinear background features based on a native (non-contrast) and a contrast-enhanced image sequence acquired prior to device manipulation, under free breathing conditions. The model of vascular motion is generated by applying the diffeomorphic demons algorithm to an automatic segmentation of the subtraction sequence. The model of curvilinear background features is based on feature tracking in the native sequence. The two models establish the relationship between the respiratory state, which is inferred from curvilinear background features, and the vascular morphology during that same respiratory state. During subsequent fluoroscopy, curvilinear feature detection is applied to determine the appropriate vessel mask to display. The result is a dynamic motion-compensated vessel mask superimposed on the fluoroscopic image. Quantitative evaluation of the proposed methods was performed using a digital 4D CT-phantom (XCAT), which provides realistic human anatomy including sophisticated respiratory and cardiac motion models. Four groups of datasets were generated, where different parameters (cycle length, maximum diaphragm motion and maximum chest expansion) were modified within each image sequence.
Each group contains 4 datasets consisting of the initial native and contrast-enhanced sequences as well as a sequence in which the respiratory motion is tracked. The respiratory motion tracking error was between 1.00% and 1.09%. The estimated dynamic vessel masks yielded a Sørensen-Dice coefficient between 0.94 and 0.96. Finally, the accuracy of the vessel contours was measured in terms of the 99th percentile of the error, which ranged between 0.64 and 0.96 mm. The presented results show that the approach is feasible for respiratory motion tracking and compensation and could therefore considerably improve the workflow of minimally invasive procedures in the thorax and abdomen.
Upper ankle joint space detection on low contrast intraoperative fluoroscopic C-arm projections
Sarina Thomas, Marc Schnetzke, Michael Brehler, et al.
Intraoperative mobile C-arm fluoroscopy is widely used for interventional verification in trauma surgery, high flexibility combined with low cost being the main advantages of the method. However, the lack of global device-to-patient orientation is challenging when comparing the acquired data to other intrapatient datasets. In upper ankle joint fracture reduction accompanied by an unstable syndesmosis, a comparison to the unfractured contralateral side is helpful for verification of the reduction result. To reduce dose and operation time, our approach aims at the comparison of single projections of the unfractured ankle with volumetric images of the reduced fracture. For precise assessment, a pre-alignment of both datasets is a crucial step. We propose a contour extraction pipeline to estimate the joint space location for a pre-alignment of fluoroscopic C-arm projections containing the upper ankle joint. A quadtree-based hierarchical variance comparison extracts potential feature points, and a Hough transform is applied to identify bone shaft lines together with the tibiotalar joint space. By using this information we can define the coarse orientation of the projections independent of the ankle pose during acquisition in order to align those images to the volume of the fractured ankle. The proposed method was evaluated on thirteen cadaveric datasets consisting of 100 projections each with manually adjusted image planes by three trauma surgeons. The results show that the method can be used to detect the joint space orientation. The correlation between angle deviation and anatomical projection direction gives valuable input on the acquisition direction for future clinical experiments.
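The Hough-transform step used to identify bone shaft lines can be sketched as a rho-theta voting accumulator over feature points. The points below are synthetic, not the quadtree-extracted features of the paper.

```python
import numpy as np

def hough_accumulator(points, n_theta=90, n_rho=64):
    """Rho-theta Hough voting over 2D feature points: each point votes for
    every line rho = x*cos(theta) + y*sin(theta) passing through it."""
    pts = np.asarray(points, dtype=float)
    rho_max = float(np.abs(pts).max()) * np.sqrt(2.0) + 1.0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + rho_max) / (2.0 * rho_max) * (n_rho - 1)).astype(int)
        acc[bins, np.arange(n_theta)] += 1
    return acc, thetas

# collinear feature points along a vertical "bone shaft" line x = 5
shaft = [(5.0, float(y)) for y in range(10)]
acc, thetas = hough_accumulator(shaft)
```

Collinear points concentrate their votes in a single accumulator cell, so shaft lines appear as peaks whose (rho, theta) coordinates give the line orientation used for pre-alignment.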
Segmentation
icon_mobile_dropdown
Boundary overlap for medical image segmentation evaluation
Varduhi Yeghiazaryan, Irina Voiculescu
All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
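One simple member of this boundary-overlap family can be sketched by restricting a region-overlap measure to a band around each region's boundary. The band width below plays the role of the free parameter discussed above, but the concrete implementation is an illustrative sketch, not the authors' measure.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Classic region-overlap Dice coefficient for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * float(np.logical_and(a, b).sum()) / float(a.sum() + b.sum())

def boundary_band(mask, width=2):
    """Pixels within `width` of the region boundary (dilation minus erosion)."""
    mask = mask.astype(bool)
    return (ndimage.binary_dilation(mask, iterations=width)
            & ~ndimage.binary_erosion(mask, iterations=width))

def boundary_dice(a, b, width=2):
    """Dice overlap restricted to the two boundary bands: a hybrid measure
    sensitive to errors near region boundaries."""
    return dice(boundary_band(a, width), boundary_band(b, width))

# toy example: a reference square and a segmentation shifted by two pixels
ref = np.zeros((20, 20), dtype=bool); ref[5:15, 5:15] = True
seg = np.zeros((20, 20), dtype=bool); seg[7:17, 7:17] = True
```

For the shifted square, plain Dice stays fairly high while the boundary-restricted score drops sharply, illustrating why boundary-weighted measures penalize localization errors that region overlap alone tends to mask.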
DeepInfer: open-source deep learning deployment toolkit for image-guided therapy
Alireza Mehrtash, Mehran Pesteie, Jorden Hetherington, et al.
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
Deep convolutional neural network for prostate MR segmentation
Automatic segmentation of the prostate in magnetic resonance imaging (MRI) has many applications in prostate cancer diagnosis and therapy. We propose a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage based on prostate MR images and the corresponding ground truths, and learns to make inference for pixel-wise segmentation. Experiments were performed on our in-house data set, which contains prostate MR images of 20 patients. The proposed CNN model obtained a mean Dice similarity coefficient of 85.3%±3.2% as compared to the manual segmentation. Experimental results show that our deep CNN model could yield satisfactory segmentation of the prostate.
Deep residual networks for automatic segmentation of laparoscopic videos of the liver
Eli Gibson, Maria R. Robu, Stephen Thompson, et al.
Motivation: For primary and metastatic liver cancer patients undergoing liver resection, a laparoscopic approach can reduce recovery times and morbidity while offering equivalent curative results; however, only about 10% of tumours reside in anatomical locations that are currently accessible for laparoscopic resection. Augmenting laparoscopic video with registered vascular anatomical models from pre-procedure imaging could support using laparoscopy in a wider population. Segmentation of liver tissue on laparoscopic video supports the robust registration of anatomical liver models by filtering out false anatomical correspondences between pre-procedure and intra-procedure images. In this paper, we present a convolutional neural network (CNN) approach to liver segmentation in laparoscopic liver procedure videos. Method: We defined a CNN architecture comprising fully-convolutional deep residual networks with multi-resolution loss functions. The CNN was trained in a leave-one-patient-out cross-validation on 2050 video frames from 6 liver resections and 7 laparoscopic staging procedures, and evaluated using the Dice score. Results: The CNN yielded segmentations with Dice scores ≥0.95 for the majority of images; however, the inter-patient variability in median Dice score was substantial. Four failure modes were identified from low scoring segmentations: minimal visible liver tissue, inter-patient variability in liver appearance, automatic exposure correction, and pathological liver tissue that mimics non-liver tissue appearance. Conclusion: CNNs offer a feasible approach for accurately segmenting liver from other anatomy on laparoscopic video, but additional data or computational advances are necessary to address challenges due to the high inter-patient variability in liver appearance.
Spine segmentation from C-arm CT data sets: application to region-of-interest volumes for spinal interventions
C. Buerger, C. Lorenz, D. Babic, et al.
Spinal fusion is a common procedure to stabilize the spinal column by fixating parts of the spine. In such procedures, metal screws are inserted through the patient's back into a vertebra, and the screws of adjacent vertebrae are connected by metal rods to generate a fixed bridge. In these procedures, 3D image guidance for intervention planning and outcome control is required. Here, for anatomical guidance, an automated approach for vertebra segmentation from C-arm CT images of the spine is introduced and evaluated. As a prerequisite, 3D C-arm CT images are acquired covering the vertebrae of interest. An automatic model-based segmentation approach is applied to delineate the outline of the vertebrae of interest. The segmentation approach is based on 24 partial models of the cervical, thoracic and lumbar vertebrae which aggregate information about (i) the basic shape itself, (ii) trained features for image-based adaptation, and (iii) potential shape variations. Since the volume data sets generated by the C-arm system are limited to a certain region of the spine, the target vertebra and hence the initial model position are assigned interactively. The approach was trained and tested on 21 human cadaver scans. A 3-fold cross validation against ground truth annotations yields overall mean segmentation errors of 0.5 mm for T1 to 1.1 mm for C6. The results are promising and show potential to support the clinician in pedicle screw path and rod planning to allow accurate and reproducible insertions.
Liver segmentation in color images
Burton Ma, T. Peter Kingham, Michael I. Miga, et al.
We describe the use of a deep learning method for semantic segmentation of the liver from color images. Our intent is to eventually embed a semantic segmentation method into a stereo-vision based navigation system for open liver surgery. Semantic segmentation of the stereo images will allow us to reconstruct a point cloud containing the liver surfaces and excluding all other non-liver structures. We trained a deep learning algorithm using 136 images and 272 augmented images computed by rotating the original images. We tested the trained algorithm on 27 images that were not used for training purposes. The method achieves an 88% median pixel labeling accuracy over the test images.
Poster Session
icon_mobile_dropdown
Integration of myocardial scar identified by preoperative delayed contrast-enhanced MRI into a high-resolution mapping system for planning and guidance of VT ablation procedures
Myocardial scarring creates a substrate for reentrant circuits which can lead to ventricular tachycardia. In ventricular catheter ablation therapy, regions of myocardial scarring are targeted to interrupt arrhythmic electrical pathways. Low voltage regions are a surrogate for myocardial scar and are identified by generating an electroanatomic map at the start of the procedure. Recent efforts have focused on integration of preoperative scar information generated from delayed contrast-enhanced MR imaging to augment intraprocedural information. In this work, we describe an initial feasibility study of integrating preoperative MRI-derived scar maps into a high-resolution mapping system to improve planning and guidance of VT ablation procedures.
A system for endobronchial video analysis
Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient’s airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprising a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval are facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
Evaluation of lung tumor motion management in radiation therapy with dynamic MRI
Seyoun Park, Rana Farah, Steven M. Shea, et al.
Surrogate-based tumor motion estimation and tracking methods are commonly used in radiotherapy despite the lack of continuous real-time 3D tumor and surrogate data. In this study, we propose a method to simultaneously track the tumor and external surrogates with dynamic MRI, which allows us to evaluate their reproducible correlation. Four MRI-compatible fiducials are placed on the patient’s chest and upper abdomen, and multi-slice 2D cine MRIs are acquired to capture the lung and whole tumor, followed by two-slice 2D cine MRIs to simultaneously track the tumor and fiducials, all in sagittal orientation. A phase-binned 4D-MRI is first reconstructed from multi-slice MR images using body area as a respiratory surrogate and group-wise registration. The 4D-MRI provides 3D template volumes for different breathing phases. 3D tumor position is calculated by 3D-2D template matching in which 3D tumor templates from the 4D-MRI reconstruction and the 2D cine MRIs from the two-slice tracking dataset are registered. 3D trajectories of the external surrogates are derived via matching a 3D geometrical model to the fiducial segmentations on the 2D cine MRIs. We tested our method on five lung cancer patients. Internal target volume from 4D-CT showed an average sensitivity of 86.5% compared to the actual tumor motion over 5 min. 3D tumor motion correlated with the external surrogate signal, but showed a noticeable phase mismatch. The 3D tumor trajectory showed significant cycle-to-cycle variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, there was significant phase mismatch between surrogate signals obtained from fiducials at different locations.
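Template matching of the kind described above is typically driven by a similarity score such as normalized cross-correlation (NCC). A 1D toy sketch of NCC-based matching (purely illustrative; the signals are hypothetical and the paper's 3D-2D registration is far richer):

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_shift(template, signal):
    """Slide the template over the signal; return the shift maximizing NCC."""
    shifts = range(len(signal) - len(template) + 1)
    return max(shifts, key=lambda s: ncc(template, signal[s:s + len(template)]))

# toy 1D intensity profile: the template pattern appears at offset 3
signal = [0, 0, 1, 5, 9, 5, 1, 0, 0]
template = [5, 9, 5]
print(best_shift(template, signal))  # 3
```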
Automatic detection of measurement points for non-contact vibrometer-based diagnosis of cardiac arrhythmias
Jürgen Metzler, Kristian Kroschel, Dieter Willersinn
Monitoring of the heart rhythm is the cornerstone of the diagnosis of cardiac arrhythmias. It is done by means of electrocardiography, which relies on electrodes attached to the skin of the patient. We present a new system approach based on the so-called vibrocardiogram that allows an automatic non-contact registration of the heart rhythm. Because of the contactless principle, the technique offers potential application advantages in medical fields like emergency medicine (burn patients) or premature baby care where adhesive electrodes are not easily applicable. A laser-based, mobile, contactless vibrometer for on-site diagnostics that works on the principle of laser Doppler vibrometry allows the acquisition of vital functions in the form of a vibrocardiogram. Preliminary clinical studies at the Klinikum Karlsruhe have shown that the region around the carotid artery and the chest region are appropriate for this purpose. However, the challenge is to find a suitable measurement point in these parts of the body, which differs from person to person due to, e.g., physiological properties of the skin. Therefore, we propose a new Microsoft Kinect-based approach. When a suitable measurement area on the appropriate parts of the body is detected by processing the Kinect data, the vibrometer is automatically aligned on an initial location within this area. Then, vibrocardiograms at different locations within this area are successively acquired until a sufficient measuring quality is achieved. This optimal location is found by exploiting the autocorrelation function.
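The autocorrelation criterion mentioned in the last sentence can be sketched as follows: a strongly periodic vibrocardiogram has a high autocorrelation peak at a nonzero lag. This toy example (hypothetical signal, not the authors' quality measure) illustrates the idea:

```python
def autocorr(x, lag):
    """Normalized autocorrelation of signal x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / var

def periodicity_score(x, min_lag, max_lag):
    """Height of the strongest autocorrelation peak away from lag 0 --
    a simple proxy for the measurement quality of a periodic signal."""
    return max(autocorr(x, lag) for lag in range(min_lag, max_lag + 1))

# toy signal with period 4: the score peaks near lag 4
x = [0, 1, 0, -1] * 8
print(round(periodicity_score(x, 2, 6), 2))  # 0.88
```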
Physiology informed virtual surgical planning: a case study with a virtual airway surgical planner and BioGears
Lucas Potter, Sreekanth Arikatla, Aaron Bray, et al.
Stenosis of the upper airway affects approximately 1 in 200,000 adults per year [1], and occurs in neonates as well [2]. Its treatment is often dictated by institutional factors and clinicians’ experience or preferences [3]. Objective and quantitative methods of evaluating treatment options hold the potential to improve care in stenosis patients. Virtual surgical planning software tools are critically important for this. The Virtual Pediatric Airway Workbench (VPAW) is a software platform designed and evaluated for upper airway stenosis treatment planning. It incorporates CFD simulation and geometric authoring with objective metrics from both that help in informed evaluation and planning. However, this planner currently lacks physiological information which could impact the surgical planning outcomes. In this work, we integrated a lumped parameter, model-based human physiological engine called BioGears with VPAW. We demonstrated the use of a physiology-informed virtual surgical planning platform for patient-specific stenosis treatment planning. The preliminary results show that incorporating patient-specific physiology in the pretreatment plan would play an important role in patient-specific surgical trainers and planners for airway surgery and other types of surgery that are significantly impacted by physiological conditions during surgery.
Is pose-based pivot calibration superior to sphere fitting?
Burton Ma, Niloofar Banihaveb, Joy Choi, et al.
Calibrating a pointing stylus tracked via a dynamic reference frame (DRF) is often performed by pivoting the stylus on its tip in a divot located at a fixed position in the tracking system coordinate frame. The calibration problem is solved by estimating the location of the fixed divot position. Recent work (Yaniv, "Which pivot calibration?", Proc. SPIE Vol. 9415, 941527, 2015) provided evidence that solving the calibration problem using the measured poses of the stylus during pivoting is superior to fitting the measured position of the DRF during pivoting to a sphere. We constructed an apparatus to acquire very high quality pivoting measurements over a much wider range of pivoting angles than could be obtained by pivoting in a divot. We tested pose-based calibration and sphere fitting and found that sphere fitting was as precise as (if not better than) pose-based calibration when using high-quality pivoting data. We performed a simple simulation where the stylus tip location deviated by a small amount during pivoting and found that the precision of sphere fitting degraded much more than that of pose-based calibration, which suggests that pose-based calibration should be favored over sphere fitting when perfect pivoting is not possible.
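Sphere fitting as compared here is commonly done with an algebraic least-squares fit: writing the sphere equation as x² + y² + z² = 2ax + 2by + 2cz + d makes the problem linear in (a, b, c, d). A self-contained sketch of that standard formulation (not the authors' implementation; the sample points are synthetic):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_sphere(pts):
    """Algebraic least-squares sphere fit: x^2+y^2+z^2 = 2ax+2by+2cz+d,
    solved via the normal equations; returns (center, radius)."""
    A = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in pts]
    b = [x * x + y * y + z * z for x, y, z in pts]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(4)]
           for i in range(4)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(4)]
    a, bb, c, d = solve(AtA, Atb)
    r = (d + a * a + bb * bb + c * c) ** 0.5
    return (a, bb, c), r

# six synthetic points on a sphere of radius 2 centered at (1, 2, 3)
pts = [(3, 2, 3), (-1, 2, 3), (1, 4, 3), (1, 0, 3), (1, 2, 5), (1, 2, 1)]
center, r = fit_sphere(pts)
print([round(v, 6) for v in center], round(r, 6))  # [1.0, 2.0, 3.0] 2.0
```

In the pivoting setting, the fitted center estimates the fixed divot position and the radius estimates the stylus tip offset length.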
Online C-arm calibration using a marked guide wire for 3D reconstruction of pulmonary arteries
Étienne Vachon, Joaquim Miró, Luc Duong
3D reconstruction of vessels from 2D X-ray angiography is highly relevant to improve the visualization and assessment of vascular structures such as pulmonary arteries by interventional cardiologists. However, to ensure a robust and accurate reconstruction, C-arm gantry parameters must be properly calibrated to provide clinically acceptable results. Calibration procedures often rely on calibration objects and complex protocols that are not adapted to an intervention context. In this study, a novel calibration algorithm for C-arm gantries is presented using existing instrumentation such as catheters and guide wires. This ensures the availability of a minimum set of correspondences and implies minimal changes to the clinical workflow. The method was evaluated on simulated data and on retrospective patient datasets. Experimental results on simulated datasets demonstrate a calibration that allows a 3D reconstruction of the guide wire up to a geometric transformation. Experiments with patient datasets show a significant decrease of the reprojection error to 0.17 mm 2D RMS. Consequently, such a procedure might contribute to identifying any calibration drift during the intervention.
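The 2D RMS reprojection error quoted above measures how far projected 3D points land from their 2D detections. A bare pinhole-projection sketch of that metric (illustrative only; the projection matrix and points are hypothetical, not the paper's geometry):

```python
from math import sqrt

def project(P, X):
    """Apply a 3x4 projection matrix to a 3D point (homogeneous division)."""
    h = [sum(P[r][c] * v for c, v in enumerate(list(X) + [1.0])) for r in range(3)]
    return (h[0] / h[2], h[1] / h[2])

def rms_reprojection_error(P, points3d, points2d):
    """2D RMS distance between projected 3D points and their detections."""
    se = [(u - pu) ** 2 + (v - pv) ** 2
          for (pu, pv), (u, v) in ((project(P, X), x)
                                   for X, x in zip(points3d, points2d))]
    return sqrt(sum(se) / len(se))

# toy pinhole: focal length 1000 px, principal point (320, 240)
P = [[1000.0, 0.0, 320.0, 0.0],
     [0.0, 1000.0, 240.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
X3d = [(0.0, 0.0, 100.0), (10.0, 5.0, 100.0)]
x2d = [project(P, X) for X in X3d]                 # perfect detections
print(rms_reprojection_error(P, X3d, x2d))  # 0.0
```

Online calibration methods adjust the gantry parameters inside P to drive this error down using correspondences on the instrumentation.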
On pattern selection for laparoscope calibration
Stephen Thompson, Yannic Meuer, Eddie Edwards, et al.
Camera calibration is a key requirement for augmented reality in surgery. Calibration of laparoscopes presents two challenges that are not sufficiently addressed in the literature. In the case of stereo laparoscopes the small distance (less than 5 mm) between the channels means that the calibration pattern is an order of magnitude more distant than the stereo separation. For laparoscopes in general, if an external tracking system is used, hand-eye calibration is difficult due to the long length of the laparoscope. Laparoscope intrinsic, stereo, and hand-eye calibration all rely on accurate feature point selection and accurate estimation of the camera pose with respect to a calibration pattern. We compare 3 calibration patterns: chessboard, rings, and AprilTags. We measure the error in estimating the camera intrinsic parameters and the camera poses. The accuracy of camera pose estimation will determine the accuracy with which subsequent stereo or hand-eye calibration can be done. We compare the results of repeated real calibrations and simulations using idealised noise, to determine the expected accuracy of different methods and the sources of error. The results do indicate that feature detection based on rings is more accurate than one based on a chessboard; however, this does not necessarily lead to a better calibration. Using a grid with identifiable tags enables detection of features nearer the image boundary, which may improve calibration.
On the nature of data collection for soft-tissue image-to-physical organ registration: a noise characterization study
Jarrod A. Collins, Jon S. Heiselman, Jared A. Weis, et al.
In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface-based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.
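One common way to resample unevenly digitized surface data is voxel-grid averaging, which evens out point density before registration. This is a generic sketch of that idea (the paper does not specify its resampling algorithm; the voxel size and points below are hypothetical):

```python
def resample(points, voxel):
    """Downsample a 3D point cloud by averaging the points in each voxel,
    evening out acquisition density before registration."""
    bins = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        bins.setdefault(key, []).append(p)
    return [tuple(sum(c) / len(ps) for c in zip(*ps)) for ps in bins.values()]

# a densely digitized patch plus a few sparse points elsewhere (mm)
dense = [(0.1 * i, 0.0, 0.0) for i in range(10)]    # 10 points within ~1 mm
sparse = [(50.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
out = resample(dense + sparse, voxel=5.0)
print(len(out))  # 3 points: one per occupied 5 mm voxel
```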
Using an Android application to assess registration strategies in open hepatic procedures: a planning and simulation tool
Sparse surface digitization with an optically tracked stylus for use in an organ surface-based image-to-physical registration is an established approach for image-guided open liver surgery procedures. However, variability in sparse data collections during open hepatic procedures can produce disparity in registration alignments. In part, this variability arises from inconsistencies in the patterns and fidelity of collected intraoperative data. The liver lacks distinct landmarks and experiences considerable soft tissue deformation. Furthermore, data coverage of the organ is often incomplete or unevenly distributed. While more robust feature-based registration methodologies have been developed for image-guided liver surgery, it is still unclear how variation in sparse intraoperative data affects registration. In this work, we have developed an application to allow surgeons to study the effect of surface digitization patterns on registration. Given the deformable nature of soft tissue, we incorporate realistic organ deformation when assessing the fidelity of a rigid registration methodology. We report the construction of our application and preliminary registration results using four participants. Our preliminary results indicate that registration quality improves as users acquire more experience selecting patterns of sparse intraoperative surface data.
Slice-to-volume parametric image registration models with applications to MRI-guided cardiac procedures
L.W. Lorraine Ma, Mehran Ebrahimi
A mathematical formulation for intensity-based slice-to-volume registration is proposed. The approach is flexible and accommodates various regularization schemes, similarity measures, and optimizers. The framework is evaluated by registering 2D and 3D cardiac magnetic resonance (MR) images obtained in vivo, aimed at real-time MR-guided applications. Rigid-body and affine transformations are used to validate the parametric model. Target registration error (TRE), Jaccard, and Dice indices are used to evaluate the algorithm and demonstrate the accuracy of the registration scheme on both simulated and clinical data. Registration with the affine model appeared to be more robust than with the rigid model in controlled cases. By simply extending the rigid model to an affine model, alignment of the cardiac region generally improved, without the need for complex dissimilarity measures or regularizers.
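The Jaccard and Dice indices used for evaluation are standard overlap measures between binary masks. A minimal sketch (toy masks, not the paper's data):

```python
def dice(a, b):
    """Dice index of two binary masks given as sets of voxel coordinates."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard (intersection-over-union) index of two binary masks."""
    return len(a & b) / len(a | b)

a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 1), (2, 1)}
print(dice(a, b), round(jaccard(a, b), 3))  # ~0.571 and 0.4
```

Both equal 1 for perfect overlap and 0 for disjoint masks; Dice weights the intersection more heavily than Jaccard.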
Virtual landmarks
Yubing Tong, Jayaram K. Udupa, Dewey Odhner, et al.
Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must be on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
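The recursive PCA subdivision idea can be sketched in 2D, where the principal axis of a point set has a closed form: split the region across its principal axis and take each sub-region's centroid as a (possibly off-surface) landmark. This is a loose illustration, not the authors' 3D method, and the object below is synthetic:

```python
from math import atan2, cos, sin

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def principal_axis(pts):
    """First principal component of 2D points (closed form for 2x2 covariance)."""
    cx, cy = centroid(pts)
    sxx = sum((x - cx) ** 2 for x, y in pts)
    syy = sum((y - cy) ** 2 for x, y in pts)
    sxy = sum((x - cx) * (y - cy) for x, y in pts)
    theta = 0.5 * atan2(2 * sxy, sxx - syy)
    return cos(theta), sin(theta)

def virtual_landmarks(pts, depth):
    """Recursively split the region across its principal axis; each
    sub-region contributes its centroid as a landmark."""
    if depth == 0 or len(pts) < 2:
        return [centroid(pts)]
    (cx, cy), (ax, ay) = centroid(pts), principal_axis(pts)
    lo = [p for p in pts if (p[0] - cx) * ax + (p[1] - cy) * ay < 0]
    hi = [p for p in pts if (p[0] - cx) * ax + (p[1] - cy) * ay >= 0]
    if not lo or not hi:
        return [centroid(pts)]
    return virtual_landmarks(lo, depth - 1) + virtual_landmarks(hi, depth - 1)

pts = [(x, 0.1 * x) for x in range(10)]          # elongated synthetic "object"
print(len(virtual_landmarks(pts, depth=2)))      # 4 landmarks at depth 2
```

Because the splits follow the data's own principal axes and centroids, the resulting landmarks inherit invariance to translation and rotation of the object.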
Skull registration for prone patient position using tracked ultrasound
Grace Underwood, Tamas Ungi, Zachary Baum, et al.
PURPOSE: Tracked navigation has become prevalent in neurosurgery. Problems with registration of a patient and a preoperative image arise when the patient is in a prone position. Surfaces accessible to optical tracking on the back of the head are unreliable for registration. We investigated the accuracy of surface-based registration using points accessible through tracked ultrasound. Using ultrasound allows access to bone surfaces that are not available through optical tracking. Tracked ultrasound could eliminate the need to (i) work under the table for registration and (ii) adjust the tracker between registration and surgery. In addition, tracked ultrasound could provide a non-invasive method in comparison to an alternative method of registration involving screw implantation. METHODS: A phantom study was performed to test the feasibility of tracked ultrasound for registration. An initial registration was performed to partially align the pre-operative computed tomography data and the skull phantom. The initial registration was performed by an anatomical landmark registration. Surface points accessible by tracked ultrasound were collected and used to perform an iterative closest point algorithm. RESULTS: When the surface registration was compared to a ground truth landmark registration, the average TRE was found to be 1.6 ± 0.1 mm and the average distance of points off the skull surface was 0.6 ± 0.1 mm. CONCLUSION: The use of tracked ultrasound is feasible for registration of patients in prone position and eliminates the need to perform registration under the table. The translational component of error found was minimal. Therefore, the amount of TRE in registration is due to a rotational component of error.
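The iterative closest point (ICP) algorithm used here alternates nearest-neighbor matching with a least-squares rigid fit. A compact 2D sketch, where the optimal rotation has a closed form (illustrative only; the "skull" surface and probe points are synthetic, and real implementations work in 3D with SVD-based fits):

```python
from math import atan2, cos, sin, hypot

def closest(p, surface):
    return min(surface, key=lambda q: hypot(q[0] - p[0], q[1] - p[1]))

def rigid_fit(src, dst):
    """Closed-form 2D least-squares rigid transform mapping src onto dst."""
    n = len(src)
    mx, my = sum(p[0] for p in src) / n, sum(p[1] for p in src) / n
    nx, ny = sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n
    s_cos = sum((p[0] - mx) * (q[0] - nx) + (p[1] - my) * (q[1] - ny)
                for p, q in zip(src, dst))
    s_sin = sum((p[0] - mx) * (q[1] - ny) - (p[1] - my) * (q[0] - nx)
                for p, q in zip(src, dst))
    t = atan2(s_sin, s_cos)
    tx = nx - (mx * cos(t) - my * sin(t))
    ty = ny - (mx * sin(t) + my * cos(t))
    return t, tx, ty

def icp(src, surface, iters=10):
    """ICP: match each point to its nearest surface point, solve for the
    rigid transform, apply it, and repeat."""
    for _ in range(iters):
        pairs = [(p, closest(p, surface)) for p in src]
        t, tx, ty = rigid_fit([p for p, q in pairs], [q for p, q in pairs])
        src = [(x * cos(t) - y * sin(t) + tx, x * sin(t) + y * cos(t) + ty)
               for x, y in src]
    return src

surface = [(float(x), 0.0) for x in range(20)]   # flat synthetic "bone" patch
probe = [(x + 0.5, 1.0) for x in range(5, 10)]   # offset digitized points
aligned = icp(probe, surface)
print(round(aligned[0][1], 3))  # 0.0 -- the offset is driven to zero
```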
Comparison of texture synthesis methods for content generation in ultrasound simulation for training
Oliver Mattausch, Elizabeth Ren, Michael Bajka, et al.
Navigation and interpretation of ultrasound (US) images require substantial expertise, the training of which can be aided by virtual-reality simulators. However, a major challenge in creating plausible simulated US images is the generation of realistic ultrasound speckle. Since typical ultrasound speckle exhibits many properties of Markov Random Fields, it is conceivable to use texture synthesis for generating plausible US appearance. In this work, we investigate popular classes of texture synthesis methods for generating realistic US content. In a user study, we evaluate their performance for reproducing homogeneous tissue regions in B-mode US images from small image samples of similar tissue and report the best-performing synthesis methods. We further show that regression trees can be used on speckle texture features to learn a predictor for US realism.
Consistent evaluation of an ultrasound-guided surgical navigation system by utilizing an active validation platform
Younsu Kim, Sungmin Kim, Emad M. Boctor
Ultrasound image-guided needle tracking systems are widely used due to their cost-effectiveness and non-ionizing radiation properties. Various surgical navigation systems have been developed by utilizing state-of-the-art sensor technologies. However, ultrasound transmission beam thickness causes unfair initial evaluation conditions due to inconsistent placement of the target with respect to the ultrasound probe. This inconsistency also brings high uncertainty and results in large standard deviations for each measurement when we compare accuracy with and without the guidance. To resolve this problem, we designed a complete evaluation platform by utilizing our mid-plane detection and time-of-flight measurement systems. The evaluation system uses a PZT element target and an ultrasound transmitting needle. In this paper, we evaluated an optical tracker-based surgical ultrasound-guided navigation system whereby the optical tracker tracks marker frames attached to the ultrasound probe and the needle. We performed ten guided needle insertion trials with a mid-plane adjustment algorithm and with a B-mode segmentation method. With the mid-plane adjustment, the result showed a mean error of 1.62 ± 0.72 mm. The mean error increased to 3.58 ± 2.07 mm without the mid-plane adjustment. Our evaluation system can reduce the effect of the beam-thickness problem, and measure ultrasound image-guided technologies consistently with a minimal standard deviation. Using our novel evaluation system, ultrasound image-guided technologies can be compared under equal initial conditions. Therefore, the error can be evaluated more accurately, and the system provides better analysis of the error sources such as ultrasound beam thickness.
Computational modeling of radiofrequency ablation: evaluation on ex vivo data using ultrasound monitoring
Chloé Audigier, Younsu Kim, Austin Dillow, et al.
Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the prediction of the RFA outcome difficult. A monitoring tool which can be personalized for a given patient during the intervention would be helpful to achieve a complete tumor ablation. However, clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy. The temperature evolution as well as the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in ongoing clinical settings. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded using thermometers, and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations were performed on two cadaveric bovine livers, and we achieve an average error of 2.2 °C between the computed temperatures and the thermistor measurements, and average errors of 1.4 °C and 2.7 °C between the computed temperatures and those monitored by US during the ablation at two different time points (t = 240 s and t = 900 s).
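Temperature evolution models of this kind build on heat diffusion around the ablation tip. As a loose illustration of the underlying idea (not the paper's bioheat model; geometry, parameters, and boundary conditions below are all simplified and hypothetical), a 1D explicit finite-difference sketch with the tip held at a fixed temperature:

```python
def simulate(n=21, steps=2000, dt=0.01, alpha=0.14, dx=1.0,
             t_tip=90.0, t_body=37.0):
    """Explicit finite-difference sketch of 1D heat diffusion around an
    ablation tip held at t_tip; boundaries stay at body temperature."""
    T = [t_body] * n
    tip = n // 2
    r = alpha * dt / dx ** 2  # must stay below 0.5 for stability
    for _ in range(steps):
        T[tip] = t_tip
        T = [T[i] if i in (0, n - 1)
             else T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
             for i in range(n)]
        T[tip] = t_tip
    return T

T = simulate()
print(T[10], round(T[9], 1) == round(T[11], 1))  # 90.0 at the tip, symmetric
```

A patient-specific model would additionally include perfusion and cell-death terms, and its parameters would be fitted against the intraoperative temperature measurements described above.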
Needle tip visibility in 3D ultrasound images
Muhammad Arif, Adriaan Moelker, Theo van Walsum
Needle visibility is of crucial importance for ultrasound guided interventional procedures. However, several factors, such as shadowing by bone or gas and tissue echogenic properties similar to needles, may compromise needle visibility. Additionally, a small angle between the ultrasound beam and the needle, as well as small gauged needles, may reduce visibility. Variety in needle tip design may also affect needle visibility. Whereas several studies have investigated needle visibility in 2D ultrasound imaging, no data is available for 3D ultrasound imaging, a modality that has great potential for image guidance in interventions [1]. In this study, we evaluated needle visibility using a 3D ultrasound transducer. We examined different needles in a tissue mimicking liver phantom at three angles (20°, 55°, and 90°) and quantified their visibility. The liver phantom was made from a 5% polyvinyl alcohol solution containing 1% silica gel particles to act as ultrasound scattering particles. We used four needles: two biopsy needles (Quick core 14G and 18G), one ablation needle (radiofrequency ablation, 17G), and one initial puncture needle (IP needle, 17G). Needle visibility was quantified by calculating the contrast-to-noise ratio. The results showed that visibility was similar for all needles at large angles, whereas differences in visibility were more prominent at lower angles. Furthermore, visibility increased with the angle between the ultrasound beam and the needle.
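One common definition of contrast-to-noise ratio (CNR) is the intensity difference between the needle region and the background, normalized by the background noise. A toy sketch of that form (the paper does not give its exact formula; the pixel values below are made up):

```python
from statistics import mean, stdev

def cnr(needle_pixels, background_pixels):
    """Contrast-to-noise ratio: mean intensity difference between the
    needle region and the background, over the background noise."""
    return abs(mean(needle_pixels) - mean(background_pixels)) / stdev(background_pixels)

# hypothetical pixel intensities sampled from the two regions
needle = [200, 210, 190, 205]
background = [50, 60, 55, 45, 40, 50]
print(round(cnr(needle, background), 2))  # 21.39
```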
Catheter tracking in an interventional photoacoustic surgical system
Alexis Cheng, Yuttana Itsarachaiyot, Younsu Kim, et al.
In laparoscopic medical procedures, accurate tracking of interventional tools such as catheters is necessary. Current practice for tracking catheters often involves fluoroscopy, which is best avoided to minimize radiation dose to the patient and the surgical team. Photoacoustic imaging is an emerging imaging modality that can be used for this purpose and does not currently have a general tool tracking solution. Photoacoustic-based catheter tracking would increase its attractiveness by providing both an imaging and a tracking solution. We present a catheter tracking method based on the photoacoustic effect. Photoacoustic markers are simultaneously observed by a stereo camera as well as a piezoelectric element attached to the tip of a catheter. The signals received by the piezoelectric element can be used to compute its position relative to the photoacoustic markers using multilateration. This combined information can be processed to localize the position of the piezoelectric element with respect to the stereo camera system. We presented the methods to enable this work and demonstrated precisions of 1-3 mm and a relative accuracy of less than 4% in four independent locations, which are comparable to conventional systems. In addition, we also showed in another experiment a reconstruction precision up to 0.4 mm and an estimated accuracy up to 0.5 mm. Future work will include simulations to better evaluate this method and its challenges and the development of concurrent photoacoustic marker projection and its associated methods.
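Multilateration recovers a receiver position from its distances to known beacons; subtracting one range equation from the others makes the problem linear. A 2D sketch of that standard linearization (illustrative only, not the paper's implementation; beacon layout is hypothetical):

```python
def multilaterate(beacons, dists):
    """Linearized 2D multilateration: subtracting the first range equation
    from the others yields a linear system for the receiver position."""
    (x0, y0), d0 = beacons[0], dists[0]
    rows, rhs = [], []
    for (x, y), d in zip(beacons[1:], dists[1:]):
        rows.append((2 * (x - x0), 2 * (y - y0)))
        rhs.append(d0 ** 2 - d ** 2 + x ** 2 - x0 ** 2 + y ** 2 - y0 ** 2)
    # least squares via the 2x2 normal equations
    a = sum(r[0] * r[0] for r in rows)
    b = sum(r[0] * r[1] for r in rows)
    c = sum(r[1] * r[1] for r in rows)
    e = sum(r[0] * v for r, v in zip(rows, rhs))
    f = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a * c - b * b
    return ((c * e - b * f) / det, (a * f - b * e) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # photoacoustic markers
true = (3.0, 4.0)                                  # piezo element position
dists = [((true[0] - x) ** 2 + (true[1] - y) ** 2) ** 0.5 for x, y in beacons]
print(tuple(round(v, 6) for v in multilaterate(beacons, dists)))  # (3.0, 4.0)
```

In the catheter-tracking setting, the distances come from acoustic time-of-flight between the markers and the piezoelectric element, and the solve is done in 3D with at least four markers.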
Study into the displacement of tumor localization needle during navigated breast cancer surgery
Christina Yan, Tamas Ungi, Gabrielle Gauvin, et al.
PURPOSE: Early stage breast cancer is typically treated with lumpectomy. During lumpectomy, electromagnetic tracking can be used to monitor tumor position using a localization needle with an electromagnetic sensor fixed on the needle shaft. This needle is stabilized in the tumor with tissue-locking wire hooks, which are deployed once the needle is inserted. The localization needle may displace from its initial position of insertion due to mechanical forces, providing false spatial information about the tumor position and increasing the probability of an incomplete resection. This study investigates whether gravitational and mechanical forces affect the magnitude of needle displacement. METHODS: Ten ultrasound scans were evaluated to measure needle displacement in vivo. Needle position was approximated by the distance between the needle tip and the tumor boundary on a 2D ultrasound image, and needle displacement was defined by the change in position. The angle between the localization needle and the coronal plane was computed in an open-source platform. RESULTS: A significant relationship (p = 0.04) was found between the needle to coronal plane angle and increased needle displacement. Needles inserted vertically, pointing towards the operating room ceiling, tended to exhibit greater needle displacement. Average needle displacement was 1.7 ± 1.2 mm. CONCLUSION: The angle between the needle and the coronal plane has been shown to affect needle displacement, and should be taken into consideration when inserting the localization needle. Future work can be directed towards improving the clinical workflow and mechanical design of the localization needle to reduce slippage during surgery.
Ultrasound guidance system for prostate biopsy
Johann Hummel, Reinhard Kerschner, Marcus Kaar, et al.
We designed a guidance system for prostate biopsy based on PET/MR images and 3D ultrasound (US). With our proposed method, common inter-modal MR-US (or CT-US in the case of PET/CT) registration can be replaced by an intra-modal 3D/3D US/US registration and an optical tracking system (OTS). On the preoperative side, a PET/MR calibration allows both hybrid modalities to be linked with an abdominal 3D US. On the interventional side, another abdominal 3D US is acquired to merge the pre-operative images with the real-time 3D US via 3D/3D US/US registration. Finally, the images of a tracked trans-rectal US probe can be displayed as an overlay on the pre-operative images. For PET/MR image fusion we applied a point-to-point registration between PET and OTS and between MR and OTS, respectively. The 3D/3D US/US registration was evaluated for images taken in supine and lateral patient positions. To enable table shifts between PET/MR and US image acquisition, a table calibration procedure is presented. We found fiducial registration errors of 0.9 mm and 2.8 mm, respectively, with respect to the MR and PET calibration. The target registration error between MR and 3D US amounted to 1.4 mm. The registration error for the 3D/3D US/US registration was found to be 3.7 mm. Furthermore, we have shown that ultrasound is applicable in an MR environment.
Motorized fusion guided prostate biopsy: phantom study
Reza Seifabadi, Sheng Xu, Fereshteh Aalamifar, et al.
Purpose: Fusion of Magnetic Resonance Imaging (MRI) with intraoperative real-time Ultrasound (US) during prostate biopsy has significantly improved the sensitivity of transrectal ultrasound (TRUS) guided cancer detection. Currently, sweeping of the TRUS probe to build a 3D volume as part of the fusion process and the TRUS probe manipulation for needle guidance are both done manually. A motorized, joystick controlled, probe holder was custom fabricated that can potentially reduce inter-operator variability, provide standardization of needle placement, improve repeatability and uniformity of needle placement, which may have impacts upon the learning curve after clinical deployment of this emerging approach. Method: a 2DOF motorized probe holder was designed to provide translation and rotation of a triplane TRUS end firing probe for prostate biopsy. The probe holder was joystick controlled and can assist manipulation of the probe during needle insertion as well as in acquiring a smoother US 2D to 3D sweep in which the 3D US volume for fusion is built. A commercial MRI-US fusion platform was used. Three targets were specified on MR image of a commercial prostate phantom. After performing the registration, two operators performed targeting, once manually and once with the assistance of the motorized probe holder. They repeated these tasks 5 times resulting in a total of 30 targeting events. Time of completion and mechanical error i.e. distance of the target from the needle trajectory in the software user interface were measured. Repeatability in reaching a given target in a systematic and consistent way was measured using a scatter plot showing all targets in the US coordinate system. Pearson product-moment correlation coefficient (PPMCC) was used to demonstrate the probe steadiness during targeting. 
Results: The completion time was 25±17 sec, 25±24 sec, and 27±15 sec for free-hand and 24±10 sec, 22.5±10 sec, and 37±10 sec for motorized insertion, for targets 1, 2, and 3, respectively. The mechanical error was 0.75±0.4 mm, 0.45±0.4 mm, and 0.55±0.4 mm for the free-hand approach and 1.0±0.57 mm, 0.45±0.4 mm, and 0.35±0.25 mm for the motorized approach, for targets 1, 2, and 3, respectively. The PPMCC remained almost at 1.0 for the motorized approach while varying between 0.9 and 1.0 for the free-hand approach. Conclusions: Motorized fusion-guided prostate biopsy in a phantom study was feasible and non-inferior or comparable to the free-hand manual approach in terms of accuracy and speed of targeting, while being superior in terms of repeatability and steadiness.
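The abstract does not specify which signals the PPMCC was computed between; purely as an illustration of the steadiness metric, the sketch below correlates a commanded probe trajectory with two hypothetical measured traces, one nearly tremor-free (motorized) and one noisy (free hand).

```python
import numpy as np

def steadiness_ppmcc(trace_a, trace_b):
    """Pearson product-moment correlation between two 1D traces; ~1.0 = steady."""
    a, b = np.asarray(trace_a, float), np.asarray(trace_b, float)
    return float(np.corrcoef(a, b)[0, 1])

t = np.linspace(0.0, 1.0, 200)                        # commanded probe motion
steady = t + 0.001 * np.sin(40 * t)                   # motorized: tiny tremor
shaky = t + 0.05 * np.random.default_rng(1).normal(size=t.size)  # free hand
print(steadiness_ppmcc(t, steady))                    # very close to 1.0
print(steadiness_ppmcc(t, shaky))                     # noticeably below 1.0
```

This mirrors the reported pattern: the motorized trace stays at ~1.0 while the free-hand trace drops toward 0.9.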
Safe electrode trajectory planning in SEEG via MIP-based vessel segmentation
Davide Scorza, Sara Moccia, Giuseppe De Luca, et al.
Stereo-ElectroEncephaloGraphy (SEEG) is a surgical procedure that allows brain exploration in patients affected by focal epilepsy by placing intra-cerebral multi-lead electrodes. Electrode trajectory planning is challenging and time consuming, since various constraints must be taken into account simultaneously, such as the absence of vessels at the electrode Entry Point (EP), where bleeding is most likely to occur. In this paper, we propose a novel framework to help clinicians define a safe trajectory, focusing our attention on the EP. For each electrode, a Maximum Intensity Projection (MIP) image was obtained from Computed Tomography Angiography (CTA) slices of the first centimeter of brain tissue measured along the electrode trajectory. A Gaussian Mixture Model (GMM), modified to include a neighborhood prior through Markov Random Fields (GMM-MRF), is used to robustly segment vessels and deal with the noisy nature of MIP images. Results are compared with a simple GMM and manual global Thresholding (Th) by computing sensitivity, specificity, accuracy, and the Dice similarity index against manual segmentation performed under the supervision of an expert surgeon. The presented framework can be easily integrated into manual and automatic planners to help the surgeon during the planning phase. GMM-MRF qualitatively showed better performance than GMM in reproducing the connected nature of brain vessels, also in the presence of noise and the image intensity drops typical of MIP images. With respect to Th, it is a completely automatic method and is not influenced by inter-subject variability.
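As a minimal illustration of the MIP step (not the authors' implementation), the following sketch projects the first few slices of a toy CTA volume along one axis, a stand-in for the first centimeter of tissue along the electrode trajectory:

```python
import numpy as np

def mip_along_trajectory(volume, axis=0, depth_vox=10):
    """Maximum Intensity Projection over the first `depth_vox` slices along `axis`
    (a stand-in for the first centimeter of tissue along the electrode direction)."""
    sub = np.take(volume, np.arange(depth_vox), axis=axis)
    return sub.max(axis=axis)

# Toy CTA: dark background with one bright 'vessel' running through the stack.
vol = np.zeros((20, 64, 64))
vol[2:8, 30, 10:50] = 500.0                  # hyper-intense vessel voxels
mip = mip_along_trajectory(vol, axis=0, depth_vox=10)
print(mip.shape, mip.max())                  # (64, 64) 500.0
```

The resulting 2D image concentrates the vessel signal from the projected slab, which is what the GMM-MRF segmentation then operates on.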
Automated location detection of injection site for preclinical stereotactic neurosurgery procedure
Shiva Abbaszadeh, Hemmings C. H. Wu
Currently, during stereotactic neurosurgery procedures, the manual task of locating the proper area for needle insertion or for implantation of an electrode, cannula, or optic fiber can be time consuming, yet the location must be found quickly and accurately. In this study we investigate an automated method to locate the entry point of the region of interest. The method leverages a digital image capture system, pattern recognition, and motorized stages. Template matching of known, anatomically identifiable regions is used to find regions of interest (e.g. Bregma) in rodents. For our initial study, we tackle the problem of automatically detecting the entry point.
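Template matching of this kind is typically implemented as normalized cross-correlation; a brute-force sketch (illustrative only, with a synthetic image in place of the camera capture) is:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation; returns (row, col) of best match."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

# Toy scene: embed a known patch (the 'Bregma template') and recover its location.
rng = np.random.default_rng(2)
scene = rng.uniform(size=(40, 40))
patch = scene[12:20, 25:33].copy()
print(match_template(scene, patch))          # (12, 25)
```

In practice an optimized library routine would replace the double loop, and the recovered pixel location would be mapped to the motorized stage coordinates.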
Straight trajectory planning for keyhole neurosurgery in sheep with automatic brain structures segmentation
Alberto Favaro, Akash Lad, Davide Formenti, et al.
From a translational neuroscience/neurosurgery perspective, sheep are considered good candidates for study because of the similarity between their brain and the human one. Automatic planning systems for safe keyhole neurosurgery maximize the probe/catheter distance from vessels and risky structures. This work consists of the development of a trajectory planner for straight catheter placement, intended for investigating drug diffusivity mechanisms in the sheep brain. Automatic brain segmentation of gray matter, white matter and cerebrospinal fluid is achieved using a freely available online sheep atlas. Segmentation of the ventricles, midbrain and cerebellum has also been carried out. The veterinary surgeon is asked to select a target point within the white matter to be reached by the probe and to define an entry area on the brain cortex. To mitigate the risk of hemorrhage during insertion, which can prevent the success of the procedure, the trajectory planner performs a curvature analysis of the brain cortex and removes from the pool of possible entry points the sulci, the parts of the brain cortex where superficial blood vessels are naturally located. A limited set of trajectories is then computed and presented to the surgeon, satisfying an optimality criterion based on a cost function that considers the distance from critical brain areas and the whole trajectory length. The planner proved effective in defining rectilinear trajectories that account for the safety constraints determined by the brain morphology. It also demonstrated a short computational time and good capability in segmenting gyri and sulci surfaces.
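A cost function of the kind described, trading off distance from critical structures against trajectory length, might be sketched as follows (the weights and sampling density are illustrative assumptions, not the paper's values):

```python
import numpy as np

def trajectory_cost(entry, target, risk_pts, w_dist=1.0, w_len=0.1):
    """Cost favoring short trajectories that stay far from risky structures:
    sample the straight path and penalize the inverse of the minimum distance
    to any risk point, plus the path length."""
    samples = entry + np.linspace(0.0, 1.0, 50)[:, None] * (target - entry)
    d_min = min(np.linalg.norm(samples - r, axis=1).min() for r in risk_pts)
    return w_dist / max(d_min, 1e-6) + w_len * np.linalg.norm(target - entry)

target = np.array([0.0, 0.0, 0.0])                    # point in the white matter
risk = [np.array([5.0, 1.0, 0.0])]                    # e.g. a superficial vessel
entries = [np.array([10.0, 0.0, 0.0]),                # passes close to the vessel
           np.array([0.0, 10.0, 0.0])]                # stays clear of it
costs = [trajectory_cost(e, target, risk) for e in entries]
print(int(np.argmin(costs)))                          # 1
```

Ranking all candidate entry points (after the sulci are excluded) by such a cost yields the limited set of trajectories shown to the surgeon.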
Association between hemodynamic modifications and clinical outcome of intracranial aneurysms treated using flow diverters
Nikhil Paliwal, Robert J. Damiano, Jason M. Davies, et al.
Treatment of intracranial aneurysms (IAs) has been revolutionized by the advent of endovascular Flow Diverters (FDs), which disrupt blood flow within the aneurysm to induce pro-thrombotic conditions and serve as a scaffold for endothelial ingrowth and arterial remodeling. Despite the good clinical success of FDs, complications such as incomplete occlusion and post-treatment rupture leading to subarachnoid hemorrhage have been reported. In silico computational fluid dynamics (CFD) analysis of the pre- and post-treatment geometries of IA patients can shed light on the contrasting blood hemodynamics associated with different clinical outcomes. In this study, we analyzed hemodynamic modifications in 15 IA patients treated using a single FD; 10 IAs were completely occluded (successful) and 5 were partially occluded (unsuccessful) at 12-month follow-up. An in-house virtual stenting workflow was used to recapitulate the clinical intervention on these cases, followed by CFD to obtain pre- and post-treatment hemodynamics. Bulk hemodynamic parameters showed comparable reductions in both groups, with average inflow rate and aneurysmal velocity reductions of 40.3% and 52.4% in successful cases, and 34.4% and 49.2% in unsuccessful cases. There was a substantial reduction in localized parameters such as vortex coreline length and energy loss for successful cases, 38.2% and 42.9%, compared to 10.1% and 10.5% for unsuccessful cases. This suggests that for successfully treated IAs, the localized complex blood flow is disrupted more prominently by the FD than in unsuccessful cases. These localized hemodynamic parameters could potentially be used to predict treatment outcome, aiding clinicians in the a priori assessment of different treatment strategies.
Integrated system for point cloud reconstruction and simulated brain shift validation using tracked surgical microscope
Xiaochen Yang, Logan W. Clements, Ma Luo, et al.
Intra-operative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery (IGS) navigation systems in neurosurgery. A computational model driven by sparse data has been used as a cost-effective method to compensate for cortical surface and volumetric displacements. Stereoscopic microscopes and laser range scanners (LRS) are the two most investigated sparse intra-operative imaging modalities for driving these systems. However, integrating these devices into the clinical workflow to facilitate development and evaluation requires systems that easily permit data acquisition and processing. In this work we present a mock environment developed to acquire stereo images from a tracked operating microscope and to reconstruct 3D point clouds from these images. A reconstruction error of 1 mm is estimated using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached rigid body that allows a commercial optical tracking system to record the microscope's position as it moves during the procedure. Point clouds reconstructed under different microscope positions are registered into the same space in order to compute feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. Our experimental results report approximately 2 mm average displacement error compared with the optical tracking system. These results demonstrate the practicality of using a tracked stereoscopic microscope as an alternative to LRS for collecting sufficient intraoperative information for brain shift correction.
Face-based smoothed finite element method for real-time simulation of soft tissue
Andrea Mendizabal, Rémi Bessard Duparc, Huu Phuoc Bui, et al.
In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the tissues concerned, this information is inaccurate when employed directly during surgery. In order to account for these deformations in the planning process, a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when high simulation accuracy is required. In our work, we propose to use a smoothed finite element method (S-FEM) for modeling soft tissue deformation. This numerical technique was introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases the method allows for reducing the number of degrees of freedom while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to simulations of brain shift and of kidney deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has similar accuracy to the standard FEM in the brain-shift and kidney-deformation simulations.
Automatic intraoperative fiducial-less patient registration using cortical surface
Xiaoyao Fan, David W. Roberts, Jonathan D. Olson, et al.
In image-guided neurosurgery, patient registration is typically performed in the operating room (OR) at the beginning of the procedure to establish the patient-to-image transformation. The accuracy and efficiency of patient registration are crucial, as they are associated with surgical outcome, workflow, and healthcare costs. In this paper, we present an automatic fiducial-less patient registration (FLR) method that directly registers the cortical surface acquired from intraoperative stereovision (iSV) with preoperative MR (pMR) images without incorporating any prior information, and illustrate the method using one patient example. T1-weighted MR images were acquired prior to surgery and the brain was segmented. After dural opening, an image pair of the exposed cortical surface was acquired using an iSV system, and a three-dimensional (3D) texture-encoded profile of the cortical surface was reconstructed. The 3D surface was registered with pMR using a multi-start binary registration method to determine the location and orientation of the iSV patch with respect to the segmented brain. A final transformation was calculated to establish the patient-to-MR relationship. The total computational time was ~30 min, which can be significantly improved through code optimization, parallel computing, and/or graphics processing unit (GPU) acceleration. The results show that the iSV texture map aligned well with pMR using the FLR transformation, while misalignment was evident with fiducial-based registration (FBR). The difference between FLR and FBR, calculated at the center of the craniotomy, was 4.34 mm. The results presented in this paper suggest potential for clinical application in the future.
Real-time interactive tractography analysis for multimodal brain visualization tool: MultiXplore
Most debilitating neurological disorders can have anatomical origins. Yet unlike other body organs, the anatomy alone cannot easily provide an understanding of brain functionality; in fact, the challenge of linking structural and functional connectivity remains at the frontiers of neuroscience. Aggregating multimodal neuroimaging datasets may be critical for developing theories that span brain functionality, global neuroanatomy, and internal microstructures. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) are the main techniques employed to investigate the brain under normal and pathological conditions. fMRI records the blood oxygenation level of the grey matter (GM), whereas DTI is able to reveal the underlying structure of the white matter (WM). Global brain activity is assumed to be an integration of GM functional hubs and the WM neural pathways that connect them. In this study we developed and evaluated a two-phase algorithm, employed in a 3D interactive connectivity visualization framework, that helps accelerate the clustering of virtual neural pathways. In this paper, we detail an algorithm that makes use of an index-based membership array formed from a whole-brain tractography file and a corresponding parcellated brain atlas. Next, we demonstrate the efficiency of the algorithm by measuring the times required to extract a variety of fiber clusters, chosen to span the range of output data file sizes the algorithm is likely to generate. The proposed algorithm facilitates real-time visual inspection of neuroimaging data to further discovery of the structure-function relationships of brain networks.
C-arm positioning using virtual fluoroscopy for image-guided surgery
T. de Silva, J. Punnoose, A. Uneri, et al.
Introduction: Fluoroscopically guided procedures often involve repeated acquisitions for C-arm positioning at the cost of radiation exposure and time in the operating room. A virtual fluoroscopy system is reported with the potential of reducing dose and time spent in C-arm positioning, utilizing three key advances: robust 3D-2D registration to a preoperative CT; real-time forward projection on GPU; and a motorized mobile C-arm with encoder feedback on C-arm orientation.

Method: Geometric calibration of the C-arm was performed offline in two rotational directions (orbit α, orbit β). Patient registration was performed using image-based 3D-2D registration with an initially acquired radiograph of the patient. This approach for patient registration eliminated the requirement for external tracking devices inside the operating room, allowing virtual fluoroscopy using commonly available systems in fluoroscopically guided procedures within standard surgical workflow. Geometric accuracy was evaluated in terms of projection distance error (PDE) in anatomical fiducials. A pilot study was conducted to evaluate the utility of virtual fluoroscopy to aid C-arm positioning in image guided surgery, assessing potential improvements in time, dose, and agreement between the virtual and desired view.

Results: The overall geometric accuracy of DRRs in comparison to the actual radiographs at various C-arm positions was PDE (mean ± std) = 1.6 ± 1.1 mm. The conventional approach required on average 8.0 ± 4.5 radiographs spent “fluoro hunting” to obtain the desired view. Positioning accuracy improved from 2.6° ± 2.3° (in α) and 4.1° ± 5.1° (in β) in the conventional approach to 1.5° ± 1.3° and 1.8° ± 1.7°, respectively, with the virtual fluoroscopy approach.

Conclusion: Virtual fluoroscopy could improve accuracy of C-arm positioning and save time and radiation dose in the operating room. Such a system could be valuable to training of fluoroscopy technicians as well as intraoperative use in fluoroscopically guided procedures.
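The projection distance error used in the evaluation above can be illustrated with a toy pinhole camera model (the intrinsics below are made-up values, not the calibrated C-arm geometry):

```python
import numpy as np

def project(P, pts3d):
    """Pinhole projection of 3D points with a 3x4 camera matrix P."""
    h = np.hstack([pts3d, np.ones((len(pts3d), 1))]) @ P.T
    return h[:, :2] / h[:, 2:3]

def projection_distance_error(P, fid_3d, fid_2d):
    """PDE: 2D distance between projected 3D fiducials and radiograph detections."""
    return np.linalg.norm(project(P, fid_3d) - fid_2d, axis=1)

# Toy intrinsics: focal length 1000 px, principal point (256, 256).
P = np.array([[1000.0, 0.0, 256.0, 0.0],
              [0.0, 1000.0, 256.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
fid_3d = np.array([[10.0, -5.0, 800.0], [0.0, 20.0, 900.0]])
detected = project(P, fid_3d) + np.array([2.0, 0.0])   # detections off by 2 px
print(projection_distance_error(P, fid_3d, detected))  # [2. 2.]
```

In the paper the 3D fiducials come from the preoperative CT after 3D-2D registration and the 2D positions from the acquired radiograph, yielding the reported mean PDE of 1.6 mm.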
Patient identification using a near-infrared laser scanner
Jirapong Manit, Christina Bremer, Achim Schweikard, et al.
We propose a new biometric approach in which the tissue thickness of a person's forehead is used as the distinguishing feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification. However, considering the spatial error alone is not sufficient to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and to include it in the error metric. Using MRI as ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement with the reference point cloud of the same person.

The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
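Identification by minimum distance in the 2D (spatial error, tissue thickness error) feature space can be sketched as follows, with made-up error pairs in place of the measured registrations:

```python
import numpy as np

def identify(query_feats, reference_feats):
    """Return the index of the enrolled subject whose (spatial error, thickness
    error) pair lies closest to the query in the 2D feature space."""
    q = np.asarray(query_feats, float)
    refs = np.asarray(reference_feats, float)
    return int(np.argmin(np.linalg.norm(refs - q, axis=1)))

# Hypothetical (spatial error mm, tissue thickness error mm) per enrolled subject,
# obtained by registering one measurement against each reference point cloud.
enrolled = [[0.4, 0.2], [1.8, 1.1], [2.5, 0.9]]
query = [0.5, 0.25]                           # re-registered scan of subject 0
print(identify(query, enrolled))              # 0
```

The genuine subject yields the small combined error, while registrations against the other references land in the more distant cluster.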
Interactive planning of miniplates
Markus Gall, Knut Reinbacher, Jürgen Wallner, et al.
In this contribution, a novel method for computer-aided surgical planning of facial defects using models of purchasable MedArtis Modus 2.0 miniplates is proposed. Implants of this kind, which belong to the osteosynthetic material, are commonly used for treating defects in the facial area. Placed perpendicular to the defect, the miniplates are fixed on the healthy bone and bent with respect to the surface to stabilize the defective area. Our software is able to fit a selection of the most common implant models to the surgeon's desired position in a 3D computer model. The fitting respects the local surface curvature and adjusts direction and position in any desired way. Conventional methods use Computed Tomography (CT) scans to generate STereoLithography (STL) models serving as bending templates for the implants, or use a bending tool during the surgery to readjust the implant several times; both approaches lead to undesirable losses of time. With our visual planning tool, surgeons are able to pre-plan the final implant within just a few minutes. The resulting model can be stored in STL format, the format commonly used for 3D printing. With this technology, surgeons are able to print the implant just in time or use it to generate a bending tool, both leading to an exactly bent miniplate.
Phantom-based evaluation method for surgical assistance devices in minimally invasive cochlear implantation
G. Jakob Lexow, Marcel Kluge, Omid Majdani, et al.
Several research groups have proposed individual solutions for surgical assistance devices to perform minimally invasive cochlear implantation. The main challenge is the drilling of a small bore hole from the surface of the skull to the inner ear at submillimetric accuracy. Each group has tested the accuracy of its device on its own test bench or in a small number of temporal bone specimens, which complicates the comparison of the different approaches. Thus, a simple and inexpensive phantom-based evaluation method resembling clinical conditions is proposed. The method is based on half-skull phantoms made of bone-substitute material, optionally equipped with an artificial skin replica to include the skin incision in the evaluation procedure. Anatomical structures of the temporal bone, derived from segmentations of clinical imaging data, are registered into a computed tomography scan of the skull phantom and used for planning the drill trajectory. Drilling is performed with the respective device under conditions close to the intraoperative setting. Accuracy can be evaluated either through postoperative imaging or by means of targets added on the inside of the skull model. Two different targets are proposed: simple reference marks for measuring the accuracy of the device alone, and a target containing a scala tympani model for evaluating the complete workflow including the insertion of the electrode carrier. Experiments using the presented method take place under reproducible conditions, thus allowing the comparison of different approaches. In addition, artificial phantoms are easier to obtain and handle than human specimens.
Temporal bone dissection simulator for training pediatric otolaryngology surgeons
Pooneh R. Tabrizi, Hongqiang Sang, Hadi F. Talari, et al.
Cochlear implantation is the standard of care for infants born with severe hearing loss. Current guidelines approve the surgical placement of implants as early as 12 months of age. Implantation at a younger age poses a greater surgical challenge since the underdeveloped mastoid tip, along with thin calvarial bone, creates less room for surgical navigation and can result in increased surgical risk. We have been developing a temporal bone dissection simulator based on actual clinical cases for training otolaryngology fellows in this delicate procedure. The simulator system is based on pre-procedure CT (Computed Tomography) images from pediatric infant cases (<12 months old) at our hospital. The simulator includes: (1) simulation engine to provide the virtual reality of the temporal bone surgery environment, (2) a newly developed haptic interface for holding the surgical drill, (3) an Oculus Rift to provide a microscopic-like view of the temporal bone surgery, and (4) user interface to interact with the simulator through the Oculus Rift and the haptic device. To evaluate the system, we have collected 10 representative CT data sets and segmented the key structures: cochlea, round window, facial nerve, and ossicles. The simulator will present these key structures to the user and warn the user if needed by continuously calculating the distances between the tip of surgical drill and the key structures.
Planning acetabular fracture reduction using patient-specific multibody simulation of the hip
Hadrien Oliveri, Mehdi Boudissa, Jerome Tonetti, et al.
Acetabular fractures are a challenge in orthopedic surgery. Computer-aided solutions have been proposed to segment bone fragments, simulate the fracture reduction, or design the osteosynthesis fixation plates. This paper addresses the simulation part, which is usually carried out by freely moving bone fragments with six degrees of freedom to reproduce the pre-fracture state. Instead, we propose a different paradigm, closer to the actual surgeon's requirements: to simulate the surgical procedure itself rather than the desired result. A simple, patient-specific, biomechanical multibody model is proposed, integrating the main ligaments and muscles of the hip joint while accounting for contacts between bone fragments. The main surgical tools and actions can be simulated, such as clamps, Schanz screws, or traction of the femur. Simulations are computed interactively, which enables clinicians to evaluate different strategies for optimal surgical planning. Six retrospective cases were studied, with simple and complex fracture patterns. After interactively building the models from preoperative CT, the gestures described in the surgical reports were reproduced. Results of the simulations could then be compared with postoperative CT data. A qualitative study shows that the model behavior is excellent and the simulated reductions fit the observed data. A more quantitative analysis is currently being completed. Two cases are particularly significant, as their surgical reductions actually failed. Simulations show that it was indeed not possible to reduce these fractures with the chosen approach. Had our simulator been used, better planning might have spared these patients a second surgery.
Statistical shape modeling based renal volume measurement using tracked ultrasound
Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most common cause of kidney transplant worldwide, accounting for 7-10% of all cases. Although ADPKD usually progresses over many decades, accurate risk prediction is an important task [1]. Identifying patients with progressive disease is vital to providing them with the new treatments being developed and enabling them to enter clinical trials for new therapy. Among other factors, total kidney volume (TKV) is a major biomarker predicting the progression of ADPKD. The Consortium for Radiologic Imaging Studies in Polycystic Kidney Disease (CRISP) [2] has shown that TKV is an early and accurate measure of cystic burden and likely growth rate, and that it is strongly associated with loss of renal function [3]. While ultrasound (US) has proven to be an excellent tool for diagnosing the disease, monitoring short-term changes with it has been shown to be inaccurate, which is attributed to its high operator variability and poor reproducibility compared to tomographic modalities such as CT and MR (the gold standard). Ultrasound has nonetheless emerged as a standout modality for intra-procedural imaging, and methods for spatial localization have afforded us the ability to track 2D ultrasound in the physical space in which it is used. In addition, the vast amount of recorded tomographic data can be used to generate statistical shape models that allow us to extract clinical value from archived image sets. In this work, we aim to improve the prognostic value of US in managing ADPKD by assessing the accuracy of using statistical-shape-model-augmented US data to predict TKV, with the end goal of monitoring short-term changes.
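A statistical shape model of the kind mentioned is commonly built by PCA over aligned training shapes; a minimal sketch with toy 2D contours (not the paper's kidney data) is:

```python
import numpy as np

def build_ssm(shapes):
    """PCA statistical shape model: mean shape plus principal modes of variation."""
    X = np.asarray(shapes, float).reshape(len(shapes), -1)  # one row per shape
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = (s ** 2) / (len(shapes) - 1)                # variance per mode
    return mean, Vt, variances

def reconstruct(mean, modes, coeffs):
    """Generate a shape from mode coefficients (coeffs of zero give the mean)."""
    return mean + coeffs @ modes[:len(coeffs)]

# Toy training set: 5 'contours' of 4 2D points each, jittered around a base shape.
rng = np.random.default_rng(3)
base = np.array([[0, 0], [1, 0], [1, 2], [0, 2]], float)
train = [base + rng.normal(0.0, 0.05, base.shape) for _ in range(5)]
mean, modes, variances = build_ssm(train)
print(mean.shape, modes.shape)               # (8,) (5, 8)
```

In the intended application, fitting such a model to sparse tracked-US contours would yield a full kidney surface from which TKV can be estimated.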
Monitoring electromagnetic tracking error using redundant sensors
Vinyas Harish, Eden Bibic, Andras Lasso, et al.
PURPOSE: The intraoperative measurement of tracking error is crucial to ensure the reliability of electromagnetically navigated procedures. For intraoperative use, methods need to be quick to set up, easy to interpret, and not interfere with the ongoing procedure. Our goal was to evaluate the feasibility of using redundant electromagnetic sensors to alert users to tracking error in a navigated intervention setup. METHODS: Electromagnetic sensors were fixed to a rigid frame around a region of interest and on surgical tools. A software module was designed to detect tracking error by comparing real-time measurements of the differences between inter-sensor distances and angles to baseline measurements. Once these measurements were collected, a linear support vector machine-based classifier was used to predict tracking errors from redundant sensor readings. RESULTS: Measuring the deviation in the reported inter-sensor distance and angle between the needle and cautery served as a valid indicator for electromagnetic tracking error. The highest classification accuracy, 86%, was achieved based on readings from the cautery when the two sensors on the cautery were close together. The specificity of this classifier was 93% and the sensitivity was 82%. CONCLUSION: Placing redundant electromagnetic sensors in a workspace seems to be feasible for the intraoperative detection of electromagnetic tracking error in controlled environments. Further testing should be performed to optimize the measurement error threshold used for classification in the support vector machine, and improve the sensitivity of our method before application in real procedures.
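The underlying redundancy signal, deviation of the inter-sensor distance from its baseline on a rigid mount, can be sketched with a simple threshold rule (the paper uses an SVM classifier; the tolerance below is an assumed value):

```python
import numpy as np

def inter_sensor_distance(p1, p2):
    """Distance between two tracked sensor positions (the redundancy signal)."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

def tracking_error_flag(baseline_dist, current_dist, tol_mm=1.0):
    """Rigidly mounted sensors must keep a constant separation; a deviation
    beyond `tol_mm` suggests field distortion or tracking error."""
    return abs(current_dist - baseline_dist) > tol_mm

baseline = inter_sensor_distance([0, 0, 0], [30, 0, 0])   # 30 mm apart on the tool
ok = tracking_error_flag(baseline,
                         inter_sensor_distance([100, 50, 20], [130.2, 50, 20]))
error = tracking_error_flag(baseline,
                            inter_sensor_distance([100, 50, 20], [133.5, 50, 20]))
print(ok, error)                             # False True
```

The SVM-based classifier generalizes this by learning the decision boundary over several such distance and angle deviations at once.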
Visual tracking for multi-modality computer-assisted image guidance
Ehsan Basafa, Pezhman Foroughi, Martin Hossbach, et al.
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture – low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes – allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Usability of a real-time tracked augmented reality display system in musculoskeletal injections
Zachary Baum, Tamas Ungi, Andras Lasso, et al.
PURPOSE: Image-guided needle interventions are seldom performed with augmented reality guidance in clinical practice due to many workspace and usability restrictions. We propose a real-time optically tracked image overlay system to make image-guided musculoskeletal injections more efficient and assess its usability in a bed-side clinical environment. METHODS: An image overlay system consisting of an optically tracked viewbox, tablet computer, and semitransparent mirror allows users to navigate scanned patient volumetric images in real-time using software built on the open-source 3D Slicer application platform. A series of experiments were conducted to evaluate the latency and screen refresh rate of the system using different image resolutions. To assess the usability of the system and software, five medical professionals were asked to navigate patient images while using the overlay and completed a questionnaire to assess the system. RESULTS: In assessing the latency of the system with scanned images of varying size, screen refresh rates were approximately 5 FPS. The study showed that participants found using the image overlay system easy, and found the table-mounted system was significantly more usable and effective than the handheld system. CONCLUSION: It was determined that the system performs comparably with scanned images of varying size when assessing the latency of the system. During our usability study, participants preferred the table-mounted system over the handheld. The participants also felt that the system itself was simple to use and understand. With these results, the image overlay system shows promise for use in a clinical environment.
Breathing motion compensated registration of laparoscopic liver ultrasound to CT
João Ramalhinho, Maria Robu, Stephen Thompson, et al.
Laparoscopic Ultrasound (LUS) is regularly used during laparoscopic liver resection to locate critical vascular structures. Many tumours are iso-echoic, and registration to pre-operative CT or MR has been proposed as a method of image guidance. However, factors such as abdominal insufflation, LUS probe compression and breathing motion cause deformation of the liver, making this task far from trivial. Fortunately, within a smaller local region of interest a rigid solution can suffice, and the respiratory cycle can be expected to be consistent. Therefore, in this paper we propose a feature-based local rigid registration method to align tracked LUS data with CT while compensating for breathing motion. The method employs the Levenberg-Marquardt Iterative Closest Point (LMICP) algorithm, registers on both the liver surface and vessels, and requires two LUS datasets, one for registration and another for breathing estimation. Breathing compensation is achieved by fitting a 1D breathing model to the vessel points. We evaluate the algorithm by measuring the Target Registration Error (TRE) of three manually selected landmarks in a single porcine subject. Breathing compensation improves accuracy in 77% of the measurements, and in the best case TRE values below 3 mm are obtained. We conclude that our method can potentially correct for breathing motion without gated acquisition of LUS and, with an appropriate segmentation, be integrated in the surgical workflow.
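The abstract does not detail how the 1D breathing model is fit; as a minimal sketch of the idea, the cranio-caudal displacement of vessel points over time could be fit with a fixed-period sinusoidal basis by linear least squares and subtracted before registration (the sinusoidal form, period handling, and function names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fit_breathing_model(timestamps, displacements, period):
    """Fit a, b, c in d(t) ~ a*sin(wt) + b*cos(wt) + c by linear least squares."""
    w = 2.0 * np.pi / period
    # Design matrix: sinusoidal basis at the assumed respiratory period, plus offset.
    A = np.column_stack([np.sin(w * timestamps),
                         np.cos(w * timestamps),
                         np.ones_like(timestamps)])
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return coeffs

def compensate(timestamps, displacements, period):
    """Return displacements with the fitted breathing component removed."""
    a, b, c = fit_breathing_model(timestamps, displacements, period)
    w = 2.0 * np.pi / period
    model = a * np.sin(w * timestamps) + b * np.cos(w * timestamps) + c
    return displacements - model
```

The residual, motion-compensated point positions would then feed into the LMICP alignment against the CT-derived surface and vessel features.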
Emulation of the laparoscopic environment for image-guided liver surgery via an abdominal phantom system with anatomical ligamenture
In order to rigorously validate techniques for image-guided liver surgery (IGLS), an accurate mock representation of the intraoperative surgical scene with quantifiable localization of subsurface targets would be highly desirable. However, many attempts to reproduce the laparoscopic environment have encountered limited success due to neglect of several crucial design aspects. The laparoscopic setting is complicated by factors such as gas insufflation of the abdomen, changes in patient orientation, incomplete organ mobilization from ligaments, and limited access to organ surface data. The ability to accurately represent the influences of anatomical changes and procedural limitations is critical for appropriate evaluation of IGLS methodologies such as registration and deformation correction. However, these influences have not yet been comprehensively integrated into a platform usable for assessment of methods in laparoscopic IGLS. In this work, a mock laparoscopic liver simulator was created with realistic ligamenture to emulate the complexities of this constrained surgical environment for the realization of laparoscopic IGLS. The mock surgical system reproduces an insufflated abdominal cavity with dissectible ligaments, variable levels of incline matching intraoperative patient positioning, and port locations in accordance with surgical protocol. True positions of targets embedded in a tissue-mimicking phantom are measured from CT images. Using this setup, image-to-physical registration accuracy was evaluated for simulations of laparoscopic right and left lobe mobilization to assess rigid registration performance under more realistic laparoscopic conditions. Preliminary results suggest that non-rigid organ deformations and the region of organ surface data collected affect the ability to attain highly accurate registrations in laparoscopic applications.
Automatic transperineal ultrasound probe positioning based on CT scan for image guided radiotherapy
S. M. Camps, F. Verhaegen, G. Paiva Fonesca, et al.
Image interpretation is crucial during ultrasound image acquisition. A skilled operator is typically needed to verify if the correct anatomical structures are all visualized and with sufficient quality. The need for this operator is one of the major reasons why presently ultrasound is not widely used in radiotherapy workflows. To solve this issue, we introduce an algorithm that uses anatomical information derived from a CT scan to automatically provide the operator with a patient-specific ultrasound probe setup. The first application we investigated, for its relevance to radiotherapy, is 4D transperineal ultrasound image acquisition for prostate cancer patients. As an initial test, the algorithm was applied on a CIRS multi-modality pelvic phantom. Probe setups were calculated in order to allow visualization of the prostate and adjacent edges of bladder and rectum, as clinically required. Five of the proposed setups were reproduced using a precision robotic arm and ultrasound volumes were acquired. A gel-filled probe cover was used to ensure proper acoustic coupling, while taking into account possible tilted positions of the probe with respect to the flat phantom surface. Visual inspection of the acquired volumes revealed that clinical requirements were fulfilled. Preliminary quantitative evaluation was also performed. The mean absolute distance (MAD) was calculated between actual anatomical structure positions and positions predicted by the CT-based algorithm. This resulted in a MAD of (2.8±0.4) mm for the prostate, (2.5±0.6) mm for the bladder and (2.8±0.6) mm for the rectum. These results show that no significant systematic errors due to, e.g., probe misplacement were introduced.
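The reported MAD values are the mean (± standard deviation) of Euclidean distances between corresponding predicted and actual structure positions; a minimal sketch of that computation (the function name is an assumption for illustration):

```python
import numpy as np

def mean_absolute_distance(predicted, actual):
    """Mean and standard deviation of Euclidean distances between
    corresponding 3D points (two N x 3 arrays)."""
    distances = np.linalg.norm(np.asarray(predicted, dtype=float)
                               - np.asarray(actual, dtype=float), axis=1)
    return distances.mean(), distances.std()
```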
Fractional labelmaps for computing accurate dose volume histograms
Kyle Sunderland, Csaba Pinter, Andras Lasso, et al.
PURPOSE: In radiation therapy treatment planning systems, structures are represented as parallel 2D contours. For treatment planning algorithms, structures must be converted into labelmap (i.e. a 3D image denoting inside/outside of the structure) representations. This is often done by triangulating a surface from the contours, which is then converted into a binary labelmap. This surface to binary labelmap conversion can cause large errors in small structures. Binary labelmaps are often represented using one byte per voxel, meaning a large amount of memory is unused. Our goal is to develop a fractional labelmap representation containing non-binary values, allowing more information to be stored in the same amount of memory. METHODS: We implemented an algorithm in 3D Slicer, which converts surfaces to fractional labelmaps by creating 216 binary labelmaps, changing the labelmap origin on each iteration. The binary labelmap values are summed to create the fractional labelmap. In addition, an algorithm is implemented in the SlicerRT toolkit that calculates dose volume histograms (DVH) using fractional labelmaps. RESULTS: We found that with manually segmented RANDO head and neck structures, fractional labelmaps represented structure volume up to 19.07% (average 6.81%) more accurately than binary labelmaps, while occupying the same amount of memory. When compared to baseline DVH from treatment planning software, DVH from fractional labelmaps had agreement acceptance percent (1% ΔD, 1% ΔV) up to 57.46% higher (average 4.33%) than DVH from binary labelmaps. CONCLUSION: Fractional labelmaps promise to be an effective method for structure representation, allowing considerably more information to be stored in the same amount of memory.
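The origin-shifting idea can be illustrated with a small sketch: binarize the structure at 6 sub-voxel offsets per axis (6³ = 216 origins, matching the count in the abstract) and average the results to get per-voxel occupancy fractions. This sketch uses an implicit inside/outside test in place of the triangulated surface the paper describes, and its names and sphere example are illustrative assumptions:

```python
import numpy as np

def fractional_labelmap(inside_fn, shape, spacing, steps=6):
    """Average steps^3 binary labelmaps, shifting the sampling origin by
    sub-voxel offsets, to obtain per-voxel occupancy fractions in [0, 1]."""
    frac = np.zeros(shape, dtype=np.float64)
    # Sub-voxel offsets (in voxel units), centered around each voxel center.
    offsets = (np.arange(steps) + 0.5) / steps - 0.5
    zz, yy, xx = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                pts = np.stack([(zz + dz) * spacing,
                                (yy + dy) * spacing,
                                (xx + dx) * spacing], axis=-1)
                frac += inside_fn(pts)   # one shifted binary labelmap
    return frac / steps ** 3
```

Summing the fractional values (times voxel volume) approximates the structure volume far more closely than counting binary voxels, which is the mechanism behind the reported volume-accuracy gains.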
Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization
Rachael House, Andras Lasso, Vinyas Harish, et al.
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK’s built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy testing of the camera yielded a median position error of 3.3mm (95th percentile 6.7mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20x16x10cm workspace, constantly maintaining proper marker orientation. The model and surface aligned correctly, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
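Position and orientation errors like those reported above come from comparing per-sample poses from the test camera against a reference tracker; a minimal sketch of the per-sample error from two 4x4 homogeneous transforms (the helper name is an assumption, not part of the PLUS toolkit):

```python
import numpy as np

def pose_errors(T_test, T_ref):
    """Position (same units as the transforms) and orientation (degrees)
    error between two 4x4 homogeneous pose matrices."""
    pos_err = np.linalg.norm(T_test[:3, 3] - T_ref[:3, 3])
    # Geodesic angle of the relative rotation, via its trace.
    R_delta = T_test[:3, :3].T @ T_ref[:3, :3]
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    return pos_err, np.degrees(np.arccos(cos_angle))
```

Medians and 95th percentiles over a recording would then follow from `np.median(errors)` and `np.percentile(errors, 95)`.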