Proceedings Volume 10576

Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling

Baowei Fei, Robert J. Webster III

Volume Details

Date Published: 9 July 2018
Contents: 14 Sessions, 98 Papers, 52 Presentations
Conference: SPIE Medical Imaging 2018
Volume Number: 10576

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10576
  • Deep Learning
  • Keynote and Medical Robotics
  • Image Registration
  • WORKSHOP: Selected Papers from the Journal of Medical Imaging Special Issue
  • Neurological Procedures and Technologies
  • Ultrasound Imaging and Detection Methods
  • Enhanced Reality, Simulation, and Planning
  • Segmentation and Modeling
  • Cardiac and Lung Imaging and Tracking
  • Intraoperative Imaging and Technologies
  • Abdominal Imaging and Guidance Technologies
  • Validation, Simulation, and 3D Printing
  • Poster Session
Front Matter: Volume 10576
This PDF file contains the front matter associated with SPIE Proceedings Volume 10576, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Deep Learning
Automatic slice segmentation of intraoperative transrectal ultrasound images using convolutional neural networks
Nooshin Ghavami, Yipeng Hu, Ester Bonmati, et al.
This paper, originally published on 12 March 2018, was replaced with a corrected/revised version on 1 June 2018. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance. Clinically important targets for ultrasound-guided prostate biopsy and prostate cancer focal therapy can be defined on MRI. However, localizing these targets on transrectal ultrasound (TRUS) remains challenging. Automatic segmentation of the prostate on intraoperative TRUS images is an important step towards automating most MRI-TRUS image registration workflows so that they become more acceptable in clinical practice. In this paper, we propose a deep learning method using convolutional neural networks (CNNs) for automatic prostate segmentation in 2D TRUS slices and 3D TRUS volumes. The method was evaluated on a clinical cohort of 110 patients who underwent TRUS-guided targeted biopsy. Segmentation accuracy was measured by comparison to manual prostate segmentation in 2D on 4055 TRUS images and in 3D on the corresponding 110 volumes, in a 10-fold patient-level cross-validation. The proposed method achieved a mean 2D Dice similarity coefficient (DSC) of 0.91±0.12 and a mean absolute boundary segmentation error of 1.23±1.46 mm. Dice scores (0.91±0.04) were also calculated for 3D volumes on the patient level. These results suggest a promising approach to aid a wide range of TRUS-guided prostate cancer procedures needing multimodality data fusion.
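The Dice similarity coefficient reported above has a standard definition; a minimal pure-Python sketch, with binary masks represented as nested 0/1 lists (that representation is an assumption for illustration, not the authors' implementation):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (nested 0/1 lists).

    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|); 1.0 when both masks are empty.
    """
    inter = sum(p and t
                for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    total = sum(sum(row) for row in pred) + sum(sum(row) for row in truth)
    return 2.0 * inter / total if total else 1.0
```

For 3D patient-level scores, the same formula is applied to voxel masks by pooling all slices of a volume before taking the ratio.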
Generative adversarial networks for specular highlight removal in endoscopic images
Providing the surgeon with the right assistance at the right time during minimally-invasive surgery requires computer-assisted surgery systems to perceive and understand the current surgical scene. This can be achieved by analyzing the endoscopic image stream. However, endoscopic images often contain artifacts, such as specular highlights, which can hinder further processing steps, e.g., stereo reconstruction, image segmentation, and visual instrument tracking. Hence, correcting them is a necessary preprocessing step. In this paper, we propose a machine learning approach for automatic specular highlight removal from a single endoscopic image. We train a residual convolutional neural network (CNN) to localize and remove specular highlights in endoscopic images using weakly labeled data. The labels merely indicate whether an image does or does not contain a specular highlight. To train the CNN, we employ a generative adversarial network (GAN), which introduces an adversary to judge the performance of the CNN during training. We extend this approach by (1) adding a self-regularization loss to reduce image modification in non-specular areas and by (2) including a further network to automatically generate paired training data from which the CNN can learn. A comparative evaluation shows that our approach outperforms model-based methods for specular highlight removal in endoscopic images.
Tumor margin classification of head and neck cancer using hyperspectral imaging and convolutional neural networks
Martin Halicek, James V. Little, Xu Wang, et al.
One of the largest factors affecting disease recurrence after surgical cancer resection is negative surgical margins. Hyperspectral imaging (HSI) is an optical imaging technique with potential to serve as a computer aided diagnostic tool for identifying cancer in gross ex-vivo specimens. We developed a tissue classifier using three distinct convolutional neural network (CNN) architectures on HSI data to investigate the ability to classify the cancer margins from ex-vivo human surgical specimens, collected from 20 patients undergoing surgical cancer resection as a preliminary validation group. A new approach for generating the HSI ground truth using a registered histological cancer margin is applied in order to create a validation dataset. The CNN-based method classifies the tumor-normal margin of squamous cell carcinoma (SCCa) versus normal oral tissue with an area under the curve (AUC) of 0.86 for inter-patient validation, performing with 81% accuracy, 84% sensitivity, and 77% specificity. Thyroid carcinoma cancer-normal margins are classified with an AUC of 0.94 for inter-patient validation, performing with 90% accuracy, 91% sensitivity, and 88% specificity. Our preliminary results on a limited patient dataset demonstrate the predictive ability of HSI-based cancer margin detection, which warrants further investigation with more patient data and additional processing techniques to optimize the proposed deep learning method.
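The accuracy, sensitivity, and specificity figures quoted above follow directly from confusion-matrix counts; a minimal sketch (the function name is an illustrative assumption):

```python
def margin_classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of cancerous margin correctly flagged
    specificity = tn / (tn + fp)   # fraction of normal tissue correctly passed
    return accuracy, sensitivity, specificity
```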
Inverse biomechanical modeling of the tongue via machine learning and synthetic training data
Aniket A. Tolpadi, Maureen L. Stone, Aaron Carass, et al.
The tongue’s deformation during speech can be measured using tagged magnetic resonance imaging, but there is no current method to directly measure the pattern of muscles that activate to produce a given motion. In this paper, the activation pattern of the tongue’s muscles is estimated by solving an inverse problem using a random forest. Examples describing different activation patterns and the resulting deformations are generated using a finite-element model of the tongue. These examples form training data for a random forest comprising 30 decision trees to estimate contractions in 262 contractile elements. The method was evaluated on data from tagged magnetic resonance data from actual speech and on simulated data mimicking flaps that might have resulted from glossectomy surgery. The estimation accuracy was modest (5.6% error), but it surpassed a semimanual approach (8.1% error). The results suggest that a machine learning approach to contraction pattern estimation in the tongue is feasible, even in the presence of flaps.
Cine cardiac MRI slice misalignment correction towards full 3D left ventricle segmentation
Accurate segmentation of the left ventricle (LV) blood-pool and myocardium is required to compute cardiac function assessment parameters or generate personalized cardiac models for pre-operative planning of minimally invasive therapy. Cardiac Cine Magnetic Resonance Imaging (MRI) is the preferred modality for high resolution cardiac imaging thanks to its capability of imaging the heart throughout the cardiac cycle, while providing tissue contrast superior to other imaging modalities without ionizing radiation. However, there exists an inevitable misalignment between the slices in cine MRI due to the 2D + time acquisition, rendering 3D segmentation methods ineffective. A large part of published work on cardiac MR image segmentation focuses on 2D segmentation methods that yield good results in mid-slices, however with less accurate results for the apical and basal slices. Here, we propose an algorithm to correct for the slice misalignment using a Convolutional Neural Network (CNN)-based regression method, and then perform a 3D graph-cut based segmentation of the LV using an atlas shape prior. Our algorithm is able to reduce the median slice misalignment error from 3.13 to 2.07 pixels, and obtain the blood-pool segmentation with an accuracy characterized by a mean Dice overlap of 0.904 and a mean surface distance of 0.56 mm with respect to the gold-standard blood-pool segmentation for 9 test cine MR datasets.
Keynote and Medical Robotics
Toward image-guided partial nephrectomy with the da Vinci robot: exploring surface acquisition methods for intraoperative re-registration
James M. Ferguson, Leon Y. Cai, Alexander Reed, et al.
Our overarching goal is to facilitate wider adoption of robot-assisted partial nephrectomy through image-guidance, which can enable a surgeon to visualize subsurface features and instrument locations in real time intraoperatively. This is motivated by the observation that while there are compelling lifelong health benefits of partial nephrectomy, radical nephrectomy remains an overused surgical approach for many kidney cancers. Image-guidance may facilitate wider adoption of the procedure because it has the potential to increase surgeons' confidence in efficiently and safely exposing critical structures as well as achieving negative margins with maximal benign tissue sparing, particularly in a minimally invasive setting. To maintain the accuracy of image-guidance during the procedure as the kidney moves, periodic re-registration of medical image data to kidney anatomy is necessary. In this paper, we evaluate three registration approaches for the da Vinci Surgical System that have the potential to enable real-time updates to the display of segmented preoperative images within its console. Specifically, we compare the use of surface ink fiducials triangulated from stereo endoscope images, point clouds obtained without fiducials using a stereoscopic depth mapping algorithm, and points obtained by lightly tracing the da Vinci tool tip over the kidney surface. We compare and contrast the three approaches from both an accuracy and a workflow perspective.
Technical note: feasibility of photoacoustic guided hysterectomies with the da Vinci robot
This technical note provides an overview of our work to explore the combination of photoacoustic imaging with the da Vinci surgical robot, which is often used to perform teleoperated hysterectomies (i.e., surgical removal of the uterus). Hysterectomies are the prevailing solution to treat medical conditions such as uterine cancer, endometriosis, and uterine prolapse. One complication of hysterectomies is accidental injury to the ureters located within millimeters of the uterine arteries that are severed and cauterized to hinder blood flow and enable full uterus removal. By introducing photoacoustic imaging, we aim to visualize the uterine arteries (and potentially the ureter) during this surgery. We developed a specialized light delivery system to surround a da Vinci curved scissor tool and an ultrasound probe was placed externally, representing a transvaginal approach to receive the resulting acoustic signals. Photoacoustic images were acquired while sweeping the tool across a custom 3D uterine vessel model covered in ex vivo bovine tissue that was placed between the 3D model and the light delivery system, as well as between the ultrasound probe and the 3D model (to introduce optical and acoustic scattering). Four tool orientations were explored with the scissors in either open or closed configurations. The optimal tool orientation was determined to be closed scissors with no bending of the tool’s wrist, based on measurements of signal contrast and background signal-to-noise ratios in the corresponding photoacoustic images. We also introduce a new metric, dθ, to determine when the image will change during a sweep, based on the tool position and orientation (i.e., pose), relative to previous poses. Overall, results indicate that photoacoustic imaging is a promising approach to enable visualization of the uterine arteries and thereby guide hysterectomies (and other gynecological surgeries). In addition, results can be extended to other minimally invasive da Vinci surgeries and laparoscopic instruments with similar tip geometry.
Known-component registration for robotic drill guide positioning in spine pedicle screw placement (Conference Presentation)
Thomas Yi, Vignesh Ramchandran, Jeffrey H. Siewerdsen, et al.
Purpose. A novel form of x-ray-guided robotic positioning of surgical instruments is reported and evaluated in preclinical studies of spine pedicle screw placement with the aim of improving and automating safe, accurate delivery of transpedicle K-wire and screw. Methods. The known-component registration (KC-Reg) algorithm was used to register the 3D patient CT and CAD model of a pedicle drill guide to intraoperatively acquired 2D radiographs. The resulting transformations, with offline hand-eye calibration, drive a robotically-held pedicle drill guide to target trajectories established in the preoperative CT of the patient. The proposed method was assessed in comparison to a more conventional tracker-guided approach, and robustness to different clinically realistic errors (e.g., suboptimal fiducial arrangements, gross anatomical deformation) was tested in phantom and cadaver studies. Results. Analyzing the target registration error (TRE) in terms of the deviation from the target plan, the KC-Reg approach resulted in 1.51 ± 0.51 mm error at the tooltip and 1.01 ± 0.92° in approach angle, showing comparable performance to that of the tracker-guided workflow. In cadaver studies with anatomical deformation, TRE of 2.31 ± 1.05 mm and 0.66 ± 0.62° were observed, with statistically improved performance over a surgical tracker through registration of locally rigid bony anatomy. Conclusions. Novel x-ray guidance offers an accurate means of driving robotic systems that is naturally compatible with conventional workflow in fluoroscopically guided procedures. Moreover, the method was robust against anatomical deformation due to the local nature of the radiographic scene used in 3D-2D registration, presenting a potentially major benefit in surgeries.
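The TRE reported above combines a tooltip distance with an approach-angle deviation; a minimal sketch of how such deviations are commonly computed (function name and tuple representation are illustrative assumptions, not the authors' code):

```python
import math

def target_registration_error(tip_actual, tip_planned, dir_actual, dir_planned):
    """Tooltip deviation (same units as input) and approach-angle deviation
    (degrees) between the delivered and planned trajectories."""
    dist = math.dist(tip_actual, tip_planned)
    dot = sum(a * b for a, b in zip(dir_actual, dir_planned))
    norm = math.hypot(*dir_actual) * math.hypot(*dir_planned)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return dist, angle
```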
Image Registration
Clustered iterative sub-atlas registration for improved deformable registration using statistical shape models
B. Ramsay, T. De Silva, R. Han, et al.
Purpose: Statistical atlases provide a valuable basis for registration and guidance in orthopaedic surgery – for example, automatic anatomical segmentation and planning via atlas-to-patient registration. We report the construction of a statistical shape model for the pelvis containing annotations of common surgical trajectories and investigate a novel method for deformable registration that takes advantage of sub-types that may exist within the atlas and uses them in active shape model registration according to sub-atlas similarity of principal components between atlas members and the target (patient) pelvis.

Methods: CT images from 41 subjects (21 males, 20 females) were derived from the Cancer Imaging Archive (TCIA) and segmented using manual/semi-automatic methods. A statistical shape model was constructed and incorporated in an active shape model (ASM) registration framework for atlas-to-patient registration. Further, we introduce a registration method that exploits clusters in the underlying distribution to iteratively perform registrations after selecting a patient-relevant cluster (sub-atlas) that represents similar shape characteristics to the image being registered. Experiments were performed to evaluate surface-to-surface and atlas-to-patient registration algorithms using this clustered iterative model. As an initial investigation of registration based on similar shapes, gender was used as a categorical criterion for selecting a possible sub-atlas.

Results: The RMSE surface-to-surface registration error (mean ± std) was reduced from (2.1 ± 0.2) mm when registering according to the entire atlas (N=40 members) to (1.8 ± 0.1) mm when registering within clusters based on similarity of principal components (N=20 members), showing improved accuracy (p<0.001) with fewer atlas members – an efficiency gained by virtue of the proposed approach. The atlas showed clear clusters in the first two principal components corresponding to gender, and the proposed method demonstrated improved accuracy when using ASM registration as well as when applied to a coherent point drift (CPD) non-rigid deformable registration.

Conclusions: The proposed framework improved atlas-to-patient registration accuracy and increased the efficiency of statistical shape models (i.e., equivalent registration using fewer atlas members) by guiding member selection according to similarity in principal components.
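The abstract describes selecting a sub-atlas by similarity of principal components without specifying the exact criterion; one plausible reading is nearest-centroid selection in principal-component space, sketched below (the function, cluster representation, and criterion are illustrative assumptions):

```python
def select_sub_atlas(target_coords, clusters):
    """Pick the cluster (sub-atlas) whose centroid in principal-component
    space lies closest to the target shape's PC coordinates.

    clusters: dict mapping cluster name -> list of member PC-coordinate tuples.
    """
    def centroid(members):
        n = len(members)
        return [sum(m[i] for m in members) / n for i in range(len(members[0]))]

    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(clusters,
               key=lambda name: sqdist(target_coords, centroid(clusters[name])))
```

ASM registration would then proceed using only the members of the selected sub-atlas, which is how fewer members (N=20) can outperform the full atlas (N=40).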
Technical note: nonrigid registration for laparoscopic liver surgery using sparse intraoperative data
Soft tissue deformation can be a major source of error for image-guided interventions. Deformations associated with laparoscopic liver surgery can be substantially different from those concomitant with open approaches due to intraoperative practices such as abdominal insufflation and variable degrees of mobilization from the supporting ligaments of the liver. This technical note outlines recent contributions towards nonrigid registration for laparoscopic liver surgery published in the Journal of Medical Imaging special issue on image-guided procedures, robotic interventions, and modeling [10]. In particular, we review (1) a characterization of intraoperative liver deformation from clinically-acquired sparse digitizations of the organ surface through a series of laparoscopic-to-open conversions, and (2) a novel deformation correction strategy that leverages a set of control points placed across anatomical regions of mechanical support provided to the organ. Perturbations of these control points on a finite element model were used to iteratively reconstruct the intraoperative deformed organ shape from sparse measurements of the liver surface. These characterization and correction methods for laparoscopic deformation were applied to a retrospective clinical series of 25 laparoscopic-to-open conversions performed under image guidance and a phantom validation framework.
Real-time image-based 3D-2D registration for ultrasound-guided spinal interventions
T. De Silva, A. Uneri, X. Zhang, et al.
Introduction: Ultrasound (US) is a promising low-cost, real-time, portable imaging modality suitable for guidance in spine pain procedures. However, suboptimal image quality and US artifacts confound visualization of deep bony anatomy and have limited its widespread use. Real-time fusion of US images with pre-procedure MRI could provide valuable assistance to guide needle targeting in 3D. To achieve this goal, we propose a fast, entirely image-based 3D-2D rigid registration framework that operates without external hardware tracking and can estimate US probe pose relative to patient position in real-time.

Method: Registration of 2D US (slice) images is performed via the initialization obtained from a fast dictionary search that determines probe pose within a predefined set of pose configurations. 2D slices are extracted from a static 3D US (volume) image to construct a feature dictionary representing different probe poses. Haar features are computed in a four-level pyramid that transforms 2D image intensities to a 1D feature vector, which is in turn matched to the 2D target image. 3D-2D registration was performed with the Haar-based initialization with normalized cross-correlation as the metric and Powell’s method as the optimizer. Reduction to 1D feature vectors presents the potential for major gains in speed compared to registration of the 3D and 2D images directly. The method was validated in experiments conducted in a lumbar spine phantom and a cadaver specimen with known translations imparted by a computerized motion stage.

Results: The Haar feature matching method demonstrated initialization accuracy (mean ± std) = (1.9 ± 1.4) mm and (2.1 ± 1.2) mm in phantom and cadaver studies, respectively. The overall registration accuracy was (2.0 ± 1.3) mm and (1.7 ± 0.9) mm, and the initialization was a necessary and important step in the registration process.

Conclusions: The proposed image-based registration method demonstrated promising results for compensating motion of the US probe. This image-based solution could be an important step toward an entirely image-based, real-time registration method of 2D US to 3D US and pre-procedure MRI, eliminating hardware-based tracking systems in a manner more suitable to clinical workflow.
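The dictionary search above matches 1D feature vectors against the target; normalized cross-correlation, the metric named in the Method, can be sketched in pure Python (the dictionary representation and function names are illustrative assumptions):

```python
import math

def ncc(u, v):
    """Normalized cross-correlation between two 1D feature vectors, in [-1, 1]."""
    n = len(u)
    mu_u, mu_v = sum(u) / n, sum(v) / n
    du = [x - mu_u for x in u]
    dv = [x - mu_v for x in v]
    denom = math.sqrt(sum(x * x for x in du) * sum(x * x for x in dv))
    return sum(a * b for a, b in zip(du, dv)) / denom if denom else 0.0

def best_pose(target_vec, dictionary):
    """Return the dictionary pose whose feature vector best matches the target.

    dictionary: dict mapping pose label -> 1D feature vector.
    """
    return max(dictionary, key=lambda pose: ncc(target_vec, dictionary[pose]))
```

The winning pose then seeds the rigid 3D-2D optimization, which refines it with Powell's method.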
Influence of 4D CT motion artifacts on correspondence model-based 4D dose accumulation
Thilo Sothmann, Tobias Gauer, René Werner
In radiotherapy (RT) of moving targets, motion artifacts in 4D CT planning data can be hypothesized to influence accuracy of RT treatment planning steps. Especially results of deformable image registration (DIR) of 4D CT phase images and DIR-based dose accumulation/4D dose simulation can be assumed to be directly affected. In this study, the influence of typical 4D CT "double structure" and "interpolation" artifacts on correspondence model-based 4D dose simulation is investigated. The correspondence model correlates patient-specific DIR-based internal motion information and external breathing signals, which allows for integration of respiratory variability into 4D dose simulation. Artifact-free 4D CT data of 6 lung and liver cancer patients were manipulated to contain mentioned artifacts. Correspondence model-based dose accumulation was performed in both artifact-free and artifact-affected data sets. Overall, the effect of "double structure" artifacts was negligible, whereas "interpolation" artifacts noticeably influenced dose accumulation accuracy.
Deformable registration of radiation isodose lines to delayed contrast-enhanced magnetic resonance images for assessment of myocardial lesion formation following proton beam therapy
Ventricular tachycardia is increasingly treated with ablation therapy, a technique in which catheters are guided into the ventricle and radiofrequency energy is delivered into the myocardial tissue to interrupt arrhythmic electrical pathways. Recent efforts have investigated the use of noninvasive external beam therapy for treatment of ventricular tachycardia where target regions are identified in the myocardium and treated using external beams. The relationship between the planned dose map and myocardial tissue change, however, has not yet been quantified. In this work, we use a deformable registration algorithm to align dose maps planned from baseline computed-tomography scans to delayed contrast-enhanced magnetic resonance imaging scans taken at 4 week intervals following proton beam therapy. From this data, the relationship between the planned dose and image enhancement, which serves as a surrogate for tissue change, can be quantified.
WORKSHOP: Selected Papers from the Journal of Medical Imaging Special Issue
Technical note: on-the-fly augmented reality for orthopaedic surgery using a multi-modal fiducial
Sebastian Andress, Alex Johson, Mathias Unberath, et al.
C-Arm X-Ray systems are the workhorse modality for guidance of percutaneous orthopaedic surgical procedures. However, two-dimensional observations of the three-dimensional anatomy suffer from the effects of projective simplification. Consequently, many X-Ray images from various orientations need to be acquired for the surgeon to accurately assess the spatial relations between the patient’s anatomy and the surgical tools.

In this paper, we present an on-the-fly surgical support system that provides guidance using augmented reality and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multi-modality marker and simultaneous localization and mapping technique to co-calibrate an optical see-through head mounted display to a C-Arm fluoroscopy system. Then, annotations on the 2-D X-Ray images can be rendered as virtual objects in 3-D, providing surgical guidance. In a feasibility study on a semi-anthropomorphic phantom, we found the accuracy of our system to be comparable to the traditional image-guided technique while substantially reducing the number of acquired X-Ray images as well as procedure time. Our promising results encourage further research on the interaction between virtual and real objects, which we believe will directly benefit the proposed method. Further, we would like to explore the capabilities of our on-the-fly augmented reality support system in a larger study directed towards common orthopaedic interventions.
Technical note: probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy
Marcel Tella-Amo, Loic Peter, Dzhoshkun I. Shakir, et al.
The most effective treatment for Twin-to-Twin Transfusion Syndrome is laser photocoagulation of the shared vascular anastomoses in the placenta. Vascular connections are extremely challenging to locate due to their caliber and the reduced field of view of the fetoscope. Therefore, mosaicking techniques are beneficial to expand the scene, facilitate navigation and allow vessel photocoagulation decision-making. Local vision-based mosaicking algorithms inherently drift over time due to the use of pairwise transformations. We propose the use of an electromagnetic tracker (EMT) sensor mounted at the tip of the fetoscope to obtain camera pose measurements, which we incorporate into a probabilistic framework with frame-to-frame visual information to achieve globally consistent sequential mosaics. We parametrize the problem in terms of plane and camera poses constrained by EMT measurements to enforce global consistency while leveraging pairwise image relationships in a sequential fashion through the use of Local Bundle Adjustment. We show that our approach is drift-free and performs similarly to state-of-the-art global alignment techniques like Bundle Adjustment albeit with much less computational burden. Additionally, we propose a version of Bundle Adjustment that uses EMT information. We demonstrate the robustness to EMT noise and loss of visual information and evaluate mosaics for synthetic, phantom-based and ex vivo datasets.
Technical note: an augmented reality system for total hip arthroplasty
Javad Fotouhi, Clayton P. Alexander, Mathias Unberath, et al.
Proper implant alignment is a critical step in total hip arthroplasty (THA) procedures. In current practice, correct alignment of the acetabular cup is verified in C-arm X-ray images that are acquired in an anterior-posterior (AP) view. Favorable surgical outcome is, therefore, heavily dependent on the surgeon’s experience in understanding the 3D orientation of a hemispheric implant from 2D AP projection images. This work proposes an easy-to-use intra-operative component planning system based on two C-arm X-ray images that is combined with 3D augmented reality (AR) visualization that simplifies impactor and cup placement according to the planning by providing a real-time RGBD data overlay. We evaluate the feasibility of our system in a user study comprising four orthopedic surgeons at the Johns Hopkins Hospital, and also report errors in translation, anteversion, and abduction as low as 1.98 mm, 1.10°, and 0.53°, respectively. The promising performance of this AR solution shows that deploying this system could eliminate the need for excessive radiation, simplify the intervention, and enable reproducibly accurate placement of acetabular implants.
Technical note: automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network
Ester Bonmati, Yipeng Hu, Nikhil Sindhwani, et al.
Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics that are important for pelvic floor disorder assessment. In this work, we present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a 2D image extracted from a 3D ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalising activation function. SELU has important advantages such as being parameter-free and mini-batch independent. A dataset with 91 images from 35 patients, all labelled by three operators, is used for training and evaluation in a leave-one-patient-out cross-validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams’ index of 1.03), and outperforming a U-Net architecture without the need for batch normalisation. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semi-automatic approach.
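The SELU activation named above has a fixed closed form; a minimal sketch with the standard constants from the self-normalizing neural networks literature (this illustrates the activation only, not the paper's network):

```python
import math

# Standard SELU constants (Klambauer et al., "Self-Normalizing Neural
# Networks", 2017); fixed, so the activation is parameter-free.
SELU_ALPHA = 1.6732632423543772
SELU_LAMBDA = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit for a single scalar input."""
    return SELU_LAMBDA * (x if x > 0 else SELU_ALPHA * (math.exp(x) - 1.0))
```

Because the constants drive activations toward zero mean and unit variance across layers, no batch normalisation (and hence no mini-batch statistics) is needed.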
Technical note: known-component registration for robotic drill guide positioning
T. Yi, V. Ramchandran, J. H. Siewerdsen, et al.
A method for x-ray-guided robotic positioning of surgical instruments is reported and evaluated in preclinical studies of spine pedicle screw placement with the aim of improving delivery of transpedicle drills and screws. The known-component registration (KC-Reg) algorithm was used to register the 3D patient CT and the surface model of a drill guide to intraoperatively acquired 2D radiographs. Resulting transformations, combined with offline hand-eye calibration, drive a robotically-held drill guide to target trajectories established in the preoperative patient CT. The proposed method was assessed against more conventional surgical tracker guidance, and robustness to clinically realistic errors was tested in phantom and cadaver studies. Target registration error (TRE) was computed as drill guide deviation from the planned trajectory. The KC-Reg approach resulted in 1.51 ± 0.51 mm error at tooltip and 1.01 ± 0.92° in approach angle, showing comparable performance to the tracker-guided approach. In cadaver studies with anatomical deformation, TRE of 2.31 ± 1.05 mm and 0.66 ± 0.62° were observed, with statistically improved performance over a surgical tracker through registration of locally rigid bony anatomy. X-ray guidance offers an accurate means of driving robotic systems that is compatible with conventional fluoroscopic workflow. Specifically, such procedures involve multi-planar fluoroscopic views that are qualitatively interpreted by the surgeon; the KC-Reg approach accomplishes this using the same multi-planar views to provide greater quantitative accuracy and valuable guidance and QA. The method was robust against anatomical deformation due to the local nature of the radiographic scene used in registration, presenting a potentially major surgical benefit.
Technical note: design and validation of an open-source library of dynamic reference frames for research and education in optical tracking
A. Brown, A. Uneri, T. De Silva, et al.
Purpose: Dynamic reference frames (DRFs) are a common component of surgical tracking systems, but there is a limited number of commercially available, valid tool designs, presenting a limitation to researchers in image-guided surgery and other communities. This work presents the development and validation of a large, open-source library of DRFs for passive optical tracking systems. Methods: Ten groups of DRF designs were generated according to an algorithm based on intra- and inter-tool design constraints. Validation studies were performed using a Polaris Vicra tracker (NDI) to compare the performance of each DRF in group A to a standard commercially available reference tool, including tool-tip pivot calibration and measurement of fiducial registration error (FRE) on a computer-controlled bench. Results: The resulting library of DRFs includes 10 groups: one with 10 DRFs and nine with 6 DRFs each. Each group includes one tool geometrically equivalent to a common commercially available DRF (NDI #8700339). Fiducial registration error (FRE) was 0.15 ± 0.03 mm, indistinguishable from the reference. Conclusions: The library of custom DRF designs performs equivalently to common, commercially available reference DRFs and presents a multitude of distinct, simultaneously-trackable DRF designs. The open-source library contains files suitable for 3D printing as well as tool definition files ready to download for research purposes.
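FRE, the benchmark metric above, is conventionally the root-mean-square residual between registered marker positions and their design positions; a minimal sketch under that conventional definition (the function name and point representation are illustrative assumptions):

```python
import math

def fiducial_registration_error(measured, reference):
    """RMS distance between registered fiducial positions and their
    reference (design) positions, each given as a list of 3D points."""
    sq = [math.dist(m, r) ** 2 for m, r in zip(measured, reference)]
    return math.sqrt(sum(sq) / len(sq))
```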
Technical note: on cardiac ablation lesion visualization for image-guided therapy monitoring
The delivery of insufficient thermal dose is a significant contributor to incomplete tissue ablation and leads to arrhythmia recurrence and a large number of patients requiring repeat procedures. In concert with ongoing research efforts aimed at better characterizing the RF energy delivery, here we propose a method that entails modeling and visualization of the lesions in real time. The described image-based ablation model relies on classical heat transfer principles to estimate tissue temperature in response to the ablation parameters, tissue properties, and duration. The ablation lesion quality, geometry, and overall progression are quantified on a voxel-by-voxel basis according to each voxel’s cumulative temperature and time exposure. The model was evaluated both numerically under different parameter conditions, as well as experimentally, using ex vivo bovine tissue samples. This study suggests that the proposed technique provides reasonably accurate and sufficiently fast visualizations of the delivered ablation lesions.
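The abstract quantifies lesion progression per voxel from cumulative temperature and time exposure. One standard way to express such a dose — offered here only as an illustrative assumption, not necessarily the paper's exact model — is CEM43 (cumulative equivalent minutes at 43 °C, Sapareto–Dewey formulation):

```python
import numpy as np

def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 degC for one voxel's sampled
    temperature history (Sapareto-Dewey: R = 0.5 above 43 degC, 0.25 below)."""
    temps = np.asarray(temps_c, float)
    R = np.where(temps >= 43.0, 0.5, 0.25)
    return float(np.sum(dt_min * R ** (43.0 - temps)))

# A voxel held at 50 degC for one minute accrues 2**7 = 128x the reference dose
dose_hot = cem43([50.0] * 60, dt_min=1.0 / 60)   # 1 min at 50 degC, 1 s samples
dose_ref = cem43([43.0] * 60, dt_min=1.0 / 60)   # 1 min at 43 degC
```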
Technical note: a radiomic signature of infiltration in peritumoral edema predicts subsequent recurrence in glioblastoma
Saima Rathore, Hamed Akbari, Jimit Doshi, et al.
Standard surgical resection of glioblastoma, mainly guided by the enhancement on post-contrast T1-weighted magnetic resonance imaging (MRI), disregards infiltrating tumor within the peritumoral edema region. Subsequent radiotherapy typically delivers uniform radiation to peritumoral FLAIR-hyperintense regions, without attempting to target areas likely to be infiltrated more heavily. Non-invasive in vivo delineation of the areas of tumor infiltration and prediction of early recurrence in peritumoral edema region could assist in targeted intensification of local therapies, thereby potentially delaying recurrence and prolonging survival. This paper presents a method for estimating peritumoral edema infiltration using radiomic signatures determined via machine learning methods, and tests it on 90 patients with de novo glioblastoma. The generalizability of the proposed predictive model was evaluated via cross-validation in a discovery cohort (n=31), and was subsequently evaluated in a replication cohort (n=59). Spatial maps representing the likelihood of tumor infiltration and future early recurrence were compared with regions of recurrence on post-resection follow-up studies. The cross-validated accuracy of our predictive infiltration model on the discovery and replication cohorts was 87.51% (odds ratio=10.22, sensitivity=80.65, specificity=87.63) and 89.54% (odds ratio=13.66, sensitivity=97.06, specificity=76.73), respectively. The radiomic signature of the recurrent tumor region revealed higher vascularity and cellularity when compared with the nonrecurrent region. The proposed model shows evidence that multi-parametric pattern analysis from clinical MRI sequences can assist in in vivo estimation of the spatial extent and pattern of tumor recurrence in peritumoral edema, which may guide supratotal resection and/or intensification of postoperative radiation therapy.
Neurological Procedures and Technologies
Model-based correction for brain shift in deep brain stimulation burr hole procedures: a comparison using interventional magnetic resonance imaging
Ma Luo, Saramati Narasimhan, Alastair J. Martin, et al.
Deep brain stimulation (DBS) is an effective treatment for movement disorders, e.g. Parkinson’s disease. The quality of DBS treatment is dependent on the implantation accuracy of DBS electrode leads into target structures. However, brain shift during burr hole procedures has been documented and hypothesized to negatively impact treatment quality. Several approaches have been proposed to compensate for brain shift in DBS, namely microelectrode recording (MER) and interventional magnetic resonance (iMR) imaging. Though both demonstrate benefits in guiding accurate electrode placement, they suffer drawbacks such as prolonged procedures and, in the latter, cost considerations. Hence, we are exploring a model-based brain shift compensation strategy in DBS to improve targeting accuracy for surgical navigation. Our method is a deformation-atlas-based approach, i.e. potential intraoperative deformations are pre-computed via a biomechanical model under varying conditions, combined with an inverse problem driven by sparse intraoperative data for estimating volumetric brain deformations. In this preliminary feasibility study, we examine our model’s ability to predict brain shift in DBS by comparing with iMR in one patient. The evaluation includes: (1) a subsurface deformation comparison where subsurface shifts measured by iMR are compared to model-predicted counterparts; (2) a second comparison at surgical targets where the atlas method is compared to deformations measured by non-rigid image-to-image registration using the preoperative image and iMR. For the former, the model reduces alignment error from 8.6 ± 1.4 to 3.6 ± 0.8 mm, representing ~58.6% correction. For the latter, model-estimated brain shifts at surgical targets are 2.4 and 0.6 mm, consistent with clinical observations.
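The deformation-atlas idea — pre-computed deformation solutions combined through an inverse problem driven by sparse intraoperative data — can be caricatured as a linear least-squares fit. The mode count, measurement indices, and weights below are invented purely for illustration:

```python
import numpy as np

# Toy deformation atlas: each column is one pre-computed whole-brain shift
# pattern (flattened displacement DOFs). Sparse intraoperative measurements
# constrain the combination weights via least squares.
rng = np.random.default_rng(0)
n_dof, n_modes = 300, 5
atlas = rng.normal(size=(n_dof, n_modes))          # pre-computed deformation modes
true_w = np.array([0.8, 0.1, 0.0, 0.05, 0.0])      # "actual" intraoperative state
full_field = atlas @ true_w                        # ground-truth volumetric shift

# Only a sparse subset of DOFs is observed intraoperatively (e.g. surface data)
sparse_idx = rng.choice(n_dof, size=40, replace=False)
measured = full_field[sparse_idx]

# Inverse problem: recover mode weights from the sparse data, then
# reconstruct the volumetric deformation everywhere
w, *_ = np.linalg.lstsq(atlas[sparse_idx], measured, rcond=None)
reconstructed = atlas @ w
```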
Resection-induced brain-shift compensation using vessel-based methods
Fanny Morin, Hadrien Courtecuisse, Ingerid Reinertsen, et al.
Most brain-shift compensation methods address the problem of updating preoperative images to reflect brain deformations following the craniotomy and dura opening. However, few take into account the resection-induced deformations occurring throughout the tumor removal procedure. This paper evaluates the use of two existing methods to tackle that problem. Both techniques rely on blood vessels segmented and then skeletonized from preoperative MR Angiography and navigated Doppler Ultrasound images acquired during resection. While the first one registers the vascular trees using a rigid modified ICP algorithm, the second method relies on a non-rigid, constraint-based biomechanical approach. Quantitative results are provided, based on distances between paired landmarks set on blood vessels and anatomical structures delineated on medical images. A qualitative evaluation of the compensation is also presented using initial and updated images. An analysis of three cases of surface tumors shows that both methods, especially the biomechanical one, can compensate up to 63% of the brain shift, with an error in the range of 2 mm. However, these results were not reproduced on a more complex case of deep tumor. While more patients must be included, these preliminary results show that vessel-based methods are well suited to compensate for resection-induced brain shift, although better outcomes in complex cases will require improving the methods to take the resection into account.
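The first technique registers skeletonized vascular trees with a modified rigid ICP. A toy point-to-point ICP on synthetic centerline points (without the paper's modifications) conveys the basic idea of alternating nearest-neighbor matching and rigid refitting:

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rigid transform (SVD) mapping src points onto dst points."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Basic rigid ICP: match each source point to its nearest target point,
    refit, repeat. A toy stand-in for the modified ICP used in the paper."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]          # nearest-neighbor correspondences
        R, t = best_rigid(src, matched)
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot, src

# Synthetic vessel centerline, displaced by a known small rigid motion
line = np.c_[np.linspace(0, 40, 50), 5 * np.sin(np.linspace(0, 3, 50)), np.zeros(50)]
theta = np.radians(2.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = line @ Rz.T + np.array([1.0, -0.5, 0.2])
R, t, aligned = icp(line, moved)
```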
X-ray image guidance workflow development for in-vivo aneurysm treatment using a new retrievable asymmetric flow diverter (RAFD)
Ciprian N. Ionita, Ashwin Venkataraman, Alexander Podgorsak, et al.
The vascular procedures during in-vivo aneurysm treatment with the Retrievable Asymmetric Flow Diverter (RAFD) prototype require accurate x-ray image guidance and flow diversion assessment using angiography. The new device is made of a high-porosity scaffold which supports a low-porosity patch used to divert the blood flow from the aneurysm. Two platinum markers have been added to allow stent placement in the longitudinal and azimuthal directions with regard to the aneurysm ostium. The retrievability of the device allows for multiple re-deployments until the placement is optimal. Eleven elastase aneurysms were created and treated with the new device. Placement was done using high-definition x-ray fluoroscopy and optimal blood flow diversion was verified using angiography. Once the flow diversion was confirmed, the devices were deployed using electrolytic detachment. Angiograms pre- and post-stent placement were analyzed using parametric imaging based on dye dilution curves of injected contrast. Average values of the area under the curve (AUC), Mean Transit Time (MTT) and Peak Value (PV) for the aneurysms were measured and normalized to the values recorded in the main vessel. Fluoroscopy time for the device deployment was 15.30±5.30 minutes. Angiographic analysis indicated that the average normalized MTT increased 227%, while AUC decreased 51% and PV decreased 30%. In conclusion, the device was successfully deployed in eleven rabbits. Based on angiogram analysis, significant flow diversion was observed. Overall this report demonstrates that the imaging workflow we developed for the new device placement was implemented successfully.
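The dye-dilution parameters (AUC, PV, MTT) derive from the contrast time-density curve of each region. A hedged sketch on a synthetic gamma-variate curve, taking MTT as the normalized first temporal moment (a common convention; the paper's exact definitions may differ):

```python
import numpy as np

def tdc_params(curve, dt):
    """AUC, peak value, and mean transit time of a baseline-subtracted
    contrast time-density curve (trapezoidal integration)."""
    c = np.asarray(curve, float)
    t = np.arange(len(c)) * dt

    def trap(y):                      # trapezoidal rule with uniform spacing
        return float(((y[1:] + y[:-1]) / 2.0).sum() * dt)

    auc = trap(c)
    pv = float(c.max())
    mtt = trap(c * t) / auc           # normalized first temporal moment
    return auc, pv, mtt

# Synthetic gamma-variate dilution curve sampled at 10 Hz
t = np.arange(0.0, 20.0, 0.1)
curve = t ** 2 * np.exp(-t / 1.5)
auc, pv, mtt = tdc_params(curve, 0.1)
```

For this curve the analytic values are AUC = 2·1.5³ = 6.75 and MTT = 4.5 s, which the discrete estimates closely match.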
Image updating for brain deformation compensation: cross-validation with intraoperative ultrasound
Xiaoyao Fan, David W. Roberts, Jonathan D. Olson, et al.
Intraoperative image guidance using preoperative MR images (pMR) is widely used in neurosurgery, but the accuracy can be compromised by brain deformation as soon as the dura is open. Biomechanical finite element models (FEM) have been developed to compensate for brain deformation that occurs at different surgical stages. Intraoperative sparse data extracted from the exposed cortical surface and/or from deeper brain is used to drive the FEM model to compute the whole-brain deformation field and produce model-updated MR (uMR) that matches the surgical scene. In previous studies, we quantified the accuracy using model-data misfit (i.e., the root-mean-square error between model estimates and sparse data), as well as target registration errors (TRE) of surface features (such as vessel junctions), and showed that the accuracy on the cortical surface was ~1-2 mm. However, the accuracy in deeper brain has not been investigated, as it is challenging to obtain subsurface features during surgery for accuracy assessment. In this study, we used intraoperative stereovision (iSV) to extract sparse data, which was employed to drive the FEM model and produce uMR, and acquired co-registered intraoperative ultrasound images (iUS) at different surgical stages in 2 cases for cross-validation. We quantify model-data misfit, and compare model-updated MR with iUS for qualitative assessment of accuracy in deeper brain. The results show that the model-data misfit was 2.39 and 0.64 mm, respectively, for the 2 cases reported, and uMR aligned well with both iSV and iUS, indicating a good accuracy both on the surface and in deeper brain.
Neurosurgical burr hole placement using the Microsoft HoloLens
Emily Rae, Andras Lasso, Matthew S. Holden, et al.
PURPOSE: Tracked navigation systems are generally impractical in bedside neurosurgical procedures, such as a twist-drill craniostomy for the removal of a subdural hematoma, where the use of navigation could optimize the placement of the drill in relation to the underlying fluid. We use the Microsoft HoloLens to display a hologram floating in the patient’s head to mark a burr hole on the skull. METHODS: A 3D model of the head, hematoma and burr hole is created from CT and imported to the HoloLens. The hologram is interactively registered to the patient and the burr hole is marked on the skull. 3D Slicer, Unity, and Visual Studio were used for software development. The system was tested by 6 inexperienced and 1 experienced users. They each performed 6 registrations on phantoms with fiducial markers placed at 3 plausible burr hole locations on each side of the head. Registration accuracy was determined by measuring the distance between the holographic and physical markers. RESULTS: Inexperienced users placed 98% of the markers within the clinically acceptable range of 10 mm in an average time of 4:46 min. The experienced user placed 100% of the markers within the acceptable range in an average time of 2:52 min. CONCLUSION: It is feasible to mark a neurosurgical burr hole location with clinically acceptable accuracy using the Microsoft HoloLens, within an acceptable length of time. This technology may also prove useful for procedures that require higher accuracy of drill location and drain trajectory such as the placement of external ventricular drains.
Ultrasound Imaging and Detection Methods
3D ultrasound guidance system for permanent breast seed implantation: integrated system performance and phantom procedure
Justin Michael, Jessica R. Rodgers, Daniel Morton, et al.
Permanent breast seed implantation (PBSI) is a single-visit accelerated partial breast irradiation method that uses needles inserted via a template to distribute Pd-103 radioactive seeds with two-dimensional (2D) ultrasound (US) guidance. This guidance approach is limited by its dependence on the operator and average seed placement errors greater than benchmark values established by dosimetric studies. We propose the use of a three-dimensional (3D) US imaging approach for needle guidance with integrated template tracking. We previously described the preliminary development and validation of the 3D US mechatronic system. The present work demonstrates the accuracy of the integrated system by quantifying agreement between tracking and imaging sub-systems and its use guiding a phantom procedure. Tracking error was measured by inserting a needle a known distance through the template and comparing the expected tip position from tracking to the observed tip position from imaging. Mean ± standard deviation differences in needle tip position and angle were 2.90 ± 0.76 mm and 1.77 ± 0.98°, respectively, validating the needle tracking accuracy of the developed system. The system was used to guide 15 needles into a patient-specific phantom according to the accompanying treatment plan, with micro-CT images taken before and after insertion to evaluate placement accuracy. Seed positions were modelled using needle positions and the resulting dosimetry compared to a procedure-specific benchmark. The mean tip difference was 2.08 mm while the mean angular difference was 2.6°, resulting in acceptable dosimetric coverage. These results demonstrate 3D US as a potentially feasible technique for PBSI guidance.
Feature study on catheter detection in three-dimensional ultrasound
The usage of three-dimensional ultrasound (3D US) during image-guided interventions, e.g. cardiac catheterization, has increased recently. To accurately and consistently detect and track catheters or guidewires in the US image during the intervention, additional training of the sonographer or physician is needed. As a result, image-based catheter detection can help the sonographer interpret the position and orientation of a catheter in the 3D US volume. However, due to the limited spatial resolution of 3D cardiac US and complex anatomical structures inside the heart, image-based catheter detection is challenging. In this paper, we study 3D image features for image-based catheter detection using supervised learning methods. To better describe the catheter in 3D US, we extend the Frangi vesselness feature into a multi-scale Objectness feature and a Hessian element feature, which extract more discriminative information about catheter voxels in a 3D US volume. In addition, we introduce a multi-scale statistical 3D feature to enrich and enhance the information for voxel-based classification. Extensive experiments on several in-vitro and ex-vivo datasets show that our proposed features improve the precision from 45% to at least 69%, and up to 76%, at a high recall rate of 75%, when compared to the traditional multi-scale Frangi features. As for clinical application, the high accuracy of voxel-based classification enables more robust catheter detection in complex anatomical structures.
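The multi-scale Objectness feature extends Frangi's vesselness, which at a single scale is computed from the ordered eigenvalues of the image Hessian. A sketch of the classic Frangi response (not the authors' extension), with the customary α, β, c parameters:

```python
import numpy as np

def frangi_vesselness(l1, l2, l3, alpha=0.5, beta=0.5, c=15.0):
    """Frangi's 3D vesselness from Hessian eigenvalues sorted |l1|<=|l2|<=|l3|.
    Bright tubular structures require l2 and l3 to be strongly negative."""
    if l2 > 0 or l3 > 0:
        return 0.0
    Ra = abs(l2) / abs(l3)                      # plate-vs-line discriminator
    Rb = abs(l1) / np.sqrt(abs(l2 * l3))        # blob deviation
    S = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)    # second-order "structureness"
    return float((1 - np.exp(-Ra ** 2 / (2 * alpha ** 2)))
                 * np.exp(-Rb ** 2 / (2 * beta ** 2))
                 * (1 - np.exp(-S ** 2 / (2 * c ** 2))))

tube = frangi_vesselness(0.1, -20.0, -21.0)    # line-like: high response
plate = frangi_vesselness(0.1, -0.5, -21.0)    # plate-like: near-zero response
```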
Coherent needle detection in ultrasound volumes using 3D conditional random fields
3D ultrasound (US) transducers will improve the quality of image-guided medical interventions if automated detection of the needle becomes possible. Image-based detection of the needle is challenging due to the presence of other echogenic structures in the acquired data, inconsistent visibility of needle parts, and the low image quality of US. As the currently applied approaches for needle detection classify each voxel individually, they do not consider the global relations between voxels. In this work, we introduce coherent needle labeling by using dense conditional random fields over a volume, along with 3D space-frequency features. The proposal includes long-distance dependencies between voxel pairs according to their similarities in the feature space and their spatial distance. This post-processing stage leads to better label assignment of volume voxels and a more compact and coherent segmented region. Our ex-vivo experiments, based on measuring the F-1, F-2 and IoU scores, show that the performance improves by a significant 10-20% compared with using only a linear SVM for voxel classification as a baseline.
Compliant joint echogenicity in ultrasound images: towards highly visible steerable needles
Nick J. van de Berg, Juan A. Sánchez-Margallo, Thomas Langø, et al.
Radio frequency ablation is commonly used in the treatment of hepatocellular carcinoma. Clinicians rely on imaging techniques, such as medical ultrasound, to confirm an accurate needle placement. This accuracy may improve by means of active needle steering techniques, which are currently in development. Needle steering will likely increase the clinician’s reliance on imaging techniques. This has motivated the study of the echogenicity of steerable needle joint structures. Two needles were manufactured with arrays of kerfs, similar to the compliant joint structures found in steerable needles. The needle visibility was compared to a smooth surface needle and a commercially available RFA needle. The visibility was quantified for both the shaft and tip, by means of a contrast-to-noise ratio (CNR). CNR data were obtained for three insertion angles. The results show that the CNRs of the compliant joint structures were consistently higher than those of the smooth surface needle, whereas they were either higher than or comparable to those of the RFA needle. For acute insertion angles, the bevel tip of the RFA needle had a higher CNR than the conical tip of the kerfed needles, motivating the extension of this visibility study to the full needle design.
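One common CNR definition for such visibility studies divides the mean intensity difference between the needle region and the background speckle by the pooled noise. The paper's exact formula may differ, so treat this as one plausible variant with invented pixel statistics:

```python
import numpy as np

def cnr(region, background):
    """Contrast-to-noise ratio between a needle region and background speckle:
    |mean difference| over the pooled standard deviation."""
    r, b = np.asarray(region, float), np.asarray(background, float)
    return float(abs(r.mean() - b.mean()) / np.sqrt(r.var() + b.var()))

# Hypothetical 8-bit intensity samples from a B-mode image
rng = np.random.default_rng(0)
needle = rng.normal(180, 10, 500)      # bright, echogenic needle pixels
speckle = rng.normal(90, 20, 5000)     # darker background speckle
score = cnr(needle, speckle)
```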
Real-time transverse process detection in ultrasound
Csaba Pinter, Bryan Travers, Zachary Baum, et al.
PURPOSE: Ultrasound offers a safe radiation-free approach to visualize the spine and measure or assess scoliosis. However, ultrasound assessment also poses major challenges. We propose a real-time algorithm and software implementation to automatically delineate the posterior surface patches of transverse processes in tracked ultrasound; a necessary step toward the ultimate goal of spinal curvature measurement.

METHODS: Following a pre-filtering of each captured ultrasound image, the shadows cast by each transverse process bone are examined and contours that are likely posterior bone surfaces are kept. From these contours, a three-dimensional volume of the bone surfaces is created in real-time as the operator acquires the images. The processing algorithm was implemented on the PLUS and 3D Slicer open-source software platforms.

RESULTS: The algorithm was tested with images captured using the SonixTouch ultrasound scanner, Ultrasonix C5-2 curvilinear transducer, and NDI trakSTAR electromagnetic tracker. Ultrasound data were collected from patients presenting with adolescent idiopathic scoliosis. The system was able to produce posterior surface patches of the transverse processes in real-time, as the images were acquired by a non-expert sonographer. The resulting transverse process surface patches were compared with manual segmentation by an expert, with an average Hausdorff distance of 3.0 mm.

CONCLUSION: The resulting surface patches are expected to be sufficiently accurate for driving a deformable registration between the ultrasound space and a generic spine model, to allow for three-dimensional visualization of the spine and measuring its curvature.
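The reported average Hausdorff distance compares two surface patches as point sets. A minimal sketch of the symmetric Hausdorff distance and its averaged variant, on invented point coordinates:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (surface patches)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))  # pairwise distances
    return float(max(d.min(1).max(), d.min(0).max()))

def avg_hausdorff(a, b):
    """Average of the two directed mean nearest-neighbor surface distances."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return float((d.min(1).mean() + d.min(0).mean()) / 2)

# Two toy "surface patches": one shifted 3 mm posteriorly
patch = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float)
shifted = patch + [0, 3, 0]
```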
Visual aid for identifying vertebral landmarks in ultrasound
Zachary Baum, Tamas Ungi, Andras Lasso, et al.
PURPOSE: Vertebral landmark identification with ultrasound is notoriously difficult. We propose to assist the user in identifying vertebral landmarks by overlaying a visual aid in the ultrasound image space during the identification process. METHODS: The operator first identifies a few salient landmarks. From those, a generic healthy spine model is deformably registered to the ultrasound space and superimposed on the images, providing visual aid to the operator in finding additional landmarks. The registration is re-computed with each identified landmark. A spatially tracked ultrasound system and associated software were developed. To evaluate the system, six operators identified vertebral landmarks using ultrasound images alone, and using ultrasound images paired with 3D spine visualizations. Operator performance and inter-operator variability were analyzed. Software usability was assessed following the study through a questionnaire. RESULTS: Operators were significantly more successful in landmark identification using visualizations and ultrasound than with ultrasound only (82 [72 – 94] % vs 51 [37 – 67] %, respectively; p = 0.0012). Time to completion was higher using visualizations and ultrasound than with ultrasound only (842 [448 – 1136] s vs 612 [434 – 785] s, respectively; p = 0.0468). Operators felt that 3D visualizations helped them identify landmarks, and visualize the spine and vertebrae. CONCLUSION: A three-dimensional visual aid was developed to assist in vertebral landmark identification using a tracked ultrasound system by deformably registering and visualizing a healthy spine model in ultrasound space. Operators found the visual aid useful and were able to identify significantly more vertebral landmarks than without it.
Enhanced Reality, Simulation, and Planning
Assisted needle guidance using smart see-through glasses
Ming Li, Sheng Xu, Brad J. Wood
Accurate needle placement largely depends on physicians’ visuospatial skills in CT-guided interventions. To reduce the reliance on operator experience and enhance accuracy, we developed an augmented reality system using smart see-through glasses to facilitate and assist bedside needle angle guidance. The AR system was developed using Unity and the Vuforia SDK. It displays the planned needle angle on the glasses’ see-through screens in real-time based on the glasses’ orientation. The displayed angle is always referenced to the CT table and independent of the physical orientation of the glasses. The see-through feature allows the operator to compare the actual needle and the planned needle angle continuously. The glasses’ orientation was tracked by the built-in gyroscope. The offset between the embedded gyroscope and the glasses’ display frame was pre-calibrated. A quick one-touch calibration method between the glasses and the CT frame was implemented. Hardware accuracy and guidance accuracy were evaluated in phantom studies. In the first test, a needle was inserted in the phantom and scanned with CT. The measured angle in the CT scan was set on the glasses. We took a snapshot through the lens and compared the needle vector and guideline in the saved snapshot. The hardware accuracy was within 0.98 ± 0.85 degrees. In the second test, after each insertion guided by the glasses, a CT scan was taken to validate the insertion angle error. The accuracy of the guidance was within 1.33 ± 0.73 degrees. Smart glasses can provide accurate guidance for needle-based interventions with minimal disturbance of the standard clinical workflow.
Exploration using holographic hands as a modality for skills training in medicine
Regina Leung, Andras Lasso, Matthew S. Holden, et al.
PURPOSE: Gaining proficiency in technical skills involving specific hand motions is prevalent across all disciplines of medicine and particularly relevant in learning surgical skills such as knot tying. We propose a new form of self-directed learning where a pair of holographic hands is projected in front of the trainee using the Microsoft HoloLens and guides them through learning various basic hand motions relevant to surgery and medicine. This study looks at the feasibility and effectiveness of using holographic hands as a skills training modality for learning hand motions compared to the traditional methods of apprenticeship and video-based learning. METHODS: 9 participants were recruited and each learned 6 different hand motions from 3 different modalities (video, apprenticeship, HoloLens). Results of successful completion and feedback on effectiveness were obtained through a questionnaire. RESULTS: Participants had a considerable preference for learning from HoloLens and apprenticeship and a higher success rate of learning hand motions compared to video-based learning. Furthermore, learning with holographic hands was shown to be comparable to apprenticeship in terms of both effectiveness and success rate. However, more participants still selected apprenticeship as a preferred learning method compared to HoloLens. CONCLUSION: This initial pilot study shows promising results for using holographic hands as a new effective form of self-directed apprenticeship learning that can be applied to learning a wide variety of skills requiring hand motions in medicine. Work continues toward implementing this technology in knot tying and suture tutoring modules in our undergraduate medical curriculum.
High fidelity virtual reality orthognathic surgery simulator
Surgical simulators are powerful tools that assist in providing advanced training for complex craniofacial surgical procedures and objective skills assessment such as the ones needed to perform Bilateral Sagittal Split Osteotomy (BSSO). One of the crucial steps in simulating BSSO is accurately cutting the mandible in a specific area of the jaw, where surgeons rely on high fidelity visual and haptic cues. In this paper, we present methods to simulate drilling and cutting of the bone using the burr and the motorized oscillating saw, respectively. Our method allows bone drilling or cutting at low computational cost while providing high fidelity haptic feedback that is suitable for real-time virtual surgery simulation.
Augmented reality needle ablation guidance tool for irreversible electroporation in the pancreas
Timur Kuzhagaliyev, Neil T. Clancy, Mirek Janatka, et al.
Irreversible electroporation (IRE) is a soft tissue ablation technique suitable for treatment of inoperable tumours in the pancreas. The process involves applying a high voltage electric field to the tissue containing the mass using needle electrodes, leaving cancerous cells irreversibly damaged and vulnerable to apoptosis. Efficacy of the treatment depends heavily on the accuracy of needle placement and requires a high degree of skill from the operator. In this paper, we describe an Augmented Reality (AR) system designed to overcome the challenges associated with planning and guiding the needle insertion process. Our solution, based on the HoloLens (Microsoft, USA) platform, tracks the position of the headset, needle electrodes and ultrasound (US) probe in space. The proof of concept implementation of the system uses this tracking data to render real-time holographic guides on the HoloLens, giving the user insight into the current progress of needle insertion and an indication of the target needle trajectory. The operator’s field of view is augmented using visual guides and real-time US feed rendered on a holographic plane, eliminating the need to consult external monitors. Based on these early prototypes, we are aiming to develop a system that will lower the skill level required for IRE while increasing overall accuracy of needle insertion and, hence, the likelihood of successful treatment.
Augmented reality assistance in training needle insertions of different levels of difficulty
Caitlin T. Yeo, Tamas Ungi, Regina Leung, et al.
PURPOSE: Virtual reality and simulation training improve skill acquisition by allowing trainees the opportunity to deliberately practice procedures in a safe environment. The purpose of this study was to determine whether the amount of improvement provided by the Perk Tutor, an augmented reality training tool, depended on the complexity of the procedure. METHODS: We conducted two sets of spinal procedure experiments with different levels of complexity with regard to instrument handling and mental reconstruction – the lumbar puncture and the facet joint injection. In both experiments subjects were randomized into two groups, Control or Perk Tutor. They were guided through a tutorial, given practice attempts with or without Perk Tutor, followed by testing without Perk Tutor augmentation. RESULTS: The Perk Tutor significantly improved trainee outcomes in the facet joint experiment, while the Perk Tutor and the control group performed comparably in the lumbar puncture experiment. CONCLUSION: Perk Tutor and other augmented training systems may be more beneficial for more complex skills that require mental reconstruction of 2-dimensional images or non-palpable anatomy.
Segmentation and Modeling
Automated segmentation and radiomic characterization of visceral fat on bowel MRIs for Crohn's disease
Iulia Barbur, Jacob Kurowski, Kaustav Bera, et al.
Crohn’s Disease is a relapsing and remitting disease involving chronic intestinal inflammation that is often characterized by hypertrophy of visceral adipose tissue (VAT). While an increased ratio of VAT to subcutaneous fat (SQF) has previously been identified as a predictor of worse outcomes in Crohn’s Disease, bowel-proximal fat regions have also been hypothesized to play a role in inflammatory response. However, there has been no detailed study of VAT and SQF regions on MRI to determine their potential utility in assessing Crohn’s Disease severity or guiding therapy. In this paper we present a fully-automated algorithm to segment and quantitatively characterize VAT and SQF via routinely acquired diagnostic bowel MRIs. Our automated segmentation scheme for VAT and SQF regions involved a combination of morphological processing and connected component analysis, and demonstrated DICE overlap scores of 0.86±0.05 and 0.91±0.04 respectively, when compared against expert annotations. Additionally, VAT regions proximal to the bowel wall (on diagnostic bowel MRIs) demonstrated statistically significantly higher expression of four unique radiomic features in pediatric patients with moderately active Crohn’s Disease. These features were also able to accurately cluster patients who required aggressive biologic therapy within a year of diagnosis from those who did not, with 87.5% accuracy. Our findings indicate that quantitative radiomic characterization of visceral fat regions on bowel MRIs may be highly relevant for guiding therapeutic interventions in Crohn’s Disease.
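The DICE overlap scores used for evaluation here (and in several of the segmentation papers above) are the standard Dice similarity coefficient between binary masks. A minimal sketch on toy masks — the segmentation pipeline itself is not reproduced:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum()))

# Toy masks: a 10x10 square vs the same square shifted down by one row
auto = np.zeros((20, 20), bool);   auto[5:15, 5:15] = True     # 100 voxels
manual = np.zeros((20, 20), bool); manual[6:16, 5:15] = True   # 90 overlap
score = dice(auto, manual)   # 2*90 / (100+100) = 0.9
```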
A semiautomatic algorithm for three-dimensional segmentation of the prostate on CT images using shape and local texture characteristics
Maysam Shahedi, Ling Ma, Martin Halicek, et al.
Prostate segmentation in computed tomography (CT) images is useful for planning and guidance of diagnostic and therapeutic procedures. However, the low soft-tissue contrast of CT images makes manual prostate segmentation a time-consuming task with high inter-observer variation. We developed a semi-automatic, three-dimensional (3D) prostate segmentation algorithm using shape and texture analysis and have evaluated the method against manual reference segmentations. In a training data set we defined an inter-subject correspondence between surface points in the spherical coordinate system. We applied this correspondence to model the globular and smoothly curved shape of the prostate with 86 well-distributed surface points using a point distribution model that captures prostate shape variation. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. For segmentation, we used the learned shape and texture characteristics of the prostate in CT images and a set of user inputs for prostate localization. We trained our algorithm using 23 CT images and tested it on 10 images. We evaluated the results compared with those of two experts’ manual reference segmentations using different error metrics. The average measured Dice similarity coefficient (DSC) and mean absolute distance (MAD) were 88 ± 2% and 1.9 ± 0.5 mm, respectively. The averaged inter-expert difference measured on the same dataset was 91 ± 4% (DSC) and 1.3 ± 0.6 mm (MAD). With no prior intra-patient information, the proposed algorithm showed a fast, robust and accurate performance for 3D CT segmentation.
Auto-contouring via automatic anatomy recognition of organs at risk in head and neck cancer on CT images
Contouring of the organs at risk is a vital part of routine radiation therapy planning. For the head and neck (H and N) region, this is more challenging due to the complexity of anatomy, the presence of streak artifacts, and the variations of object appearance. In this paper, we describe the latest advances in our Automatic Anatomy Recognition (AAR) approach, which aims to automatically contour multiple objects in the head and neck region on planning CT images. Our method has three major steps: model building, object recognition, and object delineation. First, the better-quality images from our cohort of H and N CT studies are used to build fuzzy models and find the optimal hierarchy for arranging objects based on the relationship between objects. Then, the object recognition step exploits the rich prior anatomic information encoded in the hierarchy to derive the location and pose for each object, which leads to generalizable and robust methods and mitigation of object localization challenges. Finally, the delineation algorithms employ local features to contour the boundary based on object recognition results. We make several improvements within the AAR framework, including finding recognition-error-driven optimal hierarchy, modeling boundary relationships, combining texture and intensity, and evaluating object quality. Experiments were conducted on the largest ensemble of clinical data sets reported to date, including 216 planning CT studies and over 2,600 object samples. The preliminary results show that on data sets with minimal (<4 slices) streak artifacts and other deviations, overall recognition accuracy reaches 2 voxels, with overall delineation Dice coefficient close to 0.8 and Hausdorff Distance within 1 voxel.
Optimal multimodal virtual bronchoscopy for convex-probe endobronchial ultrasound
William E. Higgins, Xiaonan Zang, Ronnarit Cheirsilp, et al.
Accurate staging of the central-chest lymph nodes is a major step in the management of lung-cancer patients. For this purpose, the physician uses videobronchoscopy to navigate through the airways and convex-probe endobronchial ultrasound (CP-EBUS) to localize extraluminal lymph nodes. Unfortunately, CP-EBUS proves to be difficult for many physicians. In this paper, we present a complete optimal multimodal planning and guidance system for image-guided CP-EBUS bronchoscopy. The system accepts a patient's 3D chest CT scan and an optional whole-body PET/CT study as inputs. System workflow proceeds in two stages: 1) optimal procedure planning and 2) multimodal image-guided bronchoscopy. Optimal procedure planning entails CT-based computation of guidance routes that enable maximal feasible tissue sampling (depth-of-sample) of selected lymph nodes. Multimodal image-guided bronchoscopy next occurs in the operating room. The guidance process draws upon a CT-based virtual multimodal bronchoscope that gives virtual views of videobronchoscopy and CP-EBUS, similar to those provided by a "real" linear integrated CP-EBUS bronchoscope. The system provides CT/PET-based graphical views along the guidance route toward a lymph node, per the two-stage process of videobronchoscopic navigation and CP-EBUS localization. The guidance views depict the depth-of-sample information dynamically to enable visualization of optimal tissue-biopsy sites. The localization process features a novel registration between the virtual CP-EBUS views and live CP-EBUS views to enable synchronization. A lung-cancer patient pilot study demonstrated the feasibility, safety, and efficacy of the system. Procedure planning effectively derived optimal tissue-biopsy sites and also indicated sites where biopsy may not be safe, within preset constraints. During live bronchoscopy, we performed successful guidance to all selected lymph nodes.
Machine learning-based colon deformation estimation method for colonoscope tracking
Masahiro Oda, Takayuki Kitasaka, Kazuhiro Furukawa M.D., et al.
This paper presents a method for estimating colon deformations during colonoscope insertion. A colonoscope tracking or navigation system that guides a physician to polyp positions during colonoscope insertion is needed to reduce complications such as colon perforation. A previous colonoscope tracking method obtains the colonoscope position in the colon by registering the colonoscope shape to a colon shape; the colonoscope shape is obtained using an electromagnetic sensor, and the colon shape from a CT volume. However, large tracking errors were observed due to colon deformations that occur during colonoscope insertion, which make the registration difficult. Because the colon deformation is caused by the colonoscope, there is a strong relationship between the colon deformation and the colonoscope shape. A method for estimating the colon deformations that occur during colonoscope insertion is therefore necessary to reduce tracking errors. We propose a colon deformation estimation method that estimates the deformed colon shape from the colonoscope shape using the regression forests algorithm, trained on pairs of colon and colonoscope shapes that contain deformations occurring during insertion. As a preliminary study, we applied the method to estimate deformations of a colon phantom. In our experiments, the proposed method correctly estimated deformed colon phantom shapes.
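As a sketch of the estimation step, a multi-output regression forest can map a flattened colonoscope shape vector to a deformed colon shape vector. The example below uses scikit-learn's RandomForestRegressor as a stand-in; the shape dimensions and the synthetic linear ground-truth mapping are hypothetical, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# hypothetical sizes: colonoscope shapes sampled at 10 control points and
# colon shapes at 25 surface points, each flattened into (x, y, z) vectors
n_samples, n_scope_pts, n_colon_pts = 100, 10, 25
X = rng.normal(size=(n_samples, n_scope_pts * 3))          # colonoscope shapes
true_map = rng.normal(size=(n_scope_pts * 3, n_colon_pts * 3))
Y = X @ true_map                                           # deformed colon shapes

# multi-output regression forest: colonoscope shape -> deformed colon shape
forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(X, Y)

# estimated deformed colon shape for one new insertion
estimate = forest.predict(X[:1])
print(estimate.shape)  # (1, 75)
```

In the actual method, the training pairs would come from observed colon/colonoscope shapes containing insertion-induced deformations rather than a random linear map.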
Cardiac and Lung Imaging and Tracking
A real-time system for prosthetic valve tracking
Martin Wagner, Lindsay Bodart, Sebastian Schafer, et al.
Transcatheter aortic valve replacement is a minimally invasive technique for the treatment of valvular heart disease, where an artificial valve mounted on a balloon catheter is guided to the aortic valve annulus. The balloon catheter is then expanded and displaces the diseased valve. We recently proposed an algorithm to track the 3D position, orientation and shape of a prosthetic transcatheter aortic valve using biplane fluoroscopic imaging. In this work, we present a real-time hardware and software implementation of this prosthetic valve tracking method. A prototype was implemented which gathers fluoroscopic images from the angiography system via a research interface. A dynamic point cloud model of the valve is then used to estimate the 3D position, orientation and shape by minimizing a cost function. The cost function is implemented using parallel processing on graphics processing units to improve the performance. The system includes 3D rendering of the valve model and additional anatomy for visualization. The timing performance of the system was evaluated using a plastic cylinder phantom and a prosthetic valve mounted on a balloon catheter. The total computation time per frame for tracking and visualization using two different valve models was 46.11 ms and 43.88 ms respectively. This would allow frame rates of up to 21.69 frames per second. The target registration error of the estimated valve model was 1.22 ± 0.29 mm. Combined with 3D echocardiographic imaging, this technique would enable real-time image guidance in 3D, where both the prosthetic valve and the soft tissue of the heart are visible.
Determining in-silico left ventricular contraction force of myocardial infarct tissue using a composite material model
Sergio C. H. Dempsey, Abbas Samani
A computational method is presented in this paper for determining the severity of myocardial infarction of the left ventricle (LV) using its image data. In-silico generated displacement fields for a healthy and damaged LV are used to mimic imaging modalities by adding appropriate levels of noise. To reconstruct the contraction force from the displacement field, a composite material model of the LV is optimized using genetic algorithms and a neural network to return the contraction force and distribution of forces for infarct tissue. The healthy LV contraction force was accurately returned within 1% for all displacement field tests, indicating that all imaging methods could be used to measure healthy patient LV displacement fields for the purpose of contraction force reconstruction. With the damaged LV, contraction forces of the healthy region, as well as infarct border and infarct regions, were considered. The optimization model found the contraction force distribution within 2% for the healthy region, while for the border zone and infarct regions the average contraction force reconstruction errors were 8.4 kPa and 5.1 kPa, respectively. These errors are reasonably small, and no significant SNR dependence was observed. The inverse problem algorithm provided good estimates regardless of the SNR; however, further training of the neural network system is required to improve the robustness of the inversion framework with low contraction forces, since the accuracy of the optimization limited the SNR response.
A machine learning approach for biomechanics-based tracking of lung tumor during external beam radiation therapy
Elham Karami, Stewart Gaede, Ting-Yim Lee, et al.
Lung cancer radiotherapy is prone to errors due to uncertainties caused by respiratory motion. If not accounted for, these errors may lead to poor radiation dose distribution, including insufficient dose to the tumor volume and excessive dose to the healthy lung parenchyma. One effective method to account for respiratory motion is motion modeling. In this paper, we present a hybrid motion model which consists of two parts: 1) a computational biomechanical model of the lung for real-time tumor location/deformation estimation and 2) a Neural Network (NN) for real-time estimation of the loading and boundary conditions (BCs) of the lung biomechanical model. The second part uses chest and abdomen surface motion as a surrogate for the loading and boundary conditions, and is the main driver of the lung biomechanical model. In practice, the tumor location/deformation data estimated using the proposed motion model can be fed to actuators that guide a radiation therapy LINAC for continuous lung tumor targeting. The focus of this paper is two-fold: 1) developing two NNs for predicting the lung BCs, including the diaphragm motion and transpulmonary pressure, and 2) incorporating the NNs into a previously developed lung FE model to determine tumor location/deformation. Results of these two steps show highly favorable accuracy of the NNs in estimating the lung BCs and of the proposed motion model in predicting lung tumor motion. As such, the proposed tracking approach can potentially be used for managing the lung respiratory motion/deformation necessary for effective EBRT.
Lung deformation between preoperative CT and intraoperative CBCT for thoracoscopic surgery: a case study
Pablo Alvarez, Matthieu Chabanas, Simon Rouzé, et al.
Video-Assisted Thoracoscopic Surgery (VATS) is a promising surgical treatment for early-stage lung cancer. With respect to standard thoracotomy, it is less invasive and provides better and faster patient recovery. However, a main issue is the accurate localization of small, subsolid nodules. While intraoperative Cone-Beam CT (CBCT) images can be acquired, they cannot be directly compared with preoperative CT images due to very large lung deformations occurring before and during surgery. This paper focuses on the quantification of deformations due to the change of positioning of the patient, from supine during CT acquisition to lateral decubitus in the operating room. A method is first introduced to segment the lung cavity in both CT and CBCT. The images are then registered in three steps: an initial alignment, followed by rigid registration and finally non-rigid registration, from which deformations are measured. Accuracy of the registration is quantified based on the Target Registration Error (TRE) between paired anatomical landmarks. The median registration error was 1.01 mm, with minimum and maximum errors of 0.35 mm and 2.34 mm. Deformations of the parenchyma were measured to be up to 14 mm, and approximately 7 mm on average for the whole lung structure. While this study is only a first step towards image-guided therapy, it highlights the importance of accounting for lung deformation between preoperative and intraoperative images, which is crucial for intraoperative nodule localization.
Regional lung ventilation estimation based on supervoxel tracking
Adam Szmul, Bartlomiej W. Papiez, Tahreema Matin, et al.
In the case of lung cancer, an assessment of regional lung function has the potential to guide more accurate radiotherapy treatment. This could spare well-functioning parts of the lungs, as well as be used for follow-up. In this paper we present a novel approach for regional lung ventilation estimation from dynamic lung CT imaging, which might be used during radiotherapy planning. Our method combines a supervoxel-based image representation with deformable image registration, performed between peak breathing phases, for which we track changes in intensity of previously extracted supervoxels. Such a region-oriented approach is expected to be more physiologically consistent with lung anatomy than previous methods relying on voxel-wise relationships. Our results are compared with static ventilation images acquired from hyperpolarized xenon-129 MRI (XeMRI). In our study we use three patient datasets consisting of 4DCT and XeMRI. Based on global correlation coefficients, we achieve a higher average correlation (0.487) than the commonly used voxel-wise method for estimating ventilation (0.423). We also achieve higher correlation values for our method when ventilated/non-ventilated regions of the lungs are investigated. Increasing the number of supervoxel layers further improves our results: one layer achieves a correlation of 0.393, compared to 0.487 for 15 layers. Overall, we have shown that our method achieves higher correlation values with XeMRI than the previously used approach.
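The evaluation above reduces to correlating a per-region ventilation estimate against reference values. A hedged sketch: the density-change formula below is a simplified CT-ventilation surrogate, and the toy HU values and noisy reference are hypothetical stand-ins for the paper's supervoxel intensity tracking and XeMRI data:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy mean HU per supervoxel at the two peak breathing phases
hu_exhale = rng.uniform(-900.0, -700.0, size=100)         # end-exhale
hu_inhale = hu_exhale - rng.uniform(0.0, 80.0, size=100)  # air influx lowers HU

# simplified density-change ventilation surrogate per supervoxel
vent = (hu_exhale - hu_inhale) / (1000.0 + hu_inhale)

# noisy reference values standing in for XeMRI ventilation
ref = vent + rng.normal(scale=0.01, size=vent.size)

# global correlation coefficient, as used for evaluation above
r = np.corrcoef(vent, ref)[0, 1]
print(r)
```

A supervoxel-based method computes `vent` per supervoxel rather than per voxel, which is the region-oriented aspect the paper argues is more anatomically consistent.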
Intraoperative Imaging and Technologies
Trackerless surgical image-guided system design using an interactive extension of 3D Slicer
Xiaochen Yang, Rohan Vijayan, Ma Luo, et al.
Conventional optical tracking systems use near-infrared (NIR) light-detecting cameras and passively/actively NIR-illuminated markers to localize instrumentation and the patient in operating room (OR) physical space. This technology is widely used within the neurosurgical theatre and is a staple of the standard of care in craniotomy planning. To accomplish this, planning is largely conducted at the time of the procedure with the patient in a fixed OR head presentation orientation. In the work presented herein, we propose a framework to achieve this in the OR that is free of conventional tracking technology, i.e. a trackerless approach. Briefly, we are investigating a collaborative extension of 3D Slicer that combines surgical planning and craniotomy designation in a novel manner. While taking advantage of the well-developed 3D Slicer platform, we implement advanced features to aid the neurosurgeon in planning the location of the anticipated craniotomy relative to the preoperatively imaged tumor in a physical-to-virtual setup, and then subsequently aid the true physical procedure by correlating that physical-to-virtual plan with a novel intraoperative MR-to-physical registered field-of-view display. These steps are done such that the craniotomy can be designated without use of conventional optical tracking technology. To test this novel approach, an experienced neurosurgeon performed experiments on four different mock surgical cases using our module as well as the conventional procedure for comparison. The results suggest that our planning system provides a simple, cost-efficient, and reliable solution for surgical planning and delivery without the use of conventional tracking technologies.
We hypothesize that the combination of this early-stage craniotomy planning and delivery approach, and our past developments in cortical surface registration and deformation tracking using stereo-pair data from the surgical microscope may provide a fundamental new realization of an integrated trackerless surgical guidance platform.
Advanced image registration and reconstruction using the O-Arm system: dose reduction, image quality, and guidance using known-component models
Purpose. Model-based image registration and reconstruction offer strong potential for improved safety and precision in image-guided interventions. Advantages include reduced radiation dose, improved soft-tissue visibility (detection of complications), and accurate guidance with/without a dedicated navigation system. This work reports the development and performance of such methods on an O-arm system for intraoperative imaging and translates them to first clinical studies.

Methods. Two novel methodologies predicate the work: (1) Known-Component Registration (KC-Reg) for 3D localization of the patient and interventional devices from 2D radiographs; and (2) Penalized-Likelihood reconstruction (PLH) for improved 3D image quality and dose reduction. A thorough assessment of geometric stability, dosimetry, and image quality was performed to define algorithm parameters for imaging and guidance protocols. Laboratory studies included: evaluation of KC-Reg in localization of spine screws delivered in cadaver; and PLH performance in contrast, noise, and resolution in phantoms/cadaver compared to filtered backprojection (FBP).

Results. KC-Reg was shown to successfully register screw implants within ~1 mm based on as few as 3 radiographs. PLH was shown to improve soft-tissue visibility (61% improvement in CNR) compared to FBP at matched resolution. Cadaver studies verified the selection of algorithm parameters and the methods were successfully translated to clinical studies under an IRB protocol.

Conclusions. Model-based registration and reconstruction approaches were shown to reduce dose and provide improved visualization of anatomy and surgical instrumentation. Immediate future work will focus on further integration of KC-Reg and PLH for Known-Component Reconstruction (KC-Recon) to provide high-quality intraoperative imaging in the presence of dense instrumentation.
A system for automatic monitoring of surgical instruments and dynamic non-rigid surface deformations in breast cancer surgery
Winona L. Richey, Ma Luo, Sarah E. Goodale, et al.
When negative tumor margins are achieved at the time of resection, breast conserving therapy (lumpectomy followed with radiation therapy) offers patients improved cosmetic outcomes and quality of life with equivalent survival outcomes to mastectomy. However, high reoperation rates ranging from 10% to 59% continue to challenge adoption and suggest that improved intraoperative tumor localization is a pressing need. We propose to couple an optical tracker and stereo camera system for automated monitoring of surgical instruments and non-rigid breast surface deformations. A bracket was designed to rigidly pair an optical tracker with a stereo camera, optimizing overlap volume. Utilizing both devices allowed for precise instrument tracking of multiple objects with reliable, workflow-friendly tracking of dynamic breast movements. Computer vision techniques were employed to automatically track fiducials, requiring one-time initialization with bounding boxes in stereo camera images. Point-based rigid registration was performed between fiducial locations triangulated from stereo camera images and fiducial locations recorded with an optically tracked stylus. We measured fiducial registration error (FRE) and target registration error (TRE) with two different stereo camera devices using a phantom breast with five fiducials. Average FREs of 2.7 ± 0.4 mm and 2.4 ± 0.6 mm with each stereo camera device demonstrate considerable promise for this approach in monitoring the surgical field. Automated tracking was shown to reduce error when compared to manually selected fiducial locations in stereo camera image-based localization. The proposed instrumentation framework demonstrated potential for the continuous measurement of surgical instruments in relation to the dynamic deformations of a breast during lumpectomy.
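Point-based rigid registration between two corresponding fiducial point sets, such as the triangulated stereo camera locations and the tracked-stylus locations above, has a closed-form least-squares solution via the SVD (Kabsch method). A minimal sketch, not the authors' code:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (R, t) mapping source points to target."""
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = (U @ D @ Vt).T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

def fre(source, target, R, t):
    """Root-mean-square fiducial registration error after applying (R, t)."""
    residual = (source @ R.T + t) - target
    return np.sqrt((residual ** 2).sum(axis=1).mean())

# demo: recover a known rotation and translation from five fiducials
rng = np.random.default_rng(1)
src = rng.normal(size=(5, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
tgt = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register(src, tgt)
print(fre(src, tgt, R, t))  # ≈ 0 for noise-free fiducials
```

With real, noisy fiducial measurements the residual FRE is nonzero, which is the quantity reported above for each stereo camera device.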
Intraoperative deformation during laryngoscopy of irradiated and non-irradiated patients
In trans-oral surgeries, large intraoperative deformations limit the surgeons’ use of preoperative images to accurately resect tumors while traditional metal instruments render intraoperative images ineffective. A CT/MR compatible laryngoscopy system was developed previously to allow for the study of these deformations with intraoperative imaging. For this study, we compare the deformation analysis of two patient groups: those who had received prior radiation to the upper aerodigestive tract (irradiated) and those who have not (non-irradiated). We speculate that differences in tissue deformation exist between these two groups due to radiation-induced fibrosis (RIF) and that quantifying these distinct deformation patterns will lead to more patient-specific tissue modeling. Thirteen patients undergoing diagnostic laryngoscopy were recruited; five had been irradiated and eight had not. Artifact-free images were obtained and registered. Mandible, hyoid, and tongue region displacements were quantified. For the bony structures, significant differences were observed in certain displacement directions as well as magnitude, with the irradiated patient group experiencing less anatomical shift (non-irradiated vs irradiated: (Mandible) 12.6±3.6mm vs 7.9±2.8mm, p=0.029; (Hyoid) 13.3±3.1mm vs 9.0±1.8mm, p=0.019). For the tongue, average fiducial displacements were 26.2±11.1mm vs 22.9±8.4mm (p=0.033). The data from this study can serve as ground truth to generate and evaluate upper aerodigestive tract deformation models to predict the intraoperative state and provide guidance to the surgeons.
Design and validation of a large, open-source library of rigid-body markers for surgical navigation (Conference Presentation)
Alisa J. V. Brown, Ali Uneri, Tharindu De Silva, et al.
Purpose: Rigid-body markers are a common component of surgical tracking systems, but there is a limited number of commercially available, valid marker designs, presenting a limitation to researchers developing novel navigation systems. This work presents the development and validation of a large, open-source library of rigid-body markers for passive marker tracking systems. Methods: Ten groups of rigid-body tool designs were generated according to an algorithm based on intra- and inter-body design constraints. Validation studies were performed using a Polaris Vicra tracker (NDI) to compare the performance of each rigid body to a standard commercially available reference tool, including: tool-tip pivot calibration; measurement of fiducial registration error (FRE) on a computer-controlled bench; and measurement of target registration error (TRE) on a CT head phantom. Results: The resulting library of rigid-body markers includes 10 groups: one with 10 markers and nine with 6. Each group includes one tool geometrically equivalent to a common commercially available rigid body (NDI #8700339). Pivot tests showed tool-tip calibration ~0.4 mm, indistinguishable from the reference tool. FRE was ~0.15 mm, again meeting that of the reference. TRE measurements showed registration in a CT head phantom with error ~0.95 mm, equivalent to that of the reference. Conclusions: The library of custom tool designs performs equivalently to common, commercially available reference markers and presents a multitude of distinct, simultaneously trackable rigid-body marker designs. The library is available as open-source CAD files suitable for 3D printing by researchers in image-guided surgery and other applications.
A novel small field of view hybrid gamma camera for scintigraphic imaging (Conference Presentation)
Mohammed S. Alqahtani, John E. Lees, Sarah L. Bugby, et al.
A novel small field of view Hybrid Gamma Camera (HGC) has been developed to facilitate the process of localizing radiopharmaceutical uptake during surgical procedures. The HGC is a scintillator-based detector consisting of an electron multiplying charge-coupled device coupled to a columnar scintillator (CsI[Tl]). This enables fusion of scintigraphic and optical images, offering new possibilities for assisting clinicians and surgeons in localising the site of uptake in a number of surgical procedures. This technology also offers bedside imaging for small organs in procedures such as thyroid scintigraphy. In this study, prototype anthropomorphic phantoms have been used to study the capability of the HGC. Images were acquired using a range of bespoke anthropomorphic phantoms. The gamma and hybrid optical images were acquired for the simulated sentinel lymph nodes and thyroid gland. The gamma images produced varied in terms of spatial resolution and detectability; however, utilizing pinhole collimators of different diameters (0.5 and 1.0 mm), imaging was enhanced to meet the needs of small field gamma imaging. The hybrid images obtained demonstrated that the HGC is ideally suited for small organ imaging, demonstrating good potential in clinical procedures, such as thyroid scintigraphy, when using acquisition times similar to those for conventional gamma imaging. Moreover, clinical scintigraphic images, from patients attending the nuclear medicine clinic, were acquired using the HGC and compared to images from a standard gamma camera. The results of our first clinical feasibility study using the HGC will be presented.
Abdominal Imaging and Guidance Technologies
Needle deflection in thermal ablation procedures of liver tumors: a CT image analysis
Tonke L. de Jong, Camiel Klink, Adriaan Moelker, et al.
Introduction: Accurate needle placement is crucial in image-guided needle interventions. A targeting error may be introduced due to undesired needle deflection upon insertion through tissue, caused by e.g. patient breathing, tissue heterogeneity, or asymmetric needle tip geometries. This paper aims to quantify needle deflection in thermal ablation procedures of liver tumors by means of a CT image analysis. Methods: Needles were selected from all clinical CT data acquired during thermal ablation procedures of the liver between 2008 and 2016 at the Erasmus MC, the Netherlands. The 3D needle shape was reconstructed for all selected insertions using manual segmentation. Subsequently, a straight line was computed between the entry point of the needle into the body and the needle tip. The maximal perpendicular distance between this straight line and the actual needle was used to calculate needle deflection. Results: In total, 365 needles were included in the analysis, ranging from 14G to 17G in diameter. Average needle insertion depth was 95 mm (range: 32 mm – 182 mm). Needle deflection was on average 1.3 mm (range: 0.0 mm – 6.5 mm). 54% of the needles (n=196) had a needle deflection of more than one millimeter, whereas 7% of the needles (n=25) showed a large needle deflection of more than three millimeters. Conclusions: Needle deflection in interventional radiology occurs in more than half of the needle insertions. Therefore, deflection should be taken into account when performing procedures and when defining design requirements for novel needles. Further, needle insertion models need to be developed that account for needle deflection.
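The deflection metric described in the Methods, i.e. the maximal perpendicular distance between the segmented needle and the straight entry-to-tip line, can be sketched as follows (the arc-shaped toy needle is hypothetical):

```python
import numpy as np

def needle_deflection(points):
    """Maximum perpendicular distance from the needle points to the
    straight line joining the entry point and the needle tip."""
    p = np.asarray(points, dtype=float)
    entry, tip = p[0], p[-1]
    axis = tip - entry
    axis = axis / np.linalg.norm(axis)
    rel = p - entry
    # perpendicular component of each point relative to the entry-tip line
    perp = rel - np.outer(rel @ axis, axis)
    return np.linalg.norm(perp, axis=1).max()

# toy needle bending 2 mm out of line over a 100 mm insertion depth
s = np.linspace(0.0, 100.0, 101)
pts = np.stack([s, 2.0 * np.sin(np.pi * s / 100.0), np.zeros_like(s)], axis=1)
print(needle_deflection(pts))  # ≈ 2.0 (mm), reached at mid-insertion
```

In the study this computation would be applied to each manually segmented 3D needle shape to yield the per-needle deflection values summarized in the Results.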
Atomic force stiffness imaging: capturing differences in mechanical properties to identify and localize areas of prostate cancer tissue
Clara Essmann, Alex Freeman, Vijay M. Pawar, et al.
Prostate cancer is now the most commonly diagnosed cancer in men in western countries. Due to the difficulty of early detection, there are an estimated 10,000 deaths a year in the UK from prostate cancer alone; the only curative option is interventional treatment that aims to excise all diseased cells while preserving the neurovascular bundle. To date, several studies have shown that the mechanical properties of cancer cells and tissues, i.e. adhesion, stiffness, roughness and viscoelasticity, are significantly different from those of benign cells and healthy regions of tissue. Building upon these results, we believe novel methods of imaging the mechanical properties of prostate cancer samples can provide new surgical intervention opportunities beyond what is possible through vision alone. In this paper, we used an Atomic Force Microscope (AFM) to measure the stiffness and topography variations correlating to regions of prostate cancer at the surface of an excised sample at a cellular level. Preliminary results show that by using an AFM we can detect structural differences in non-homogeneous tissue samples, confirming previous results that cancerous tissues appear stiffer than benign areas. Through these results, we aim to develop a stiffness imaging protocol to aid the early detection of prostate cancer, in addition to force-sensing surgical tools.
Automatic definition of surgical trajectories and acceptance window in pelvic trauma surgery using deformable registration
R. Han, B. Ramsay, T. De Silva, et al.
Purpose: Pelvic screw insertion for percutaneous fixation is a challenging surgical procedure that requires interpretation of complex 3D anatomy from 2D fluoroscopic images. Extensive surgical training is needed and trial and error often occurs in device placement, causing extended fluoroscopy time and increased radiation dose. A system is reported for automatic definition of acceptable surgical trajectories to facilitate guidance and quality assurance in a manner consistent with surgical workflow.

Methods: An atlas was constructed with segmented pelvis shapes containing standard reference trajectories for screw placement. A statistical shape model computed from the atlas is used for deformable registration to the patient’s preoperative CT (without segmentation). By transferring the reference trajectories and surrounding acceptance windows (i.e., volumetric corridors of acceptable device placement) from the atlas, the system automatically computes reliable K-wire and screw trajectories for guidance (overlay in fluoroscopy) and QA.

Results: A leave-one-out analysis was performed to evaluate the accuracy of registration and overlay. The registration achieved an average surface accuracy of 1.82 ± 0.39 mm. Automatically determined trajectories conformed within acceptable cortical bone margins, maintaining 3.75 ± 0.68 mm distance from cortex in narrow bone corridors and demonstrating accurate registration and surgical trajectory definition without breaching the cortex.

Conclusions: The framework proposed in this work allows for multi-atlas based automatic planning of surgical trajectories without a tracker or manual segmentation. The planning information can further be used to facilitate intraoperative guidance and postoperative quality assurance in a manner consistent with surgical workflow.
Intra-operative 360° 3D transvaginal ultrasound guidance during high-dose-rate interstitial gynecologic brachytherapy needle placement
In high-dose-rate (HDR) interstitial gynecologic brachytherapy, needles are positioned into the tumor and surrounding area through a template to deliver radiotherapy. Optimal dose and avoidance of nearby organs requires precise needle placement; however, there is currently no standard method for intra-operative needle visualization or guidance. We have developed and validated a 360° three-dimensional (3D) transvaginal ultrasound (TVUS) system and created a sonolucent vaginal cylinder that is compatible with the current template to accommodate a conventional side-fire ultrasound probe. This probe is rotated inside the hollow sonolucent cylinder to generate a 3D image. We propose the use of this device for intra-operative verification of brachytherapy needle locations. In a feasibility study, the first ever 360° 3D TVUS image of a gynecologic brachytherapy patient was acquired and the image allowed key features, including bladder, rectum, vaginal wall, and bowel, to be visualized with needles clearly identifiable. Three patients were then imaged following needle insertion (28 needles total) and positions of the needles in the 3D TVUS image were compared to the clinical x-ray computed tomography (CT) image, yielding a mean trajectory difference of 1.67 ± 0.75°. The first and last visible points on each needle were selected in each modality and compared; the point pair with the larger distance was selected as the maximum difference in needle position with a mean maximum difference of 2.33 ± 0.78 mm. This study demonstrates that 360° 3D TVUS may be a feasible approach for intra-operative needle localization during HDR interstitial brachytherapy of gynecologic malignancies.
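The mean trajectory difference reported above reduces to the angle between the needle direction vectors identified in each modality. A minimal sketch with hypothetical point coordinates (directions taken from the first and last visible points on each needle):

```python
import numpy as np

def trajectory_angle_deg(dir_a, dir_b):
    """Angle in degrees between two needle direction vectors."""
    a = np.asarray(dir_a, dtype=float)
    b = np.asarray(dir_b, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # clip guards against round-off outside arccos's domain
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# hypothetical first/last visible needle points in each modality
us_needle = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 10.0]])   # 3D TVUS
ct_needle = np.array([[0.0, 0.0, 0.0], [0.0, 1.5, 10.0]])   # CT

ang = trajectory_angle_deg(us_needle[1] - us_needle[0],
                           ct_needle[1] - ct_needle[0])
print(round(ang, 2))  # → 2.82
```

The per-needle maximum positional difference would similarly be taken as the larger of the distances between the corresponding first-point and last-point pairs.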
Ring navigation: an ultrasound-guided technique using real-time motion compensation for prostate biopsies
Derek J. Gillies, Lori Gardi, David Tessier, et al.
Prostate cancer has the second highest noncutaneous cancer incidence in men. Three-dimensional (3D) transrectal ultrasound (TRUS) fused with magnetic resonance imaging (MRI) is used to guide prostate biopsy as an alternative to the conventional 2D TRUS sextant biopsy. The TRUS-MRI fusion technique can provide intraoperative needle guidance to suspicious cancer tissues identified on MRI, increasing the targeting capabilities of a physician. Currently, 3D TRUS-MR guided biopsy suffers from image and target misalignment caused by various forms of prostate motion. Thus, we previously developed a real-time motion compensation algorithm to align 2D and 3D TRUS images with an update rate close to the ultrasound system frame rate. During clinical implementation, image misalignment was observed when obtaining tissue samples near the left and right boundaries of the prostate. To minimize transducer translation on the rectal wall and avoid prostate motion and deformation, we propose the use of a 3D model-based ring navigation procedure. This navigation keeps the transducer pointed towards the centroid of the prostate while guiding the tracked biopsy gun to targets. Prostate biopsy was performed on three patients with real-time motion compensation running in the background. Our navigation approach was compared to a conventional 2D TRUS-guided procedure using approximately twenty 2D-3D TRUS image pairs, resulting in median [first quartile, third quartile] registration errors of 2.0 [1.3, 2.5] mm and 3.4 [1.5, 8.2] mm, respectively. Using our navigation approach, registration error and variability were reduced, suggesting a more robust technique for continuous motion compensation.
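The median [first quartile, third quartile] summary used above can be computed with Python's standard library. This sketch is purely illustrative (editorial addition) and uses made-up error values, not the study data:

```python
import statistics

def median_iqr(errors_mm):
    """Summarize registration errors as (median, first quartile, third quartile)."""
    q1, median, q3 = statistics.quantiles(errors_mm, n=4)
    return median, q1, q3

# Hypothetical per-biopsy registration errors in mm (not study data).
errors = [1.3, 1.8, 2.0, 2.4, 2.5]
med, q1, q3 = median_iqr(errors)
print(f"{med:.1f} [{q1:.1f}, {q3:.1f}] mm")
```

Reporting quartiles rather than mean ± SD is a sensible choice here because registration errors are typically skewed, as the wide [1.5, 8.2] mm interquartile range for the conventional procedure suggests.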
Validation, Simulation, and 3D Printing
icon_mobile_dropdown
Using water-soluble additive manufacturing for cheap and soft silicon organ models
Daniel Reichard, Markus Gern, Isabel Funke, et al.
The evaluation and trial of computer-assisted surgery systems is an important part of the development process. Since human and animal trials are difficult to perform and raise significant ethical concerns, artificial organs and phantoms have become a key component for testing clinical systems. For soft-tissue phantoms such as the liver, it is important to match the biomechanical properties of the organ as closely as possible. Organ phantoms are often cast from silicone in molds; silicone is relatively cheap, and the method does not rely on expensive equipment. One big disadvantage of silicone phantoms, however, is their high rigidity. To this end, we propose a new method for producing silicone phantoms with a softer and mechanically more accurate structure. Since the rigidity of the silicone itself cannot be changed, we developed a simple method to weaken the structure of the phantom. The key component is the repurposing of water-soluble support material from FDM 3D printing. We designed casting molds with an internal grid structure to reduce the rigidity of the cast. The molds are printed on an FDM (fused deposition modeling) printer entirely from water-soluble PVA (polyvinyl alcohol). After the silicone has hardened, the mold with its internal structure is dissolved in water, leaving the silicone phantom pervaded by a grid of cavities. Our experiments have shown that we can reduce the rigidity of the model by up to 70% of its original value, controlled simply by the size of the internal grid structure.
PedBot: robotically assisted ankle robot and video game for children with neuromuscular disorders
Reza Monfaredi, Hadi Fooladi, Pooneh Roshani, et al.
We have developed a three-degree-of-freedom robot with a custom-designed video game for ankle rehabilitation of children with cerebral palsy and other neuromuscular disorders. Physical therapy is commonly used to stretch and strengthen these patients' muscles, but current treatment methods have limitations. By developing a robotic device and an associated airplane video game, we aim to improve ankle range of motion, muscle strength, and motor control in a quantitative manner that is also fun and motivating for the child. Our PedBot robot consists of three intersecting axes with a remote center of motion in the ankle joint area. The patient's ankle is strapped to PedBot and becomes the controller for the airplane game. The patient flies the plane through a series of rings, and a bell sounds each time the plane successfully passes through the center of a ring. To date, we have enrolled four children ages 4-11 in an IRB-approved trial. The children completed up to five sessions, and all of them said they enjoyed the therapy. A 4-year-old boy who completed all five sessions showed measurable improvements in several degrees of motion. We have also begun EMG-based studies to investigate muscle activity during robotic rehabilitation.
A mold design for creating low-cost patient specific models with complex anatomy
Reid Vassallo, Daniel Bainbridge, John Moore, et al.
Physical models of patient anatomy have been used increasingly as 3D printing technologies have become mainstream. Such models can be used both for the validation of new minimally invasive surgical techniques and for surgical rehearsal and training. However, current workflows for creating flexible models with complex anatomy rely on expensive 3D printing techniques. We present a mold design with which we create patient-specific physical models using low-cost techniques and materials. This generic mold makes it possible to accurately create physical models with multiple components and complex internal structures, including tumors, vasculature, and other anatomic components. To demonstrate this, we created kidney models derived from CT scans of excised porcine kidneys, including vasculature and an artificial tumor. We created the models in two parts, first using a rigid positive model to create a negative mold, and then casting a silicone model with the 3D-printed vasculature inside and removing it to leave wall-less vessels. The vasculature models include at least six separate bifurcations with minimum lumen diameters of approximately 1 mm. The mean Euclidean offset distance between the model and original vessels was 0.42 mm, with a standard deviation of 0.50 mm. Both generic and patient-specific models can be built with this workflow.
3D tissue mimicking biophantoms for ultrasound imaging: bioprinting and image analysis
Shekoofeh Azizi, Sharareh Bayat, Ajay Rajaram, et al.
Tissue-mimicking phantoms can be used to study various diagnostic imaging techniques and image-guided therapeutic interventions. Bioprinting enables the incorporation of live cells into printed phantoms. Advantages of bioprinted phantoms include their close similarity to the in vivo condition and the change in phantom composition over time as the cells proliferate and secrete extracellular matrix components. In this study, we 3D-print alginate to form three types of phantoms, containing human vascular smooth muscle cells, human liver cancer cells, and no cells, representing benign tissue, cancerous tissue, and controls, respectively. The phantoms are imaged with a clinical ultrasound scanner and temporal enhanced ultrasound (TeUS) data is collected. Comparison of the TeUS power spectra shows separation among the three phantom types.
Validation of cochlear implant electrode localization techniques
Yiyuan Zhao, Robert F. Labadie, Benoit M. Dawant, et al.
Cochlear implants (CIs) are the standard treatment for patients who experience sensorineural hearing loss. Although these devices have been remarkably successful at restoring hearing, it is rare to achieve natural fidelity, and many patients experience poor outcomes. Our group has developed image-guided CI programming techniques (IGCIP), in which image analysis is used to locate the intra-cochlear position of CI electrodes and determine patient-customized settings for the CI processor. Clinical studies have shown that IGCIP leads to significantly improved outcomes. A crucial step is the localization of the electrodes, and rigorously quantifying the accuracy of our algorithms requires dedicated datasets. In this work, we discuss the creation of a ground-truth dataset for electrode position and its use in evaluating the accuracy of our electrode localization techniques. Our final ground-truth dataset includes 26 temporal bone specimens, each implanted with one of four different types of electrode array by an experienced otologist. The arrays were localized in conventional CT images using our automatic methods and manually in high-resolution μCT images to create the ground truth. The conventional and μCT images were registered to facilitate comparison between automatic and ground-truth electrode localization results. Our technique resulted in mean errors of 0.13 mm in localizing the electrodes across the 26 cases. Our approach successfully permitted characterizing the accuracy of our methods, which is critical to understanding their limitations for use in IGCIP.
Poster Session
icon_mobile_dropdown
Vessel layer separation in x-ray angiograms with fully convolutional network
Haidong Hao, Hua Ma, Theo van Walsum
Percutaneous coronary intervention is a minimally invasive procedure to treat coronary artery disease. In such procedures, X-ray angiography, a real-time imaging technique, is commonly used for image guidance to identify lesion sites and to navigate catheters and guide-wires within coronary arteries. Due to the physical nature of X-ray imaging, photon energy is absorbed when penetrating tissues, rendering a 2D projection image of a 3D scene in which semi-transparent structures overlap with each other. The overlapping structures make robust information processing of X-ray images challenging. To tackle this issue, layer separation techniques were proposed to separate these structures into different image layers based on structure appearance or motion pattern. Such techniques have proven effective for vessel enhancement in X-ray angiograms. However, layer separation approaches still suffer from either spurious structures or non-real-time processing, which prevents their application in the clinic. The purpose of this work is to investigate whether vessel layer separation from X-ray angiography images is possible via a data-driven strategy. To this end, we develop and evaluate a deep-learning-based method to extract the vessel layer. More specifically, U-Net, a fully convolutional network architecture, was trained to separate the vessel layer from the background. The results of our experiments show good vessel layer separation on 42 clinical sequences. Compared to the previous state of the art, our proposed method has similar performance but runs much faster, making it a candidate for real-time clinical application.
Geometric modeling of the aortic inner and outer vessel wall from CTA for aortic dissection analysis
Katharina Eigen, Michael Wels, Daniel-Sebastian Dohle, et al.
In this paper, we present a novel method for modeling both layers of the aortic wall in cases of aortic dissection for analysis from computed tomography angiography. It involves a fast initialization of the associated physiological and pathological lumina and further editing on non-linearly reformatted and cross-sectional views. Fast and accurate derivation of 3D models of the inner and outer vessel walls is crucial to analyze the true and false lumen, to accelerate processing times in research studies, and to answer therapy questions. Since the aorta is a relatively large vessel, our system makes use of point-based surface interpolation with compactly supported radial basis functions, requiring only a few surface constraints. Where possible, we use a semi-automatic approach to segment the vessel walls using an active contour model, which detects the contours in the vessel's cross-sectional planes and provides the constraints for interpolation. After initialization, editing on non-linearly reformatted and cross-sectional views is possible; user input is handled through tangent frame bundles to dismiss contradictory surface samples before updating the models with the new constraints. Our proposed method was evaluated in a user study to measure processing times and achievable model accuracy with respect to an expert-defined ground truth. The users needed 19 minutes on average to derive one model (both walls) and attained a mean surface distance of about 1.0 mm for the outer vessel wall and 1.6 mm for the inner wall. Using our method instead of an open-source program for geometric modeling saves 26 minutes per dataset.
Develop and validate a finite element method model for deformation matching of laparoscopic gastrectomy navigation
Tao Chen, Guodong Wei, Weili Shi, et al.
Experimental surgical navigation systems have been reported in laparoscopic surgery; however, accurate registration in surgical navigation is very challenging due to vessel deformation. We aim to build a deformable model based on preoperative CT images to improve matching accuracy using the finite element method (FEM). In a pig experiment, enhanced CT scans were performed before and after the left gastric artery (LGA) was pulled up, to generate the FEM model and the ground truth, respectively. ANSYS software was used to simulate the FEM model of the vessel after it was pulled up, as required for laparoscopic gastrectomy. The centerline of the FEM model (line B) and the centerline of the ground truth (line A) were drawn and compared with each other. On the basis of the material properties and parameters acquired from the animal experiment, we built a perigastric-vessel FEM model of a patient with gastric cancer and evaluated its accuracy in the surgical scene of laparoscopic gastrectomy. In the animal experiment, the average distance between the two centerlines was 6.467 mm, while the average distance between their closest points was 3.751 mm. In the surgical scene of laparoscopic gastrectomy, superimposing the FEM model onto the 2D laparoscopic image demonstrated good coincidence. In this study, we built a deformable vessel model based on preoperative CT images that may improve matching accuracy and provide a reference for further research on deformation matching in laparoscopic gastrectomy navigation.
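One common way to compare two vessel centerlines, sketched here as an editorial illustration of the kind of metric reported above (the authors' exact definition may differ), is the mean closest-point distance between sampled polylines:

```python
import math

def mean_closest_point_distance(line_a, line_b):
    """For every sampled point on centerline A, take the distance to its
    nearest point on centerline B, then average (an asymmetric line metric)."""
    return sum(
        min(math.dist(p, q) for q in line_b) for p in line_a
    ) / len(line_a)

# Two hypothetical 3D centerlines sampled as point lists (mm).
line_a = [(0.0, 0.0, z) for z in range(5)]
line_b = [(3.0, 0.0, z) for z in range(5)]
print(mean_closest_point_distance(line_a, line_b))  # lines offset by a constant 3 mm
```

Note the distinction the abstract draws: the point-to-point distance between corresponding samples (6.467 mm) can be much larger than the closest-point distance (3.751 mm) when the two lines are similar in shape but parameterized differently.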
Bayesian delineation framework of clinical target volumes for prostate cancer radiotherapy using an anatomical-features-based machine learning technique
K. Ninomiya, H. Arimura, M. Sasahara, et al.
Our aim was to develop a Bayesian delineation framework of clinical target volumes (CTVs) for prostate cancer radiotherapy using an anatomical-features-based machine learning (AF-ML) technique. Probabilistic atlases (PAs) of the pelvic bone and the CTV were generated from 43 training cases. Translation vectors, which could move the CTV PAs to the CTV locations, were estimated using the AF-ML after a bone-based registration between the PAs and the planning computed tomography (CT) images. An input vector derived from 11 AF points was fed to three AF-ML techniques (artificial neural network, ANN; random forest, RF; support vector machine, SVM). The AF points were selected from edge points and centroids of anatomical structures around the prostate. Reference translation vectors between the centroids of the CTV PAs and the CTVs were given to the AF-ML as teaching data. The CTV regions were extracted by thresholding posterior probabilities produced by Bayesian inference with the translated CTV PA and likelihoods of planning CT values. The framework was evaluated in a leave-one-out test against CTV contours determined by radiation oncologists. Average location errors of the CTV PAs along the anterior-posterior and superior-inferior directions without AF-ML were 5.7±4.6 mm and 5.5±4.3 mm, respectively, whereas the errors along the two directions with the ANN, which showed the best performance, were 2.4±1.7 mm and 2.2±2.2 mm, respectively. The average Dice similarity coefficient between reference and estimated CTVs for the 44 test cases was 0.81±0.062 with the ANN. The framework using AF-ML could accurately estimate CTVs for prostate cancer radiotherapy.
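The Dice similarity coefficient used for evaluation above has a simple set-based definition. As an editorial sketch (not the authors' implementation), over segmentations represented as sets of voxel indices:

```python
def dice_coefficient(voxels_a, voxels_b):
    """Dice similarity coefficient between two segmentations given as
    collections of voxel indices: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(voxels_a), set(voxels_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two tiny hypothetical masks sharing one voxel.
reference = {(10, 10, 5), (10, 11, 5)}
estimate = {(10, 11, 5), (10, 12, 5)}
print(dice_coefficient(reference, estimate))
```

A Dice score of 1.0 means perfect overlap and 0.0 means no overlap, so the reported 0.81±0.062 indicates substantial but imperfect agreement with the oncologists' contours.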
Real-time workflow detection using webcam video for providing real-time feedback in central venous catheterization training
Rebecca Hisey, Tamas Ungi, Matthew Holden, et al.
Purpose: Medical schools are shifting from a time-based approach to a competency-based education approach, which requires continuous observation and evaluation of trainees. The goal of Central Line Tutor is to provide instruction and real-time feedback for trainees learning the procedure of central venous catheterization without requiring a continuous expert observer. The purpose of this study is to test the accuracy of the workflow detection method of Central Line Tutor. This study also examines the effectiveness of object recognition from webcam video for workflow detection. Methods: Five trials of the procedure were recorded from Central Line Tutor. Five reviewers were asked to identify the timestamps of the transition points in each recording. Reviewer timestamps were compared to those identified by Central Line Tutor, and the differences between these values were used to calculate the average transitional delay. Results: Central Line Tutor was able to identify 100% of the transition points in the procedure with an average transitional delay of -1.46 ± 0.81 s. The average transitional delays of EM- and webcam-tracked steps were -0.35 ± 2.51 s and -2.46 ± 3.57 s, respectively. Conclusions: Central Line Tutor was able to detect completion of all workflow tasks with minimal delay and may be used to provide trainees with real-time feedback. The results also show that object recognition from webcam video is an effective method for detecting workflow tasks in the procedure of central venous catheterization.
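The transitional-delay metric above is a signed difference between system-detected and reviewer-identified timestamps. A minimal, editorial sketch of that computation with invented timestamps (negative delays mean the system flagged the transition before the reviewer):

```python
import statistics

def transitional_delays(system_times, reviewer_times):
    """Signed delays (system minus reviewer) for matched workflow transitions,
    returned as their mean and sample standard deviation."""
    delays = [s - r for s, r in zip(system_times, reviewer_times)]
    return statistics.mean(delays), statistics.stdev(delays)

# Hypothetical timestamps (seconds) for three transitions in one recording.
system = [12.0, 45.5, 80.0]
reviewer = [13.5, 46.0, 82.5]
print(transitional_delays(system, reviewer))
```

Keeping the sign, rather than taking absolute differences, preserves the finding that Central Line Tutor tended to detect transitions slightly early on average.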
Control of real-time MRI with a 3D controller during radiofrequency ablation
Vanessa Zurawka, Rüdiger Hoffmann, Oliver Burgert
Radiofrequency ablation is a technique for treating tumors with focused heat. Computed tomography, ultrasound, and magnetic resonance imaging (MRI) are imaging modalities that can be used for image-guided procedures. MRI offers several advantages over the other modalities, such as radiation-free fluoroscopic imaging, temperature mapping, high soft-tissue contrast, and free selection of imaging planes. This work addresses the application of 3D controllers for controlling interventional, fluoroscopic MR sequences in the scenario of MR-guided radiofrequency ablation of hepatic malignancies. During this procedure, the interventionalist can monitor the targeting of the tumor with near-real-time fluoroscopic sequences. In general, adjustments of the imaging planes are necessary during tumor targeting, which is performed by an assistant in the control room. Therefore, communication between the interventionalist in the scanner room and the assistant in the control room is essential. However, verbal communication is impaired by the loud scanning noise. Alternatively, non-verbal communication between the two persons is possible, but it is limited to a few gestures and susceptible to misunderstandings. This work analyzes different 3D controllers to enable the interventionalist to control interventional MR sequences directly during MR-guided procedures. The Leap Motion, Wii Remote, SpaceNavigator, Phantom Omni, and a foot switch were selected, and a simulation was built in C++ with VTK to mimic the real scenario for test purposes. Previous results showed that the Leap Motion is not suitable for the application, while the Wii Remote and the foot switch are possible input devices. The final evaluation showed a general time reduction with the use of 3D controllers; the best results were reached with the Wii Remote at 34 seconds. Handheld input devices like the Wii Remote have further potential for integration into the real environment to reduce intervention time.
In vivo reconstruction of coronary artery and bioresorbable stents from intracoronary optical coherence tomography
Yingguang Li, Niels R. Holm, Zhenyu Fei, et al.
The implantation of bioresorbable scaffolds (BRS) alters the local hemodynamic environment. Computational fluid dynamics (CFD) allows evaluation of the local flow pattern, shear stress (SS), and the distal-to-proximal pressure ratio (Pd/Pa). The accuracy of CFD results relies to a great extent on the reconstruction of the 3D geometrical model. The aim of this study was to develop a new approach for in vivo reconstruction of the coronary tree and BRS by fusion of optical coherence tomography (OCT) and X-ray angiography. Ten patients enrolled in the BIFSORB pilot study with BRS implanted in coronary bifurcations were included for analysis. All patients underwent OCT of the target vessel after BRS implantation in the main vessel. Coronary 3D reconstruction was performed to create two geometrical models: an angiography model and an OCT model containing the implanted BRS. CFD analysis was performed separately on the two models. The main vessel was divided into portions of 0.15 mm length and 0.15 mm arc width for point-by-point comparison of SS between the two models. Reconstruction of the implanted BRS in its naturally bent shape was successful in all cases. SS was compared in the 205,463 matched portions of the two models. The divergence of shear stress was higher in the OCT model (mean±SD: 2.27 ± 3.95 Pa, maximum: 142.48 Pa) than in the angiography model (mean±SD: 2.05 ± 3.12 Pa, maximum: 83.63 Pa). Pd/Pa values were lower in the OCT model than in the angiography model for both main vessels and side branches (mean±SD: 0.979 ± 0.009 versus 0.984 ± 0.011, and 0.951 ± 0.068 versus 0.966 ± 0.051). Reconstruction of BRS in its naturally bent shape after implantation is feasible and allows detailed in vivo analysis of the local flow pattern, including shear stress and Pd/Pa.
Automated location detection of injection site for preclinical stereotactic neurosurgery through fully convolutional network
Zheng Liu, Hemmings Wu, Shiva Abbaszadeh
Currently, injection sites for probes, cannulae, and optic fibers in stereotactic neurosurgery are typically located manually. This step involves location estimates based on human experience and thus introduces errors. In order to reduce location error and improve the repeatability of experiments and treatments, we investigate an automated method to locate injection sites. This paper proposes fully convolutional networks to locate specific anatomical points on the skulls of rodents. Preliminary results show that fully convolutional networks are capable of identifying and locating the Bregma and Lambda points on rodent skulls. This method has the advantage of rotation and shift invariance and simplifies the procedure of locating injection sites. In future work, the location error will be quantified, and the fully convolutional networks will be improved by expanding the training dataset as well as by exploring other convolutional network architectures.
Pre- to post-operative CT image registration to estimate cortical shift for image updating in deep brain stimulation
Chen Li, Xiaoyao Fan, Joshua Aronson, et al.
The success of deep brain stimulation (DBS) heavily relies on the accurate placement of electrodes in the operating room (OR). However, the pre-operative images, such as MRI and CT, used for surgical targeting are degraded by brain shift, a combination of brain movement and deformation. One way to compensate for this intra-operative brain shift is to utilize a nonlinear biomechanical brain model to estimate the whole-brain deformation, from which an updated MRI can be generated. Due to the variability of deformation in both magnitude and direction among cases, partially sampled intraoperative data (e.g., O-arm, CT) of tissue motion is critical to guide the model estimation. In this paper, we present a method to extract such sparse data by matching brain surface features from pre- and post-operative CTs, followed by the reconstruction of the full 3D displacement field based on the original spatial information of these 2D points. Specifically, the size and location of the sparse data were determined based on the pneumocephalus in the post-operative CT. The 2D CT-encoded texture maps from both pre- and post-operative CTs were then registered using the Demons algorithm. The final 3D displacement field in our one-patient example shows an average lateral shift of 1.42 mm and a shift of 10.11 mm in the direction of gravity. The results presented in this work show the potential of assimilating sparse data from intra-operative images into the pipeline of model-based image guidance for DBS.
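Splitting a displacement field into "lateral" and "along gravity" components, as in the averages quoted above, amounts to projecting each displacement vector onto the gravity direction and measuring the orthogonal remainder. A minimal editorial sketch (not the authors' code; all vectors hypothetical):

```python
import math

def mean_shift_components(displacements, gravity_dir):
    """Decompose each 3D surface-point displacement into its component along
    gravity and the magnitude of the lateral (orthogonal) remainder, then
    average both over all points."""
    norm = math.sqrt(sum(c * c for c in gravity_dir))
    g = tuple(c / norm for c in gravity_dir)
    along_vals, lateral_vals = [], []
    for d in displacements:
        along = sum(a * b for a, b in zip(d, g))          # projection onto gravity
        residual = [a - along * b for a, b in zip(d, g)]  # lateral remainder
        along_vals.append(along)
        lateral_vals.append(math.sqrt(sum(c * c for c in residual)))
    n = len(displacements)
    return sum(lateral_vals) / n, sum(along_vals) / n

# Two hypothetical displacement vectors (mm) with gravity along +z.
print(mean_shift_components([(1.0, 0.0, 2.0), (0.0, 1.0, 2.0)], (0.0, 0.0, 1.0)))
```

The along-gravity component is kept signed, since brain shift under gravity is expected to be predominantly in one direction.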
A learning curve analysis of ultrasound-guided in-plane and out-of-plane vascular access training with Perk Tutor
Sean Xia, Zsuzsanna Keri, Matthew S. Holden, et al.
PURPOSE: Under ultrasound guidance, procedures that have been traditionally performed using landmark approaches have become safer and more efficient. However, inexperienced trainees struggle with coordinating probe handling and needle insertion. We aimed to establish learning curves to identify the rate of acquisition of in-plane and out-of-plane vascular access skill in novice medical trainees. METHODS: Thirty-eight novice participants were randomly assigned to perform either in-plane or out-of-plane insertions. Participants underwent baseline testing, four practice insertions (with 3D visualization assistance), and final testing; performance metrics were computed for all procedures. Five expert participants performed insertions in both approaches to establish expert performance metric benchmarks. RESULTS: In-plane novices (n=19) demonstrated significant final reductions in needle path inefficiency (45.8 vs. 127.1, p<0.05), needle path length (41.1 mm vs. 58.0 mm, p<0.05), probe path length (11.6 mm vs. 43.8 mm, p<0.01), and maximal distance between needle and ultrasound plane (3.1 mm vs. 5.5 mm, p<0.05) and surpassed expert benchmarks in average and maximal rotational error. Out-of-plane novices (n=19) demonstrated significant final reductions in all performance metrics, including needle path inefficiency (54.4 vs. 1102, p<0.01), maximum distance of needle past plane (0.0 mm vs. 7.3 mm, p<0.01), and total time of needle past plane (0.0 s vs. 3.4 s, p<0.01) and surpassed expert benchmarks in maximum distance and time of needle past plane. CONCLUSION: Our learning curves quantify improvement in in-plane and out-of-plane vascular access skill with 3D visualization over multiple attempts. The training session enables more than half of novices to approach expert performance benchmarks.
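The abstract above does not define its needle path inefficiency metric; one plausible formulation, given purely as an editorial illustration, is the percent excess of the travelled needle-tip path over the straight line from insertion start to end:

```python
import math

def path_inefficiency(path_points):
    """Percent excess of the travelled needle-tip path over the straight line
    from start to end; 0 means a perfectly straight insertion. This is an
    assumed formulation, not necessarily the study's exact definition."""
    travelled = sum(math.dist(a, b) for a, b in zip(path_points, path_points[1:]))
    straight = math.dist(path_points[0], path_points[-1])
    return 100.0 * (travelled / straight - 1.0)

# Hypothetical tracked needle-tip positions (mm) for one insertion.
path = [(0.0, 0.0, 0.0), (1.0, 0.5, 5.0), (1.5, 0.8, 12.0), (2.0, 1.0, 20.0)]
print(path_inefficiency(path))
```

Under this formulation, the reported drop from 127.1 to 45.8 for in-plane novices would correspond to needle paths shrinking from more than double the straight-line distance to roughly one and a half times it.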
Clinical feasibility of x-ray based pose estimation of a transthoracic echo probe using attached fiducials
Lindsay E. Bodart, Benjamin R. Ciske, Martin Wagner, et al.
Co-registered display of x-ray fluoroscopy (XRF) and echocardiography during structural heart interventions can provide visualization of both catheter-based devices and soft tissue anatomy. For transesophageal echocardiography (TEE), registration can be achieved by estimating the probe pose in the x-ray image. This work investigated the potential clinical requirements for a similar approach using a transthoracic echocardiography (TTE) probe with attached x-ray-visible fiducials. Clinically, the limited number of acoustic windows for TTE dictates probe positioning on the chest, and the interventional task drives the positioning of the C-arm gantry of the x-ray system. A fiducial apparatus must be compatible with these positions and allow for accurate 3D probe pose estimation. TTE imaging of the aortic and mitral valves was performed on eight healthy subjects to determine typical 3D probe positioning in parasternal and apical acoustic windows. This data was incorporated into software that allowed for the simulation of different 3D configurations of fiducials relative to the probe, patient and x-ray system. Three candidate fiducial designs were identified, each consisting of two 40-mm diameter rings with 16 3-mm diameter spheres. X-ray imaging was simulated for C-arm angles of 30° RAO, PA, and 30° LAO, each with cranial-caudal angles typical of a TAVR procedure. Subjectively graded TTE image quality was highest for the parasternal long axis window. A fiducial configuration for the parasternal long window was identified which yielded median 3D TRE ranging from 0.44 mm to 1.04 mm in simulations. An experimental prototype of this design produced a measured 3D TRE of 1.25±0.19 mm.
Towards webcam-based tracking for interventional navigation
Mark Asselin, Andras Lasso, Tamas Ungi, et al.
PURPOSE: Optical tracking is a commonly used tool in computer-assisted surgery and surgical training; however, many current-generation commercial tracking systems are prohibitively large and expensive for certain applications. We developed an open-source optical tracking system using the Intel RealSense SR300 webcam with an integrated depth sensor. In this paper, we assess the accuracy of this tracking system. METHODS: The PLUS toolkit was extended to incorporate the ArUco marker detection and tracking library. The depth data obtained from the infrared sensor of the Intel RealSense SR300 was used to improve accuracy. We assessed the accuracy of the system by comparing this tracker to a high-accuracy commercial optical tracker. RESULTS: The ArUco-based optical tracking algorithm had median errors of 20.0 mm and 4.1 degrees in a 200x200x200 mm tracking volume. Our algorithm processing the depth data had a positional error of 17.3 mm and an orientation error of 7.1 degrees in the same tracking volume. In the direction perpendicular to the sensor, optical-only tracking had positional errors between 11% and 15%, compared to depth errors of 1% or less. In tracking one marker relative to another, a fused transform from optical and depth data produced the best result, with 1.39% error. CONCLUSION: The webcam-based system does not yet have satisfactory accuracy for use in computer-assisted surgery or surgical training.
HoloLens in suturing training
Hillary Lia, Gregory Paulin, Caitlin T. Yeo, et al.
PURPOSE: A training module for basic suturing called Suture Tutor was developed by combining video instruction and voice commands with Microsoft HoloLens software. We put forth two hypotheses: 1) trainees find the HoloLens helpful, and 2) the HoloLens helps trainees achieve a better score in objective skill assessment tests. METHODS: A software module was developed to show instructional video in the HoloLens under voice command. Thirty-two participants were split into a control group and a HoloLens group. The control group used videos displayed on a computer during training, while the HoloLens group practiced with Suture Tutor. Each group was given seven minutes to train with their assigned method before testing. Testing involved replication of a running locking suture pattern with a time limit of five minutes and was video recorded. The videos were expert-reviewed. Participants in the HoloLens group filled out a usability survey. RESULTS: The trainees found the HoloLens to be usable and realistic, and the HoloLens group used the instructional videos more than the control group did (p = 0.0175). There was no difference in skill assessment test scores between the HoloLens and control groups, and their rates of completion within the allotted time were similar. CONCLUSION: Participants found Suture Tutor to be a user-friendly and helpful adjunct. The study was unable to determine whether Suture Tutor helps trainees achieve a better score in skill assessment testing.
Architectural analysis on dynamic MRI to study thoracic insufficiency syndrome
The major hurdles currently preventing advances in thoracic insufficiency syndrome (TIS) assessment and treatment are the lack of standardizable, objective diagnostic measurement techniques that describe the 3D thoracoabdominal structures and the dynamics of respiration. Our goal is to develop, test, and evaluate a quantitative dynamic magnetic resonance imaging (QdMRI) methodology and a biomechanical understanding for deriving key quantitative parameters from free-tidal-breathing dMRI image data, describing the 3D structure and dynamics of the thoracoabdominal organs of TIS patients. In this paper, we propose the idea of a shape sketch to codify and then quantify the overall thoracic architecture, which involves the selection of 3D landmark points and the computation of 3D dynamic distances over a respiratory cycle. We perform two statistical analyses of distance sketches on 25 TIS patients to understand the pathophysiological mechanisms in relation to spine deformity and to quantitatively evaluate improvements from the pre-operative to the post-operative state. This QdMRI methodology involves developing: (1) a 4D image construction method; (2) an algorithm for the 4D segmentation of thoracoabdominal structures; and (3) a set of key quantitative parameters. We illustrate that the TIS dynamic distance analysis method produces previously unknown results and precisely describes the morphologic and dynamic alterations of the thorax in TIS. A set of 3D thoracoabdominal distances and/or distance differences enables the precise estimation of key measures such as left-right differences, differences over tidal breathing, and differences from the pre- to the post-operative condition.
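The distance-sketch idea (pairwise 3D distances between landmark points, tracked over the respiratory cycle) can be sketched in a few lines. The landmark coordinates below are invented for illustration:

```python
import numpy as np

def distance_sketch(landmarks):
    """landmarks: array (T phases, N points, 3).
    Returns a (T, N*(N-1)/2) array of pairwise 3D distances per phase."""
    _, N, _ = landmarks.shape
    i, j = np.triu_indices(N, k=1)                       # all unordered point pairs
    return np.linalg.norm(landmarks[:, i, :] - landmarks[:, j, :], axis=-1)

# Two respiratory phases, three hypothetical landmarks (coordinates in mm).
lm = np.array([[[0, 0, 0], [3, 0, 0], [0, 4, 0]],
               [[0, 0, 0], [4, 0, 0], [0, 3, 0]]], dtype=float)
d = distance_sketch(lm)
# Dynamic change of each distance over the cycle (max minus min across phases).
change = d.max(axis=0) - d.min(axis=0)
```

Measures such as left-right differences or pre- versus post-operative differences would then be computed on these distance arrays.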
Improvement of liver ablation treatment for colorectal liver metastases
Brian M. Anderson, Ethan Y. Lin, Guillaume Cazoulat, et al.
The purpose of this research is to improve the clinical treatment of colorectal liver metastases (CLM). It has been previously shown that an ablation margin of 5 mm or more for CLM greatly increases 5-year local tumor progression-free survival; however, it is often difficult to ensure proper ablation using intraprocedural imaging. CT images of 30 patients with CLM treated with ablation were retrospectively obtained from the MD Anderson Cancer Center. Contours defining the liver, ablation probes, CLM margins, and ablation margin were created from the pre-treatment contrast-enhanced CTs and intra-interventional CT images. Using a biomechanical model-based deformable image registration, these contours were deformed onto the contrast-enhanced CT images obtained just after treatment. The propagated ablation region was then compared with the gross tumor volume (GTV), as defined before the procedure, to determine the ablation margin delivered. There was a statistically significant difference (p<0.01) in the achieved ablation margin between patients who did and did not have local recurrence. Results showed that patients without local recurrence received on average a 3.19 mm minimum ablation margin around the GTV, while those with local recurrence received an average of 1.14 mm. The model presented can assist in the treatment of CLM by identifying the minimum distance to agreement between the GTV and the ablation region directly after treatment. This metric can help determine whether sufficient ablation has been delivered to treat the disease.
Hippotherapy simulator for children with cerebral palsy
Hadi Fooladi Talari, Pooneh R. Tabrizi, Olga Morozova, et al.
We have developed a mechanical horseback riding simulator for the rehabilitation of children with neurological and musculoskeletal disabilities, focused on improving trunk control in this population. While overseen by a physical or occupational therapist, the movement of a horse is often used as therapy for these patients (hippotherapy). However, many children never have the chance to experience hippotherapy due to geographical and financial constraints. We therefore developed a horseback riding simulator that could be used in the office setting to make hippotherapy more accessible for our patient population. The system includes a motion platform, carousel horse, and tracking system. We developed a virtual reality display which simulates a horse moving along a pier. As the horse moves forward, other horses come toward it, and the patient must lean left or right to move out of the way. The tracking system provides the position of tracking markers which are placed on the patient’s back, and this information is used to control the motion of the horse. Under an Institutional Review Board (IRB) approved trial, we have enrolled two patients with cerebral palsy to date. This was after completing testing on five healthy pediatric volunteers as required by the IRB. Early results show the feasibility of the system.
Quantitative assessment of cardiac motion using multiphase computed tomography imaging with application to cardiac ablation therapy
A. C. Hasnain, A. Suzuki, S. Wang, et al.
Cardiac arrhythmias, in which the heart beats irregularly, are typically treated with drug or cardiac ablation therapy. More recently, external beam ablation therapy has been proposed as a potential approach for treating cardiac arrhythmias. Currently, a significant challenge for external beam ablation therapy in the heart is compensating for cardiac motion to ensure precise targeting. Porcine animal models are often used for evaluating image-guided intervention systems for cardiac applications; however, to date there have been relatively few studies evaluating motion in the swine heart. In this study, we model and quantify cardiac motion in the left atrium and left ventricle of three beating porcine hearts by tracking anatomic landmarks across twenty phases of the cardiac cycle from multi-phase computed tomography images. Ten landmarks are tracked for each porcine heart, five in the left atrium and five in the left ventricle. The mean (std) displacement for the five left atrial landmarks is 5.5 (3.5) mm in x, 5.0 (2.9) mm in y, and 5.6 (3.3) mm in z. The mean (std) displacement for the five left ventricular landmarks is 7.1 (3.8) mm in x, 9.9 (5.2) mm in y, and 7.7 (3.1) mm in z.
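Per-axis displacement statistics of the kind reported above can be computed from per-phase landmark coordinates as follows. This is a generic sketch with synthetic trajectories, not the study's data:

```python
import numpy as np

def displacement_stats(traj):
    """traj: (P phases, L landmarks, 3).
    Peak-to-peak displacement of each landmark per axis, then mean/std over landmarks."""
    disp = traj.max(axis=0) - traj.min(axis=0)   # (L, 3) range of motion per landmark
    return disp.mean(axis=0), disp.std(axis=0)

# Synthetic example: one landmark oscillating +/-3, +/-5, +/-4 mm in x, y, z;
# a second landmark that does not move at all.
phases = np.linspace(0, 2 * np.pi, 20, endpoint=False)
moving = np.stack([3 * np.sin(phases), 5 * np.sin(phases), 4 * np.sin(phases)], axis=1)
static = np.zeros((20, 3))
traj = np.stack([moving, static], axis=1)        # (20 phases, 2 landmarks, 3)
mean, std = displacement_stats(traj)
```

With these two landmarks, the mean per-axis displacement is half of the moving landmark's peak-to-peak range.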
Liver surface reconstruction for image guided surgery
Congcong Wang, Faouzi Alaya Cheikh, Mounir Kaaniche, et al.
In image-guided surgery, stereo laparoscopes have been introduced to provide a 3D view of the organs during laparoscopic intervention. This stereo video could be used for purposes beyond simple viewing, such as depth estimation, 3D rendering of the scene, and 3D organ modeling. This paper aims at reconstructing the 3D liver surface with a stereo vision technique. The estimated surface of the liver can later be used for registration to a preoperative 3D model constructed from MRI/CT scans. For this purpose, we resort to a variational disparity estimation technique that minimizes a global energy function over the entire image. More precisely, based on the gray-level and gradient constancy assumptions, a data term and both local and nonlocal smoothness terms are defined to build the cost function. The latter is minimized, using an appropriate optimization technique, to estimate the disparity map. In order to reduce the disparity search range and the influence of noise, the global variational approach is performed on the coarsest level of a multi-resolution pyramidal representation of the stereo images. The obtained low-resolution disparity map is then up-sampled to the original scale with a modified joint bilateral filtering method. In vivo liver datasets with ground truth are difficult to obtain, so the proposed method is evaluated quantitatively on two cardiac phantom datasets from the Hamlyn Center, achieving an accuracy of about 2.2 mm for heart1 and 2.1 mm for heart2, with up to 97% of points reconstructed for heart1 and 100% for heart2. Qualitative validation on liver datasets from an in vivo porcine procedure has shown that the proposed method can estimate the geometry of untextured surfaces well.
Fusing acoustic and optical sensing for needle tracking with ultrasound
Alexis Cheng, Bofeng Zhang, Philip Oh, et al.
Needles are used in many surgical procedures, such as drug delivery or needle biopsies. One of the key challenges when using needles in these interventions is placement: placing the needle at the goal position ensures proper execution of the surgical plan and avoids possible complications. This work explores tracking a needle with a piezoelectric sensor embedded at its tip using an ultrasound transducer and a mono-camera. While the ultrasound transducer and the mono-camera are each insufficient on their own, combining these two sources of sensor information uniquely locates the piezoelectric sensor. The information from each sensor can be processed to determine a geometrical locus on which the piezoelectric sensor must lie. By spatially combining the geometrical loci from the two sensors through an ultrasound calibration process, one can uniquely determine the location of the piezoelectric sensor. An experiment in a water tank was conducted, with the computed results compared to ground-truth Cartesian stage data. An in-plane accuracy measure resulted in errors of 0.63 mm and 0.18 mm. The relative accuracy measure had a minimum, maximum, mean, and standard deviation of 0.02 mm, 2.15 mm, 0.61 mm, and 0.61 mm, respectively. Future work will focus on demonstrating this method in more realistic ex vivo scenarios and on exploring whether our listed assumptions hold.
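One way to picture how two geometrical loci combine: a camera constrains the sensor to a bearing ray, while an ultrasound time-of-flight measurement constrains it to a sphere around the transducer element, and their intersection pins down the position. The following ray-sphere intersection is a hypothetical illustration in an assumed shared coordinate frame, not the authors' calibration pipeline:

```python
import numpy as np

def ray_sphere_intersection(ray_origin, ray_dir, center, radius):
    """Points where a camera bearing ray meets a time-of-flight sphere."""
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(ray_origin, float) - np.asarray(center, float)
    b = 2.0 * d @ oc
    c = oc @ oc - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return []                        # loci do not intersect: inconsistent data
    roots = [(-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0]
    return [np.asarray(ray_origin, float) + t * d for t in roots if t >= 0]

# Camera at the origin looking along +z; transducer at (0, 50, 0) mm hears the
# sensor at a range of sqrt(50^2 + 50^2) mm. The loci meet at a single point.
pts = ray_sphere_intersection([0, 0, 0], [0, 0, 1], [0, 50, 0],
                              np.sqrt(50**2 + 50**2))
```

In practice the two loci live in different device frames, which is why the abstract's ultrasound calibration step is needed before they can be intersected.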
Treatment plan library based on population shape analysis for cervical adaptive radiotherapy
Bastien Rigaud, Antoine Simon, Maxime Gobeli, et al.
External radiotherapy is extensively used to treat cervix carcinoma. It is based on the acquisition of a planning CT scan on which the treatment is optimized before being delivered over 25 fractions. However, large per-treatment anatomical variations hamper dose delivery accuracy, with a risk of tumor under-dose and healthy-organ over-dose resulting in recurrence and toxicity. We propose to generate a patient-specific treatment library based on a population analysis. First, the cervix meshes of the population were registered towards a template anatomy using a deformable mesh registration (DMR). The DMR follows an iterative point-matching approach based on the local shape context (a histogram of cylindrical neighbor coordinates and the normalized geodesic distance to the cervix base), a topology constraint filter, a thin-plate-spline interpolation, and a Gaussian regularization. Second, a standard principal component analysis (PCA) model was generated to estimate the dominant deformation modes of the population. Posterior PCA was computed to generate different potential anatomies of the target. For a new patient, her cervix was registered towards the template and her pre-treatment library was modeled. This method was applied to the data of 19 patients (282 images), using a leave-one-patient-out scheme. The DMR was evaluated using point-to-point distance (mean: 1.3 mm), Hausdorff distance (5.7 mm), Dice coefficient (0.96), and mean triangle area difference (0.49 mm²). The performance of two modeled libraries (2 and 6 modeled anatomies) was compared to a classic pre-treatment library based on 3 planning CTs, showing better results for both target and healthy-organ coverage.
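A PCA shape model over registered point sets, with synthesis of new anatomies along the dominant deformation modes, can be sketched as follows. Random point sets stand in for the registered cervix meshes; this is an illustrative sketch, not the authors' code:

```python
import numpy as np

def pca_shape_model(shapes):
    """shapes: (S subjects, P points, 3) registered point sets.
    Returns mean shape (flattened), mode matrix, and per-mode standard deviations."""
    S = shapes.shape[0]
    X = shapes.reshape(S, -1)
    mu = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    std = s / np.sqrt(S - 1)                  # mode standard deviations
    return mu, Vt, std

def synthesize(mu, Vt, std, coeffs):
    """Generate an anatomy as mean + sum_k coeffs[k] * std[k] * mode_k."""
    b = np.asarray(coeffs) * std[:len(coeffs)]
    return mu + b @ Vt[:len(coeffs)]

# 19 synthetic "subjects", each a 50-point shape (stand-ins for cervix meshes).
rng = np.random.default_rng(1)
shapes = rng.normal(size=(19, 50, 3))
mu, Vt, std = pca_shape_model(shapes)
# A potential anatomy at +2 sigma on mode 1 and -1 sigma on mode 2.
new_shape = synthesize(mu, Vt, std, [2.0, -1.0]).reshape(50, 3)
```

A treatment library would keep a handful of such synthesized anatomies per patient instead of acquiring several planning CTs.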
Ultrathin and flexible 4-channel scope for guiding surgical resections using a near-infrared fluorescence molecular probe for cancer
Yang Jiang, Emily J. Girard, Fiona Pakiam, et al.
Minimally invasive optical imaging is being advanced by molecular probes that enhance contrast using fluorescence. The applications in cancer imaging are very broad, ranging from early diagnosis to the guidance of interventions such as surgery. The high sensitivity afforded by wide-field fluorescence imaging using scanning laser light is being developed for these broad applications. The platform technology being introduced for fluorescence-guided surgery is the multimodal scanning fiber endoscope (mmSFE), which places a sub-1-mm optical fiber scanner at the tip of a highly flexible scope. Because several different laser wavelengths can be mixed and scanned together, full-color reflectance imaging can be combined with near-infrared (NIR) fluorescence imaging in a new 4-channel multimodal SFE. Different imaging display modes are evaluated to provide surgeons with fluorescence information while preserving the anatomical background. These preliminary results provide a measure of mmSFE imaging performance in vitro and ex vivo, using a mouse model of brain cancer and the BLZ-100 fluorescent tumor indicator. The mmSFE system generated wide-field 30 Hz video of concurrent reflectance and NIR fluorescence with sensitivity below 1 nM in vitro. Using the ex vivo mouse brain tumor model, the low-power 785-nm laser source did not produce any noticeable photobleaching of tumors with strong fluorescence signal over 30 minutes of continuous multimodal imaging. The wide-field NIR fluorescence images of the mouse brain surface matched conventional histology slices after processing the hematoxylin signal in a mean intensity projection to the outer surface and registering it with the mmSFE image. These results indicate the potential of the mmSFE and the BLZ-100 tumor indicator for fluorescence guidance of keyhole neurosurgery.
Ultrasound imaging of the posterior skull for neurosurgical registration
Grace Underwood, Tamas Ungi, Andras Lasso, et al.
PURPOSE: Neurosurgical registration using optical tracking in the prone position is problematic due to a lack of anatomical landmarks on the posterior skull. The current method of registration involves insertion of screws into the skull. Surface registration using ultrasound has been proposed as a less invasive alternative. However, obtaining full access to the posterior skull would require hair removal, which is not favored by patients, carries an increased risk of surgical site infection, and yields a less aesthetic outcome. We therefore performed ultrasound scans on participants without hair removal to evaluate the visibility of the mastoid processes and the occipital base of the posterior skull in ultrasound imaging. METHODS: Participants were scanned using a linear and a curvilinear ultrasound probe. Scans were taken at the maximum and minimum frequency of each probe. Ultrasound scans captured the region around each mastoid process, the external occipital protuberance, and the occipital base of the skull. Scans were recorded using the Sequences extension in 3D Slicer and replayed for visual analysis. RESULTS: At its minimum frequency, the linear probe produced bone surfaces that were identifiable with some uncertainty. At its maximum frequency, clear identification of the mastoid processes and occipital base was possible. The curvilinear probe did not allow identification of bone surfaces in the ultrasound image. CONCLUSION: A linear probe at high frequency provides clearly identifiable bone surfaces, allowing the selection of points for use in an iterative closest point algorithm for surface registration.
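The iterative closest point (ICP) registration mentioned in the conclusion alternates nearest-neighbor correspondences with a Kabsch rigid fit. The point clouds below are synthetic stand-ins for the ultrasound-derived and CT-derived bone surfaces, so this is an illustrative sketch rather than the study's registration code:

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation R and translation t mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Align src (e.g., ultrasound bone points) to dst (e.g., CT skull surface)."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        idx = d2.argmin(axis=1)          # brute-force closest-point correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Synthetic check: perturb a cloud by a small rigid motion about its centroid.
rng = np.random.default_rng(3)
dst = rng.uniform(0, 10, size=(200, 3))
theta = np.deg2rad(2.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
c = dst.mean(axis=0)
src = (dst - c) @ Rz.T + c + np.array([0.2, -0.1, 0.1])
aligned = icp(src, dst)
err = np.linalg.norm(aligned - dst, axis=1).mean()
```

ICP converges reliably only from a rough initial alignment, which is why identifiable landmarks such as the mastoid processes matter for initialization.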
Development of an augmented reality approach to mammographic training: overcoming some real world challenges
Qiang Tang, Yan Chen, Gerald Schaefer, et al.
A dedicated workstation and its corresponding viewing software are essential requirements in breast screener training. A major challenge in developing further generic screener training technology (in particular, for mammographic interpretation training) is that high-resolution radiological images must be displayed on dedicated workstations, whereas actual reporting of the images is generally completed on individual standard workstations. For commercial reasons, the dedicated clinical workstations manufactured by leading international vendors tend not to divulge the technical details that would facilitate integration of third-party generic screener training technology. With standard workstations, conventional screener training depends highly on manual transcription, so traditional training methods can be deficient in real-time feedback and interaction. Augmented reality (AR) provides the ability to interact with both real and virtual environments, and can therefore supplement conventional training with registered virtual objects and actions. As a result, realistic screener training can incorporate rich feedback and interaction in real time. Previous work [1] has shown that it is feasible to employ an AR approach to deliver workstation-independent radiological screening training by superimposing appropriate feedback coupled with the use of interaction interfaces. The previous study addressed presence issues and provided an AR-recognisable stylus that allowed drawing interaction. As a follow-up, this study extends the AR method and investigates realistic effects and the impact of environmental illumination, application performance, and transcription. A robust stylus calibration method is introduced to address environmental changes over time.
Moreover, this work introduces a complete AR workflow that allows real-time recording, computer-analysable training data, and recoverable transcription for post-training review. Quantitative evaluation shows that more than 80% of user-drawn points are located within a pixel distance of 5.
Image quality and segmentation
Gargi V. Pednekar, Jayaram K. Udupa, David J. McLaughlin, et al.
Algorithms for image segmentation (including object recognition and delineation) are influenced by the quality of object appearance in the image and by overall image quality. However, how to evaluate segmentation as a function of these quality factors has not been addressed in the literature. In this paper, we present a solution to this problem. We devised a set of key quality criteria that influence segmentation (global and regional): posture deviations, image noise, beam hardening artifacts (streak artifacts), shape distortion, presence of pathology, object intensity deviation, and object contrast. A trained reader assigned a grade to each object for each criterion in each study. We developed algorithms based on logical predicates for determining a 1-to-10 numeric quality score for each object and each image from the reader-assigned quality grades. We analyzed these object and image quality scores (OQS and IQS, respectively) in our data cohort by gender and age. We performed recognition and delineation of all objects using recent adaptations [8, 9] of our Automatic Anatomy Recognition (AAR) framework [6] and analyzed the accuracy of recognition and delineation of each object. We illustrate our method on 216 head-and-neck and 211 thoracic cancer computed tomography (CT) studies.
Distant pulse oximetry based on skin region extraction and multi-spectral measurement
Christian Herrmann, Jürgen Metzler, Dieter Willersinn, et al.
Capturing vital signs, specifically heart rate and oxygen saturation, is essential in care situations. Clinical pulse oximetry solutions work contact-based, via clips or otherwise fixed sensor units, which sometimes have undesired effects on the patient. A typical example is pre-term infants in neonatal care, who require permanent monitoring and have very fragile skin; the staff must regularly change the sensor unit's location to avoid skin damage. To improve patient comfort and reduce care effort, a feasibility study of a camera-based passive optical method for contactless pulse oximetry from a distance is performed. In contrast to most existing research on contactless pulse oximetry, a task-optimized multi-spectral sensor unit is proposed instead of a standard RGB camera. First, this allows avoiding the widely used green spectral range for distant heart rate measurement, which is unsuitable for pulse oximetry because saturated oxy-hemoglobin and non-saturated hemoglobin have nearly equal spectral extinction coefficients there. Second, it better addresses the challenge of a worse signal-to-noise ratio than in contact-based or active measurement, e.g., caused by background illumination. Signal noise from background illumination is addressed in several ways. The key part is an automated reference measurement of background illumination via automated patient localization in the acquired images, extracting skin and background regions with a CNN-based detector. Due to the custom spectral ranges, the detector is trained and optimized for this specific setup. Altogether, by allowing a contactless measurement, the studied concept promises to improve the care of patients for whom skin contact has negative effects.
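Pulse oximetry, contact-based or camera-based, ultimately rests on the classic ratio-of-ratios computation across two wavelengths. The sketch below uses synthetic signals and illustrative calibration constants (a real device derives its calibration from an empirical study):

```python
import numpy as np

def ratio_of_ratios(red, ir):
    """Pulsatile (AC) over baseline (DC) amplitude per wavelength, then their ratio."""
    def ac_dc(x):
        x = np.asarray(x, dtype=float)
        return (x.max() - x.min()) / x.mean()
    return ac_dc(red) / ac_dc(ir)

def spo2_estimate(red, ir, a=110.0, b=25.0):
    """Empirical linear calibration SpO2 ~= a - b*R.
    The constants a, b are illustrative placeholders, not device calibration."""
    return a - b * ratio_of_ratios(red, ir)

# Synthetic photoplethysmography signals: one cardiac cycle of pulsatile
# absorption riding on a constant baseline at each wavelength.
t = np.linspace(0, 1, 100)
red = 1.0 + 0.01 * np.sin(2 * np.pi * t)   # weak pulsatile component
ir = 1.0 + 0.02 * np.sin(2 * np.pi * t)    # stronger pulsatile component
spo2 = spo2_estimate(red, ir)
```

The multi-spectral design described in the abstract exists precisely so that the two wavelengths have sufficiently different extinction coefficients for this ratio to carry saturation information.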
Tracking of liver vessel bifurcations in 3D+t ultrasound by subsequent approximations of a rigid shape model
Heinrich M. Overhoff
Liver motion induced by respiratory or cardiac movement can limit the precision of diagnostic or therapeutic procedures such as core biopsies or radiation therapy. Ultrasound provides higher image volume acquisition rates than CT or MRI. Notwithstanding the occasionally poor vessel contrast, tracking vessel bifurcations in 3-D+t ultrasound sequences may improve the precision of image-guided interventions. For tracking of vessel bifurcations, only the liver's translation is considered; its rotation/expansion is neglected. Each 3-D sub-volume supposed to contain a bifurcation is translated, and its voxels are locally adaptively binarized to separate tissue from vessel voxels. The surface of the vessel voxels is approximated by a data-driven, time-varying Y-like shape model of the bifurcation. For each time stamp, the binarization threshold and the bifurcation's center translation relative to the predecessor volume are chosen such that the differences between successive shape models are minimized w.r.t. the ℓ0.5-norm. The sequence of bifurcation center translations defines its trajectory. The method is evaluated on 7 3-D+t volume sequences with 14 annotations, placed in bifurcations of vessels with diameters between 7 mm and 9 mm. Tracking performance is evaluated against manually annotated reference translations. For a voxel spacing of 1.1 mm × 0.6 mm × 1.2 mm, i.e., a voxel diagonal of 1.8 mm, a 90%-quantile ℓ2-norm tracking error < 2.1 mm is achieved. The algorithm gives local translational motion information and tracks individual vessel bifurcations. Applying the algorithm to several bifurcations may additionally allow determining a more global displacement field of the liver.
Precision blood flow measurements in vascular networks with conservation constraints
Gabe Shaughnessy, Carson Hoffman, Sebastian Schafer, et al.
In-vivo blood flow measurement, either catheter-based or derived from medical images, is increasingly used for clinical decision making. Most methods focus on a single vascular segment, whether measured by catheter or simulated, due to mechanical and computational complexity. The accuracy of blood flow measurements in vascular segments is improved by considering the constraint of blood flow conservation across the whole network. Image-derived blood flow measurements for individual vessels are made with a variety of techniques, including ultrasound, MR, 2D DSA, and 4D-DSA. Time-resolved (4D) DSA volumes are derived from 3D-DSA acquisitions and offer one environment in which to measure blood flow and the respective measurement uncertainty in a vascular network automatically, without user intervention. Vessel segmentation in the static DSA volume allows a mathematical description of vessel connectivity and flow propagation direction. By constraining the allowable flow values afforded by the measurement uncertainty and enforcing flow conservation at each junction, the effective number of degrees of freedom in the vascular network is reduced. This refines the overall measurement uncertainty in each vessel segment and provides a more robust measure of flow. Evaluations are performed with a simulated vascular network, with arterial segments in canine subjects, and with human renal 4D-DSA datasets. Results show a 30% reduction in flow uncertainty in a renal arterial case and a 2.5-fold improvement in flow uncertainty in some canine vessels. This method of flow uncertainty reduction may provide a more quantitative approach to treatment planning and evaluation in interventional radiology.
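Enforcing flow conservation at a junction can be written as a weighted least-squares adjustment under a linear constraint, which has a closed form. The sketch below, with invented measurements, shows how the constraint both corrects the flows and shrinks their uncertainty; it is a generic formulation, not the paper's implementation:

```python
import numpy as np

def enforce_conservation(q, sigma, A):
    """Adjust flow measurements q (std sigma) to satisfy A @ q = 0, one row per
    junction (parent flow minus child flows). The weighted least-squares solution
    is the standard linear-Gaussian constraint update."""
    Sigma = np.diag(np.asarray(sigma, float) ** 2)
    K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)   # gain
    q_adj = q - K @ (A @ q)                            # project onto A q = 0
    Sigma_adj = Sigma - K @ A @ Sigma                  # reduced posterior covariance
    return q_adj, np.sqrt(np.diag(Sigma_adj))

# One bifurcation: q0 (parent) = q1 + q2. The measurements violate this by 1 unit.
A = np.array([[1.0, -1.0, -1.0]])
q = np.array([10.0, 6.0, 5.0])
sigma = np.array([1.0, 1.0, 1.0])
q_adj, sigma_adj = enforce_conservation(q, sigma, A)
```

With equal uncertainties, the residual is distributed evenly across the three segments and every per-segment uncertainty drops below its measured value, mirroring the uncertainty reductions reported in the abstract.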
Osteotomy planner: an open-source tool for osteotomy simulation
Sam Horvath, Beatriz Paniagua, Johan Andruejol, et al.
There has been a recent emphasis in surgical science on supplementing surgical training outside of the operating room (OR). Combining simulation training with the current surgical apprenticeship enhances surgical skills without increasing the time spent practicing in the OR. Computer-assisted surgical (CAS) planning consists of performing operative techniques virtually using three-dimensional (3D) computer-based models reconstructed from 3D cross-sectional imaging. The purpose of this paper is to present a CAS system to rehearse, visualize, and quantify osteotomies, and to demonstrate its usefulness in two different osteotomy procedures: cranial vault reconstruction and femoral osteotomy. We found that the system could adequately simulate both procedures. Our system takes advantage of the high-quality visualizations possible with 3D Slicer and implements new infrastructure to allow direct 3D interaction (cutting and positioning) with the bone models. We see the proposed osteotomy planner evolving to incorporate different cutting templates that help depict several surgical scenarios, help 'trained' surgeons maintain operating skills, help rehearse a surgical sequence before heading to the OR, or even help surgical planning for specific patient cases.
In vivo imaging of radiopaque resorbable inferior vena cava filter infused with gold nanoparticles
Li Tian, Patrick Lee, Burapol Singhana, et al.
Radiopaque resorbable inferior vena cava filters (IVCFs) were developed to offer a less expensive alternative whose integrity can be assessed by imaging: a filter that prevents pulmonary embolism for the recommended prophylactic period and then simply vanishes without a retrieval intervention. In this study, we determined the efficacy of gold nanoparticle (AuNP)-infused poly-p-dioxanone (PPDO) as an IVCF in a swine model.

Infusion loaded the PPDO with 1.14±0.08% AuNP by weight, as determined by elemental analysis. The infusion altered neither PPDO's mechanical strength nor its crystallinity (Kruskal-Wallis one-way ANOVA, p<0.05). No cytotoxicity was observed (one-way ANOVA, p<0.05) when tested against RF24 and MRC5 cells. The gold content in PPDO was maintained at ~2000 ppm during a 6-week incubation in PBS at 37°C.

As a proof of concept, IVCFs were deployed in two pigs, one made of AuNP-PPDO and the other of uncoated PPDO. Results show that the stent ring of the AuNP-PPDO filter was highly visible even in the presence of an iodine-based contrast agent and after clot introduction, whereas the uncoated IVCF was not. Necropsy at two weeks post-implantation showed that the AuNP-PPDO filter had endothelialized onto the IVC wall, with no sign of filter migration. The induced clot was also still trapped within the AuNP-PPDO IVCF.

In conclusion, we successfully fabricated an AuNP-infused PPDO IVCF that is radiopaque, mechanically robust, biocompatible, and effectively imaged in vivo. This suggests the utility of this novel, radiopaque, absorbable IVCF for monitoring filter position and integrity over time, thus increasing the safety and efficacy of deep vein thrombosis treatment.
Simulation of high intensity focused ultrasound ablation to enable ultrasound thermal monitoring
Chloé Audigier, Younsu Kim, Nicholas Ellens, et al.
High Intensity Focused Ultrasound (HIFU) is a non-invasive ablative therapy. It is usually performed under MR monitoring, which provides reliable real-time thermal information to ensure a complete tumor ablation while preserving as much healthy tissue as possible. Unfortunately, many patients do not have access to this expensive and cumbersome cutting-edge technology, which is prohibitive for widespread use of MRI to guide thermal ablation procedures. Ultrasound (US) is a promising low-cost and portable alternative that allows real-time monitoring and can easily be deployed outside hospitals. However, US-based thermometry alone is not robust enough for monitoring in-vivo tissue ablation, and its feasibility has been demonstrated only on in-vitro cases for a small range of temperatures, up to 50°C. Computational models can simulate the biophysical phenomena and mechanisms that govern this complex thermal therapy: the US wave propagation, the temperature evolution, and the resulting necrotic lesion can all be modeled. A method integrating these sources of information with intra-operative US data would allow recovering accurate temperature over a wider range. US thermometry could thereby be improved to provide an inexpensive yet comprehensive method for intra-procedural monitoring of HIFU ablation. In this paper, we propose to study the rise in temperature induced by high-intensity US propagation in biological tissue, which is particularly difficult to simulate due to the complexity of the phenomena involved. The physics-based HIFU model simulates the nonlinear US propagation using a k-space model, coupled with heat propagation in biological tissue using a reaction-diffusion equation. We analyze the model numerically to evaluate its accuracy and the related computational cost. Finally, our simulation approach is validated against MR thermometry, the gold-standard monitoring tool used in the clinical setting.
Three consecutive HIFU ablations were performed on a 2% agar and 2% silicon phantom using the Sonalleve V2 MR-HIFU system (Profound Medical, Toronto, Canada).
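The reaction-diffusion temperature model referred to above can be illustrated with a 1D explicit finite-difference sketch: diffusion, a perfusion-like loss term, and a focal heat source. All constants are illustrative rather than tissue-calibrated, and this deliberately omits the nonlinear k-space acoustic model used in the paper:

```python
import numpy as np

def simulate_heating(n=101, dx=1e-3, dt=0.05, steps=400,
                     D=1.4e-7, k_perf=0.01, q_peak=2.0):
    """Explicit finite differences for dT/dt = D d2T/dx2 - k_perf*T + q(x):
    diffusion plus perfusion-like loss plus focal acoustic heat deposition.
    Constants are illustrative placeholders, not tissue-calibrated values."""
    x = np.arange(n) * dx
    q = q_peak * np.exp(-((x - x[n // 2]) / (2 * dx)) ** 2)   # Gaussian focus
    T = np.zeros(n)                                 # temperature rise (degrees C)
    r = D * dt / dx**2                              # explicit scheme needs r < 0.5
    assert r < 0.5
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = T[2:] - 2 * T[1:-1] + T[:-2]    # discrete Laplacian
        T = T + r * lap + dt * (q - k_perf * T)
        T[0] = T[-1] = 0.0                          # body-temperature boundary
    return x, T

x, T = simulate_heating()
```

Even this crude sketch reproduces the qualitative behavior a thermometry method must capture: a sharp temperature peak at the focus that diffuses into surrounding tissue over the exposure time.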
Micromechanics based modelling of in-vivo respiratory motion of the diaphragm muscle with the incorporation of optimized z-disks mechanics
Brett Coelho, Abbas Samani
Lung cancer is by far the leading cause of cancer death among both men and women; according to the American Cancer Society, approximately 1 out of 4 cancer deaths are due to lung cancer. The primary treatment for the condition generally involves External Beam Radiation Therapy (EBRT). Lung tumour motion is generally clinically significant and presents a major challenge for clinicians. Significant lung tumour motion (>5 mm) during respiration requires motion compensation techniques [1]. Ideally, continuous real-time tumour tracking allows for continuous radiation delivery, such that the tumour receives a sufficient radiation dose while the dose to surrounding healthy lung tissue is minimized. Direct tumour tracking is often not possible in non-contrast images, and a surrogate for tumour motion is required. Among such surrogates, the diaphragm muscle has been shown to correlate well with tumour motion [2]. Motion compensation techniques often require extensive 4D CT scans, which entail substantial radiation exposure. The diaphragm, the major driver of respiratory motion, can also be incorporated into lung biomechanical models used to predict deformations of the lungs and surrounding organs during respiration [3]. This research involves the development of a patient-specific biomechanical model of the diaphragm muscle with both passive and active responses. Detailed anatomical and geometric information, including the muscle micromechanics, is used to generate a Finite Element Model (FEM) of the diaphragm in order to predict its in vivo motion. Results from modelling a patient-specific case revealed a good match between the simulated and actual contracted diaphragm surfaces, with an average mean squared difference of 2.83 mm.
Cone beam tomosynthesis fluoroscopy: a new approach to 3D image guidance
Cristian Atria, Lisa Last, Nathan Packard, et al.
Fluoroscopy is a common image guidance modality used in spine and orthopedic surgery. One benefit of this technology is that it provides real-time images without interrupting the procedure. A major challenge with fluoroscopy is that it provides projection images with no depth information, limiting surgical accuracy in complex procedures such as thoracic spine surgery [1]. 3D technologies such as intraoperative Cone Beam CT and surgical navigation solve the accuracy problem but increase cost and impair the surgical workflow, limiting their adoption [2]. In an attempt to improve surgical accuracy, control costs, and simplify the surgical workflow, a new approach to image guidance based on real-time 3D imaging is proposed [3]. Fast fluoroscopic acquisitions taken in a circular tomosynthesis geometry provide near real-time 3D updates of the imaged surgical scene. 3D updates are achieved via a model-based reconstruction that makes proficient use of prior information, and instrument tracking is achieved via image processing. This new imaging approach is named Cone Beam Tomosynthesis (CBT) fluoroscopy. A first prototype, based on a modified C-arm with a single rotating source, is used to assess the surgical performance of CBT fluoroscopy. Preliminary results show that CBT fluoroscopy can achieve near-real-time imaging performance and surgical accuracy comparable to fluoroscopy in the use case of pedicle screw placement on phantoms; the limitations of the approach are analyzed and steps to address them are discussed.
Surgical skill level assessment using automatic feature extraction methods
Marzieh Ershad, Robert Rege, Ann Majewicz
Objective and automatic evaluation of surgical skill is important for the design of surgical simulators used in surgical robotics training. Extensive research has been done to identify and evaluate a variety of evaluation metrics (e.g., path length, completion time); however, these metrics are provided to the user only after completion of the task and may not fully exploit the underlying information in the movement data. This study proposes a method for automatic and objective evaluation of surgical expertise levels, over short time intervals, during task performance. We first compare three automatic feature extraction methods, (1) principal component analysis (PCA), (2) independent component analysis (ICA), and (3) linear discriminant analysis (LDA), applied to low-level position data, in terms of their ability to distinguish among expertise levels. We then study the performance of the best feature extraction method over different time intervals, in order to find the minimal time frame that accurately predicts user skill level. Fourteen subjects of different expertise levels were recruited to perform two simulated tasks on the da Vinci training simulator. The position of the subjects' arm joints (shoulder, elbow, and wrist) on the dominant side, as well as the position of both hands, was recorded. Four classifiers (naive Bayes, support vector machine, nearest neighbor, and decision tree) were used to identify the best feature extraction method. The results indicate that PCA combined with a support vector machine can classify expertise levels with an accuracy of 98% in time frames of 0.25 seconds.
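The winning pipeline (PCA features followed by an SVM, applied to short time frames of position data) can be sketched as below. The synthetic data generation, sampling rate, channel count, and all parameters are stand-ins for illustration, not the study's setup; here "novice" motion is simply modelled as jerkier than "expert" motion.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def frames(signal, win):
    """Cut a (samples, channels) position stream into flattened
    fixed-length frames: one feature vector per short time interval."""
    n = signal.shape[0] // win
    return signal[:n * win].reshape(n, -1)

# Synthetic stand-in for joint/hand position recordings (9 channels,
# 100 Hz): novice trajectories have larger random motion increments.
fs, win = 100, 25                                   # 0.25 s frames
expert = np.cumsum(rng.normal(0, 0.01, (fs * 60, 9)), axis=0)
novice = np.cumsum(rng.normal(0, 0.05, (fs * 60, 9)), axis=0)

# Classify per-frame motion increments with PCA features and an SVM.
X = np.vstack([frames(np.diff(expert, axis=0), win),
               frames(np.diff(novice, axis=0), win)])
y = np.repeat([0, 1], X.shape[0] // 2)
clf = make_pipeline(PCA(n_components=10), SVC()).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

A real evaluation would of course report held-out (cross-validated) accuracy rather than training accuracy, as the paper does with 14 subjects.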
Bundling 3D- and 2D-based registration of MRI to x-ray breast tomosynthesis
P. Cotic Smole, N. V. Ruiter, C. Kaiser, et al.
Increasing interest in multimodal breast cancer diagnosis has led to the development of methods for MRI to X-ray mammography registration that provide a direct correlation between modalities. The severe breast deformation in X-ray mammography is often tackled with biomechanical models, which, however, have not yet brought registration accuracy to a clinically applicable level. We present a novel approach for registering MRI to X-ray tomosynthesis. Tomosynthesis provides three-dimensional information about the compressed breast and thus opens new possibilities for the registration of MRI and X-ray data. By bundling the 3D information from the tomosynthesis volume with the 2D projection images acquired at different measuring angles, we correlate the registration error in 3D and 2D and evaluate different 3D- and 2D-based similarity metrics to drive the optimization of the automated patient-specific registration. In a preliminary study of four patients, we found that the projected registration error is in general larger than the 3D error when registration errors in the cranio-caudal direction are small. Although both shape- and intensity-based 2D similarity metrics showed a clear correlation with the 2D registration error at different projection angles, metrics that combined the 2D and 3D information yielded the minimal registration error in most cases and thus outperformed similarity metrics that rely only on the shape similarity of the volumes.
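One widely used intensity-based 2D similarity metric for driving such an optimization is normalized cross-correlation (NCC); the abstract does not name the exact metrics used, so the sketch below is only a representative example. NCC returns 1.0 whenever two images differ only by a linear intensity remapping.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images; 1.0 indicates a
    perfect linear intensity relationship, values near 0 none at all."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(ncc(img, 2.0 * img + 3.0))       # linear remap: 1.0 up to rounding
print(ncc(img, rng.random((64, 64))))  # unrelated image: near 0
```

In a registration loop, `a` would be the projection of the deformed MRI volume at a given tomosynthesis angle and `b` the measured projection image.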
Towards robust needle segmentation and tracking in pediatric endoscopic surgery
Yujun Chen, Murilo M. Marinho, Yusuke Kurose, et al.
Neonatal tracheoesophageal fistula surgery poses technical challenges to surgeons, given the limited workspace and fragile tissues. In previous studies by our collaborators, a neonatal chest model was developed to allow surgeons to enhance their performance, such as suturing ability, before conducting actual surgery. Endoscopic images are recorded while the model is used, and surgeon skill can be assessed manually with a 29-point checklist. However, that is a time-consuming process. Fifteen of the checklist points concern needle position and angle and could be assessed automatically if the needle were tracked efficiently. This paper is a first step towards the goal of tracking the needle. Pixel HSV color space channels, opponent color space channels, and pixel oriented-gradient information are used as features to train a random forest model. Three methods are compared in the segmentation stage: single-pixel features, features from a pixel's immediate 10-by-10 square window, and features from randomly offset pixels in a larger 169-by-169 window. Our analysis using 9-fold cross-validation shows that using randomly offset pixels increases the needle segmentation f-measure by a factor of 385 compared with single-pixel color, and by a factor of 3 compared with the immediate square window, even though the same amount of memory is used. The output of the segmentation step is fed into a particle filter to track the full state of the needle.
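The randomly offset pixel features can be sketched as follows. Only the 169-by-169 window size comes from the paper; the offset count, image size, and border handling below are illustrative assumptions. The point of the trick is that a small fixed set of random offsets samples a large spatial context at the same memory cost as a small dense patch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random offsets, drawn once, inside a 169x169 window (169 = 2*84+1).
# N_OFFSETS is a hypothetical parameter, not taken from the paper.
N_OFFSETS, HALF = 50, 84
offsets = rng.integers(-HALF, HALF + 1, size=(N_OFFSETS, 2))

def offset_features(img, y, x):
    """Colour values of N_OFFSETS randomly offset neighbours of pixel
    (y, x), clamped at the image border, flattened to one feature row."""
    h, w = img.shape[:2]
    ys = np.clip(y + offsets[:, 0], 0, h - 1)
    xs = np.clip(x + offsets[:, 1], 0, w - 1)
    return img[ys, xs].ravel()

img = rng.random((480, 640, 3))          # stand-in endoscopic RGB frame
feat = offset_features(img, 240, 320)
print(feat.shape)                        # → (150,): 50 offsets x 3 channels
```

One such feature row per labelled pixel would then be stacked into the training matrix for the random forest.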
CT-ultrasound deformable registration for PET-determined prostate brachytherapy
Junghoon Lee, Daniel Y. Song
Recent advances in positron emission tomography (PET) targeting prostate-specific membrane antigen (PSMA) allow highly specific and sensitive identification of intra- and extraprostatic tumors. Combining PSMA PET with intraoperative transrectal ultrasound (TRUS) images makes it possible to further improve the standard of care in brachytherapy, allowing the physician to precisely tailor the dose to the individual's tumor. The key step of PET-determined focal prostate brachytherapy is the fusion of the preoperative PET/CT with the intraoperative TRUS image, which enables mapping of PSMA PET to intraoperative imaging. In this paper, we propose a deformable image registration algorithm for PET/CT-TRUS image fusion based on a structural descriptor map (SDM). The SDM is computed by solving Laplace's equation based on the prostate segmentations. The solution of Laplace's equation with a boundary condition provides equipotential surfaces within the prostate that describe smooth transitions from the midline to the boundary while preserving the prostate shape and geometry. The computed equipotential surface distribution can therefore be treated as a structural descriptor and used for deformable registration. The proposed SDM-based CT-TRUS registration algorithm was evaluated on five prostate brachytherapy patient data sets in which intraoperative end-of-implantation TRUS and day-1 post-implant CT were registered. Target registration errors, computed using the implanted seeds as anatomical landmarks, were 2.25±1.36 mm on average, which is clinically acceptable.
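The idea of using the solution of Laplace's equation as a structural descriptor can be illustrated on a toy 2D mask. The snippet below is a simplified assumption, not the paper's implementation: it uses Jacobi iteration on a grid, pins the region outside the mask to 1 and a single central seed to 0, whereas the paper works with 3D prostate segmentations and a midline boundary condition. It nevertheless produces the kind of smooth equipotential field an SDM is built from.

```python
import numpy as np

def laplace_descriptor(mask, iters=2000):
    """Solve Laplace's equation inside a binary mask with the exterior
    held at 1 and a central seed held at 0; the converged field rises
    smoothly from the centre to the boundary."""
    phi = np.ones(mask.shape)
    cy, cx = (np.argwhere(mask).mean(axis=0) + 0.5).astype(int)
    interior = mask.copy()
    interior[cy, cx] = False              # pin the centre seed at 0
    phi[interior] = 0.5                   # initial guess inside the mask
    phi[cy, cx] = 0.0
    for _ in range(iters):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi[interior] = avg[interior]     # Jacobi update, boundaries fixed
        phi[cy, cx] = 0.0
    return phi

# Toy circular "prostate" mask of radius 15 on a 41x41 grid.
yy, xx = np.mgrid[:41, :41]
mask = (yy - 20) ** 2 + (xx - 20) ** 2 < 15 ** 2
phi = laplace_descriptor(mask)
print(phi[20, 20], phi[20, 25], phi[20, 33])  # monotone from 0 toward 1
```

Matching iso-levels of this field between the CT and TRUS segmentations is what allows the descriptor to drive a shape-preserving deformable registration.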
ProjectAlign: a real-time ultrasound guidance system for spinal midline detection during epidural needle placement
Alexander R. Toews, Simon Massey, Vit Gunka, et al.
Ultrasound can provide useful guidance for needle insertion in epidural anesthesia, but image interpretation can be challenging. The aim of this work is to determine the feasibility of a new ultrasound-based system (ProjectAlign) capable of identifying the spinal midline directly at the puncture site. ProjectAlign’s main benefit is that it requires no operator interpretation of ultrasound images. Instead, the operator is guided by automatic real-time estimates of spinal midline position projected onto the skin. A simple cross-correlation routine generates increasingly accurate estimates of midline location as the transducer is centred over the spine. A clinical feasibility study was performed to assess the performance of ProjectAlign in identifying the midline in the L2 to L4 lumbar region of 12 subjects. We hypothesized that (i) ProjectAlign can identify the spinal midline within a 5 mm lateral distance of a sonographer’s manual marking, and (ii) ProjectAlign is more laterally accurate than palpation in identifying the spinal midline. Both hypotheses were validated by the data. Midline measurement with ProjectAlign generated an RMS error of 2.0 mm, with a maximum error of 5.0 mm. The results of this study support further investigation into the use of ProjectAlign, in particular for obese patients where palpation is most difficult.
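The cross-correlation routine is not detailed in the abstract; one plausible sketch (an assumption, not the system's code) correlates a lateral intensity profile with its own mirror image, so that the correlation peak reveals how far the axis of left-right symmetry sits from the transducer centre.

```python
import numpy as np

def midline_offset(profile):
    """Estimate the offset of the symmetry axis of a lateral intensity
    profile from the image centre by cross-correlating the profile with
    its mirror; an offset of 0 means the transducer is centred."""
    mirrored = profile[::-1]
    corr = np.correlate(profile - profile.mean(),
                        mirrored - mirrored.mean(), mode="full")
    lag = int(corr.argmax()) - (len(profile) - 1)
    return lag / 2.0          # mirroring doubles the displacement

# Symmetric echo (e.g. a spinous-process shadow) 8 samples right of centre.
x = np.arange(201)
profile = np.exp(-((x - 108.0) ** 2) / 50.0)
print(midline_offset(profile))  # → 8.0
```

Converting the sample offset to millimetres via the transducer's element pitch would give the lateral distance projected onto the skin.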