Proceedings Volume 7964

Medical Imaging 2011: Visualization, Image-Guided Procedures, and Modeling


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 2 March 2011
Contents: 22 Sessions, 114 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2011
Volume Number: 7964

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 7964
  • Image Guided Therapy I
  • Keynote and Image Guidance in Urology
  • Visualization and Modeling
  • Image Segmentation and Registration
  • Lung
  • Keynote and Ultrasound Guided Intervention
  • Neuro
  • Cardiac Applications
  • Endoscopy and Laparoscopy
  • Orthopedic and Cranial Procedures
  • Image Guided Therapy II
  • Poster Session: Calibration
  • Poster Session: Cardiac Procedures
  • Poster Session: Endoscopic Procedures
  • Poster Session: Image-Guided Therapy
  • Poster Session: Intraoperative Imaging
  • Poster Session: Localization and Tracking Technologies
  • Poster Session: Modeling
  • Poster Session: Registration
  • Poster Session: Segmentation
  • Poster Session: Visualization
Front Matter: Volume 7964
This PDF file contains the front matter associated with SPIE Proceedings Volume 7964, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Image Guided Therapy I
The use of virtual fiducials in image-guided kidney surgery
Courtenay Glisson, Rowena Ong, Amber Simpson, et al.
The alignment of image-space to physical-space lies at the heart of all image-guided procedures. In intracranial surgery, point-based registrations can be used with either skin-affixed or bone-implanted extrinsic objects called fiducial markers. The advantages of point-based registration techniques are that they are robust, fast, and have a well-developed mathematical foundation for the assessment of registration quality. In abdominal image-guided procedures such techniques have not been successful. It is difficult to accurately locate sufficient homologous intrinsic points in image-space and physical-space, and the implantation of extrinsic fiducial markers would constitute "surgery before the surgery." Image-space to physical-space registration for abdominal organs has therefore been dominated by surface-based registration techniques which are iterative, prone to local minima, sensitive to initial pose, and sensitive to percentage coverage of the physical surface. In our work in image-guided kidney surgery we have developed a composite approach using "virtual fiducials." In an open kidney surgery, the perirenal fat is removed and the surface of the kidney is dotted using a surgical marker. A laser range scanner (LRS) is used to obtain a surface representation and a matching high-definition photograph. A surface-to-surface registration is performed using a modified iterative closest point (ICP) algorithm. The dots are extracted from the high-definition image and assigned the three-dimensional values from the LRS pixels over which they lie. As the surgery proceeds, we can then use point-based registrations to re-register the spaces and track deformations due to vascular clamping and surgical traction.
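As an illustration of the point-based registration step this abstract relies on, the sketch below computes a rigid transform between homologous fiducial sets with the standard SVD (Arun/Horn) closed-form solution. It is a generic, minimal example, not the authors' implementation; the function names are ours.

    # Minimal sketch of point-based rigid registration between homologous
    # fiducial sets (Arun/Horn SVD method); illustrative only.
    import numpy as np

    def rigid_point_registration(image_pts, physical_pts):
        """Return rotation R and translation t minimizing ||R*image + t - physical||."""
        p_mean = image_pts.mean(axis=0)
        q_mean = physical_pts.mean(axis=0)
        P = image_pts - p_mean
        Q = physical_pts - q_mean
        U, _, Vt = np.linalg.svd(P.T @ Q)                 # 3x3 cross-covariance
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = q_mean - R @ p_mean
        return R, t

    def fiducial_registration_error(image_pts, physical_pts, R, t):
        residuals = (R @ image_pts.T).T + t - physical_pts
        return np.sqrt((residuals ** 2).sum(axis=1).mean())   # RMS FRE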
Surgical phantom for off-pump mitral valve replacement
A. Jonathan McLeod, John Moore, Gerard M. Guiraudon, et al.
Off-pump, intracardiac, beating heart surgery has the potential to improve patient outcomes by eliminating the need for cardiopulmonary bypass and aortic cross-clamping, but it requires extensive image guidance as well as the development of specialized instrumentation. Previously, developments in image guidance and instrumentation were validated on either a static phantom or in vivo through porcine models. This paper describes the design and development of a surgical phantom for simulating off-pump mitral valve replacement inside the closed beating heart. The phantom allows surgical access to the mitral annulus while mimicking the pressure inside the beating heart. An image guidance system using tracked ultrasound, magnetic instrument tracking and preoperative models, previously developed for off-pump mitral valve replacement, is applied to the phantom. Pressure measurements and ultrasound images confirm that the phantom closely mimics conditions inside the beating heart.
Incorporating tissue excision in deformable image registration: a modified demons algorithm for cone-beam CT-guided surgery
The ability to perform fast, accurate, deformable registration with intraoperative images featuring surgical excisions was investigated for use in cone-beam CT (CBCT) guided head and neck surgery. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images with no ability to account for tissue that is removed (or introduced) between scans. We have thus developed an approach in which an extra dimension is added during the registration process to act as a sink for voxels removed during the course of the procedure. A series of cadaveric images acquired using a prototype CBCT-capable C-arm were used to model tissue deformation and excision occurring during a surgical procedure, and the ability of deformable registration to correctly account for anatomical changes under these conditions was investigated. Using a previously developed version of the Demons deformable registration algorithm, we identify the difficulties that traditional registration algorithms encounter when faced with excised tissue and present a modified version of the algorithm better suited for use in intraoperative image-guided procedures. Studies were performed for different deformation and tissue excision tasks, and registration performance was quantified in terms of the ability to accurately account for tissue excision while avoiding spurious deformations arising around the excision.
Evaluation of an ad hoc model of detection physics for navigated beta-probe surface imaging
Dzhoshkun I. Shakir, Alexander Hartl, Nassir Navab, et al.
Intraoperative surface imaging with navigated beta-probes has been shown to be a possibility to enable control of tumor resection borders. By employing ad hoc models of the detection physics the image quality can be improved. Our model computes the amount of radiation from a single point source that reaches the detector, using the solid angle subtended by the detector on the source and assuming perfect shielding. The sensitivity of the detector to the source due to the angle between the detector axis and the source-to-detector vector is also considered. A set of experiments was performed with three sources (two 10x10 mm2 and one 20x10 mm2 pieces of cellulose saturated with FDG) on a plate as a phantom. Five sets of measurements were taken, three of them at a distance of 10 mm from the plate and two at 30 mm. At both distances one measurement set was taken in a random manner and the other ones systematically covering the whole area. The same experiments were simulated with our model and the GATE simulation framework. The resulting measurements from the experiments and simulations were then used to perform a reconstruction of the sources. The real measurements were compared to those simulated with our model and GATE, with a mean NCC of 80.64% for our model and 70.14% for GATE. In the reconstructions of the real measurements the sources were visually quite well separated; however, the reconstructions of the measurements simulated by the model show that there is still room for further improvement.
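A model of this kind can be sketched as a solid-angle fraction for a circular detector face multiplied by an angular sensitivity term. The sketch below is our hedged illustration of such a point-source model, not the published one; the on-axis solid-angle formula, the cosine sensitivity, and all parameter names are assumptions.

    # Hedged sketch of a point-source beta-probe detection model: expected signal
    # ~ solid-angle fraction of a circular detector face times an angular
    # sensitivity term. Illustrative only.
    import numpy as np

    def expected_counts(source_pos, detector_pos, detector_axis,
                        detector_radius, source_activity):
        v = detector_pos - source_pos                  # source-to-detector vector
        d = np.linalg.norm(v)
        # solid angle of a circular aperture of radius r at on-axis distance d
        omega = 2.0 * np.pi * (1.0 - d / np.sqrt(d ** 2 + detector_radius ** 2))
        fraction = omega / (4.0 * np.pi)               # assumes perfect shielding elsewhere
        # angular sensitivity: cosine of angle between detector axis and source direction
        axis = detector_axis / np.linalg.norm(detector_axis)
        cos_theta = max(0.0, float(np.dot(axis, -v / d)))
        return source_activity * fraction * cos_theta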
Computer assisted intervention surgery planning and navigation for percutaneous microwave ablation of lung cancer
Weiming Zhai, Lin Sheng, Yixu Song, et al.
Microwave ablation is a promising option in lung cancer therapy. However, it is rarely used percutaneously for lung cancer compared to liver cancer, because the large amount of air within the lung creates significant back-shadowing artifacts that preclude adequate delineation of anatomic details on sonography. To make microwave ablation usable for malignant lung tumor therapy, we developed a novel percutaneous intervention surgery navigation system (CAINS-I), which uses computer-assisted technology, sonographic guidance, and intraoperative CT to help lung cancer patients whose condition is not amenable to surgical resection. In these surgeries, preoperative CT images reflecting the patient's respiration state are first acquired and visualized using GPU-accelerated volume rendering. The optimal surgery trajectories are then planned based on 3D thermal field computation and surgery simulation in the surgery planning software. During the surgery, the patient's breathing is controlled by a portable volume ventilator system, which limits the movement and displacement of the tumor. The microwave probe is then punctured into the tumor according to the dynamic respiratory state, and the tumor is ablated by microwave energy. After the surgery, postoperative CT images are acquired, and the procedure is evaluated by comparing the preoperative and postoperative CT images. This technique represents an advance over traditional approaches to lung cancer therapy and significantly extends the indications of microwave ablation.
Keynote and Image Guidance in Urology
2D and 3D visualization methods of endoscopic panoramic bladder images
Alexander Behrens, Iris Heisterklaus, Yannick Müller, et al.
While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, the visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters which were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons easily identify geometrically distorted structures in the panoramic image, which allows more objective medical interpretation of tumor tissue in shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, as well as surgical planning.
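The Hammer-Aitoff projection named in the abstract has a closed form, so the mapping from spherical coordinates to texture coordinates can be sketched directly. The following is a generic illustration of that projection (the rescaling to [0,1] texture coordinates is our assumption), not the authors' code.

    # Minimal sketch of the Hammer-Aitoff equal-area projection used to map a
    # panoramic image onto a spherical bladder surface; illustrative only.
    import numpy as np

    def hammer_aitoff(lon, lat):
        """Map longitude/latitude in radians to normalized 2D texture coordinates."""
        denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
        x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
        y = np.sqrt(2.0) * np.sin(lat) / denom
        # rescale from the projection's range (|x| <= 2*sqrt(2), |y| <= sqrt(2)) to [0, 1]
        u = (x / (2.0 * np.sqrt(2.0)) + 1.0) / 2.0
        v = (y / np.sqrt(2.0) + 1.0) / 2.0
        return u, v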
Photoacoustic imaging of prostate brachytherapy seeds in ex vivo prostate
Nathanael Kuo, Hyun Jae Kang, Travis DeJournett, et al.
The localization of brachytherapy seeds in relation to the prostate is a key step in intraoperative treatment planning (ITP) for improving outcomes in prostate cancer patients treated with low dose rate prostate brachytherapy. Transrectal ultrasound (TRUS) has traditionally been the modality of choice to guide the prostate brachytherapy procedure due to its relatively low cost and apparent ease of use. However, TRUS is unable to visualize seeds well, precluding ITP and producing suboptimal results. While other modalities such as X-ray and magnetic resonance imaging have been investigated to localize seeds in relation to the prostate, photoacoustic imaging has emerged as a promising modality to solve this challenge. Moreover, photoacoustic imaging may be more practical in the clinical setting compared to other methods since it adds little additional equipment to the ultrasound system already used in the procedure today, reducing cost and simplifying engineering steps. In this paper, we demonstrate the latest efforts in localizing prostate brachytherapy seeds using photoacoustic imaging, including visualization of multiple seeds in actual prostate tissue. Although there are still several challenges to be met before photoacoustic imaging can be used in the operating room, we are pleased to present the current progress in this effort.
Optimal drug release schedule for in-situ radiosensitization of image guided permanent prostate implants
Robert A. Cormack, Paul L. Nguyen, Anthony V. D'Amico, et al.
Planned in-situ radiosensitization may improve the therapeutic ratio of image guided 125I prostate brachytherapy. Spacers used in permanent implants may be manufactured from a radiosensitizer-releasing polymer to deliver protracted localized sensitization of the prostate. Such devices will have a limited drug-loading capacity, and the drug release schedule that optimizes outcome under such a constraint is not known. This work determines the optimal elution schedules for 125I prostate brachytherapy. The interaction between brachytherapy dose distributions and the drug distribution around drug-eluting spacers is modeled using a linear-quadratic (LQ) model of cell kill. Clinical brachytherapy plans were used to calculate the biologic effective dose (BED) for the planned radiation dose distributions, adding the spatial distribution of radiosensitizer and varying the temporal release schedule subject to a constraint on the drug capacity of the eluting spacers. The greatest increase in BED is achieved by schedules with the greatest sensitization early in the implant. Making brachytherapy spacers from radiosensitizer-eluting polymer transforms inert parts of the implant process into a means of enhancing the effect of the brachytherapy radiation. Such an approach may increase the therapeutic ratio of prostate brachytherapy or offer a means of locally boosting the radiation effect without increasing the radiation dose to surrounding tissues.
Visualization and Modeling
Fuzzy object modeling
Jayaram K. Udupa, Dewey Odhner, Alexandre X. Falcao, et al.
To make Quantitative Radiology (QR) a reality in routine clinical practice, computerized automatic anatomy recognition (AAR) becomes essential. As part of this larger goal, we present in this paper a novel fuzzy strategy for building bodywide group-wise anatomic models. They have the potential to handle uncertainties and variability in anatomy naturally and to be integrated with the fuzzy connectedness framework for image segmentation. Our approach is to build a family of models, called the Virtual Quantitative Human, representing normal adult subjects at a chosen resolution of the population variables (gender, age). Models are represented hierarchically, the descendants representing organs contained in parent organs. Based on an index of fuzziness of the models, 32 thorax data sets, and 10 organs defined in them, we found that the hierarchical approach to modeling can effectively handle the non-linear relationships in position, scale, and orientation that exist among organs in different patients.
The sparse data extrapolation problem: strategies for soft-tissue correction for image-guided liver surgery
The problem of extrapolating cost-effective relevant information from distinctly finite or sparse data, while balancing the competing goals between workflow and engineering design, and between application and accuracy, is the 'sparse data extrapolation problem'. Within the context of open abdominal image-guided liver surgery, one realization of this problem is compensating for non-rigid organ deformations while maintaining workflow for the surgeon. More specifically, rigid organ-based surface registration between CT-rendered liver surfaces and laser-range scanned intraoperative partial surface counterparts resulted in an average closest-point residual of 6.1 ± 4.5 mm with maximum signed distances ranging from -13.4 to 16.2 mm. Similar to the neurosurgical environment, there is a need to correct for soft tissue deformation to translate image-guided interventions to the abdomen (e.g. liver, kidney, pancreas, etc.). While intraoperative tomographic imaging is available, these approaches are less than optimal solutions to the sparse data extrapolation problem. In this paper, we compare and contrast three sparse data extrapolation methods to that of data-rich interpolation for the correction of deformation within a liver phantom containing 43 subsurface targets. The findings indicate that the subtleties in the initial alignment pose following rigid registration can affect correction by up to 5-10%. The best deformation compensation achieved was approximately 54.5% (target registration error of 2.0 ± 1.6 mm) while the data-rich interpolative method was 77.8% (target registration error of 0.6 ± 0.5 mm).
3D density estimation in digital breast tomosynthesis: application to needle path planning for breast biopsy
Laurence Vancamberg, Nausikaa Geeraert, Razvan Iordache, et al.
Needle insertion planning for digital breast tomosynthesis (DBT) guided biopsy has the potential to improve patient comfort and intervention safety. However, relevant planning should take into account breast tissue deformation and lesion displacement during the procedure. Deformable models, like finite elements, use the elastic characteristics of the breast to evaluate the deformation of tissue during needle insertion. This paper presents a novel approach to locally estimate the Young's modulus of the breast tissue directly from the DBT data. The method consists of computing the fibroglandular percentage in each of the acquired DBT projection images, then reconstructing the density volume. Finally, this density information is used to compute the mechanical parameters for each finite element of the deformable mesh, obtaining a heterogeneous DBT-based breast model. Preliminary experiments were performed to evaluate the relevance of this method for needle path planning in DBT guided biopsy. The results show that the heterogeneous DBT-based breast model improves needle insertion simulation accuracy in 71% of the cases, compared to a homogeneous model or a binary fat/fibroglandular tissue model.
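The final step, assigning per-element mechanical parameters from the reconstructed density, can be sketched as a simple interpolation between fat and fibroglandular moduli. The sketch below is our illustration of that idea under assumed placeholder modulus values; the paper's actual parameter mapping may differ.

    # Hedged sketch: map a per-element fibroglandular fraction to a heterogeneous
    # Young's modulus by interpolating between fat and fibroglandular moduli.
    # The modulus values below are illustrative placeholders, not published values.
    import numpy as np

    E_FAT = 1.0e3              # Pa, placeholder
    E_FIBROGLANDULAR = 10.0e3  # Pa, placeholder

    def element_youngs_modulus(fibroglandular_fraction):
        """fibroglandular_fraction: per-element density values in [0, 1]."""
        f = np.clip(np.asarray(fibroglandular_fraction, dtype=float), 0.0, 1.0)
        return (1.0 - f) * E_FAT + f * E_FIBROGLANDULAR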
Fast interactive exploration of 4D MRI flow data
A. Hennemuth, O. Friman, C. Schumann, et al.
1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.
Intraoperative 3D stereo visualization for image-guided cardiac ablation
Mahdi Azizian, Rajni Patel
There are commercial products which provide 3D rendered volumes, reconstructed from electro-anatomical mapping and/or pre-operative CT/MR images of a patient's heart with tools for highlighting target locations for cardiac ablation applications. However, it is not possible to update the three-dimensional (3D) volume intraoperatively to provide the interventional cardiologist with more up-to-date feedback at each instant of time. In this paper, we describe the system we have developed for real-time three-dimensional stereo visualization for cardiac ablation. A 4D ultrasound probe is used to acquire and update a 3D image volume. A magnetic tracking device is used to track the distal part of the ablation catheter in real time and a master-slave robot-assisted system is developed for actuation of a steerable catheter. Three-dimensional ultrasound image volumes go through some processing to make the heart tissue and the catheter more visible. The rendered volume is shown in a virtual environment. The catheter can also be added as a virtual tool to this environment to achieve a higher update rate on the catheter's position. The ultrasound probe is also equipped with an EM tracker which is used for online registration of the ultrasound images and the catheter tracking data. The whole augmented reality scene can be shown stereoscopically to enhance depth perception for the user. We have used transthoracic echocardiography (TTE) instead of the conventional transoesophageal (TEE) or intracardiac (ICE) echocardiogram. A beating heart model has been used to perform the experiments. This method can be used both for diagnostic and therapeutic applications as well as training interventional cardiologists.
Image Segmentation and Registration
A novel class of machine-learning-driven real-time 2D/3D tracking methods: texture model registration (TMR)
Philipp Steininger, Markus Neuner, Karl Fritscher, et al.
We present a novel view on 2D/3D image registration by introducing a generic algorithmic framework that is based on supervised machine learning (SML). First and foremost, this class of algorithms, referred to as texture model registration (TMR), aims at making 2D/3D registration applicable for time-critical image guided medical procedures. TMR methods are two-stage. In a first offline pre-computational stage, a prediction rule is derived from a pre-interventional 3D image and corresponding geometric constraints. This is achieved by computing digitally reconstructed radiographs, pre-processing them, extracting their texture, and applying SML methods. In a second online stage, the inferred rule is used for predicting the spatial rigid transformation of unseen intra-interventional 2D images. A first simple concrete TMR implementation, referred to as TMR-PCR, is introduced. This approach involves principal component regression (PCR) and simple intermediate pre-processing steps. Using TMR-PCR, first experimental results on five clinical IGRT 3D data sets and synthetic intra-interventional images are presented. The implementation showed an average registration rate of 48 Hz over 40000 registrations, and succeeded in the majority of cases with a mean target registration error smaller than 2 mm. Finally, the potential and characteristics of the proposed methodical framework are discussed.
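The offline/online split of a PCR-based approach can be illustrated with off-the-shelf tools. The sketch below uses scikit-learn (our choice, not the authors') and assumes DRR texture features are already extracted into a flat feature vector per training sample; it is a generic illustration of principal component regression, not the TMR-PCR implementation.

    # Illustrative sketch of a TMR-PCR-style pipeline: offline, fit a principal
    # component regression from DRR features to rigid pose parameters; online,
    # predict the pose of an unseen intra-interventional image.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def train_pcr(drr_features, poses, n_components=20):
        """drr_features: (n_samples, n_pixels); poses: (n_samples, 6)."""
        pca = PCA(n_components=n_components).fit(drr_features)
        reg = LinearRegression().fit(pca.transform(drr_features), poses)
        return pca, reg

    def predict_pose(pca, reg, image_features):
        """Return predicted [tx, ty, tz, rx, ry, rz] for one feature vector."""
        scores = pca.transform(image_features.reshape(1, -1))
        return reg.predict(scores)[0]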
Uncertainty propagation and analysis of image-guided surgery
Amber L. Simpson, Burton Ma, Randy E. Ellis, et al.
A successful image-guided surgical intervention requires accurate measurement of coordinate systems. Uncertainty is introduced every time a pose is measured by the optical tracking system. When we transform a measured pose into a different coordinate system, the covariance (which encodes the uncertainty of the pose) must be propagated to this new coordinate system. In this paper, we describe a method for propagating covariances estimated from registration, tracking, and instrument calibration into the tip of the surgical tool. This is clinically important, since it is at the tool tip that the clinician cares about uncertainty. We demonstrate that the propagation method, which is computed in real time as the tool moves through space, reliably computes the propagated covariance by comparing our estimate to true covariances from Monte Carlo simulations.
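First-order propagation of a 6-DOF pose covariance to the tool tip can be written with a small Jacobian. The sketch below is a generic illustration of that idea (covariance ordering and linearization are our assumptions), not the paper's specific propagation method.

    # Hedged sketch of first-order covariance propagation from a tracked tool pose
    # (small rotation/translation errors) to the calibrated tool tip position.
    import numpy as np

    def skew(v):
        return np.array([[0, -v[2], v[1]],
                         [v[2], 0, -v[0]],
                         [-v[1], v[0], 0]], dtype=float)

    def tip_covariance(R, tip_in_tool, pose_cov):
        """
        R           : 3x3 tracked rotation of the tool body
        tip_in_tool : 3-vector, calibrated tip offset in tool coordinates
        pose_cov    : 6x6 covariance of small pose errors [rotation; translation]
        """
        p = R @ tip_in_tool
        J = np.hstack([-skew(p), np.eye(3)])   # d(tip)/d(rotation, translation)
        return J @ pose_cov @ J.T              # 3x3 covariance at the tool tip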
Image-based global registration system for bronchoscopy guidance
Rahul Khare, William E. Higgins
Previous studies have shown that bronchoscopy guidance systems improve accuracy and reduce skill variation among physicians during bronchoscopy. In the past, we presented an image-based bronchoscopy guidance system that has been extensively validated in live bronchoscopic procedures. However, this system cannot actively recover from adverse events, such as patient coughing or dynamic airway collapses. After such events, the bronchoscope position is recovered only by moving back to a previously seen and easily identifiable bifurcation such as the main carina. Furthermore, the system requires an attending technician to closely follow the physician's movement of the bronchoscope to avoid misguidance. Also, when the physician is forced to advance the bronchoscope across multiple bifurcations, the system is not able to detect faulty maneuvers. We propose two system-level solutions. The first solution is a system-level guidance strategy that incorporates a global-registration algorithm to provide the physician with updated navigational and guidance information during bronchoscopy. The system can handle general navigation to a region of interest (ROI), as well as adverse events, and it requires minimal commands so that it can be directly controlled by the physician. The second solution visualizes the global picture of all the bifurcations and their relative orientations in advance and suggests the maneuvers needed by the bronchoscope to approach the ROI. Guided bronchoscopy results using human airway-tree phantoms demonstrate the potential of the two solutions.
High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery
Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm, integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration (e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively). The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.
A novel hybrid model for deformable image registration in abdominal procedures
Xishi Huang, Paul S. Babyn, Thomas Looi, et al.
We propose a novel neuro-fuzzy hybrid transformation model for deformable image registration in intra-operative image guided procedures involving large soft tissue deformation. The hybrid model consists of two parts: a physics-based model and a mathematical approximation model. The physics-based model is based on elastic solid mechanics to model major deformation patterns of the central part of organs, and the mathematical approximation model depicts the deformation of the residual part along organ boundary. A neuro-fuzzy technique is employed to seamlessly integrate the two parts into a unified hybrid model. Its unique feature is to incorporate domain knowledge of soft tissue deformation patterns and significantly reduce the number of transformation parameters. We demonstrate the effectiveness of our hybrid model to register liver magnetic resonance (MR) images in human subject study. This technique has the potential to significantly improve intra-operative image guidance in abdominal and thoracic procedures.
Learning distance function for regression-based 4D pulmonary trunk model reconstruction estimated from sparse MRI data
Dime Vitanovski, Alexey Tsymbal, Razvan Ionasec, et al.
Congenital heart defect (CHD) is the most common birth defect and a frequent cause of death for children. Tetralogy of Fallot (ToF) is the most often occurring CHD, affecting in particular the pulmonary valve and trunk. Emerging interventional methods enable percutaneous pulmonary valve implantation, which constitutes an alternative to open heart surgery. While minimally invasive methods become common practice, imaging and non-invasive assessment tools become crucial components in the clinical setting. Cardiac computed tomography (CT) and cardiac magnetic resonance imaging (cMRI) are techniques with complementary properties and the ability to acquire multiple non-invasive and accurate scans required for advanced evaluation and therapy planning. In contrast to CT, which covers the full 4D information over the cardiac cycle, cMRI often acquires partial information, for example only one 3D scan of the whole heart in the end-diastolic phase and two 2D planes (long and short axes) over the whole cardiac cycle. The data acquired in this way is called sparse cMRI. In this paper, we propose a regression-based approach for the reconstruction of the full 4D pulmonary trunk model from sparse MRI. The reconstruction approach is based on learning a distance function between the sparse MRI which needs to be completed and the 4D CT data with the full information used as the training set. The distance is based on the intrinsic Random Forest similarity which is learnt for the corresponding regression problem of predicting coordinates of unseen mesh points. Extensive experiments performed on 80 cardiac CT and MR sequences demonstrated the average speed of 10 seconds and accuracy of 0.1053 mm mean absolute error for the proposed approach. Using the case retrieval workflow and local nearest neighbour regression with the learnt distance function appears to be competitive with respect to "black box" regression with immediate prediction of coordinates, while providing transparency to the predictions made.
Lung
Real-time method for bronchoscope motion measurement and tracking
Duane C. Cornish, William E. Higgins
Bronchoscopy-guidance systems have been shown to improve the success rate of bronchoscopic procedures. A key technical cornerstone of bronchoscopy-guidance systems is the synchronization between the virtual world, derived from a patient's three-dimensional (3D) multidetector computed-tomography (MDCT) scan, and the real world, derived from the bronchoscope video during a live procedure. Two main approaches for synchronizing these worlds exist: electromagnetic navigation bronchoscopy (ENB) and image-based bronchoscopy. ENB systems require considerable extra hardware, and both approaches have drawbacks that hinder continuous robust guidance. In addition, they both require an attending technician to be present. We propose a technician-free strategy that enables real-time guidance of bronchoscopy. The approach uses measurements of the bronchoscope's movement to predict its position in 3D virtual space. To achieve this, a bronchoscope model, defining the device's shape in the airway tree to a given point p, provides an insertion depth to p. In real time, our strategy compares an observed bronchoscope insertion depth and roll angle, measured by an optical sensor, to precalculated insertion depths along a predefined route in the virtual airway tree. This leads to a prediction of the bronchoscope's location and orientation. To test the method, experiments involving a PVC-pipe phantom and a human airway-tree phantom verified the bronchoscope models and the entire method, respectively. The method has considerable potential for improving guidance robustness and simplicity over other bronchoscopy-guidance systems.
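The core lookup described here, matching an observed insertion depth to precalculated depths along a planned route, can be sketched very simply. The data layout, names, and the way roll is applied in the sketch below are our assumptions for illustration only.

    # Minimal sketch of predicting bronchoscope pose from measured insertion
    # depth and roll by nearest-depth lookup along a precalculated route.
    import numpy as np

    def predict_pose_on_route(observed_depth, observed_roll, route_depths, route_poses):
        """
        route_depths : (N,) precalculated insertion depths along the planned route
        route_poses  : (N, 6) corresponding virtual-space positions/orientations,
                       with the last component assumed to be roll about the scope axis
        """
        i = int(np.argmin(np.abs(route_depths - observed_depth)))
        pose = route_poses[i].copy()
        pose[5] += observed_roll        # apply the optically measured roll
        return pose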
Surface modeling and segmentation of the 3D airway wall in MSCT
Margarete Ortner, Catalin Fetita, Pierre-Yves Brillet, et al.
Airway wall remodeling in asthma and chronic obstructive pulmonary disease (COPD) is a well-known indicator of the pathology. In this context, current clinical studies aim for establishing the relationship between the airway morphological structure and its function. Multislice computed tomography (MSCT) allows morphometric assessment of airways, but requires dedicated segmentation tools for clinical exploitation. While most of the existing tools are limited to cross-section measurements, this paper develops a fully 3D approach for airway wall segmentation. Such approach relies on a deformable model which is built up as a patient-specific surface model at the level of the airway lumen and deformed to reach the outer surface of the airway wall. The deformation dynamics obey a force equilibrium in a Lagrangian framework constrained by a vector field which avoids model self-intersections. The segmentation result allows a dense quantitative investigation of the airway wall thickness with a deeper insight at bronchus subdivisions than classic cross-section methods. The developed approach has been assessed both by visual inspection of 2D cross-sections, performed by two experienced radiologists on clinical data obtained with various protocols, and by using a simulated ground truth (pulmonary CT image model). The results confirmed a robust segmentation in intra-pulmonary regions with an error in the range of the MSCT image resolution and underlined the interest of the volumetric approach versus purely 2D methods.
Evaluation of electromagnetically tracked transbronchial needle aspiration in a ventilated porcine lung
Ingmar Gergel, Ralf Tetzlaff, Hans-Peter Meinzer, et al.
Transbronchial needle aspiration (TBNA) is a common procedure to collect tissue samples from the inside of the lung for diagnostic use. However, the main drawback of the procedure is that it has to be performed blindly, because the biopsy target region is behind the bronchial wall and hence not within the field of view of the bronchoscope. Thus, the diagnostic yield rate is low. To increase the success rate of TBNA biopsy, an electromagnetically trackable TBNA needle has been introduced. Nevertheless, the introduced prototype TBNA instrument was evaluated in a rigid rubber phantom without taking respiratory motion into account. The purpose of this study is to present a new TBNA needle in which the electromagnetic sensor is directly integrated into the needle and to assess its performance in a regularly ventilated lung. Using our previously presented navigation system, seven TBNA interventions were performed in a porcine lung during regular respiratory lung movement, and a control computed tomography scan was acquired for each. We evaluated the tracking accuracy of the electromagnetically tracked needle during the entire respiratory cycle for each intervention. The newly developed TBNA needle operated successfully throughout all seven interventions. According to the results, our electromagnetic TBNA tracking system is a promising approach to increase the TBNA biopsy success rate.
On scale invariant features and sequential Monte Carlo sampling for bronchoscope tracking
This paper presents an improved bronchoscope tracking method for bronchoscopic navigation using scale invariant features and sequential Monte Carlo sampling. Although image-based methods are widely discussed in the community of bronchoscope tracking, they are still limited to characteristic information such as bronchial bifurcations or folds and cannot automatically resume the tracking procedure after failures, which usually result from problematic bronchoscopic video frames or airway deformation. To overcome these problems, we propose a new approach that integrates scale invariant feature-based camera motion estimation into sequential Monte Carlo sampling to achieve accurate and robust tracking. In our approach, sequential Monte Carlo sampling is employed to recursively estimate the posterior probability densities of the bronchoscope camera motion parameters according to the observation model based on scale invariant feature-based camera motion recovery. We evaluate our proposed method on patient datasets. Experimental results illustrate that our proposed method can track a bronchoscope more accurately and robustly than the current state-of-the-art method, increasing the tracking performance by 38.7% without using an additional position sensor.
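A generic sequential Monte Carlo (particle filter) step of the kind this method builds on is sketched below. The observation likelihood is a placeholder for the scale-invariant-feature-based term described in the abstract; all names and the resampling threshold are our assumptions.

    # Generic particle filter step for 6-DOF camera motion: predict, weight by an
    # observation likelihood, and resample when the effective sample size drops.
    import numpy as np

    def particle_filter_step(particles, weights, motion_noise, likelihood_fn, frame):
        # predict: diffuse each 6-DOF camera pose hypothesis
        particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
        # update: weight by how well each hypothesis explains the current frame
        weights = weights * np.array([likelihood_fn(p, frame) for p in particles])
        weights = weights / weights.sum()   # assumes at least one nonzero likelihood
        # resample when the effective sample size collapses
        if 1.0 / (weights ** 2).sum() < 0.5 * len(weights):
            idx = np.random.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights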
Keynote and Ultrasound Guided Intervention
Section-thickness profiling for brachytherapy ultrasound guidance
Mohammad Peikari, Thomas Kuiran Chen, Everette C. Burdette, et al.
Purpose: Ultrasound (US) elevation beamwidth causes a certain type of image artifact around the anechoic areas of the tissue. It is generally assumed that the US image is of zero thickness, which contradicts the fact that the acoustic beam can only be mechanically focused at a single depth, resulting in a finite, non-uniform elevation beamwidth. We suspect that elevation beamwidth artifacts contribute to target reconstruction error in computer-assisted interventions. This paper introduces a method for characterization of the beamwidth for transrectal ultrasound (TRUS) used in prostate brachytherapy. In particular, we measure how the US section-thickness varies along the beam's axial depth. Method: We developed a beam-profiling device (a TRUS-bridge phantom) specifically tailored for standard brachytherapy ultrasound imaging systems to generate a complete section-thickness profile of a given TRUS transducer. The device was designed in CAD software and prototyped by a 3D printer. Result: The experimental results demonstrated that the TRUS beam in the elevation direction is focused close to the transducer, and theoretically the transducer would provide better elevational resolution within that range. Conclusion: We presented a beam-profiling phantom to measure the section-thickness of a transrectal ultrasound transducer for operating room use. However, there are some limitations which need to be addressed, for example, phantom sterilization and the speed of sound in the current experimental medium, which is not the same as that of biological tissues.
Neuro
Momentum-based morphometric analysis with application to Parkinson's disease
Jingyun Chen, Ali R. Khan, Martin J. McKeown, et al.
We apply the initial momentum shape representation of diffeomorphic metric mapping from a template region of interest (ROI) to a given ROI as a morphometric marker in Parkinson's disease. We used a three-step segmentation-registration-momentum process to derive feature vectors from ROIs in a group of 42 subjects consisting of 19 Parkinson's Disease (PD) subjects and 23 normal control (NC) subjects. Significant group differences between PD and NC subjects were detected in four basal ganglia structures including the caudate, putamen, thalamus and globus pallidus. The magnitude of regionally significant between-group differences detected ranged between 34-75%. Visualization of the differing structural deformation patterns between groups revealed that some parts of the basal ganglia structures actually hypertrophy, presumably as a compensatory response to more widespread atrophy. Our results of both hypertrophy and atrophy in the same structures further demonstrate the importance of morphological measures as opposed to overall volume in the assessment of neurodegenerative disease.
Potential predictors for the amount of intra-operative brain shift during deep brain stimulation surgery
Ryan Datteri, Srivatsan Pallavaram, Peter E. Konrad, et al.
A number of groups have reported on the occurrence of intra-operative brain shift during deep brain stimulation (DBS) surgery. This has a number of implications for the procedure, including an increased chance of intra-cranial bleeding and complications due to the need for more exploratory electrodes to account for the brain shift. It has been reported that the amount of pneumocephalus, or air invasion into the cranial cavity due to the opening of the dura, correlates with intra-operative brain shift. Therefore, pre-operatively predicting the amount of pneumocephalus expected during surgery is of interest toward accounting for brain shift. In this study, we used 64 DBS patients who received bilateral electrode implantations and had a post-operative CT scan acquired immediately after surgery (CT-PI). For each patient, the volumes of the pneumocephalus, left ventricle, right ventricle, third ventricle, white matter, grey matter, and cerebral spinal fluid were calculated. The pneumocephalus was calculated from the CT-PI utilizing a region growing technique that was initialized with an atlas-based image registration method. A multi-atlas-based image segmentation method was used to segment out the ventricles of each patient. The Statistical Parametric Mapping (SPM) software package was utilized to calculate the volumes of the cerebral spinal fluid (CSF), white matter and grey matter. The volume of individual structures had a moderate correlation with pneumocephalus. Utilizing a multi-linear regression between the volume of the pneumocephalus and the statistically relevant individual structures, a Pearson's coefficient of r = 0.4123 (p = 0.0103) was found. This study shows preliminary results that could be used to develop a method to predict the amount of pneumocephalus ahead of surgery.
Simulation of brain tumor resection in image-guided neurosurgery
Xiaoyao Fan, Songbai Ji, Kathryn Fontaine, et al.
Preoperative magnetic resonance images are typically used for neuronavigation in image-guided neurosurgery. However, intraoperative brain deformation (e.g., as a result of gravity, loss of cerebrospinal fluid, retraction, resection, etc.) significantly degrades the accuracy of image guidance, and must be compensated for in order to maintain sufficient accuracy for navigation. Biomechanical finite element models are effective techniques that assimilate intraoperative data and compute whole-brain deformation from which to generate model-updated MR images (uMR) to improve accuracy in intraoperative guidance. To date, most studies have focused on early surgical stages (i.e., after craniotomy and durotomy), whereas simulation of more complex events at later surgical stages has remained a challenge for biomechanical models. We have developed a method to simulate partial or complete tumor resection that incorporates intraoperative volumetric ultrasound (US) and stereovision (SV), and the resulting whole-brain deformation was used to generate uMR. The 3D ultrasound and stereovision systems are complementary to each other because they capture features deeper in the brain beneath the craniotomy and at the exposed cortical surface, respectively. In this paper, we illustrate the application of the proposed method to simulate brain tumor resection at three temporally distinct surgical stages throughout a clinical surgery case using sparse displacement data obtained from both the US and SV systems. We demonstrate that our technique is feasible to produce uMR that agrees well with intraoperative US and SV images after dural opening, after partial tumor resection, and after complete tumor resection. Currently, the computational cost to simulate tumor resection can be up to 30 min because of the need for re-meshing and the trial-and-error approach to refine the amount of tissue resection. However, this approach introduces minimal interruption to the surgical workflow, which suggests the potential for its clinical application with further improvement in computational efficiency.
Optimizing nonrigid registration performance between volumetric true 3D ultrasound images in image-guided neurosurgery
Songbai Ji, Xiaoyao Fan, David W. Roberts, et al.
Compensating for brain shift as surgery progresses is important to ensure sufficient accuracy in patient-to-image registration in the operating room (OR) for reliable neuronavigation. Ultrasound has emerged as an important and practical imaging technique for brain shift compensation, either by itself or through computational modeling that estimates whole-brain deformation. Using volumetric true 3D ultrasound (3DUS), it is possible to nonrigidly (e.g., based on B-splines) register two temporally different 3DUS images directly to generate feature displacement maps for data assimilation in the biomechanical model. Because of the large amount of data and number of degrees-of-freedom (DOFs) involved, however, a significant computational cost may be required, which can adversely influence the clinical feasibility of the technique for efficiently generating model-updated MR (uMR) in the OR. This paper parametrically investigates three B-splines registration parameters and their influence on computational cost and registration accuracy: the number of grid nodes along each direction, the floating image volume down-sampling rate, and the number of iterations. A simulated rigid body displacement field was employed as a ground truth against which the accuracy of displacements generated from the B-splines nonrigid registration was compared. A set of optimal parameters was then determined empirically that results in a registration computational cost of less than 1 min and a sub-millimetric accuracy in displacement measurement. These resulting parameters were further applied to a clinical surgery case to demonstrate their practical use. Our results indicate that the optimal set of parameters results in sufficient accuracy and computational efficiency in model computation, which is important for future application of the overall biomechanical modeling to generate uMR for image guidance in the OR.
Improved geometric variables for predicting disturbed flow at the normal carotid bifurcation
Recent work from our group has shown the primacy of the bifurcation area ratio and tortuosity in determining the amount of disturbed flow at the carotid bifurcation, believed to be a local risk factor for carotid atherosclerosis. We have also presented fast and reliable methods for extraction of geometry from routine 3D contrast-enhanced magnetic resonance angiography, as the necessary step along the way for large-scale trials of such local risk factors. In the present study, we refine our original geometric variables to better reflect the underlying fluid mechanical principles. Flaring of the bifurcation, leading to flow separation, is defined by the maximum relative expansion of the common carotid artery (CCA) proximal to the bifurcation apex. The beneficial effect of curvature on flow inertia, via its suppression of flow separation, is now characterized by the tortuosity of the CCA as it enters the flare region. Based on data from 50 normal carotid bifurcations, multiple linear regressions of these new independent geometric predictors against the dependent disturbed flow burden reveal adjusted R2 values approaching 0.5, better than the values closer to 0.3 achieved using the original variables. The excellent scan-rescan reproducibility demonstrated for our earlier geometric variables is shown to be preserved for the new definitions. Improved prediction of disturbed flow by robust and reproducible vascular geometry offers a practical pathway to large-scale studies of local risk factors in atherosclerosis.
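Generic versions of these two kinds of geometric variables are easy to write down from centerline and cross-sectional area data. The definitions in the sketch below are textbook-style illustrations (relative expansion and arc-length-over-chord tortuosity), not the exact refined variables of the paper.

    # Hedged sketch of generic flare and tortuosity metrics computed from
    # cross-sectional areas and centerline points; illustrative definitions only.
    import numpy as np

    def relative_flare(cross_sectional_areas, reference_area):
        """Maximum expansion relative to a reference area (e.g., proximal CCA)."""
        return (np.max(cross_sectional_areas) - reference_area) / reference_area

    def tortuosity(centerline_points):
        """Arc length over chord length minus one, for an (N, 3) centerline."""
        diffs = np.diff(centerline_points, axis=0)
        arc_length = np.linalg.norm(diffs, axis=1).sum()
        chord = np.linalg.norm(centerline_points[-1] - centerline_points[0])
        return arc_length / chord - 1.0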
Clinical study of model-based blood flow quantification on cerebrovascular data
A. Groth, I. Wächter-Stehle, O. Brina, et al.
Diagnosis and treatment decisions for cerebrovascular diseases are currently based on structural information such as the endovascular lumen. In future, clinical diagnosis will increasingly be based on functional information, which gives direct information about the physiological parameters and, hence, is a direct measure of the severity of the pathology. In this context, an important functional quantity is the volumetric blood flow over time. The proposed flow quantification method uses contrasted X-ray images from cerebrovascular interventions and a model of contrast agent dispersion to estimate the flow parameters from the spatial and temporal development of the contrast agent concentration through the vascular system. To evaluate the model-based blood flow quantification under realistic circumstances, dedicated cerebrovascular data were acquired during clinical interventions. To this end, a clinical protocol for this novel procedure was defined and optimized. For verification of the measured flow results, ultrasound Doppler measurements were performed as reference measurements. The clinical data available so far indicate the ability of the proposed flow model to explain the in-vivo transport of contrast agent in blood. The flow quantification results show good correspondence of flow waveform and mean volumetric flow rate with the ultrasound measurements performed before or after angiography.
Estimating blood flow velocity in angiographic image data
Clemens M. Hentschke, Steffen Serowy, Gábor Janiga, et al.
We propose a system to estimate blood flow velocity in angiographic image data for patient-specific blood flow simulations. Angiographies are acquired routinely for diagnosis and before treatment of vascular diseases. Projective blood flow is measured in digital subtraction X-ray angiography (2D-DSA) images by tracking contrast agent propagation. Spatial information is added by re-projecting 2D centerline pixels to the reconstructed 3D X-ray rotation angiography (3D-RA) data of the same subject. Ambiguities caused by occluding vessels from the virtual viewpoint of the acquired 2D-DSA image are resolved by a graph-based approach. The blood flow velocity can be used as a boundary condition for exact blood flow simulations that can help physicians understand the hemodynamics of the vasculature. Our focus is on analyzing cerebral angiographic data. We performed several experiments with phantom and patient data that demonstrated the accuracy and the functionality of our method. We experimentally evaluated the projective flow estimation method and the re-projection method. We measured mean deviations from the ground truth of between 11% and 15.7% for phantom data. We also showed the ability of our method to produce plausible results with patient data.
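The projective flow measurement, tracking how far the contrast front has travelled along the centerline in each frame, reduces to fitting distance against time. The sketch below is a simplified illustration of that idea under the assumption that front positions have already been extracted; it is not the authors' tracking algorithm.

    # Illustrative sketch of bolus-tracking velocity estimation: fit
    # distance-along-centerline versus acquisition time; the slope is the mean
    # propagation velocity. Input extraction is assumed to be done elsewhere.
    import numpy as np

    def bolus_velocity(front_arclengths_mm, frame_times_s):
        """Return mean contrast-front velocity in mm/s from per-frame positions."""
        slope, _ = np.polyfit(frame_times_s, front_arclengths_mm, 1)
        return slope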
Cardiac Applications
Automatic detection of contrast injection on fluoroscopy and angiography for image guided trans-catheter aortic valve implantations (TAVI)
Rui Liao, Wei You, Michelle Yan, et al.
Presentation of detailed anatomical structures via 3-D models helps navigation and deployment of the prosthetic valve in TAVI procedures. Fast and automatic contrast detection in the aortic root on X-ray images facilitates a seamless workflow to utilize the 3-D models by triggering 2-D/3-D registration automatically when motion compensation is needed. In this paper, we propose a novel method for automatic detection of contrast injection in the aortic root on fluoroscopic and angiographic sequences. The proposed method is based on histogram analysis and likelihood ratio test, and is robust to variations in the background, the density and volume of the injected contrast, and the size of the aorta. The performance of the proposed algorithm was evaluated on 26 sequences from 5 patients and 3 clinical sites, with 16 out of 17 contrast injections correctly detected and zero false detections. The proposed method is of general form and can be extended for detection of contrast injection in other organs and/or applications.
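A histogram-based likelihood ratio test of the general kind mentioned in the abstract can be sketched as comparing the log-likelihood of a frame's intensity histogram under "contrast" and "no contrast" reference distributions. The reference models, threshold, and names below are our assumptions for illustration; the published detector is more elaborate.

    # Hedged sketch of a histogram-based likelihood ratio test for detecting
    # contrast injection in a fluoroscopic frame; illustrative only.
    import numpy as np

    def log_likelihood(hist_counts, reference_pmf, eps=1e-9):
        return float(np.sum(hist_counts * np.log(reference_pmf + eps)))

    def contrast_detected(frame_hist, pmf_contrast, pmf_background, threshold=0.0):
        llr = (log_likelihood(frame_hist, pmf_contrast)
               - log_likelihood(frame_hist, pmf_background))
        return llr > threshold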
A patient-specific visualization tool for comprehensive analysis of coronary CTA and perfusion MRI data
H. A. Kirisli, V. Gupta, S. Kirschbaum, et al.
Cardiac magnetic resonance perfusion imaging (CMR) and computed tomography angiography (CTA) are widely used to assess heart disease. CMR is used to measure the global and regional myocardial function and to evaluate the presence of ischemia; CTA is used for diagnosing coronary artery disease, such as coronary stenoses. Nowadays, the hemodynamic significance of coronary artery stenoses is determined subjectively by combining information on myocardial function with assumptions on coronary artery territories. As the anatomy of coronary arteries varies greatly between individuals, we developed a patient-specific tool for relating CTA and perfusion CMR data. The anatomical and functional information extracted from CTA and CMR data are combined into a single frame of reference. Our graphical user interface provides various options for visualization. In addition to the standard perfusion Bull's Eye Plot (BEP), it is possible to overlay a 2D projection of the coronary tree on the BEP, to add a 3D coronary tree model and to add a 3D heart model. The perfusion BEP, the 3D-models and the CTA data are also interactively linked. Using the CMR and CTA data of 14 patients, our tool directly established a spatial correspondence between diseased coronary artery segments and myocardial regions with abnormal perfusion. The location of coronary stenoses and perfusion abnormalities were visualized jointly in 3D, thereby facilitating the study of the relationship between the anatomic causes of a blocked artery and the physiological effects on the myocardial perfusion. This tool is expected to improve diagnosis and therapy planning of early-stage coronary artery disease.
Incorporating a Gaussian model at the catheter tip for improved registration of preoperative surface models
M. E. Rettmann, D. R. Holmes III, D. L. Packer, et al.
Atrial fibrillation is a common cardiac arrhythmia in which aberrant electrical activity causes the atria to quiver, resulting in irregular beating of the heart. Catheter ablation therapy is becoming increasingly popular in treating atrial fibrillation, a procedure in which an electrophysiologist guides a catheter into the left atrium and creates radiofrequency lesions to stop the arrhythmia. Typical visualization tools include bi-plane fluoroscopy, 2-D ultrasound, and electroanatomic maps; however, recently there has been increased interest in incorporating preoperative surface models into the procedure. Typical strategies for registration include landmark-based and surface-based methods. Drawbacks of these approaches include difficulty in accurately locating corresponding landmark pairs and the time required to sample surface points with a catheter. In this paper, we describe a new approach which models the catheter tip as a Gaussian kernel and eliminates the need to collect surface points by instead using the stream of continuously tracked catheter points. We demonstrate the feasibility of this technique with a left atrial phantom model and compare the results with a standard surface-based approach.
Patient specific optimal catheter selection for right coronary artery
Sami ur Rahman, Stefan Wesarg, Wolfram Völker
During coronary artery angiography, a catheter is used to inject a contrast dye into the coronary arteries. Due to the anatomical variation of the aorta and the coronary arteries between individuals, one common catheter cannot be used for all patients. Cardiologists test different catheters on a patient and select the best catheter according to the patient's anatomy. This procedure is time consuming, and the excess radiation exposure carries a small cancer risk. To overcome these problems, we propose a computer-aided catheter selection procedure. In this paper we present our approach for angiography of the Right Coronary Artery (RCA). Our approach involves segmenting the aorta and coronary arteries, finding the centerline, and computing the Curve Angle (CA) and Curve Length (CL) between the aorta and the coronary arteries. We then compute the CA and CL of candidate catheters and suggest the catheter whose CA and CL are closest to those of the patient's aorta and coronary arteries. This solution avoids testing many catheters during catheterization, and the cardiologist receives a recommendation for the optimal catheter prior to the intervention.
Data fusion for catheter tracking using Kalman filtering: applications in robot-assisted catheter insertion
Mahdi Azizian, Rajni Patel
X-ray image-guided angioplasty is a minimally invasive procedure that involves the insertion of a catheter into a blood vessel to remove blockages to blood flow. There are several issues associated with conventional angioplasty that pose risks for the patient (damage to blood vessels, dislodged plaques, etc.) and difficulties for the clinician (X-ray exposure, fatigue, etc.). Autonomous or semi-autonomous robot-assisted catheter insertion is a solution that can reduce these problems substantially. To perform autonomous catheter insertion, closed-loop position control of the distal tip of the catheter is required during insertion, which in turn requires accurate real-time position feedback. We have developed a real-time image processing algorithm for catheter tip position tracking that performs acceptably but is sensitive to X-ray image artifacts caused by bones and dense tissues. A magnetic tracking system (MTS) is another modality that has been used for catheter tip position tracking, but it is sensitive to external electromagnetic interference and ferromagnetic material. Combining the measurement data provided by both imaging and magnetic sensors can compensate for the deficiencies of each and improve the robustness of catheter tip position tracking. We have developed a Kalman filter based sensor fusion scheme to overcome the deficiencies of both methods and achieve reliable real-time tracking of the catheter tip. Experiments have been performed by inserting a guide catheter into a model of the vasculature. The method has been tested in the presence of occlusions in the images and of electromagnetic interference.
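A minimal sketch of such a Kalman-filter fusion is given below, assuming a constant-velocity tip model and Gaussian measurement noise for the image-based and magnetic position readings; the noise covariances and update rate are illustrative assumptions rather than the authors' tuned values.

```python
# Minimal constant-velocity Kalman filter fusing two 3-D position
# measurements (image-based and magnetic tracking) of the catheter tip.
import numpy as np

class TipKalmanFilter:
    def __init__(self, dt=1/30.0, q=1e-3, r_image=1.0, r_magnetic=2.0):
        # State: [x, y, z, vx, vy, vz]
        self.x = np.zeros(6)
        self.P = np.eye(6) * 10.0
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt          # constant-velocity model
        self.Q = np.eye(6) * q                    # assumed process noise
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.R_image = np.eye(3) * r_image        # assumed image-based noise
        self.R_magnetic = np.eye(3) * r_magnetic  # assumed magnetic-tracker noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z, R):
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

    def update_image(self, z):     # X-ray image-based tip position
        self._update(np.asarray(z, float), self.R_image)

    def update_magnetic(self, z):  # magnetic tracker tip position
        self._update(np.asarray(z, float), self.R_magnetic)
```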
Endoscopy and Laparoscopy
Real-time surface reconstruction from stereo endoscopic images for intraoperative registration
S. Röhl, S. Bodenstedt, S. Suwelack, et al.
Minimally invasive surgery is a medically complex discipline that can benefit greatly from computer assistance. One way to assist the surgeon is to blend useful information about the intervention into the surgical view using Augmented Reality. This information can be obtained during preoperative planning and integrated into a patient-tailored model of the intervention. Due to soft tissue deformation, intraoperative sensor data such as endoscopic images have to be acquired and non-rigidly registered with the preoperative model to adapt it to local changes. Here, we focus on a procedure that reconstructs the organ surface from stereo endoscopic images with millimeter accuracy in real time. It comprises stereo camera calibration, pixel-based correspondence analysis, 3D reconstruction and point cloud meshing. Accuracy, robustness and speed are evaluated with images from a test setting as well as with intraoperative images. We also present a workflow in which the reconstructed surface model is registered with a preoperative model using an optical tracking system. As a preliminary result, we show an initial overlay between an intraoperative and a preoperative surface model that leads to a successful rigid registration between these two models.
3D surface reconstruction for laparoscopic computer-assisted interventions: comparison of state-of-the-art methods
A. Groch, A. Seitel, S. Hempel, et al.
One of the main challenges related to computer-assisted laparoscopic surgery is the accurate registration of pre-operative planning images with the patient's anatomy. One popular approach for achieving this involves intraoperative 3D reconstruction of the target organ's surface with methods based on multiple-view geometry. The latter, however, require robust and fast algorithms for establishing correspondences between multiple images of the same scene. Recently, the first endoscope based on the Time-of-Flight (ToF) camera technique was introduced. It generates dense range images with high update rates by continuously measuring the run-time of intensity-modulated light. While this approach yielded promising results in initial experiments, the endoscopic ToF camera has not yet been evaluated in the context of related work. The aim of this paper was therefore to compare its performance with different state-of-the-art surface reconstruction methods on identical objects. For this purpose, surface data from a set of porcine organs as well as organ phantoms was acquired with four different cameras: a novel Time-of-Flight (ToF) endoscope, a standard ToF camera, a stereoscope, and a High Definition Television (HDTV) endoscope. The resulting reconstructed partial organ surfaces were then compared to corresponding ground-truth shapes extracted from computed tomography (CT) data using a set of local and global distance metrics. The evaluation suggests that the ToF technique has high potential as a means for intraoperative endoscopic surface registration.
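For the surface comparison step, a minimal sketch of local and global distance metrics is shown below, assuming both the reconstructed surface and the CT ground truth are available as point sets; the specific summary statistics are generic choices, not necessarily the exact metrics used in the paper.

```python
# Minimal sketch of surface-distance metrics between a reconstructed point
# cloud and a ground-truth CT surface given as N x 3 point arrays.
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_metrics(reconstructed_pts, ground_truth_pts):
    """Per-point nearest-neighbour distances plus common summary statistics."""
    tree = cKDTree(ground_truth_pts)
    d, _ = tree.query(reconstructed_pts)    # local distance for every point
    return {
        "mean": float(d.mean()),
        "rms": float(np.sqrt(np.mean(d ** 2))),
        "max": float(d.max()),              # one-sided Hausdorff-like distance
        "p95": float(np.percentile(d, 95)),
    }
```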
A real-time online video overlay navigation system for minimally invasive laparoscopic tumor resection
The purpose of this paper is to present a detailed description of our real-time navigation system for computer-assisted surgery. The system was developed with laparoscopic partial nephrectomies as a first application scenario. The main goal of the application is to enable tracking of the tumor position and orientation during surgery. Our system is based on ultrasound-to-CT registration and electromagnetic tracking. The basic idea is to process tracking information to generate an augmented reality (AR) visualization of a tumor model in the image of a laparoscopic camera. This enhances the surgeon's view of the current scene and thereby improves safety during the procedure. So far, we have applied our system in vitro in two phantom trials with a surgeon, which yielded promising results.
Constructing spherical panoramas of a bladder phantom from endoscopic video using bundle adjustment
Timothy D. Soper, John E. Chandler, Michael P. Porter, et al.
The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that would compile several minutes of cystoscopic video into a single panoramic image, covering the entire bladder, for review by a urologist at a later time or remote location. Global alignment of video frames is achieved by using a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the outer surface. The details of the software algorithms are presented here along with results from both a virtual cystoscopy and real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no knowledge of the intrinsic camera properties, such as focal length and radial distortion. In the discussion, we outline future work in development of the software as well as factors pertinent to clinical translation of this technology.
Comparison of two navigation system designs for flexible endoscopes using abdominal 3D ultrasound
Marcus Kaar, Rainer Hoffmann, Helmar Bergmann, et al.
This paper describes a navigation system for flexible endoscopes equipped with ultrasound scan heads. For navigation and needle biopsy procedures it provides additional oblique slices from preoperative computed tomography (CT) volumes, which are displayed with the corresponding endoscopic ultrasound (US) image. In contrast to similar systems, an additional abdominal 3D ultrasound image is used to achieve the required registration. Two different approaches are compared: the first method is based on direct inter-modal registration between the abdominal 3D ultrasound and the CT volume. The second method uses another 3D US scan taken preoperatively before the CT scan. Here, the CT is calibrated by means of an optical tracking system, and the transformation between the CT and the calibrated 3D US can be calculated without image registration. Before the intervention, a pre-interventional 3D US is registered intra-modally to the preoperative US. This second method proved to be the more robust and accurate procedure. For experimental studies a phantom has been developed which consists of a plastic tube inside a water tank. For error evaluation, small plastic spheres have been fixed around the tube at different distances. First results give an overall error of 3.9 mm for the first method, while the overall error for the intra-modal method amounted to 3.1 mm.
Evaluation of electronic biopsy for clinical diagnosis in virtual colonoscopy
Joseph Marino, Wei Du, Matthew Barish, et al.
Virtual colonoscopy provides techniques not available in optical colonoscopy, an exciting one being the ability to perform an electronic biopsy. An electronic biopsy image is created using ray-casting volume rendering of the CT data with a translucent transfer function that maps higher densities to red and lower densities to blue. The resulting image allows the physician to gain insight into the internal structure of polyps. Benign tissue and adenomas can be differentiated; the former appears homogeneously blue and the latter as irregular red structures. Although this technique is now common, is included with clinical systems, and has been used successfully for computer-aided detection, there has so far been no study evaluating the effectiveness of a physician using electronic biopsy to determine the pathological state of a polyp. We present here such a study, wherein an experienced radiologist ranked polyps based on electronic biopsy images from each scan alone (supine and prone), as well as from both combined. Our results show correct identification 77% of the time using prone or supine images alone, and 80% accuracy using both. Using ROC analysis based on this study with one reader and a modest sample size, the combined score is not significantly higher than that obtained using a single electronic biopsy image alone. However, our analysis indicates a trend of superiority for the combined ranking that deserves a follow-up confirmatory study with a larger sample and more readers. This study yields hope that an improved electronic biopsy technique could become a primary clinical diagnosis method.
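The translucent mapping can be sketched as a simple transfer function, assuming CT values in Hounsfield units and illustrative breakpoints and opacities; the exact transfer function used in clinical systems is not reproduced here.

```python
# Minimal sketch of an electronic-biopsy style colour/opacity transfer
# function: lower CT densities map to blue, higher densities to red.
# Breakpoints and opacity values are illustrative assumptions.
import numpy as np

def electronic_biopsy_transfer(hu_values, hu_low=-50.0, hu_high=150.0):
    """Map CT values (HU) to RGBA samples for ray-casting volume rendering."""
    t = np.clip((np.asarray(hu_values, float) - hu_low) / (hu_high - hu_low), 0, 1)
    rgba = np.zeros(t.shape + (4,))
    rgba[..., 0] = t                 # red increases with density
    rgba[..., 2] = 1.0 - t           # blue decreases with density
    rgba[..., 3] = 0.05 + 0.25 * t   # translucent; denser tissue slightly more opaque
    return rgba
```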
Orthopedic and Cranial Procedures
Closed-form inverse kinematics for intra-operative mobile C-arm positioning with six degrees of freedom
Lejing Wang, Rui Zou, Simon Weidert, et al.
For trauma and orthopedic surgery, maneuvering a mobile C-arm X-ray device into a desired position in order to acquire the right picture is a routine task. The precision and ease of use of C-arm positioning become even more important for advanced imaging techniques such as parallax-free X-ray image stitching. Standard mobile C-arms have only five degrees of freedom (DOF), which restricts their motion, since a general rigid motion in 3D Cartesian space has six DOF. We have proposed a method to model the kinematics of the mobile C-arm and operating table as an integrated 6DOF C-arm X-ray imaging system.1 This enables mobile C-arms to be positioned relative to the patient's table with six DOF in 3D Cartesian space. Moving mobile C-arms to a desired position and orientation requires finding the necessary joint values, which is an inverse kinematics problem. In this paper, we present closed-form solutions, i.e. analytic expressions obtained in an algebraic way, for the inverse kinematics problem of the 6DOF C-arm model. In addition, we implement a 6DOF C-arm system for interactive, radiation-free C-arm positioning based on continuous guidance from C-arm pose estimation. For this we employ a visual marker pattern attached under the operating table and a mobile C-arm system augmented by a video camera and mirror construction. In our experiment, repositioning the C-arm to a pre-defined pose in a phantom study demonstrates the practicality and accuracy of the developed 6DOF C-arm system.
Spectral-based 2D/3D X-ray to CT image rigid registration
M. Freiman, O. Pele, A. Hurvitz, et al.
We present a spectral-based method for the 2D/3D rigid registration of X-ray images to a CT scan. The method uses a Fourier-based representation to decompose the six-parameter rigid transformation problem into a two-parameter out-of-plane rotation problem and a four-parameter in-plane transformation problem. Preoperatively, a set of Digitally Reconstructed Radiographs (DRRs) is generated offline from the CT over the expected in-plane location ranges of the fluoroscopic X-ray imaging devices. Each DRR is transformed into an imaging-device in-plane-invariant feature space. Intraoperatively, a few 2D projections of the patient anatomy are acquired with an X-ray imaging device. Each projection is transformed into its in-plane-invariant representation. The out-of-plane parameters are first computed by maximizing the Normalized Cross-Correlation between the invariant representations of the DRRs and the X-ray images. Then, the in-plane parameters are computed with the phase correlation method based on the Fourier-Mellin transform. Experimental results on publicly available data sets show that our method can robustly estimate the out-of-plane parameters with an accuracy of 1.5° in less than 1 s for out-of-plane rotations of 10° or more, and perform the entire registration in less than 10 s.
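The in-plane translation recovery can be illustrated with a basic phase-correlation sketch as below; the full method additionally handles in-plane rotation and scale via a Fourier-Mellin (log-polar) step, which is omitted in this simplified example.

```python
# Minimal phase-correlation sketch for recovering an in-plane translation
# between a DRR and an X-ray image of the same size.
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Return the (row, col) shift that best aligns img_b to img_a."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=float)
    # wrap shifts larger than half the image size to negative offsets
    dims = np.array(corr.shape, dtype=float)
    wrap = shift > dims / 2
    shift[wrap] -= dims[wrap]
    return shift
```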
Intra-temporal facial nerve centerline segmentation for navigated temporal bone surgery
Eduard H. J. Voormolen, Marijn van Stralen, Peter A. Woerdeman, et al.
Approaches through the temporal bone require surgeons to drill away bone to expose a target skull base lesion while evading vital structures contained within it, such as the sigmoid sinus, jugular bulb, and facial nerve. We hypothesize that an augmented neuronavigation system that continuously calculates the distance to these structures and warns if the surgeon drills too close will aid in making safe surgical approaches. Contemporary image guidance systems lack an automated method to segment the inhomogeneous and complexly curved facial nerve. Therefore, we developed a segmentation method to delineate the intra-temporal facial nerve centerline semi-automatically from clinically available temporal bone CT images. Our method requires the user to provide the start and end points of the facial nerve in a patient's CT scan, after which it iteratively matches an active appearance model built from the shape and texture of forty facial nerves. Its performance was evaluated on 20 patients by comparison to our gold standard: manually segmented facial nerve centerlines. Our segmentation method delineates facial nerve centerlines with a maximum error along the whole trajectory of 0.40±0.20 mm (mean±standard deviation). These results demonstrate that our model-based segmentation method can robustly segment facial nerve centerlines. Next, we can investigate whether integration of this automated facial nerve delineation with a distance-calculating neuronavigation interface results in a system that can adequately warn surgeons during temporal bone drilling and effectively diminish the risk of iatrogenic facial nerve palsy.
Optimization of multi-image pose recovery of fluoroscope tracking (FTRAC) fiducial in an image-guided femoroplasty system
Wen P. Liu, Mehran Armand, Yoshito Otake, et al.
Percutaneous femoroplasty [1], or femoral bone augmentation, is a prospective alternative treatment for reducing the risk of fracture in patients with severe osteoporosis. We are developing a surgical robotics system that will assist orthopaedic surgeons in planning and performing a patient-specific augmentation of the femur with bone cement. This collaborative project, sponsored by the National Institutes of Health (NIH), has been the topic of previous publications [2],[3] from our group. This paper presents modifications to the pose recovery of a fluoroscope tracking (FTRAC) fiducial during our process of 2D/3D registration of intraoperative X-ray images to preoperative CT data. We show improved automation of the initial pose estimation as well as lower projection errors with the addition of a multi-image pose optimization step.
Insertion of electrode array using percutaneous cochlear implantation technique: a cadaveric study
Ramya Balachandran, Jason E. Mitchell, Jack Noble, et al.
Cochlear implantation is a surgical procedure for treating patients with hearing loss in which an electrode array is inserted into the cochlea. The traditional surgical approach requires drilling away a large portion of the bone behind the ear to provide anatomical reference and access to the cochlea. A minimally invasive technique, called percutaneous cochlear implantation (PCI), has been proposed that involves drilling a linear path from the lateral skull to the cochlea while avoiding vital structures, and inserting the implant through that drilled path. The steps required to achieve PCI safely include: placing three bone-implanted markers surrounding the ear, obtaining a CT scan, planning a surgical path to the cochlea avoiding vital anatomy, designing and constructing a microstereotactic frame that mounts on the markers and constrains the drill to the planned path, affixing the frame to the markers, using it to drill to the cochlea, and inserting the electrode through the drilled path. We present in this paper a cadaveric study demonstrating the PCI technique for inserting an electrode array into the cochlea on three cadaveric temporal bone specimens. A custom fixture, called a Microtable, which is a type of microstereotactic frame that can be constructed in less than five minutes, was fabricated for each specimen and used to reach the cochlea. The insertion was successfully performed on all three specimens. Post-insertion CT scans confirm the correct placement of the electrodes inside the cochlea without any damage to the facial nerve.
Image Guided Therapy II
Single camera closed-form real-time needle trajectory tracking for ultrasound
In ultrasound-guided needle insertion procedures, tracking the needle relative to the ultrasound image is beneficial for needle trajectory planning and guidance. A single-camera closed-form method is proposed for automatic real-time trajectory tracking with a low-cost camera mounted directly on the ultrasound transducer. The camera is calibrated to the ultrasound image coordinates. By mounting the camera on the transducer, issues of visual obstruction are reduced and tracking accuracy is increased compared to camera-tracking systems with a fixed camera. Compared to previous work with stereo cameras, a single camera further reduces cost, complexity and size, but requires a needle with known markings. The proposed solution uses the depth markings etched on many common needles (e.g., epidural needles). A fully automatic image processing method has been developed for real-time identification of the needle trajectory using a novel closed-form solution based on three identified markings and the camera's intrinsic calibration parameters. The trajectory of the needle relative to the ultrasound image is calculated and displayed. Validation compares the calculated intersection of the needle trajectory with the ultrasound image plane against the actual needle intersection depicted in the image. The overall error is 3.0 ± 2.6 mm for a low-cost 640×480 pixel USB camera.
Feature-based US to CT registration of the aortic root
Pencilla Lang, Elvis C. S. Chen, Gerard M. Guiraudon, et al.
A feature-based registration was developed to align biplane and tracked ultrasound images of the aortic root with a preoperative CT volume. In transcatheter aortic valve replacement, a prosthetic valve is inserted into the aortic annulus via a catheter. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to significant morbidity and mortality. Registration of pre-operative CT to transesophageal ultrasound and fluoroscopy images is a major step towards providing augmented image guidance for this procedure. The proposed registration approach uses an iterative closest point algorithm to register a surface mesh generated from CT to 3D US points reconstructed from a single biplane US acquisition, or multiple tracked US images. The use of a single simultaneous-acquisition biplane image eliminates reconstruction error introduced by cardiac gating and TEE probe tracking, creating potential for real-time intra-operative registration. A simple initialization procedure is used to minimize changes to operating room workflow. The algorithm is tested on images acquired from excised porcine hearts. Results demonstrate a clinically acceptable accuracy of 2.6 mm and 5 mm for tracked US to CT and biplane US to CT registration, respectively.
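A minimal point-to-point ICP sketch for this kind of US-to-CT surface alignment is given below, assuming a reasonable initialization and plain closest-point correspondences; it is not the authors' implementation and omits outlier handling.

```python
# Minimal ICP sketch: align reconstructed US points to a point set sampled
# from the CT-derived aortic root surface mesh.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(us_points, ct_surface_points, iterations=50):
    tree = cKDTree(ct_surface_points)
    R_total, t_total = np.eye(3), np.zeros(3)
    pts = us_points.copy()
    for _ in range(iterations):
        _, idx = tree.query(pts)                       # closest-point matches
        R, t = best_rigid_transform(pts, ct_surface_points[idx])
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```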
Improved validation platform for ultrasound-based monitoring of thermal ablation
PURPOSE: Thermal ablation is a popular method in local cancer management; however, it is extremely challenging to predict thermal changes in vivo. Ultrasound could be a convenient and inexpensive imaging modality for real-time monitoring of the ablation, but the required advanced image processing algorithms need extensive validation. Our goal is to design and develop a reliable test-bed for validation of these monitoring algorithms. METHOD: We previously developed a test-bed, consisting of an ablated tissue sample and fiducial lines embedded in tissue-mimicking gel.1 The gel block is imaged by ultrasound and sliced to acquire pathology images. Following fiducial localization in both image modalities, the pathology and US data were registered. The ground-truth ablated region is retrieved from pathology images and compared to the result of the ultrasound-based processing in 3D space. We improved on this platform to resolve limitations that hindered its use in a larger-scale validation study. A simulator for evaluating and optimizing different line fiducial structures was implemented, and a new fiducial line structure was proposed. RESULTS: The newly proposed fiducial configuration outperforms the previous one in terms of accuracy, fiducial visibility, and accommodation of larger tissue samples. Simulation results show improvement in pose recovery accuracy using our proposed fiducial structure, reducing target registration error (TRE) by 34%. Inaccurate pixel spacing information and fiducial localization noise are the main sources of error in slice pose recovery. CONCLUSION: A new generation of the test-bed was developed, with software that does not require lengthy manual data processing and is easier to maintain and extend. Further experimental work is required to optimize phantom preparation and precise pixel spacing computation.
Toward robotic needle steering in lung biopsy: a tendon-actuated approach
Louis B. Kratchman, Mohammed M. Rahman, Justin R. Saunders, et al.
Needle tip dexterity is advantageous for transthoracic lung biopsies, which are typically performed with rigid, straight biopsy needles. By providing intraoperative compensation for trajectory error and lesion motion, tendon-driven biopsy needles may reach smaller or deeper nodules in fewer attempts, thereby reducing trauma. An image-guided robotic system that uses these needles also has the potential to reduce radiation exposure to the patient and physician. In this paper, we discuss the design, workflow, kinematic modeling, and control of both the needle and a compact and inexpensive robotic prototype that can actuate the tendon-driven needle for transthoracic lung biopsy. The system is designed to insert and steer the needle under Computed Tomography (CT) guidance. In a free-space targeting experiment using a discrete proportional control law with digital camera feedback, we show a position error of less than 1 mm achieved using an average of 8.3 images (n=3).
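A minimal sketch of a discrete proportional control step of this kind is shown below; the gain, the assumed image-to-tendon Jacobian, and the function name are illustrative assumptions, not the authors' controller.

```python
# Minimal sketch of one discrete proportional-control update of tendon
# length commands from the camera-measured tip error (in pixels).
import numpy as np

def proportional_step(target_px, measured_px, tendon_lengths, kp=0.02,
                      image_to_tendon_jacobian=None):
    """Return updated tendon length commands after one control step."""
    if image_to_tendon_jacobian is None:
        image_to_tendon_jacobian = np.eye(2)   # assumed identity mapping
    error = np.asarray(target_px, float) - np.asarray(measured_px, float)
    # map the image-space error to tendon displacements and scale by the gain
    delta = kp * np.linalg.solve(image_to_tendon_jacobian, error)
    return np.asarray(tendon_lengths, float) + delta
```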
Implementation of an interactive liver surgery planning system
Liver cancer, one of the most widespread diseases, has a very high mortality rate in China. To improve the success rates of liver surgeries and the quality of life of such patients, we implemented an interactive liver surgery planning system based on contrast-enhanced liver CT images. The system consists of five modules: pre-processing, segmentation, modeling, quantitative analysis and surgery simulation. The Graph Cuts method is utilized to automatically segment the liver based on the anatomical prior knowledge that the liver is the largest organ and has an almost homogeneous gray value. The system allows users to build patient-specific liver segment and sub-segment models using interactive portal vein branch labeling, and to perform anatomical resection simulation. It also provides several tools to simulate atypical resection, including a resection plane, sphere and curved surface. To match actual surgical resections well and simulate the process flexibly, we extended our work to develop a virtual scalpel model and simulate the scalpel movement in the hepatic tissue using multi-plane continuous resection. In addition, the quantitative analysis module makes it possible to assess the risk of a liver surgery. The preliminary results show that the system has the potential to offer an accurate 3D delineation of the liver anatomy, as well as the tumors' location in relation to vessels, and to facilitate liver resection surgeries. Furthermore, we are testing the system in a full-scale clinical trial.
Poster Session: Calibration
System for robust bronchoscopic video distortion correction
Brett Flood, Lav Rai, William E. Higgins
Bronchoscopes contain wide-angle lenses that produce a large field of view but suffer from radial distortion. For image-guided bronchoscopy, geometric calibration including distortion correction is essential for comparing video images to renderings developed from 3D computed-tomography (CT) images. This paper describes an easy-to-use system for bronchoscopic video-distortion correction and studies the robustness of the resulting calibration over a wide range of conditions. The internal calibration method integrated into the system incorporates a well-known camera calibration framework devised for general camera-distortion correction. The robustness study considers the calibration results as follows: (1) varying lighting during video capture, (2) using different numbers of captured images for parameter estimation, (3) changing camera pose with respect to the calibration pattern, (4) recording temporal changes in estimated parameters, and (5) comparing parameters between different bronchoscopes of the same model. Multiple bronchoscopes were successfully calibrated under a variety of conditions.
Online temporal synchronization of pose and endoscopic video streams
Özgür Güler, Ziv Yaniv, Wolfgang Freysinger
Computer-assisted navigation systems that combine real-time endoscopy images with pre-operative volumetric data sets aim at improving the physician's understanding of the underlying anatomical structures. To achieve accurate and safe guidance, these systems are required to provide a consistent representation of the physical world. This implies that all data streams are synchronized. In our case, we are dealing with synchronization of tracking data and a video stream obtained by a tracked endoscope. Previously, such synchronization was obtained pre-operatively using phantoms. This type of approach assumes a constant latency between the data streams and is less desirable for clinical use due to the required additional hardware. In this work we describe an online temporal synchronization method. The method is based on the observation that in clinical practice the endoscope is not in constant motion. By identifying corresponding stationary points in the video and tracking streams, temporal synchronization can be performed online in a manner that is transparent to the user. Initial evaluation of our approach in a laboratory study has shown that it provides estimates comparable to those of a phantom-based approach we had previously proposed.
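A minimal sketch of this idea is given below, assuming per-sample motion magnitudes are available for both streams, that the streams have equal length and sampling, and that the latency is constant over the analysis window; the thresholds and the agreement score are illustrative assumptions.

```python
# Minimal sketch of online temporal synchronization by aligning "stationary"
# intervals detected independently in the video and tracking streams.
import numpy as np

def stationary_signal(motion_magnitude, threshold):
    """1 where the stream is (nearly) stationary, 0 otherwise."""
    return (np.asarray(motion_magnitude) < threshold).astype(float)

def estimate_latency(video_motion, tracker_motion, max_lag=30,
                     video_thresh=1.0, tracker_thresh=0.5):
    """Latency (in samples) of the video stream relative to the tracker,
    chosen as the lag that maximizes agreement of stationary intervals."""
    v = stationary_signal(video_motion, video_thresh)
    t = stationary_signal(tracker_motion, tracker_thresh)
    lags = range(-max_lag, max_lag + 1)
    scores = [np.mean(v[max(0, -lag):len(v) - max(0, lag)] ==
                      t[max(0, lag):len(t) - max(0, -lag)]) for lag in lags]
    return list(lags)[int(np.argmax(scores))]
```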
Ultrasound calibration framework for the image-guided surgery toolkit (IGSTK)
Registration is a key technology in image-guided navigation systems. By aligning pre-operative images with the intra-operative setting, these systems provide visual feedback that improves the physician's understanding of the spatial relationships between anatomical structures and surgical tools. Most often the alignment is obtained using fiducials. Another option is to replace the use of fiducials with intra-operative imaging. Two-dimensional ultrasound (US) is a widely available intra-operative non-ionizing imaging modality. To utilize this modality for registration, one must first perform spatial calibration of the US. In this work we describe the implementation of three spatial calibration methods as part of the image-guided surgery toolkit (IGSTK). The implementation follows the IGSTK calibration framework, separating algorithmic aspects from user interaction aspects of the calibration. Our calibration framework includes three methods: the first is a phantom-less method using a tracked pointer tool in addition to the tracked US, the second uses a cross-wire phantom, and the third is based on the use of a plane phantom.
Poster Session: Cardiac Procedures
Motion compensation by registration-based catheter tracking
Alexander Brost, Andreas Wimmer, Rui Liao, et al.
The treatment of atrial fibrillation has gained increasing importance in the field of computer-aided interventions. State-of-the-art treatment involves the electrical isolation of the pulmonary veins attached to the left atrium under fluoroscopic X-ray image guidance. Due to the rather low soft-tissue contrast of X-ray fluoroscopy, the heart is difficult to see. To overcome this problem, overlay images from pre-operative 3-D volumetric data can be used to add anatomical detail. Unfortunately, these overlay images are static at the moment, i.e., they do not move with respiratory and cardiac motion. The lack of motion compensation may impair X-ray based catheter navigation, because the physician could potentially position catheters incorrectly. To improve overlay-based catheter navigation, we present a novel two stage approach for respiratory and cardiac motion compensation. First, a cascade of boosted classifiers is employed to segment a commonly used circumferential mapping catheter which is firmly fixed at the ostium of the pulmonary vein during ablation. Then, a 2-D/2-D model-based registration is applied to track the segmented mapping catheter. Our novel hybrid approach was evaluated on 10 clinical data sets consisting of 498 fluoroscopic monoplane frames. We obtained an average 2-D tracking error of 0.61 mm, with a minimum error of 0.26 mm and a maximum error of 1.62 mm. These results demonstrate that motion compensation using registration-based catheter tracking is both feasible and accurate. Using this approach, we can only estimate in-plane motion. Fortunately, compensating for this is often sufficient for EP procedures where the motion is governed by breathing.
First steps towards initial registration for electrophysiology procedures
Alexander Brost, Felix Bourier, Liron Yatziv, et al.
Atrial fibrillation is the most common heart arrhythmia and a leading cause of stroke. The treatment option of choice is radio-frequency catheter ablation, which is performed in electrophysiology labs using C-arm X-ray systems for navigation and guidance. The goal is to electrically isolate the pulmonary vein-left atrial junction, thereby rendering the myocardial fibers responsible for induction and maintenance of AF inactive. The use of overlay images for fluoroscopic guidance may improve the quality of the ablation procedure and can reduce procedure time. Overlay images, acquired using CT, MRI, or C-arm CT, can add soft-tissue information otherwise not visible under X-ray. MRI can be used to image a wide variety of anatomical details without ionizing radiation. In this paper, we present a method to register a 3-D MRI volume to 2-D biplane X-ray images using the coronary sinus. Current approaches require registration of the overlay images to the fluoroscopic images to be performed after the trans-septal puncture, when contrast agent can be administered. We present a new approach for registration that aligns the overlay images before the trans-septal puncture. To this end, we manually extract the coronary sinus from pre-operative MRI and register it to a multi-electrode catheter placed in the coronary sinus.
3D imaging of myocardial perfusion and coronary tree morphology from a single rotational angiogram
Günter Lauritsch, Christopher Rohkohl, Joachim Hornegger, et al.
Diagnosis and treatment of coronary heart disease are performed in the catheter laboratory using an angiographic X-ray C-arm system. The morphology of the coronary tree and potentially ischemic lesions are determined in 2D projection views. The hemodynamic impact of a lesion would be valuable information for the treatment decision. Using other modalities for functional imaging disrupts the clinical workflow, since the patient has to be transferred from the catheter laboratory to another scanner and back to the catheter laboratory for the treatment. In this work a novel technology is used for simultaneous 3D imaging of first-pass perfusion and the morphology of the coronary tree from a single rotational angiogram. A single selective injection of less than 20 ml of contrast agent directly into the coronaries is sufficient for proper contrast resolution. Due to the long acquisition time, cardiac motion has to be considered. A novel reconstruction technique for estimation and compensation of cardiac motion from the acquired projection data is used. The overlay of the 3D structure of the coronary tree and the perfusion image shows the correlation between myocardial areas and the associated coronary sections supplying those regions. In a case example, scar lesions caused by a previous myocardial infarction are investigated. A first-pass perfusion defect is found, which is validated by a late-enhancement magnetic resonance image. No ischemic defects are found. The non-vital regions are still supplied by the coronary vasculature.
Intensity-based hierarchical clustering in CT-scans: application to interactive segmentation in cardiology
The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image-guided surgery. Even though very robust and precise methods have been developed to help achieve a reliable segmentation (level sets, active contours, etc.), it remains very time consuming, both in terms of manual interaction and computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, a hierarchical clustering is performed to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up structural knowledge of the image. Several interactive features for segmentation are presented, for instance the association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
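The first two steps can be sketched as below, assuming the CTA volume is given as a NumPy array; the histogram peak-finding parameters are illustrative assumptions, and the final hierarchical graph-building step is omitted.

```python
# Minimal sketch: non-parametric histogram clustering to a piecewise-constant
# mask, followed by indexing of space-connected regions per intensity class.
import numpy as np
from scipy.ndimage import label
from scipy.signal import find_peaks

def piecewise_constant_mask(volume, bins=256, peak_prominence=1e-3):
    """Assign every voxel to the nearest intensity mode of the histogram."""
    hist, edges = np.histogram(volume, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks, _ = find_peaks(hist, prominence=peak_prominence)
    modes = centers[peaks]
    # label each voxel with the index of its closest intensity mode
    mask = np.argmin(np.abs(volume[..., None] - modes), axis=-1)
    return mask, modes

def connected_regions(mask):
    """Index space-connected regions separately for each intensity class."""
    regions = {}
    for cls in np.unique(mask):
        labeled, n = label(mask == cls)
        regions[int(cls)] = (labeled, n)
    return regions
```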
4D motion animation of coronary arteries from rotational angiography
Wolfgang Holub, Christopher Rohkohl, Dominik Schuldhaus, et al.
Time-resolved 3-D imaging of the heart is a major research topic in the medical imaging community. Recent advances in interventional cardiac 3-D imaging from rotational angiography (C-arm CT) are now also making 4-D imaging feasible during procedures in the catheter laboratory. State-of-the-art reconstruction algorithms try to estimate the cardiac motion and utilize the motion field to enhance the reconstruction of a stable cardiac phase (diastole). The available data offers a number of opportunities during interventional procedures, e.g. ECG-synchronized dynamic roadmapping or the computation and analysis of functional parameters. In this paper we demonstrate that the motion vector field (MVF) output by motion-compensated image reconstruction algorithms is in general not directly usable for animation and motion analysis. Depending on the algorithm, different defects are investigated. A primary issue is that the MVF needs to be inverted, i.e. the wrong direction of motion is provided. A second major issue is the non-periodicity of cardiac motion. In algorithms that compute a non-periodic motion field from a single rotation, the in-depth motion information along the viewing direction is missing, since it cannot be measured in the projections. As a result, while the MVF improves reconstruction quality, it is insufficient for motion animation and analysis. We propose an algorithm to solve both problems, i.e. inversion and missing in-depth information, in a unified framework. A periodic version of the MVF is approximated. The task is formulated as a linear optimization problem in which a parametric smooth motion model based on B-splines is estimated from the MVF. It is shown that the problem can be solved using a sparse QR factorization within a clinically feasible time of less than one minute. In a phantom experiment using the publicly available CAVAREV platform, the average quality of a non-periodic animation could be increased by 39% by applying the proposed periodization and inversion method.
Poster Session: Endoscopic Procedures
A novel bronchoscope tracking method for bronchoscopic navigation using a low cost optical mouse sensor
Image-guided bronchoscopy usually requires tracking the bronchoscope camera position and orientation to align the preinterventional 3-D computed tomography (CT) images to the intrainterventional 2-D bronchoscopic video frames. Current state-of-the-art image-based algorithms often fail in bronchoscope tracking due to a lack of information on depth and on rotation around the viewing (running) direction of the bronchoscope camera. To address these problems, this paper presents a novel bronchoscope tracking method for bronchoscopic navigation based on a low-cost optical mouse sensor, bronchial structure information, and image registration. We first utilize an optical mouse sensor to automatically measure the insertion depth and the rotation around the viewing direction of the bronchoscope. We integrate the outputs of this 2-D sensor by performing a centerline matching on the basis of bronchial structure information before optimizing the bronchoscope camera motion parameters during image registration. Our new method is assessed on phantom data. Experimental results illustrate that our proposed method is a promising means for bronchoscope tracking, significantly improving the tracking performance compared to our previous image-based method.
Image-based camera motion estimation using prior probabilities
Dusty Sargent, Sun Young Park, Inbar Spofford, et al.
Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC to help reject improbable motion parameters caused by outliers and other problems with medical data. This paper presents the theoretical basis of our method along with preliminary results on indoor scenes and synthetic colon images.
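A minimal sketch of folding such a motion prior into the RANSAC scoring is shown below; `estimate_motion` and `count_inliers` are hypothetical placeholders for the usual feature-based estimation and inlier-counting steps, and the Gaussian prior and its weighting are illustrative assumptions.

```python
# Minimal sketch: RANSAC candidate scoring augmented with a penalty from a
# Gaussian prior over endoscope motion (forward/backward motion dominant).
import numpy as np

def motion_log_prior(motion, mean, cov):
    """Log-density of a candidate motion vector under a Gaussian prior fitted
    to previously observed endoscope motion."""
    d = np.asarray(motion, float) - mean
    return -0.5 * d @ np.linalg.solve(cov, d)

def ransac_with_prior(matches, mean, cov, estimate_motion, count_inliers,
                      iterations=500, prior_weight=5.0, sample_size=5):
    best_score, best_motion = -np.inf, None
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        idx = rng.choice(len(matches), sample_size, replace=False)
        sample = [matches[i] for i in idx]
        motion = estimate_motion(sample)            # candidate camera motion
        score = count_inliers(motion, matches) \
                + prior_weight * motion_log_prior(motion, mean, cov)
        if score > best_score:
            best_score, best_motion = score, motion
    return best_motion
```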
Detection of inflating balloon in optical coherence tomography images of a porcine artery in a beating heart experiment
Hamed Azarnoush, Sébastien Vergnole, Mark Hewko, et al.
Suboptimal results of angioplasty procedures have been correlated with arterial damage during balloon inflation. We propose to monitor balloon inflation during the angioplasty procedure by detecting the balloon contours with intravascular optical coherence tomography (IVOCT). This will shed more light on the interaction between the balloon and the artery and help assess the artery's mechanical response. An automatic edge detection algorithm is applied for detection of the outer surface of an inflating balloon in a porcine artery in a beating heart experiment. A compliant balloon is inflated to deform the artery. IVOCT monitoring of balloon inflation is performed at a rate of 30 frames per second. During inflation, the balloon engages the arterial wall; therefore, characterizing the diameter of the inflated balloon also characterizes the luminal diameter of the vessel. This provides precise information about the artery's response to a simulated angioplasty procedure, information currently not provided by any other existing technique. In the current experiment, balloon inflation characterization is based on 356 IVOCT frames during which the estimated balloon diameter increases from approximately 1.8 mm to 2.9 mm.
Poster Session: Image-Guided Therapy
Automatic measurement of contrast bolus distribution in carotid arteries using a C-arm angiography system to support interventional perfusion imaging
Andreas Fieselmann, Arundhuti Ganguly, Deuerling-Zheng Yu, et al.
Brain perfusion CT using a C-arm angiography system capable of CT-like imaging could optimize patient treatment during stroke therapy procedures. For this application, an intra-arterial contrast bolus injection at the aortic arch could be used, provided that the location of the injection catheter enables uniform distribution of the bolus into the two common carotid arteries (CCAs). In this work, we present a novel method to support optimal injection catheter placement by providing additional quantitative information about the distribution of the contrast bolus into the CCAs. Our fully automatic method uses 2-D digital subtraction angiography (DSA) images following a test bolus injection. It segments both CCAs and computes the relative contrast distribution. We tested the method on DSA data sets from 5 healthy pigs, and it achieved successful segmentation of both CCAs in all data sets. The results showed that the contrast is uniformly distributed (mean relative difference of 10% or less) if the injection location is properly chosen.
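The relative-distribution computation can be sketched as below, assuming the two CCA segmentation masks are already available as binary arrays and that higher values in the subtracted DSA frames indicate more contrast; this is an illustration, not the authors' implementation.

```python
# Minimal sketch of computing the relative contrast-bolus distribution from
# two segmented common carotid arteries over a DSA series.
import numpy as np

def relative_contrast_distribution(dsa_frames, mask_left, mask_right):
    """Fraction of total integrated contrast seen in each CCA and their
    relative difference (a small difference suggests uniform distribution)."""
    frames = np.asarray(dsa_frames, float)          # shape (T, H, W), assumed
    left = float(np.sum(frames[:, mask_left]))      # integrated contrast, left CCA
    right = float(np.sum(frames[:, mask_right]))    # integrated contrast, right CCA
    total = left + right
    rel_diff = abs(left - right) / total if total > 0 else 0.0
    return {"left_fraction": left / total, "right_fraction": right / total,
            "relative_difference": rel_diff}
```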
Accuracy assessment of fluoroscopy-transesophageal echocardiography registration
Pencilla Lang, Petar Seslija, Daniel Bainbridge, et al.
This study assesses the accuracy of a new transesophageal echocardiography (TEE) ultrasound (US) to fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve are guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in the standard OR workflow. The accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation shows promise as a method for providing guidance in percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
A single-imager stereoscopic endoscope
We have developed 5.5 mm and 10 mm dual optical channel laparoscopes that combine both exit channels into a single, standard endoscopic eye cup which attaches directly to a single, conventional HD camera head. We have also developed image processing software that auto-calibrates, aligns, enhances and processes the image so that it can be displayed on a stereo/3D display to achieve a true 3D effect. The advantages of such a 3D system for the end user are that no new camera system has to be purchased, all existing scopes remain usable, and all integrated OR features are retained. 3D capability can be added to a current HD system by purchasing only stereo scopes and a small video processing computer and adding a 2D/3D-capable HD monitor.
Mixed variable optimization for radio frequency ablation planning
Ankur Kapoor, Ming Li, Bradford Wood
We present a method for optimizing the placement of multiple ablation probes to provide efficient coverage of a tumor for thermal therapy while respecting clinical needs such as limiting the sites of probe insertion at the pleura/liver surface, choosing secure probe trajectories and locations, avoiding ablation of critical structures, and reducing ablation of healthy tissue and overlap of ablation zones. The ablation optimizer treats each ablation location independently, and the number of ablation probe placements itself is treated as a variable to be optimized. This allows us to potentially feed back the ablation result after deployment and re-optimize the next steps during the plan. The optimization method uses a new class of derivative-free algorithms for solving a non-linear mixed variable problem with hard and soft constraints derived from clinical images. Our methods use a discretization of the ablation volume, which can accommodate irregular shapes of the ablation zone. The non-gradient-based strategy produces new candidates to yield a feasible solution within a few iterations. In our simulation experiments this strategy typically reduced the ablation zone overlap and the ablated healthy tissue by 46% and 29%, respectively, in a single iteration, with a feasible solution found within 35 iterations. Our optimization method provides an efficient implementation for planning the coverage of a tumor while respecting clinical constraints. The ablation planning can be combined with navigation assistance to enable accurate translation and feedback of the plan.
Automatic fiducial localization in ultrasound images for a thermal ablation validation platform
Laura Bartha, Andras Lasso, Thomas Kuiran Chen, et al.
PURPOSE: Development of ultrasound-based tumor ablation monitoring systems requires extensive validation. Validation is based on the comparison of ablated regions, computed from ultrasound images, to the ground truth region observed on histopathology images. Registration of ultrasound and histopathology images can be efficiently implemented by localizing fiducial lines embedded in the test phantom. Manual fiducial localization is time consuming and may be inaccurate. Current automatic localization algorithms were designed for use on images containing easily detectable fiducials in clear water, while the images produced by the ablation monitoring platform contain fiducials and ablated tissue embedded in tissue-mimicking gel. Our goal was to develop an automatic fiducial localization algorithm for the ablation monitoring platform. METHOD: A previously existing algorithm for detecting fishing line in water for ultrasound probe calibration, created by Chen et al., was tested on ultrasound images of an ablation phantom. Fiducial and line point detection parameters were determined by running the algorithm multiple times with different parameter sets and searching for the set that results in the best detection success rate. The fiducial intensity scoring method was modified to use intensities from an unaltered image; this greatly reduced the number of incorrectly identified fiducials. Line finding was modified to suit the ablation phantom geometry. RESULTS: The new algorithm was tested by comparing the automatic localization results to manually identified fiducial positions. Using the optimized parameters, it was found to have a 94.1% success rate on the tested images. The fiducial localization error, defined as the difference between the manually segmented positions and the positions found by the algorithm, was -0.04±0.18 mm along the x-axis and -0.09±0.14 mm along the y-axis. CONCLUSION: We have developed an automatic algorithm that detects line fiducials at a high success rate in complex phantoms containing a tissue sample embedded in tissue-mimicking gel.
Poster Session: Intraoperative Imaging
Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation
Intraoperative imaging modalities have become more prevalent in recent years, and the need for integration of these modalities with surgical guidance is rising, creating new possibilities as well as challenges. In the context of such emerging technologies and new clinical applications, a software architecture for cone-beam CT (CBCT) guided surgery has been developed with emphasis on binding open-source surgical navigation libraries and integrating intraoperative CBCT with novel, application-specific registration and guidance technologies. The architecture design is focused on accelerating translation of task-specific technical development in a wide range of applications, including orthopaedic, head-and-neck, and thoracic surgeries. The surgical guidance system is interfaced with a prototype mobile C-arm for high-quality CBCT, and through a modular software architecture, integration of different tools and devices consistent with the surgical workflow in each of these applications is realized. Specific modules are developed according to the surgical task, such as: 3D-3D rigid or deformable registration of preoperative images, surgical planning data, and up-to-date CBCT images; 3D-2D registration of planning and image data in real-time fluoroscopy and/or digitally reconstructed radiographs (DRRs); compatibility with infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; real-time "virtual fluoroscopy" computed from GPU-accelerated DRRs; and multi-modality image display. The platform aims to minimize offline data processing by exposing quantitative tools that analyze and communicate factors of geometric precision. The system was translated to preclinical phantom and cadaver studies for assessment of fiducial registration error (FRE) and target registration error (TRE), showing sub-mm accuracy in targeting and video overlay within intraoperative CBCT. The work culminates in the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies to realize a high-performance system for translation to clinical studies.
Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data
Alexander Seitel, Thiago R. dos Santos, Sven Mersmann, et al.
Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing using corresponding high-resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error reduction of up to 36% compared to the error of the original ToF surfaces.
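A minimal, non-adaptive bilateral-filter sketch for a ToF range image is given below; the paper adapts the range-kernel width to the camera's noise characteristics, whereas here sigma_r is simply a fixed assumed parameter.

```python
# Minimal bilateral-filter sketch for a 2-D ToF range image (values in metres).
# Spatial and range kernel widths are illustrative assumptions.
import numpy as np

def bilateral_filter(range_img, radius=3, sigma_s=2.0, sigma_r=0.02):
    img = np.asarray(range_img, float)
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    weights = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # neighbour values at offset (dy, dx) for every pixel
            shifted = pad[radius + dy: radius + dy + img.shape[0],
                          radius + dx: radius + dx + img.shape[1]]
            # combined spatial and range (edge-preserving) weight
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                       - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += w * shifted
            weights += w
    return out / weights
```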
Development of a novel laser range scanner
Thomas S. Pheiffer, Brian Lennon, Amber L. Simpson, et al.
Laser range scanning an organ surface intraoperatively provides a cost-effective and accurate means of measuring geometric changes in tissue. A novel laser range scanner with integrated tracking was designed, developed, and analyzed with the goal of providing intraoperative surface data during neurosurgery. The scanner is fitted with passive spheres to be optically tracked in the operating room. The design notably includes a single-lens system capable of acquiring the geometric information (as a Cartesian point cloud) via laser illumination and charge-coupled device (CCD) collection, as well as the color information via visible light collection on the same CCD. The geometric accuracy was assessed by scanning a machined phantom of known dimensions and comparing relative distances of landmarks from the point cloud to the known distances. The ability of the scanner to be tracked was first evaluated by perturbing its orientation in front of the optical tracking camera and recording the number of spheres visible to the camera at each orientation, and then by observing the variance in point cloud locations of a fixed object when the tracking camera is moved around the scanner. The scanning accuracy test resulted in an RMS error of 0.47 mm with a standard deviation of 0.40 mm. The sphere visibility test showed that four spheres were visible in most of the probable operating orientations, and the overall tracking standard deviation was observed to be 1.49 mm. Intraoperative collection of cortical surface scans using the new scanner is currently underway.
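The geometric accuracy assessment can be sketched as below, assuming landmark positions extracted from the point cloud and the machined ground-truth distances are available; the names and data structures are illustrative.

```python
# Minimal sketch: compare pairwise landmark distances measured in the point
# cloud against the known machined distances and report RMS error and
# standard deviation of the distance errors.
import numpy as np

def distance_accuracy(measured_landmarks, known_distances):
    """measured_landmarks: dict {name: xyz in mm};
    known_distances: dict {(name_a, name_b): ground-truth distance in mm}."""
    errors = []
    for (a, b), d_true in known_distances.items():
        d_meas = np.linalg.norm(np.asarray(measured_landmarks[a], float)
                                - np.asarray(measured_landmarks[b], float))
        errors.append(d_meas - d_true)
    errors = np.array(errors)
    return float(np.sqrt(np.mean(errors ** 2))), float(errors.std())
```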
Clinical implementation of intraoperative cone-beam CT in head and neck surgery
A prototype mobile C-arm for cone-beam CT (CBCT) has been translated to a prospective clinical trial in head and neck surgery. The flat-panel CBCT C-arm was developed in collaboration with Siemens Healthcare, and demonstrates both sub-mm spatial resolution and soft-tissue visibility at low radiation dose (e.g., <1/5th of a typical diagnostic head CT). CBCT images are available ~15 seconds after scan completion (~1 min acquisition) and reviewed at bedside using custom 3D visualization software based on the open-source Image-Guided Surgery Toolkit (IGSTK). The CBCT C-arm has been successfully deployed in 15 head and neck cases and streamlined into the surgical environment using human factors engineering methods and expert feedback from surgeons, nurses, and anesthetists. Intraoperative imaging is implemented in a manner that maintains operating field sterility, reduces image artifacts (e.g., carbon fiber OR table) and minimizes radiation exposure. Image reviews conducted with surgical staff indicate bony detail and soft-tissue visualization sufficient for intraoperative guidance, with additional artifact management (e.g., metal, scatter) promising further improvements. Clinical trial deployment suggests a role for intraoperative CBCT in guiding complex head and neck surgical tasks, including planning mandible and maxilla resection margins, guiding subcranial and endonasal approaches to skull base tumours, and verifying maxillofacial reconstruction alignment. Ongoing translational research into complementary image-guidance subsystems includes novel methods for real-time tool tracking, fusion of endoscopic video and CBCT, and deformable registration of preoperative volumes and planning contours with intraoperative CBCT.
Poster Session: Localization and Tracking Technologies
Validation of visual surface measurement using computed tomography
Amy M. VanBerlo, Aaron R. Campbell, Randy E. Ellis
Although dysesthesia is a common and persistent surgical complication, there is no accepted method for quantitatively tracking affected skin. To address this, two types of computer vision technologies were tested in a total of four configurations. Surface regions on plastic models of limbs were delineated with colored tape, imaged, and compared with computed tomography scans. The most accurate system used visually projected texture captured by a binocular stereo camera, capable of measuring areas to within 0.05% of the ground-truth areas with 1.4% variance. This simple, inexpensive technology shows promise for postoperative monitoring of dysesthesia surrounding surgical scars.
Alignment and calibration of high frequency ultrasound (HFUS) and optical coherence tomography (OCT) 1D transducers using a dual wedge-tri step phantom
N. Afsham, K. Chan, L. Pan, et al.
This paper introduces a novel alignment and calibration method for high frequency ultrasound (HFUS) and optical coherence tomography (OCT) 1D transducers. 2D images are constructed by means of translation of the transducers using a linear motor stage. Physical alignment of the transducers is needed in order to capture images of the same cross-sectional plane, and calibration is needed to determine the relative coordinates of the images, including the image skew. A dual wedge-tri step phantom is created for both alignment and calibration. This phantom includes two symmetrical wedges and three steps that provide the user with visual feedback on how well the scan plane is aligned with the midplane of the phantom. The phantom image consists of five line segments, each of which corresponds to one of the wedges or steps. The slopes and positions of the lines are extracted from the image and compared with the phantom model. The scan plane parameters are found so that the difference between the model and extracted features is minimized. The main advantage of this phantom is that only one frame is required to determine translations, orientations, and skew parameters of the scan plane with respect to the phantom. Experimental results with ocular imaging show the ability to achieve alignment based on this method and its potential for medical applications.
3D-guided CT reconstruction using time-of-flight camera
Mahmoud Ismail, Katsuyuki Taguchi, Jingyan Xu, et al.
We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinates. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to X-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest is segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate the 'trust region' for the projection data. The generated 'trust region' is integrated as an input to augment the projection data. It is used to estimate the truncated, unmeasured projections using linear interpolation. Finally, the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed using the proposed method as compared to that without truncation correction. This work shows that the proposed 3D-guided CT image reconstruction using a TOF camera represents a feasible solution to the projection data truncation problem.
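The interpolation step can be made concrete with a small sketch: given one truncated projection row and the 'trust region' support derived from the Radon transform of the ToF body contour, the unmeasured bins are ramped linearly from the last measured value down to zero at the trust-region boundary. Function and argument names are illustrative assumptions; the paper's exact estimation scheme may differ.

```python
import numpy as np

def complete_truncated_row(row, measured_mask, trust_mask):
    """Fill unmeasured detector bins of one projection row.
    row          : measured projection values (zeros where truncated)
    measured_mask: True where data were actually acquired
    trust_mask   : True inside the 'trust region' given by the Radon
                   transform of the ToF body contour."""
    out = row.astype(float).copy()
    idx = np.flatnonzero(measured_mask)
    first, last = idx[0], idx[-1]
    support = np.flatnonzero(trust_mask)
    left, right = support[0], support[-1]
    # right-hand truncation: ramp from row[last] at 'last' down to 0 at 'right'
    if right > last:
        t = np.linspace(1.0, 0.0, right - last + 1)
        out[last:right + 1] = row[last] * t
    # left-hand truncation: ramp from 0 at 'left' up to row[first] at 'first'
    if first > left:
        t = np.linspace(0.0, 1.0, first - left + 1)
        out[left:first + 1] = row[first] * t
    return out
```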
Transorbital therapy delivery: phantom testing
Martha-Conley Ingram, Nkiruka Atuegwu, Louise Mawn, et al.
We have developed a combined image-guided and minimally invasive system for the delivery of therapy to the back of the eye. It is composed of a short 4.5 mm diameter endoscope with a magnetic tracker embedded in the tip. In previous work we have defined an optimized fiducial placement for accurate guidance to the back of the eye and are now moving to system testing. The fundamental difficulty in testing performance is establishing a target in a manner that closely mimics the physiological task. We require a penetrable material that obscures line of sight, similar to the orbital fat. In addition, we need an independent measure of when a target has been reached, to compare against the ideal performance. Lastly, the target cannot be rigidly attached to the skull phantom, since the optic nerve lies buried in the orbital fat. We have developed a skull phantom with white cloth stellate balls supporting a correctly sized globe. Placed in the white balls are red, blue, orange and yellow balls. One of the colored balls has been soaked in barium to make it bright on CT. The user guides the tracked endoscope to the target as defined by the images and reports its color. We record task accuracy and time to target. We have tested this with 28 residents, fellows and attending physicians. Each physician performs the task twice guided and twice unguided. Results will be presented.
Expansion and dissemination of a standardized accuracy and precision assessment technique
The advent and development of new imaging techniques and image guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what lies beneath the surface or relates to function. These systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking system are thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system are assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy is dependent on distance from the tracker's field generator, and had an RMS value of 1.48 mm. The error has similar characteristics and values to those in the previous work, thus validating this method for tracker analysis.
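A minimal sketch of the kind of analysis described, assuming tracked and ground-truth point pairs are available: point errors are binned by distance from the field generator and summarized as per-bin RMS values. The array layout and bin width are assumptions, not the published protocol.

```python
import numpy as np

def rms_error_by_distance(measured, reference, generator_origin, bin_width=50.0):
    """Group point-pair errors by distance from the field generator and
    report the RMS error per distance bin (all quantities in mm).
    measured, reference : (N, 3) arrays of tracked vs. ground-truth positions
    generator_origin    : (3,) position of the field generator."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    errors = np.linalg.norm(measured - reference, axis=1)
    dists = np.linalg.norm(reference - np.asarray(generator_origin, float), axis=1)
    bins = (dists // bin_width).astype(int)
    result = {}
    for b in np.unique(bins):
        sel = bins == b
        result[(b * bin_width, (b + 1) * bin_width)] = float(np.sqrt(np.mean(errors[sel] ** 2)))
    return result
```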
Time-of-flight camera technique for augmented reality in computer-assisted interventions
Sven Mersmann, Michael Müller, Alexander Seitel, et al.
Augmented reality (AR) for enhancement of intra-operative images is gaining increasing interest in the field of navigated medical interventions. In this context, various imaging modalities such as ultrasound (US), C-arm computed tomography (CT) and endoscopic images have been applied to acquire intra-operative information about the patient's anatomy. The aim of this paper was to evaluate the potential of the novel Time-of-Flight (ToF) camera technique as a means for markerless intra-operative registration. For this purpose, ToF range data and corresponding CT images were acquired from a set of explanted non-transplantable human and porcine organs equipped with a set of markers that served as targets. Based on a rigid matching of the surfaces generated from the ToF images with the organ surfaces generated from the CT data, the targets extracted from the planning images were superimposed on the 2D ToF intensity images, and the target visualization error (TVE) was computed as a quality measure. Color video data of the same organs were further used to assess the TVE of a previously proposed marker-based registration method. The ToF-based registration showed promising accuracy, yielding a mean TVE of 2.5±1.1 mm compared to 0.7±0.4 mm with the marker-based approach. Furthermore, the target registration error (TRE) was assessed to determine the anisotropy in the localization error of ToF image data. The TRE was 8.9±4.7 mm on average, indicating a high localization error in the viewing direction of the camera. Nevertheless, the young ToF technique may become a valuable means for intra-operative surface acquisition. Future work should focus on the calibration of systematic distance errors.
Poster Session: Modeling
Patient-specific blood flow simulation to improve intracranial aneurysm diagnosis
Wolfgang Fenz, Johannes Dirnberger
We present a novel simulation system for blood flow through intracranial aneurysms, including the interaction between the blood lumen and vessel tissue. It provides the means to estimate rupture risks by calculating the distribution of pressure and shear stresses in the aneurysm, in order to support the planning of clinical interventions. So far, this has only been possible with commercial simulation packages originally targeted at industrial applications, whereas our implementation focuses on intuitive integration into the clinical workflow. Due to the time-critical nature of the application, we exploit the most efficient state-of-the-art numerical methods and technologies together with high-performance computing infrastructures (Austrian Grid). Our system builds a three-dimensional virtual replica of the patient's cerebrovascular system from X-ray angiography, CT or MR images. The physician can then select a region of interest, which is automatically transformed into a tetrahedral mesh. The differential equations for the blood flow and the wall elasticity are discretized via the finite element method (FEM), and the resulting linear equation systems are handled by an algebraic multigrid (AMG) solver. The wall displacement caused by the blood pressure is calculated using an iterative fluid-structure interaction (FSI) algorithm, and the fluid mesh is deformed accordingly. First simulation results on measured patient geometries show good medical relevance for diagnostic decision support.
Augmented reality needle guidance improves facet joint injection training
Tamas Ungi, Caitlin T. Yeo, Paweena U-Thainual, et al.
PURPOSE: The purpose of this study was to determine if medical trainees would benefit from augmented reality image overlay and laser guidance in learning how to set the correct orientation of a needle for percutaneous facet joint injection. METHODS: A total of 28 medical students were randomized into two groups: (1) The Overlay group received a training session of four insertions with image and laser guidance, followed by two insertions with laser overlay only; (2) The Control group was trained by carrying out six freehand insertions. After the training session, needle trajectories of two facet joint injections without any guidance were recorded by an electromagnetic tracker and analyzed. The number of successful needle placements, the distance covered by the needle tip inside the phantom, and procedural time were measured to evaluate performance. RESULTS: The number of successful placements was significantly higher in the Overlay group than in the Control group (85.7% vs. 57.1%, p = 0.038). Procedure time and distance covered inside the phantom were both lower in the Overlay group, although not significantly. CONCLUSION: Training with augmented reality image overlay and laser guidance improves the accuracy of facet joint injections in medical students learning image-guided facet joint needle placement.
Effects of deflated lung's geometry simplifications on the biomechanical model of its tumor motion: a phantom study
Ali Sadeghi Naini, Rajni V. Patel, Abbas Samani
The effects of geometric simplifications of a deflated lung on the accuracy of the biomechanical model used to predict its tumor motion are investigated. This investigation is necessary to determine the highest degree of simplification that can be incorporated in the lung's Finite Element (FE) model without compromising its ability to predict tumor motion with reasonable accuracy. The simplifications involve neglecting the lung's airways in its FE model. Such simplification is important to avoid unnecessary complications and to pave the way for fast tumor location prediction during a lung tumor ablative procedure such as brachytherapy. One major factor, which may affect the accuracy of such ablative procedures, is tumor motion resulting from lung tissue deformation caused by respiration. Although the target lung is almost completely deflated during the procedure, tissue deformation remains an issue due to diaphragm contact forces during respiration. In this investigation several numerical experiments were conducted using different tumor and airway sizes and locations, in conjunction with both elastic and hyperelastic material models. Sensitivity of the tumor motion prediction accuracy to the geometry simplification was then presented as a function of the airways' size relative to the tumor's size. FE analysis results obtained for both material models suggest that tumor displacements due to surface contact forces are not very sensitive to the geometry simplification carried out by omitting airways, as long as the airways' size does not exceed the tumor size.
Creation of 3D digital anthropomorphic phantoms which model actual patient non-rigid body motion as determined from MRI and position tracking studies of volunteers
C. M. Connolly, A. Konik, P. K. R. Dasari, et al.
Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG triggered Navigator acquisition composed of sagittal slices with a 3 x 3 x 3 mm voxel dimension. Rigid body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneous to MRI imaging the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac perfusion SPECT imaging, where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion tracking data from the markers on the body surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.
3D reconstruction of microvascular flow phantoms with hybrid imaging modalities
Jingying Lin, Kevin Hsiung, Russell Ritenour, et al.
Microvascular flow phantoms were built to aid the development of a hemodynamic simulation model for treating hepatocellular carcinoma. The goal is to predict blood flow routing for embolotherapy planning. Embolization delivers agents (e.g., microspheres) to the vicinity of the tumor to obstruct its supply of blood and nutrients, targeting 30-40 μm arterioles. Due to the size of the catheter, microspheres must be released at an upstream location, which may not localize the blocking effect. Accurate anatomical descriptions of the microvasculature will help to conduct a reliable simulation and prepare a successful embolization strategy. Modern imaging devices can generate 3D reconstructions with ease. However, with a fixed detector size, a larger field of view yields lower resolution. Clinical CT images cannot be used to measure micro-vessel dimensions, while micro-CT requires more acquisitions to reconstruct larger vessels. A multi-tiered, montage 3D reconstruction method with hybrid-modality imagery is devised to minimize the reconstruction effort. Regular CT is used for larger vessels and micro-CT is used for micro vessels. The montage approach aims to stitch together images with different resolutions and orientations. A resolution-adaptable 3D image registration is developed to assemble the images. We have created vessel phantoms that consist of several tiers of bifurcating polymer tubes with diameters decreasing down to 25 μm. No previous physical flow phantom work has ventured into this small a scale. Overlapping phantom images acquired from clinical CT and micro-CT are used to verify the image registration fidelity.
A biomechanical liver model for intraoperative soft tissue registration
Stefan Suwelack, Hugo Talbot, Sebastian Röhl, et al.
Organ motion due to respiration and contact with surgical instruments can significantly degrade the accuracy of image guided surgery. In most applications the ensuing soft tissue deformations have to be compensated in order to register preoperative planning data to the patient. Biomechanical models can be used to perform an accurate registration based on sparse intraoperative sensor data. Using elasticity theory, the approach can be formulated as a boundary value problem with displacement boundary conditions. In this paper, several liver models from the literature and a new simplified model are evaluated with regard to their application to intraoperative soft tissue registration. We construct finite element models of a liver phantom using the different material laws. Thereafter, typical deformation patterns that occur during surgery are imposed by applying displacement boundary conditions. A comparative numerical study shows that the maximal registration error of all non-linear models stays below 1.1 mm, while the linear model produces errors of up to 3.9 mm. It can be concluded that linear elastic models are not suitable for registration of the liver and that a geometrically non-linear formulation has to be used. Although the stiffness parameters of the non-linear materials differ considerably, the calculated displacement fields are very similar. This suggests that a difficult patient-specific parameterization of the model might not be necessary for intraoperative soft tissue registration. We also demonstrate that the new simplified model achieves nearly the same registration accuracy as complex quasi-linear viscoelastic models.
Approach-specific multi-grid anatomical modeling for neurosurgery simulation with public-domain and open-source software
Michel A. Audette, Denis Rivière, Charles Law, et al.
We present on-going work on multi-resolution sulcal-separable meshing for approach-specific neurosurgery simulation, in conjunction with multi-grid and Total Lagrangian Explicit Dynamics finite elements. Conflicting requirements of interactive nonlinear finite elements and small structures lead to a multi-grid framework. Implications for meshing are explicit control over resolution, and prior knowledge of the intended neurosurgical approach and intended path. This information is used to define a subvolume of clinical interest, within some distance of the path and the target pathology. Restricted to this subvolume are a tetrahedralization of finer resolution, the representation of critical tissues, and the sulcal separability constraint for all mesh levels.
3D shape decomposition and comparison for gallbladder modeling
Weimin Huang, Jiayin Zhou, Jiang Liu, et al.
This paper presents an approach to gallbladder shape comparison using 3D shape modeling and decomposition. The gallbladder models can be used for shape anomaly analysis and for model comparison and selection in image-guided robotic surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation-based voxel learning and classification. To better extract the shape features, the surface mesh is further down-sampled by a decimation filter and smoothed by a Taubin algorithm, followed by an advancing front algorithm to further enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for robust saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks and the concavity, measured by the distance from the surface point to the convex hull. With a given tolerance, the 3D shape can be decomposed and represented as 3D ellipsoids, which reveal the shape topology and anomalies of a gallbladder. Features based on the decomposed shape model are proposed for gallbladder shape comparison, which can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some with normal and some with abnormal shapes. The experiments have shown that the decomposed shapes reveal important topology features.
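The concavity measure can be sketched as follows, assuming the mesh vertices are available as an (N, 3) array: for a point inside a convex polytope, the distance to the hull boundary equals the minimum distance to the hull's facet planes, so SciPy's facet equations give a per-vertex concavity directly. This is a simplified illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def concavity(vertices):
    """Per-vertex concavity: distance from each surface vertex to the
    convex hull of the surface (vertices on the hull get ~0)."""
    vertices = np.asarray(vertices, dtype=float)
    hull = ConvexHull(vertices)
    # hull.equations rows are [nx, ny, nz, d] with unit outward normals,
    # so n.p + d <= 0 for points inside the hull
    signed = vertices @ hull.equations[:, :3].T + hull.equations[:, 3]
    # distance to the nearest facet plane = distance to the hull boundary
    return np.clip(np.min(-signed, axis=1), 0.0, None)
```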
Virtual simulation of the postsurgical cosmetic outcome in patients with Pectus Excavatum
João L. Vilaça, António H. J. Moreira, Pedro L-Rodrigues, et al.
Pectus excavatum is the most common congenital deformity of the anterior chest wall, in which several ribs and the sternum grow abnormally. Nowadays, the surgical correction is carried out in children and adults through the Nuss technique. This technique has been shown to be safe, with cosmesis and the prevention of psychological problems and social stress as major drivers. To date, no application is known to predict the cosmetic outcome of pectus excavatum surgical correction. Such a tool could be used to help the surgeon and the patient when deciding the need for surgical correction. This work is a first step towards predicting the postsurgical outcome of pectus excavatum correction. Toward this goal, a point cloud of the skin surface along the thoracic wall was first determined using computed tomography (before surgical correction) and the Polhemus FastSCAN (after surgical correction). Then, a surface mesh was reconstructed from the two point clouds using a Radial Basis Function algorithm, followed by affine registration between the meshes. After registration, the surgical correction influence area (SCIA) of the thoracic wall was studied. This SCIA was used to train, test and validate artificial neural networks (ANNs) in order to predict the surgical outcome of pectus excavatum correction and to determine the degree of convergence of the SCIA in different patients. Often, the ANNs did not converge to a satisfactory solution (each patient had his or her own deformity characteristics), thus invalidating the creation of a mathematical model capable of estimating, with satisfactory results, the postsurgical outcome.
Intensity non-standardness affects computer recognition of anatomical structures
Since MR image intensities do not possess a tissue-specific numeric meaning, even in images acquired for the same subject, on the same scanner, for the same body region, and with the same pulse sequence, it is important to transform the image scale into a standard intensity scale so that, for the same body region, intensities are similar. The lack of a standard image intensity scale in MRI leads to many difficulties in tissue characterizability, image display, and analysis, including image segmentation and registration. The influence of standardization on these tasks has been documented well; however, how intensity non-standardness may affect the automatic recognition of anatomical structures for image segmentation has not been studied. Motivated by the study that we previously presented at the SPIE Medical Imaging Conference 2010 [1, 2], in this study we analyze the effects of intensity standardization on anatomical object recognition. A set of 31 scenarios of multiple objects from the ankle complex included in the model, with seven different realistic levels of non-standardness introduced, is considered for evaluation. The experimental results imply that intensity variation among scenes in an ensemble - a particular characteristic of the behavior of non-standardness - degrades object recognition performance.
A comprehensive validation of patient-specific CFD simulations of cerebral aneurysm flow with virtual angiography
Qi Sun, Alexandra Groth, Matthias Bertram, et al.
Recently, image-based computational fluid dynamics (CFD) simulations have been proposed to investigate the local hemodynamics inside human cerebral aneurysms. It is suggested that knowledge of the computed three-dimensional flow fields can be used to assist clinical risk assessment and treatment decision making. However, the reliability of CFD for accurately representing human cerebral blood flow is difficult to assess due to the impossibility of ground-truth measurements. A recently proposed virtual angiography method has been used to indirectly validate CFD results by comparing virtually constructed and clinically acquired angiograms. However, the validations are not yet comprehensive, as they lack either patient-specific boundary conditions (BCs) required for the CFD simulations or quantitative comparison methods. In this work, a simulation pipeline is built up including image-based geometry reconstruction, CFD simulations solving the dynamics of blood flow and contrast agent (CA), and virtual angiogram generation. In contrast to previous studies, patient-specific blood flow rates obtained by transcranial color-coded Doppler (TCCD) ultrasound are used to impose the CFD BCs. Quantitative measures are defined to thoroughly evaluate the correspondence between the clinically acquired and virtually constructed angiograms, and thus the reliability of the CFD simulations. Exemplarily, two patient cases are presented. Close similarities are found in terms of spatial and temporal variations of the CA distribution between acquired and virtual angiograms. In addition, for both patient cases, discrepancies of less than 15% are found for the relative root mean square errors (rRMSE) in time intensity curve (TIC) comparisons at selected characteristic positions.
Alternative statistical methods for bone atlas modelling
Traditional bone atlas modelling is carried out using linear methods such as PCA. Such linear models use a mean shape and principal modes to represent the atlas. A new shape, which is a high-dimensional data vector, is then described using this mean and a weighted combination of the principal modes. The use of alternative methods for modelling statistical atlases has not been explored very much. Recently, there has been considerable new work in the areas of multilinear and nonlinear modelling, which presents new ways of modelling high-dimensional data. In this work, we compare and contrast several linear, multilinear and nonlinear methods for bone atlas modelling.
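For reference, a minimal sketch of the linear (PCA) atlas that the alternative methods are compared against: the mean shape and principal modes are obtained from an SVD of the centered training matrix, and a new shape is described by its mode weights. Function names and the data layout are assumptions for illustration.

```python
import numpy as np

def build_pca_atlas(shapes, n_modes=5):
    """Build a linear (PCA) bone atlas.
    shapes : (n_samples, n_points * 3) matrix, each row a vectorized shape.
    Returns the mean shape, the first n_modes principal modes, and their
    variances; a new shape is approximated as mean + modes.T @ weights."""
    shapes = np.asarray(shapes, dtype=float)
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes directly
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                                 # (n_modes, n_points * 3)
    variances = (s[:n_modes] ** 2) / (shapes.shape[0] - 1)
    return mean, modes, variances

def project_shape(shape, mean, modes):
    """Describe a new shape by its mode weights (the atlas coordinates)."""
    return modes @ (np.asarray(shape, dtype=float) - mean)
```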
Poster Session: Registration
Accuracy assessment of an automatic image-based PET/CT registration for ultrasound-guided biopsies and ablations
Samuel Kadoury, Bradford J. Wood, Aradhana M. Venkatesan, et al.
The multimodal fusion of spatially tracked real-time ultrasound (US) with a prior CT scan has demonstrated clinical utility, accuracy, and positive impact upon clinical outcomes when used for guidance during biopsy and radiofrequency ablation in the treatment of cancer. Additionally, the combination of CT-guided procedures with positron emission tomography (PET) may not only enhance navigation, but also add valuable information regarding the specific location and volume of the targeted masses, which may be invisible on CT and US. The accuracy of this fusion depends on reliable, reproducible registration methods between PET and CT. This can avoid extensive manual efforts to correct the registration, which can be long and tedious in an interventional setting. In this paper, we present a registration workflow for PET/CT/US fusion by analyzing various image metrics based on normalized mutual information and cross-correlation, using both rigid and affine transformations to automatically align PET and CT. Registration is performed between the CT component of the prior PET-CT and the intra-procedural CT scan used for navigation, to maximize image congruence. We evaluate the accuracy of the PET/CT registration by computing fiducial and target registration errors using anatomical landmarks and lesion locations, respectively. We also report differences to gold-standard manual alignment as well as the root mean square errors for CT/US fusion. Ten patients with prior PET/CT who underwent ablation or biopsy procedures were selected for this study. Studies show that optimal results were obtained using a cross-correlation based rigid registration with a landmark localization error of 1.1 ± 0.7 mm using a discrete graph-minimizing scheme. We demonstrate the feasibility of automated fusion of PET/CT and its suitability for multi-modality ultrasound-guided navigation procedures.
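A minimal sketch of the two image metrics analyzed, computed on volumes resampled to a common grid; the histogram size and the particular NMI normalization are assumptions, not necessarily the authors' choices.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """Histogram-based NMI (Studholme form) between two intensity volumes
    sampled on the same grid."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

def cross_correlation(a, b):
    """Normalized cross-correlation of two volumes of equal shape."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```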
2D-3D registration using gradient-based MI for image guided surgery systems
Yeny Yim, Xuanyi Chen, Mike Wakid, et al.
Registration of preoperative CT data to intra-operative video images is necessary not only to compare the outcome of the vocal fold after surgery with the preplanned shape but also to provide image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera for endoscopic images and the virtual camera for CT scans. Even though mutual information has been successfully used to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to varying light patterns and the shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. The proposed method can emphasize the effect of the vocal fold and allow robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which leads to a result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to a more accurate registration than a single-resolution scheme.
Fast intra-operative non-linear registration of 3D-CT to tracked, selected 2D-ultrasound slices
Janine Olesch, Björn Beuthien, Stefan Heldmann, et al.
In navigated liver surgery it is an important task to align intra-operative data to pre-operative planning data. This work describes a method to register pre-operative 3D-CT data to tracked intra-operative 2D US slices. Instead of reconstructing a 3D volume out of the two-dimensional US slice sequence, we directly apply the registration scheme to the 2D slices. The advantages of this approach are manifold: we circumvent the time-consuming compounding process, we use only known information, and the complexity of the scheme reduces drastically. As the liver is a non-rigid organ, we apply non-linear techniques to take care of deformations occurring during the intervention. During surgery, computing time is a crucial issue. As the complexity of the scheme is proportional to the number of acquired slices, we devise a scheme which starts out by selecting a few "key slices" to be used in the non-linear registration scheme. This step is followed by multi-level/multi-scale strategies and fast optimization techniques. In this abstract we briefly describe the new method and show first convincing results.
Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial
E. Moult, E. C. Burdette, D. Y. Song, et al.
Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a ±10° and ±10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
A comparison of thin-plate splines with automatic correspondences and B-splines with uniform grids for multimodal prostate registration
This paper provides a comparison of spline-based registration methods applied to register interventional Trans-Rectal Ultrasound (TRUS) and pre-acquired Magnetic Resonance (MR) prostate images for needle-guided prostate biopsy. B-splines and Thin-Plate Splines (TPS) are the most prevalent spline-based approaches to achieve deformable registration. Pertaining to the strategic selection of correspondences for the TPS registration, we use an automatic method, already proposed in our previous work, to generate correspondences in the MR and US prostate images. The method exploits the prostate geometry, with the principal components of the segmented prostate as the underlying framework, and involves a triangulation approach. The correspondences are generated with successive refinements, and Normalized Mutual Information (NMI) is employed to determine the optimal number of correspondences required to achieve TPS registration. B-spline registration with successive grid refinements is applied for a meaningful comparison of the impact of the strategically chosen correspondences on the TPS registration against the uniform B-spline control grids. The experimental results are validated on 4 patient datasets. The Dice Similarity Coefficient (DSC) is used as a measure of the registration accuracy. Average DSC values of 0.97±0.01 and 0.95±0.03 are achieved for the TPS and B-spline registrations, respectively. B-spline registration is observed to be more computationally expensive than TPS registration, with average execution times of 128.09 ± 21.7 seconds and 62.83 ± 32.77 seconds, respectively, for images with a maximum width of 264 pixels and a maximum height of 211 pixels.
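The accuracy measure used above is the Dice Similarity Coefficient; a small sketch for binary prostate masks follows (mask names are assumptions).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Example (hypothetical arrays): DSC between the warped MR prostate mask
# and the TRUS prostate mask after registration
# dsc = dice_coefficient(warped_mr_mask, trus_mask)
```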
Phantom validation for ultrasound to statistical shape model registration of human pelvis
Total Hip Replacement (THR) has become a common surgical procedure in recent years, as a result of an increasingly aging population with osteoarthritis of the hip joint. Localization of the pelvic anatomical coordinate system (PaCS) is a critical step in accurate placement of the femoral prosthesis in the acetabulum in THR. Intra-operative ultrasound (US) imaging can provide a radiation-free navigation system for localization of the PaCS. However, US images are noisy and cannot provide any anatomical information beneath the bone surface due to the total reflection of the US beam at the bone-soft tissue interface. A solution to this problem is to fuse intra-operative US with pre-operative imaging or a statistical shape model (SSM) of the pelvis. Here, we propose a multi-slice-to-volume intensity-based registration of the pelvic SSM to a sparse set of 2D US images in order to localize the PaCS in the US. In this registration technique, a set of 2D slices is extracted from the pelvic SSM using the approximate location and orientation of the corresponding 2D US images. During the registration, the comparison between the SSM slices and the US images is made using an ultrasound simulation technique and a correlation-based similarity metric. We demonstrate the feasibility of our proposed approach in localizing the PaCS on five patient-based phantoms. The results indicate the necessity of including pubic symphysis landmarks in the 2D US slices in order to obtain a precise estimation of the PaCS.
3D non-rigid registration using surface and local salient features for transrectal ultrasound image-guided prostate biopsy
Xiaofeng Yang, Hamed Akbari, Luma Halig, et al.
We present a 3D non-rigid registration algorithm for the potential use in combining PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Our registration is a hybrid approach that simultaneously optimizes the similarities from point-based registration and volume matching methods. The 3D registration is obtained by minimizing the distances of corresponding points at the surface and within the prostate and by maximizing the overlap ratio of the bladder neck on both images. The hybrid approach captures deformation not only at the prostate surface and internal landmarks but also at the bladder neck region. The registration uses a soft assignment and deterministic annealing process. The correspondences are iteratively established in a fuzzy-to-deterministic approach. B-splines are used to generate a smooth non-rigid spatial transformation. In this study, we tested our registration with pre- and post-biopsy TRUS images of the same patients. Registration accuracy is evaluated using manually defined anatomic landmarks, i.e., calcifications. The root-mean-squared (RMS) value of the difference image between the reference and floating images was decreased by 62.6±9.1% after registration. The mean target registration error (TRE) was 0.88±0.16 mm, i.e., less than 3 voxels with a voxel size of 0.38×0.38×0.38 mm³ for all five patients. The experimental results demonstrate the robustness and accuracy of the 3D non-rigid registration algorithm.
GPU accelerated registration of a statistical shape model of the lumbar spine to 3D ultrasound images
We present a parallel implementation of statistical shape model registration to 3D ultrasound images of the lumbar vertebrae (L2-L4). The Covariance Matrix Adaptation Evolution Strategy optimization technique, along with the Linear Correlation of Linear Combination similarity metric, has been used to improve the robustness and capture range of the registration approach. Instantiation and ultrasound simulation have been implemented on a graphics processing unit for faster registration. Phantom studies show a mean target registration error of 3.2 mm, while 80% of all cases yield a target registration error below 3.5 mm.
Anatomically correct deformable colon phantom
James A. Norris, Michael D. Barton, Brynmor J. Davis, et al.
We describe a technique to build a soft-walled colon phantom that provides realistic lumen anatomy in computed tomography (CT) images. The technique begins with the geometry of a human colon measured during CT colonography (CTC). The three-dimensional air-filled colonic lumen is segmented and then replicated using stereolithography (SLA). The rigid SLA model includes large-scale features (e.g., haustral folds and tenia coli bands) down to small-scale features (e.g., a small pedunculated polyp). Since the rigid model represents the internal air-filled volume, a highly-pliable silicone polymer is painted onto the rigid model. This thin layer of silicone, when removed, becomes the colon wall. Small 3 mm diameter glass beads are affixed to the outer wall. These glass beads show up with high intensity in CT scans and provide a ground truth for evaluating performance of algorithms designed to register prone and supine CTC data sets. After curing, the silicone colon wall is peeled off the rigid model. The resulting colon phantom is filled with air and submerged in a water bath. CT images and intraluminal fly-through reconstructions from CTC scans of the colon phantom are compared against patient data to demonstrate the ability of the phantom to simulate a human colon.
Elastic image registration via rigid object motion induced deformation
Xiaofen Zheng, Jayaram K. Udupa, Bruce E. Hirsch
In this paper, we estimate the deformations induced on soft tissues by the rigid independent movements of hard objects and create an admixture of rigid and elastic adaptive image registration transformations. By automatically segmenting and independently estimating the movement of rigid objects in 3D images, we can maintain rigidity in bones and hard tissues while appropriately deforming soft tissues. We tested our algorithms on 20 pairs of 3D MRI datasets pertaining to a kinematic study of the flexibility of the ankle complex of normal feet as well as ankles affected by abnormalities in foot architecture and ligament injuries. The results show that elastic image registration via rigid object-induced deformation outperforms purely rigid and purely nonrigid approaches.
Correspondenceless 3D-2D registration based on expectation conditional maximization
X. Kang, R. H. Taylor, M. Armand, et al.
3D-2D registration is a fundamental task in image-guided interventions. Due to the physics of X-ray imaging, however, traditional point-based methods meet new challenges: local point features are indistinguishable, creating difficulties in establishing correspondence between 2D image feature points and 3D model points. In this paper, we propose a novel method to accomplish 3D-2D registration without known correspondences. Given a set of unmatched 3D and 2D points, this is achieved by introducing correspondence probabilities that we model as a mixture model. By casting the problem into the expectation conditional maximization framework, we can iteratively refine the registration parameters without establishing one-to-one point correspondences. The method has been tested on 100 real X-ray images. The experiments showed that the proposed method accurately estimated the rotations (< 1°) and in-plane (X-Y plane) translations (< 1 mm).
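The correspondence probabilities can be sketched as the E-step of a Gaussian mixture between detected 2D feature points and projected 3D model points; the isotropic Gaussian model, uniform outlier term, and all names are assumptions about the general idea rather than the paper's exact formulation.

```python
import numpy as np

def correspondence_probabilities(points_2d, projected_3d, sigma, outlier_weight=1e-3):
    """Soft correspondence probabilities (mixture responsibilities) between
    detected 2D feature points and 3D model points projected with the
    current pose estimate.
    points_2d    : (M, 2) detected image points
    projected_3d : (N, 2) projected model points
    Returns an (M, N) matrix whose rows are (approximately) normalized."""
    p2 = np.asarray(points_2d, float)[:, None, :]      # (M, 1, 2)
    q2 = np.asarray(projected_3d, float)[None, :, :]   # (1, N, 2)
    d2 = np.sum((p2 - q2) ** 2, axis=2)                # squared distances (M, N)
    lik = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian mixture likelihoods
    denom = lik.sum(axis=1, keepdims=True) + outlier_weight
    return lik / denom                                 # responsibilities used in the M-steps
```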
Poster Session: Segmentation
OpenCL based machine learning labeling of biomedical datasets
Oscar Amoros, Sergio Escalera, Anna Puig
In this paper, we propose a two-stage labeling method for large biomedical datasets through a parallel approach on a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or tag the different structures contained in the input data becomes imperative. Several approaches to label or segment biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised to let a non-expert user easily analyze biomedical datasets. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, allowing parallel programming paradigms to be applied on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the machine learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL. We provide numerical experiments based on large available data sets and we compare our results to CPU-based strategies in terms of time and labeling speed.
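The testing stage lends itself to a data-parallel formulation because every (sample, weak classifier) pair is evaluated independently. The following NumPy sketch mirrors that structure on the CPU; the decision-stump parameterization and names are assumptions, and it is not the authors' OpenCL kernel.

```python
import numpy as np

def adaboost_test(features, stump_feature, stump_threshold, stump_polarity, alpha):
    """Testing stage of a trained Adaboost binary classifier whose weak
    classifiers are decision stumps on single feature values.
    features        : (n_samples, n_features) features of unlabeled samples
    stump_feature   : (n_weak,) feature index used by each weak classifier
    stump_threshold : (n_weak,) decision thresholds
    stump_polarity  : (n_weak,) +1 or -1 direction of each stump
    alpha           : (n_weak,) weak-classifier weights
    Returns +1 / -1 labels given by the sign of the weighted vote."""
    vals = features[:, stump_feature]                              # (n_samples, n_weak)
    # every sample/weak-classifier pair is independent -> trivially parallel
    votes = np.where(stump_polarity * (vals - stump_threshold) > 0, 1.0, -1.0)
    return np.sign(votes @ alpha)
```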
Advanced level set segmentation of the right atrium in MR
Siqi Chen, Timo Kohlberger, Klaus J. Kirchberg
Atrial fibrillation is a common heart arrhythmia and can be effectively treated with ablation. Ablation planning requires 3D models of the patient's left atrium (LA) and/or right atrium (RA); therefore, an automatic segmentation procedure to retrieve these models is desirable. In this study, we investigate the use of advanced level set segmentation approaches to automatically segment the RA in magnetic resonance angiographic (MRA) volume images. A low contrast-to-noise ratio makes the boundary between the RA and nearby structures nearly indistinguishable. Therefore, purely data-driven segmentation approaches such as watershed and Chan-Vese methods are bound to fail. Incorporating training shapes through PCA modeling to constrain the segmentation is one popular solution, and is also used in our segmentation framework. The shape parameters from PCA are optimized with a global histogram-based energy model. However, since the shape parameters span a much smaller space, they cannot capture fine details of the shape. Therefore, we employ a second refinement step after the shape-based segmentation stage, which closely follows recent work on localized appearance model based techniques. The local appearance model is established through a robust point tracking mechanism and is learned from landmarks embedded on the surface of the training shapes. The key contribution of our work is the combination of a statistical shape prior and a localized appearance prior for level set segmentation of the right atrium from MRA. We test this two-step segmentation framework on porcine RA to verify the algorithm.
Automatic 3D segmentation of ultrasound images using atlas registration and statistical texture prior
Xiaofeng Yang, David Schuster, Viraj Master, et al.
We are developing a molecular image-directed, 3D ultrasound-guided, targeted biopsy system for improved detection of prostate cancer. In this paper, we propose an automatic 3D segmentation method for transrectal ultrasound (TRUS) images, which is based on multi-atlas registration and a statistical texture prior. The atlas database includes registered TRUS images from previous patients and their segmented prostate surfaces. Three orthogonal Gabor filter banks are used to extract texture features from each image in the database. Patient-specific Gabor features from the atlas database are used to train kernel support vector machines (KSVMs) and then to segment the prostate image from a new patient. The segmentation method was tested on TRUS data from 5 patients. The average surface distance between our method and manual segmentation is 1.61 ± 0.35 mm, indicating that the atlas-based automatic segmentation method works well and could be used for 3D ultrasound-guided prostate biopsy.
Poster Session: Visualization
Quantitative wound healing measurement and monitoring system based on an innovative 3D imaging system
Steven Yi, Arthur Yang, Gongjie Yin, et al.
In this paper, we report a novel three-dimensional (3D) wound imaging system (hardware and software) under development at Technest Inc. The system is designed to perform accurate 3D measurement and modeling of a wound and track its healing status over time. Accurate measurement and tracking of wound healing enables physicians to assess, document, improve, and individualize the treatment plan given to each wound patient. In current wound care practice, physicians often visually inspect or roughly measure the wound to evaluate the healing status. This is not an optimal practice, since human vision lacks precision and consistency. In addition, quantifying slow or subtle changes through perception is very difficult. As a result, an instrument that quantifies both skin color and geometric shape variations would be particularly useful in helping clinicians to assess healing status and judge the effect of hyperemia, hematoma, local inflammation, secondary infection, and tissue necrosis. Once fully developed, our 3D imaging system will have several unique advantages over traditional methods for monitoring wound care: (a) non-contact measurement; (b) fast and easy to use; (c) up to 50-micron measurement accuracy; (d) 2D/3D quantitative measurements; (e) a handheld device; and (f) reasonable cost (< $1,000).
Between developable surfaces and circular cone splines: curved slices of 3D volumes
Public visualization of high-quality medical information has been widely available since the creation of the Visible Human Project in the late 1990s. We discuss the extraction of information from 3D volumes along curved slices, with emphasis on those that can be displayed on the plane without deformation. Special attention is given to a dental volume containing the sixteen teeth of the upper human jaw. We review several approaches to display information along curved slices contained within the 3D data set.
A unified framework for voxel classification and triangulation
A unified framework for voxel classification and triangulation for medical images is presented. Given volumetric data, each voxel is labeled by a two-dimensional classification function based on voxel intensity and gradient. A modified Constrained Elastic Surface Net is integrated into the classification function, allowing the surface mesh to be generated in a single step. The modification to the Constrained Elastic Surface Net includes additional triangulation cases, which reduce visual artifacts, and a surface-node relaxation criterion based on linear regression, which improves visual appearance and preserves the enclosed volume. By carefully designing the two-dimensional classification function, surface meshes for different anatomical structures can be generated in a single process. The framework is implemented on the GPU, allowing the voxel classification to be rendered and visualized in near real time.
An interactive ROI tool for DTI fiber tracking
Florian Weiler, Horst K. Hahn
Fiber tracking is one of the most well-established clinical analysis techniques for Diffusion Tensor Imaging (DTI) data. It facilitates the reconstruction of anatomically known white matter structures by tracing trajectories on a tensor field obtained from diffusion-weighted MR images. A crucial step when using this technique is the placement and shape of the regions of interest (ROIs) used to identify the structures in question. Typically, free-hand contours or simple geometric shapes like rectangles are placed in regions where a given structure can be identified using the color-coded DTI representation. However, such approaches result in a high variability of the resulting tracts and usually require additional filtering and placement of multiple ROIs. Also, the generation of accurate ROIs using a free-hand tool requires a significant amount of interaction time. We present a method which allows for interactive generation of anatomically meaningful ROIs for DTI fiber tracking based on geometric similarities of the underlying tensor field. The method works similarly to the magic-wand tool known from image editing software, creating reasonable, fully image-based ROIs with a single mouse click.
SimITK: rapid ITK prototyping using the Simulink visual programming environment
A. W. L. Dickinson, P. Mousavi, D. G. Gobbi, et al.
The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. The package is a collection of C++ libraries that can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily templated C++ nature of ITK, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as these data are used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial CT data as well as a 3D MR-to-CT registration workflow are presented as a proof of concept.
Multi-dimensional transfer functions for effective visualization of streaming ultrasound and elasticity images
David Mann, Jesus J. Caban, Philipp J. Stolka, et al.
The low cost and minimal health risks associated with ultrasound (US) have made ultrasonic imaging a widely accepted method for diagnostic and image-guided procedures. Despite the existence of 3D ultrasound probes, most analysis and diagnostic procedures are done by studying the B-mode images. Currently, multiple ultrasound probes include 6-DOF sensors that can provide positioning information. Such tracking information can be used to reconstruct a 3D volume from a set of 2D US images. Recent advances in ultrasound imaging have also shown that, directly from the streaming radio frequency (RF) data, it is possible to obtain additional information about the anatomical region under consideration, including its elasticity properties. This paper presents a generic framework that takes advantage of current graphics hardware to create a low-latency system for visualizing streaming US data while combining multiple tissue attributes into a single illustration. In particular, we introduce a framework that enables real-time reconstruction and interactive visualization of streaming data while enhancing the illustration with elasticity information. The visualization module uses two-dimensional transfer functions (2D TFs) to more effectively fuse and map B-mode and strain values into specific opacity and color values. On commodity hardware, our framework can simultaneously reconstruct, render, and provide user interaction at over 15 fps. Results with phantom and real-world medical datasets show the advantages and effectiveness of our technique with ultrasound data. In particular, our results show how two-dimensional transfer functions can be used to more effectively identify, analyze and visualize lesions in ultrasound images.
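The 2D TF lookup can be sketched as a simple table indexed by normalized B-mode and strain values; the table layout and names are assumptions rather than the authors' GPU implementation.

```python
import numpy as np

def apply_2d_transfer_function(bmode, strain, tf_rgba):
    """Map paired B-mode and strain values to color and opacity through a
    two-dimensional transfer function.
    bmode, strain : arrays in [0, 1] sampled on the same grid
    tf_rgba       : (nb, ns, 4) RGBA lookup table indexed by (B-mode, strain)
    Returns per-sample RGBA values with shape bmode.shape + (4,)."""
    nb, ns, _ = tf_rgba.shape
    bi = np.clip((np.asarray(bmode) * (nb - 1)).astype(int), 0, nb - 1)
    si = np.clip((np.asarray(strain) * (ns - 1)).astype(int), 0, ns - 1)
    return tf_rgba[bi, si]
```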
Efficient 3D rendering for web-based medical imaging software: a proof of concept
Diego Cantor-Rivera, Robert Bartha, Terry Peters
Medical Imaging Software (MIS) found in research and in clinical practice, such as in Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS), has not been able to take full advantage of the Internet as a deployment platform. MIS is usually tightly coupled to algorithms that have substantial hardware and software requirements. Consequently, MIS is deployed on thick clients, which usually leads project managers to allocate more resources during the deployment phase of the application than would be allocated if the application were deployed through a web interface. To minimize the costs associated with this scenario, many software providers use or develop plug-ins to provide the delivery platform (the internet browser) with the features to load, interact with and analyze medical images. Nevertheless, no standard means of achieving this goal has been successful so far. This paper presents a study of WebGL as an alternative to plug-in development for efficient rendering of 3D medical models and DICOM images. WebGL is a technology that gives the internet browser native access to the local graphics hardware. Because it is based on OpenGL, a widely accepted graphics industry standard, WebGL is being implemented in most of the major commercial browsers. After a discussion of the details of the technology, a series of experiments is presented to determine the operational boundaries within which WebGL is adequate for MIS. A comparison with current alternatives is also addressed. Finally, conclusions and future work are discussed.
Efficient ray casting with LF-Minmax map in CUDA
Ray casting is the most frequently used algorithm in direct volume rendering for displaying medical data, although it is computationally very expensive. Recent hardware improvements have allowed ray casting to be used in real time; however, there is still room for performance gains by taking advantage of the recent development of general-purpose graphics processing units (GPUs). The purpose of this paper is to implement volume ray casting with the Compute Unified Device Architecture (CUDA) to obtain higher rendering performance. The experimental results show that the new algorithm is up to 15 times faster than the conventional CPU-based ray casting algorithm.
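One common ingredient of fast GPU ray casters is a per-block min-max map used for empty-space skipping; the following CPU sketch shows how such a map can be built and queried against an opacity transfer function. It is an assumption-laden illustration (block size, lookup-table indexing), not the paper's LF-Minmax structure or its CUDA kernel.

```python
import numpy as np

def build_minmax_map(volume, block=8):
    """Per-block min/max map over a 3D volume (edge-padded to a multiple
    of the block size).  Returns two arrays of per-block minima and maxima."""
    volume = np.asarray(volume)
    d, h, w = volume.shape
    pad = ((0, (-d) % block), (0, (-h) % block), (0, (-w) % block))
    v = np.pad(volume, pad, mode='edge')
    b = v.reshape(v.shape[0] // block, block,
                  v.shape[1] // block, block,
                  v.shape[2] // block, block)
    return b.min(axis=(1, 3, 5)), b.max(axis=(1, 3, 5))

def block_is_skippable(vmin, vmax, opacity_tf):
    """True for blocks whose whole intensity range maps to zero opacity,
    assuming intensities index directly into the opacity lookup table."""
    cum = np.concatenate(([0.0], np.cumsum(opacity_tf)))
    lo = np.clip(np.floor(vmin).astype(int), 0, len(opacity_tf) - 1)
    hi = np.clip(np.ceil(vmax).astype(int), 0, len(opacity_tf) - 1)
    return (cum[hi + 1] - cum[lo]) == 0.0
```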
An interactive exploded view generation using block-based re-rendering
D. S. Kang, B. S. Shin
Exploded view generation for volumetric objects is a useful method in surgical simulation, but it is very hard to perform in real time. We present an interactive method to determine regions of interest and to render a scene with varying volume datasets in real time. In general, since conventional methods are designed to solve the problem of occlusion of sub-volumes, they do not consider performance. In particular, exploded view generation methods have difficulty rendering a scene in real time even when highly optimized, because they perform volume rendering only after defining rules to split the original volume and constraints to order the sub-volumes. We present an interactive cutting operation using GPU-based parallel processing and real-time rendering using a block-based re-rendering method.