Proceedings Volume 10132

Medical Imaging 2017: Physics of Medical Imaging

Thomas G. Flohr, Joseph Y. Lo, Taly Gilat Schmidt

Volume Details

Date Published: 5 June 2017
Contents: 28 Sessions, 199 Papers, 65 Presentations
Conference: SPIE Medical Imaging 2017
Volume Number: 10132

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10132
  • Tomosynthesis and Mammography
  • Detectors
  • Joint Session with MI101 and MI105: Task-based Assessment in CT
  • Cone Beam CT I: New Technologies and Corrections
  • CT: Reconstruction and Algorithms
  • Keynote and Radiation Dose
  • Photon Counting I: Instrumentation
  • Cone Beam CT II: Optimization and Reconstruction
  • Phase Contrast Imaging
  • Photon Counting II: Algorithms
  • Nuclear Medicine and Magnetic Resonance Imaging
  • New Systems and Technologies
  • Modeling and Simulations I: CT
  • Modeling and Simulations II: Breast Imaging
  • Breast Imaging: Tomosynthesis
  • Poster Session: Cone-Beam CT
  • Poster Session: CT I: New Technologies and Corrections
  • Poster Session: CT II: Image Reconstruction and Artifact Reduction
  • Poster Session: Photon Counting: Spectral CT, Instrumentation, and Algorithms
  • Poster Session: Detectors
  • Poster Session: Radiation Dose
  • Poster Session: Mammography and Breast Tomosynthesis
  • Poster Session: New Systems and Technologies
  • Poster Session: Nuclear Medicine and Magnetic Resonance Imaging
  • Poster Session: Observers, Modeling, and Phantoms
  • Poster Session: Phase Contrast and Dark Field Imaging
  • Poster Session: Radiography: X-Ray Imaging, Fluoroscopy, and Tomosynthesis
Front Matter: Volume 10132
Front Matter: Volume 10132
This PDF file contains the front matter associated with SPIE Proceedings Volume 10132, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Tomosynthesis and Mammography
GPU-accelerated compressed-sensing (CS) image reconstruction in chest digital tomosynthesis (CDT) using CUDA programming
A compressed-sensing (CS) technique has been rapidly adopted in the medical imaging field for retrieving volumetric data from highly under-sampled projections. Among its many variant forms, the CS technique based on a total-variation (TV) regularization strategy shows fairly reasonable results in cone-beam geometry. In this study, we implemented the TV-based CS image reconstruction strategy in our prototype chest digital tomosynthesis (CDT) R/F system. Because solving the cost function is an iterative, time-consuming process, we accelerated our algorithm with parallel computing on graphics processing units (GPUs) using compute unified device architecture (CUDA) programming. To benchmark the algorithmic performance of the proposed CS algorithm, conventional filtered back-projection (FBP) and simultaneous algebraic reconstruction technique (SART) reconstruction schemes were also studied. The results indicated that CS produced contrast-to-noise ratios (CNRs) in the physical phantom images (Teflon region-of-interest) that were factors of 3.91 and 1.93 higher than in the FBP and SART images, respectively. The resulting human chest phantom images, including lung nodules of different diameters, also showed better visual appearance with CS. Our GPU-accelerated CS reconstruction scheme produced volumetric data up to 80 times faster than the corresponding CPU implementation. The total elapsed time to produce 50 coronal planes with a 1024×1024 image matrix from 41 projection views was 216.74 seconds for the proposed CS algorithm on the GPU, which fits the clinically feasible time (~3 min). Consequently, our results demonstrate that the proposed CS method has the potential for additional dose reduction in digital tomosynthesis while maintaining reasonable image quality at clinically practical speeds.
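As a generic illustration of the TV-regularized CS formulation described above (not the authors' CUDA implementation), the sketch below minimizes a data-fidelity term plus a smoothed total-variation penalty by plain gradient descent, with an arbitrary system matrix `A` standing in for the tomosynthesis forward projector; all names and step sizes are assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed (isotropic) total-variation term for a 2D image."""
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    # Negative divergence of the normalized gradient field.
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def cs_reconstruct(A, b, shape, n_iter=100, step=1e-3, lam=0.1):
    """Minimize (1/2)||Ax - b||^2 + lam * TV(x) by gradient descent."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        data_grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)
        x -= step * (data_grad + lam * tv_grad(x))
        x = np.clip(x, 0.0, None)  # non-negativity constraint
    return x
```

In practice the gradient step would be interleaved with SART-style updates and run per-slice on the GPU; the structure of the iteration is the same.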
Stationary intraoral tomosynthesis for dental imaging
Christina R. Inscoe, Gongting Wu, Danai Elena Soulioti, et al.
Despite recent advances in dental radiography, the diagnostic accuracies for some of the most common dental diseases have not improved significantly, and in some cases remain low. Intraoral x-ray is the most commonly used x-ray diagnostic tool in dental clinics. However, it suffers from the typical limitations of a 2D imaging modality, including structure overlap. Cone-beam computed tomography (CBCT) uses a high radiation dose and suffers from image artifacts and relatively low resolution. The purpose of this study is to investigate the feasibility of developing a stationary intraoral tomosynthesis (s-IOT) system using spatially distributed carbon nanotube (CNT) x-ray array technology, and to evaluate its diagnostic accuracy compared to conventional 2D intraoral x-ray. A bench-top s-IOT device was constructed using a linear CNT-based x-ray source array and a digital intraoral detector. Image reconstruction was performed using an iterative reconstruction algorithm. Studies were performed to optimize the imaging configuration. For evaluation of s-IOT's diagnostic accuracy, images of a dental quality assurance phantom and extracted human tooth specimens were acquired. Results show that s-IOT increases the diagnostic sensitivity for caries compared to intraoral x-ray at a comparable dose level.
An atlas-based organ dose estimator for tomosynthesis and radiography
Jocelyn Hoye, Yakun Zhang, Greeshma Agasthya, et al.
The purpose of this study was to provide patient-specific organ dose estimation based on an atlas of human models for twenty tomosynthesis and radiography protocols. The study utilized a library of 54 adult computational phantoms (age: 18-78 years, weight: 52-117 kg) and a validated Monte Carlo simulation (PENELOPE) of a tomosynthesis and radiography system to estimate organ dose. Positioning of patient anatomy was based on radiographic positioning handbooks. The field of view for each exam was calculated to include relevant organs per protocol. Through simulations, the energy deposited in each organ was binned to estimate normalized organ doses, which were compiled into a reference database. The database can be used as the basis for a dose calculator that predicts patient-specific organ dose values based on kVp, mAs, exposure in air, and patient habitus for a given protocol. As an example of the utility of this tool, dose to an organ was studied as a function of average patient thickness in the field of view for a given exam and as a function of body mass index (BMI). For tomosynthesis, organ doses can also be studied as a function of x-ray tube position. This work developed comprehensive information on organ dose dependencies across tomosynthesis and radiography. Organ dose generally decreased exponentially with increasing patient size, with a dependence that was highly protocol specific. There was a wide range of variability in organ dose across the patient population, which needs to be incorporated in the metrology of organ dose.
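The exponential dose-size dependence reported above can be captured with a simple log-linear fit. The sketch below is illustrative only: the two-parameter model D(t) = D0·exp(-μ·t) and the function name are assumptions, not the authors' dose calculator, which also accounts for kVp, mAs, and protocol.

```python
import numpy as np

def fit_exponential_dose(thickness_cm, organ_dose_mgy):
    """Fit D(t) = D0 * exp(-mu * t) by least squares in log space.

    Returns (D0, mu), where mu is the effective decay coefficient per cm."""
    t = np.asarray(thickness_cm, float)
    log_d = np.log(np.asarray(organ_dose_mgy, float))
    slope, intercept = np.polyfit(t, log_d, 1)   # linear fit to log(dose)
    return np.exp(intercept), -slope
```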
Lesion characterization in spectral photon-counting tomosynthesis
It has previously been shown that 2D spectral mammography can be used to discriminate between (likely benign) cystic and (potentially malignant) solid lesions in order to reduce unnecessary recalls in mammography. One limitation of the technique is, however, that the composition of overlapping tissue needs to be interpolated from a region surrounding the lesion. The purpose of this investigation was to demonstrate that lesion characterization can be done with spectral tomosynthesis, and to investigate whether the 3D information available in tomosynthesis can reduce the uncertainty from the interpolation of surrounding tissue. A phantom experiment was designed to simulate a cyst and a tumor, where the tumor was overlaid with a structure that made it mimic a cyst. In 2D, the two targets appeared similar in composition, whereas spectral tomosynthesis revealed the exact compositional difference. However, the loss of discrimination signal due to spread from the plane of interest was of the same strength as the reduction of anatomical noise. A preliminary investigation on clinical tomosynthesis images of solid lesions yielded results that were consistent with the phantom experiments, but still to some extent inconclusive. We conclude that lesion characterization is feasible in spectral tomosynthesis, but more data, as well as refinement of the calibration and discrimination algorithms, are needed to draw final conclusions about the benefit compared to 2D.
Pipeline for effective denoising of digital mammography and digital breast tomosynthesis
Lucas R. Borges, Predrag R. Bakic, Alessandro Foi, et al.
Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that considers the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and the frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
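A common choice of VST for signal-dependent quantum noise is the Anscombe transform. The sketch below is a generic stand-in for the pipeline idea, not the authors' method (which also models detector offset and flat-fielding): stabilize the Poisson noise, apply a simple Gaussian-domain smoother in place of a state-of-the-art denoiser, then invert.

```python
import numpy as np

def anscombe(z):
    """Forward Anscombe VST: Poisson data -> approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(z + 3.0 / 8.0)

def inverse_anscombe(d):
    """Simple algebraic inverse (slightly biased at low counts)."""
    return (d / 2.0) ** 2 - 3.0 / 8.0

def _smooth1d(r, kernel):
    # Edge-replicating 1D moving average (stand-in for a real denoiser).
    pad = len(kernel) // 2
    return np.convolve(np.pad(r, pad, mode="edge"), kernel, mode="valid")

def denoise_vst(img, k=5):
    """Denoise a Poisson-noise image: VST, separable smoothing, inverse VST."""
    d = anscombe(np.asarray(img, float))
    kernel = np.ones(k) / k
    d = np.apply_along_axis(_smooth1d, 0, d, kernel)
    d = np.apply_along_axis(_smooth1d, 1, d, kernel)
    return inverse_anscombe(d)
```

After the forward transform the noise is approximately signal-independent, so a filter designed for additive Gaussian noise no longer under-filters bright pixels and over-smooths dark ones, which is exactly the failure mode the pipeline avoids.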
Detectors
Signal and noise characteristics of a CdTe-based photon counting detector: cascaded systems analysis and experimental studies
Xu Ji, Ran Zhang, Yongshuai Ge, et al.
Recent advances in single photon counting detectors (PCDs) are opening up new opportunities in medical imaging. However, the performance of PCDs is not flawless. Problems such as charge sharing may deteriorate their performance. This work studied the dependence of the signal and noise properties of a cadmium telluride (CdTe)-based PCD on the charge sharing effect and on the anti-charge sharing (ACS) capability offered by the PCD. Through both serial and parallel cascaded systems analysis, a theoretical model was developed to trace the origin of charge sharing in CdTe-based PCDs, which is primarily related to remote K-fluorescence re-absorption and spatial spreading of the charge cloud. The ACS process was modeled as a sub-imaging stage prior to the energy thresholding stage, and its impact on the noise power spectrum (NPS) of the PCD can be qualitatively determined by the theoretical model. To validate the theoretical model, experimental studies with a CdTe-based PCD system (XC-FLITE X1, XCounter AB) were performed. Two x-ray radiation conditions, an RQA-5 beam and a 40 kVp beam, were used for the NPS measurements. Both theoretical predictions and experimental results showed that ACS makes the NPS of the CdTe-based PCD flatter, which corresponds to a reduced noise correlation length. The flatness of the NPS is further boosted by increasing the energy threshold or reducing the x-ray energy, both of which reduce the likelihood of registering multiple counts from the same incident x-ray photon.
SWAD: transient conductivity and pulse-height spectrum
Photon counting detectors (PCDs) with energy discrimination capabilities have the potential for improved detector performance over conventional energy integrating detectors. Additionally, PCDs are capable of advanced imaging techniques such as material decomposition with a single exposure, which may have significant impact in breast imaging applications. Our goal is to develop a large-area amorphous selenium (a-Se) photon counting detector. By using our novel direct conversion field-Shaping multi-Well Avalanche Detector (SWAD) structure, the inherent limitations of low charge conversion gain and low carrier mobility in a-Se can be overcome. In this work we developed a spatio-temporal charge transport model to investigate the effects of charge sharing, energy loss, and pulse pileup for SWAD. Using a monoenergetic 20 keV source, we found that 32% of primary interactions have K-fluorescence emissions that escape the target pixel, 62.5% of which are reabsorbed in neighboring pixels, while 37.5% escape the detector entirely for a 100 μm × 100 μm pixel size. Simulated pulse height spectra for an input count rate of 50,000 counts/s/pixel with a 2 μs dead time were also generated, showing a photopeak FWHM of 2.6 keV with ~10% pulse pileup. Additionally, we present the first time-of-flight (TOF) measurements from prototype SWAD samples, showing successful unipolar time differential (UTD) charge sensing. Our simulation and initial experimental results show that SWAD is a promising route to a large-area a-Se based PCD for breast imaging applications.
Direct measurement of Lubberts effect in CsI:Tl scintillators using single x-ray photon imaging
Adrian Howansky, A. R. Lubinsky, S. K. Ghose, et al.
The imaging performance of an indirect flat panel detector (I-FPD) is fundamentally limited by that of its scintillator. The scintillator's modulation transfer function (MTF) varies as a function of the depth of x-ray interaction in the layer, due to differences in the lateral spread of light before detection by the optical sensor. This variation degrades the spatial frequency-dependent detective quantum efficiency (DQE(f)) of I-FPDs, and is quantified by the Lubberts effect. The depth-dependent MTFs of various scintillators used in I-FPDs have been estimated using Monte Carlo simulations, but have never been measured directly. This work presents the first experimental measurements of the depth-dependent MTF of thallium-doped cesium iodide (CsI:Tl) and terbium-doped Gd2O2S (GOS) scintillators with thicknesses ranging from 200 to 1000 μm. Light bursts from individual x-ray interactions occurring at known, fixed depths within a scintillator are imaged using an ultra-high-sensitivity II-EMCCD (image-intensifier, electron-multiplying charge-coupled device) camera. X-ray interaction depth in the scintillator is localized using a micro-slit beam of parallel synchrotron radiation (32 keV), and varied by translation in 50 ± 1 μm depth intervals. Fourier analysis of the imaged light bursts is used to deduce the MTF versus x-ray interaction depth z. Measurements of MTF(z,f) are used to calculate the presampling MTF(f) for RQA-M3, RQA-5, and RQA-9 beam qualities and compared with conventional slanted-edge measurements. Images of the depth-varying light bursts are used to derive each scintillator's Lubberts function for a 32 keV beam.
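The Fourier step in such an analysis — turning an imaged light-burst profile (a line-spread-like function) into an MTF — can be sketched generically; this is an illustration of the standard LSF-to-MTF computation, not the authors' analysis code.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """MTF estimate from a sampled line spread function.

    Returns (frequency axis in cycles/mm, MTF normalized to 1 at f = 0)."""
    lsf = np.asarray(lsf, float)
    lsf = lsf - lsf[[0, -1]].mean()      # remove residual baseline offset
    otf = np.abs(np.fft.rfft(lsf))       # magnitude is insensitive to LSF position
    f = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
    return f, otf / otf[0]
```

Repeating this per interaction depth z yields the MTF(z,f) family from which the Lubberts function is derived.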
Exploration of strategies for implementation of screen-printed mercuric iodide converters in direct detection AMFPIs for digital breast tomosynthesis
Digital breast tomosynthesis (DBT) has become an increasingly important tool in the diagnosis of breast disease. For those DBT imaging systems based on active matrix flat-panel imager (AMFPI) arrays, the incident radiation is detected directly or indirectly by means of an a-Se or CsI:Tl x-ray converter, respectively. While all AMFPI DBT devices provide clinically useful volumetric information, their performance is limited by the relatively modest average signal generated per interacting x ray by present converters compared to the electronic additive noise of the system. To address this constraint, we are pursuing the development of a screen-printed form of mercuric iodide (SP HgI2), which has demonstrated considerably higher sensitivities (i.e., larger average signal per interacting x ray) than those of conventional a-Se and CsI:Tl converters, as well as impressive DQE and MTF performance under mammographic irradiation conditions. A converter offering such enhanced sensitivity would greatly improve signal-to-noise performance and facilitate quantum-limited imaging down to significantly lower exposures than present AMFPI DBT systems allow. However, before this novel converter material can be implemented practically, challenges associated with SP HgI2 must be addressed. Most significantly, high levels of charge trapping (which lead to image lag as well as fall-off in DQE at higher exposures) need to be reduced, while improving the uniformity in pixel-to-pixel signal response and maintaining low dark current and otherwise favorable DQE performance. In this paper, a pair of novel strategies for overcoming the challenge of charge trapping in SP HgI2 converters are described, and initial results from empirical and computational studies of these strategies are reported.
Temporal imaging for accurate time, space, and energy localization of photoelectric events in monolithic scintillators
In this communication, we propose an original temporal imaging concept for accurate spatio-temporal localization of scintillation events within a monolithic scintillator and a digital Si-PM matrix. Jointly analyzing the light distribution and the arrival time distribution of the first detected photons, it was possible to better recognize a photoelectric event and to accurately localize it in space, time and energy.
Towards a high sensitivity small animal PET system based on CZT detectors (Conference Presentation)
Shiva Abbaszadeh, Craig Levin
Small animal positron emission tomography (PET) is a biological imaging technology that allows non-invasive interrogation of internal molecular and cellular processes and mechanisms of disease. New PET molecular probes with high specificity are under development to target, detect, visualize, and quantify subtle molecular and cellular processes associated with cancer, heart disease, and neurological disorders. However, the limited uptake of these targeted probes leads to significant reduction in signal. There is a need to advance the performance of small animal PET system technology to reach its full potential for molecular imaging. Our goal is to assemble a small animal PET system based on CZT detectors and to explore methods to enhance its photon sensitivity. In this work, we reconstruct an image from a phantom using a two-panel subsystem consisting of six CZT crystals in each panel. For image reconstruction, coincidence events with energy between 450 and 570 keV were included. We are developing an algorithm to improve sensitivity of the system by including multiple interaction events.
Joint Session with MI101 and MI105: Task-based Assessment in CT
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is collapsed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g., diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology toward better quantification accuracy and lower radiation dose.
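The deconvolution at the heart of CTP can be posed as a regularized linear inverse problem. The sketch below uses Tikhonov regularization (one common choice; the regularization strength `lam` plays the role of the regularization parameter discussed above) to recover the flow-scaled residue function from a tissue curve and an arterial input function (AIF); it is a simplification, not the authors' framework.

```python
import numpy as np

def deconvolve_ctp(aif, tissue, dt, lam=1e-2):
    """Tikhonov-regularized deconvolution: tissue(t) = dt * conv(aif, k)(t).

    Returns k(t) = CBF * R(t) and the CBF estimate max(k)."""
    aif = np.asarray(aif, float)
    n = len(aif)
    # Lower-triangular (causal) convolution matrix built from the AIF.
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    rhs = A.T @ np.asarray(tissue, float)
    k = np.linalg.solve(A.T @ A + lam * np.eye(n), rhs)
    return k, float(k.max())
```

Larger `lam` suppresses noise but biases the peak of k(t) downward, which is exactly the regularization-accuracy trade-off the cascaded systems analysis quantifies.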
Joint optimization of fluence field modulation and regularization in task-driven computed tomography
Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d’) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
Pushing the boundaries of diagnostic CT systems for high spatial resolution imaging tasks
Juan P. Cruz-Bastida, Daniel Gomez-Cardona, John W. Garrett, et al.
In a previous work [Cruz-Bastida et al Med. Phys. 43, 2399 (2016)], the spatial resolution performance of a new High-Resolution (Hi-Res) multi-detector row CT (MDCT) scan mode and the associated High Definition (HD) reconstruction kernels was systematically characterized. The purpose of the present work was to study the noise properties of the Hi-Res scan mode and the joint impact of spatial resolution and noise characteristics on high contrast and high spatial resolution imaging tasks. Using a physical phantom and a diagnostic MDCT system, equipped with both Hi-Res and conventional scan modes, noise power spectrum (NPS) measurements were performed at 8 off-centered positions (0 to 14 cm with an increment of 2 cm) for 8 non-HD kernels and 7 HD kernels. An in vivo rabbit experiment was then performed to demonstrate the potential clinical value of the Hi-Res scan mode. Without the HD kernels, the Hi-Res scan mode preserved the shape of the NPS and slightly increased noise magnitude across all object positions. The combined use of the Hi-Res scan mode and HD kernels led to a greater noise increase and pushed the NPS towards higher frequencies, particularly for those edge-preserving or edge-enhancing HD kernels. Results of the in vivo rabbit study demonstrate important trade-offs between spatial resolution and noise characteristics. Overall, for a given high contrast and high spatial resolution imaging task (bronchi imaging), the benefit of spatial resolution improvement introduced by the Hi-Res scan mode outweighs the potential noise amplification, leading to better overall imaging performance for both centered and off-centered positions.
Practical implementation of channelized hotelling observers: effect of ROI size
Andrea Ferrero, Christopher P. Favazza, Lifeng Yu, et al.
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO’s performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO’s performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
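The Gabor-channel construction and d' computation underlying such a study can be sketched generically; the channel parameters and function names below are assumed forms for illustration, not the authors' parameterized ROI-size model.

```python
import numpy as np

def gabor_channel(size, freq, theta, sigma):
    """One real-valued Gabor channel over a square ROI (freq in cycles/pixel)."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g.ravel()

def cho_detectability(signal_rois, noise_rois, channels):
    """Detectability index d' of a channelized Hotelling observer."""
    U = np.stack(channels, axis=1)                   # (pixels, n_channels)
    v1 = np.array([U.T @ np.ravel(r) for r in signal_rois])
    v0 = np.array([U.T @ np.ravel(r) for r in noise_rois])
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))          # pooled channel covariance
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

Shrinking `size` truncates the low-frequency Gabor channels, which is the mechanism behind the minimum ROI size reported above.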
Cone Beam CT I: New Technologies and Corrections
Task-driven orbit design and implementation on a robotic C-arm system for cone-beam CT
S. Ouadah, M. Jacobson, J. W. Stayman, et al.
Purpose: This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated.

Methods: We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a “circle + arc” orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit.

Results: For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83.

Conclusions: This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
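Detectability indices of this kind are computed by integrating a Fourier-domain task function against the system's transfer function and noise power spectrum. A minimal non-prewhitening version is sketched below; the actual framework above uses a model-based d' for penalized-likelihood reconstruction, so this is a simplified stand-in over a discrete frequency grid (bin volume omitted, as it cancels only up to a constant).

```python
import numpy as np

def detectability_npw(mtf, nps, task):
    """Non-prewhitening detectability index d' on a discrete frequency grid.

    mtf, nps, task: same-shaped arrays over spatial frequency.
    d'^2 = [sum (MTF*W)^2]^2 / sum [(MTF*W)^2 * NPS]."""
    num = np.sum((mtf * task) ** 2) ** 2
    den = np.sum((mtf * task) ** 2 * nps)
    return float(np.sqrt(num / den))
```

An orbit (or fluence pattern) that boosts MTF and lowers NPS at the frequencies where the task function is large raises d', which is what the maxi-min orbit optimization exploits.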
Geometric calibration using line fiducials for cone-beam CT with general, non-circular source-detector trajectories
M. W. Jacobson, M. Ketcha, A. Uneri, et al.
Purpose: Traditional BB-based geometric calibration methods for cone-beam CT (CBCT) rely strongly on foreknowledge of the scan trajectory shape. This is a hindrance to the implementation of variable trajectory CBCT systems, normally requiring a dedicated calibration phantom or software algorithm for every scan orbit of interest. A more flexible method of calibration is proposed here that accommodates multiple orbit types – including strongly noncircular trajectories – with a single phantom and software routine.

Methods: The proposed method uses a calibration phantom consisting of multiple line-shaped wire segments. Geometric models relating the 3D line equations of the wires to the 2D line equations of their projections are used as the basis for system geometry estimation. This method was tested using a mobile C-arm CT system and comparisons were made to standard BB-based calibrations. Simulation studies were also conducted using a sinusoid-on-sphere orbit. Calibration performance was quantified in terms of Point Spread Function (PSF) width and back projection error. Visual image quality was assessed with respect to spatial resolution in trabecular bone in an anthropomorphic head phantom.

Results: The wire-based calibration method performed equal to or better than BB-based calibrations in all evaluated metrics. For the sinusoidal scans, the method provided reliable calibration, validating its application to non-circular trajectories. Furthermore, the ability to improve image quality using non-circular orbits in conjunction with this calibration method was demonstrated.

Conclusion: The proposed method has been shown feasible for conventional circular CBCT scans and offers a promising tool for non-circular scan orbits that can improve image quality, reduce dose, and extend field of view.
Shading correction for cone-beam CT in radiotherapy: validation of dose calculation accuracy using clinical images
T. E. Marchant, K. D. Joshi, C. J. Moore
Cone-beam CT (CBCT) images are routinely acquired to verify patient position in radiotherapy (RT), but are typically not calibrated in Hounsfield Units (HU) and feature non-uniformity due to X-ray scatter and detector persistence effects. This prevents direct use of CBCT for re-calculation of RT delivered dose. We previously developed a prior-image based correction method to restore HU values and improve uniformity of CBCT images. Here we validate the accuracy with which corrected CBCT can be used for dosimetric assessment of RT delivery, using CBCT images and RT plans for 45 patients including pelvis, lung and head sites. Dose distributions were calculated based on each patient's original RT plan and using CBCT image values for tissue heterogeneity correction. Clinically relevant dose metrics were calculated (e.g. median and minimum target dose, maximum organ at risk dose). Accuracy of CBCT based dose metrics was determined using an "override ratio" method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the image is assumed to be constant for each patient, allowing comparison to “gold standard” CT. For pelvis and head images the proportion of dose errors >2% was reduced from 40% to 1.3% after applying shading correction. For lung images the proportion of dose errors >3% was reduced from 66% to 2.2%. Application of shading correction to CBCT images greatly improves their utility for dosimetric assessment of RT delivery, allowing high confidence that CBCT dose calculations are accurate within 2-3%.
Development and clinical translation of a cone-beam CT scanner for high-quality imaging of intracranial hemorrhage
Purpose: Prompt, reliable detection of intracranial hemorrhage (ICH) is essential for treatment of stroke and traumatic brain injury, and would benefit from availability of imaging directly at the point-of-care. This work reports the performance evaluation of a clinical prototype of a cone-beam CT (CBCT) system for ICH imaging and introduces novel algorithms for model-based reconstruction with compensation for data truncation and patient motion.

Methods: The tradeoffs in dose and image quality were investigated as a function of analytical (FBP) and model-based iterative reconstruction (PWLS) algorithm parameters using phantoms with ICH-mimicking inserts. Image quality in clinical applications was evaluated in a human cadaver imaged with simulated ICH. Objects outside of the field of view (FOV), such as the head-holder, were found to introduce challenging truncation artifacts in PWLS that were mitigated with a novel multi-resolution reconstruction strategy. Following phantom and cadaver studies, the scanner was translated to a clinical pilot study. Initial clinical experience indicates the presence of motion in some patient scans, and an image-based motion estimation method that does not require fiducial tracking or prior patient information was implemented and evaluated.

Results: The weighted CTDI for a nominal scan technique was 22.8 mGy. The high-resolution FBP reconstruction protocol achieved < 0.9 mm full width at half maximum (FWHM) of the point spread function (PSF). The PWLS soft-tissue reconstruction showed <1.2 mm PSF FWHM and lower noise than FBP at the same resolution. Effects of truncation in PWLS were mitigated with the multi-resolution approach, resulting in a 60% reduction in root mean squared error compared to conventional PWLS. Cadaver images showed clear visualization of anatomical landmarks (ventricles and sulci), and ICH was conspicuous. The motion compensation method was shown in clinical studies to restore visibility of fine bone structures, such as a subtle fracture, cranial sutures, and the cochlea, as well as subtle low-contrast structures in the brain parenchyma.

Conclusion: The imaging performance of the prototype suggests sufficient quality for ICH imaging and motivates continued clinical studies to assess the diagnostic utility of the CBCT system in realistic clinical scenarios at the point of care.
Lab-based x-ray nanoCT imaging
Mark Müller, Sebastian Allner, Simone Ferstl, et al.
Due to the recent development of transmission X-ray tubes with very small focal spot sizes, laboratory-based CT imaging at sub-micron resolution is now possible. We recently developed a novel X-ray nanoCT setup featuring a prototype nanofocus X-ray source and a single-photon-counting detector. The system relies purely on geometric magnification and can reach resolutions of 200 nm. To demonstrate the potential of the nanoCT system for biomedical applications, we show high-resolution nanoCT data of a small piece of human tooth comprising coronal dentin. The reconstructed CT data clearly visualize the dentin tubules within the tooth piece.
CT: Reconstruction and Algorithms
icon_mobile_dropdown
High quality high spatial resolution functional classification in low dose dynamic CT perfusion using singular value decomposition (SVD) and k-means clustering
Francesco Pisana, Thomas Henzler, Stefan Schönberg, et al.
Dynamic CT perfusion acquisitions are intrinsically high-dose examinations due to repeated scanning. To keep radiation dose under control, relatively noisy images are acquired. This noise is further enhanced when functional parameters are extracted from the voxel time-attenuation curves (TACs) during post-processing, so a smoothing filter is normally employed to better visualize perfusion abnormalities, at the cost of spatial resolution. In this study we propose a new method to detect perfusion abnormalities while preserving both high spatial resolution and high CNR. We first perform a singular value decomposition (SVD) of the original noisy spatio-temporal data matrix to extract basis functions of the TACs. We then iteratively cluster the voxels based on a smoothed version of the three most significant singular vectors. Finally, we create high-spatial-resolution 3D volumes in which each voxel is assigned its distance from the centroid of each cluster, showing how functionally similar each voxel is to the others. The method was tested on three noisy clinical datasets: a brain perfusion case with an occlusion of the left internal carotid artery, a healthy brain perfusion case, and a liver case with an enhancing lesion. Our method detected all perfusion abnormalities with higher spatial precision than the functional maps obtained with commercially available software. We conclude that this method could provide a rapid qualitative indication of functional abnormalities in low-dose dynamic CT perfusion datasets. The method appears very robust to both spatial and temporal noise and requires no special a priori assumptions.
While more robust to noise, and with higher spatial resolution and CNR than the functional maps, our method is not quantitative; in clinical routine it could serve as a second reader to assist in map evaluation, or to guide dataset smoothing prior to modeling.
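The pipeline described in this abstract (SVD of the space-time matrix, clustering on smoothed singular-vector scores, per-voxel centroid distances) can be sketched on synthetic data. This is an illustrative toy with invented time-attenuation curves and a 1D stand-in for spatial smoothing, not the authors' implementation:

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
# Toy dynamic dataset: 500 voxels x 20 time points, two contiguous
# perfusion classes (invented TACs) plus noise.
t = np.linspace(0.0, 1.0, 20)
tac_a = np.exp(-((t - 0.3) / 0.10) ** 2)      # early-enhancing tissue
tac_b = np.exp(-((t - 0.6) / 0.15) ** 2)      # late-enhancing tissue
labels_true = np.repeat(np.array([0, 1]), 250)
X = np.where(labels_true[:, None] == 0, tac_a, tac_b)
X = X + 0.3 * rng.standard_normal((500, 20))

# 1) SVD of the space-time matrix; rows of Vt are temporal basis functions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# 2) Score each voxel on the three most significant singular vectors,
#    smooth the scores (1D stand-in for spatial smoothing), and cluster.
scores = U[:, :3] * s[:3]
scores = gaussian_filter1d(scores, sigma=2, axis=0)
centroids, labels = kmeans2(scores, 2, minit='++', seed=0)

# 3) Per-voxel distance to each cluster centroid: high-resolution maps of
#    how functionally similar each voxel is to each cluster.
dist = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
agreement = (labels == labels_true).mean()
print(dist.shape, max(agreement, 1 - agreement))
```

The `dist` array plays the role of the abstract's similarity volumes: one full-resolution map per cluster, with no smoothing applied to the map itself.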
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods assume that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data under different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We used phantom measurements and calibrated simulations to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; and (3) a ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for ~24 cm of water) at 120 kVp and 0.5 mAs is the maximum pi value for which a definitive maximum-likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
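The flux dependence of the post-log distribution can be illustrated with a small Monte Carlo sketch. The flux levels and the clamp-to-one non-positivity correction below are illustrative choices, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(1)
pi_true = 2.0                              # true post-log line integral
stats = {}
for regime, n0 in {'diagnostic': 1e5, 'low-dose': 1e3, 'ultra-low-dose': 10}.items():
    # Poisson transmission counts at incident flux n0.
    counts = rng.poisson(n0 * np.exp(-pi_true), size=100_000)
    # Non-positivity correction: clamp counts to 1 before the log transform.
    post_log = np.log(n0 / np.maximum(counts, 1))
    stats[regime] = (post_log.mean(), post_log.std(), np.mean(counts == 0))

for regime, (mu, sd, f0) in stats.items():
    print(f"{regime:>15}: mean={mu:.3f}  std={sd:.3f}  zero-count fraction={f0:.3f}")
```

At diagnostic flux the post-log data concentrate tightly around the true line integral; in the photon-starved regime a large fraction of detector readings are zero, the clamping correction dominates, and the post-log distribution is far from normal.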
Polyenergetic known-component reconstruction without prior shape models
C. Zhang, W. Zbijewski, X. Zhang, et al.
Purpose: Previous work has demonstrated that structural models of surgical tools and implants can be integrated into model-based CT reconstruction to greatly reduce metal artifacts and improve image quality. This work extends a polyenergetic formulation of known-component reconstruction (Poly-KCR) by removing the requirement that a physical model (e.g., a CAD drawing) be known a priori, permitting much more widespread application. Methods: We adopt a single-threshold segmentation technique aided by morphological structuring elements to build a shape model of metal components in a patient scan from an initial filtered-backprojection (FBP) reconstruction. This shape model is used as an input to Poly-KCR, a formulation of known-component reconstruction that does not require prior knowledge of beam quality or component material composition. Performance as a function of segmentation threshold is investigated in simulation studies, and qualitative comparisons to Poly-KCR with an a priori shape model are made using physical CBCT data of an implanted cadaver and patient data from a prototype extremities scanner. Results: We find that model-free Poly-KCR (MF-Poly-KCR) provides much better image quality than conventional reconstruction techniques (e.g., FBP). Moreover, its performance closely approximates that of Poly-KCR with an a priori shape model. In simulation studies, we find that imaging performance generally follows segmentation accuracy, with slight under- or over-estimation depending on the shape of the implant. In both simulation and physical data studies we find that the proposed approach removes most of the blooming and streak artifacts around the component, permitting visualization of the surrounding soft tissues. Conclusion: This work shows that it is possible to perform known-component reconstruction without prior knowledge of the known component.
In conjunction with the Poly-KCR technique, which does not require knowledge of beam quality or material composition, very little needs to be known about the metal implant and system beforehand. These generalizations will allow more widespread application of KCR techniques in real patient studies where information about surgical tools and implants is limited or unavailable.
Practical interior tomography with small region piecewise model prior
Ryosuke Ueda, Takuya Nemoto, Hiroyuki Kudo
Interior CT reconstructs images from incomplete projection data in which the X-rays pass through only the region of interest (ROI). The approach has various advantages, e.g., reduced radiation dose and applicability to large objects. Because the interior problem does not, in general, have a unique solution, only approximate solutions were studied for a long time. Recently, it has been proved that a unique solution to the interior CT problem can be obtained given a priori knowledge of the ROI. However, the priors used in previous studies can be problematic in practice; e.g., the true prior may be difficult to know, or the assumptions on the image may be unrealistic. This paper proposes a more practical prior that assumes a piecewise-polynomial image on an arbitrarily small part of the ROI. The uniqueness of the solution under the proposed prior is shown. For practical application, the placement of the prior region remains an issue; we also propose a method for this placement. Experimental results demonstrating the reduction of artifacts are presented.
SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction
Thomas Koesters, Florian Knoll, Aaron Sodickson, et al.
State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors.
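The moving multislit collimator's undersampling pattern can be sketched as a binary beam-on mask over view angle and slice. The slit count, duty cycle, and speed below are illustrative parameters, not the prototype's specification:

```python
import numpy as np

def sparsect_mask(n_views, n_slices, n_slits=4, duty=0.25, speed=1):
    """Binary beam-on mask for a moving multislit collimator.

    At each tube angle (view), `n_slits` open slits cover a fraction `duty`
    of the slices; the slit pattern shifts linearly in z with view index,
    so different slices are illuminated at different angles (incoherent
    undersampling over the z-theta plane)."""
    period = n_slices // n_slits
    slit_width = max(1, int(duty * period))
    z = np.arange(n_slices)
    mask = np.zeros((n_views, n_slices), dtype=bool)
    for v in range(n_views):
        mask[v] = ((z + speed * v) % period) < slit_width
    return mask

m = sparsect_mask(n_views=720, n_slices=64)
print("dose fraction:", m.mean())
print("every slice sampled at some angle:", bool(m.any(axis=0).all()))
```

The mean of the mask is the dose-reduction factor (here 4x), and the linear slit motion guarantees that every slice is sampled over the rotation, which is what makes a sparse (compressed-sensing) reconstruction of the full volume possible.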
Localized and efficient cardiac CT reconstruction
D. P. Clark, C. T. Badea
The superiority of iterative reconstruction techniques over classic analytical ones is well documented in a variety of CT imaging applications where radiation dose and sampling time are limiting factors. However, by definition, the iterative nature of advanced reconstruction techniques is accompanied by a substantial increase in data processing time. This problem is further exacerbated in temporal and spectral CT reconstruction problems where the gap between the amount of data acquired and the amount of data to be reconstructed is exaggerated within the framework of compressive sensing. Two keys to overcoming this barrier include (1) advancements in parallel-computing technology and (2) advancements in data-efficient reconstruction. In this work, we propose a novel, two-stage strategy for 4D cardiac CT reconstruction which leverages these two keys by (1) exploiting GPU computing hardware and by (2) reconstructing temporal contrast on a limited spatial domain. Following a review of the proposed algorithm, we demonstrate its application in retrospectively gated cardiac CT reconstruction using the 4D MOBY mouse phantom. Quantitatively, reconstructing the temporal contrast on a limited domain reduces the overall reconstruction error by 20% and the reconstruction error within the dynamic portion of the phantom by 15% (root-mean-square error metric). A complementary in vivo mouse experiment demonstrates a suitable reconstruction fidelity to allow the measurement of cardiac functional metrics while reducing computation time by 75% relative to direct reconstruction of ten phases of the cardiac cycle. We believe that the proposed algorithm will serve as the basis for novel, data-efficient, multi-dimensional CT reconstruction techniques.
Keynote and Radiation Dose
icon_mobile_dropdown
Driving CT developments the last mile: case examples of successful and somewhat less successful translations into clinical practice
Aaron D. Sodickson
CT technology has advanced rapidly in recent years, yet not all innovations translate readily into clinical practice. Technology advances must meet certain key requirements to make it into routine use: They must provide a well-defined clinical benefit. They must be easy to use and integrate readily into existing workflows, or better still, further streamline these workflows. These requirements heavily favor fully integrated or automated solutions that remove the human factor and provide a reproducible output independent of operator skill level. Further, to achieve these aims, collaboration with the ultimate end users is needed as early as possible in the development cycle, not just at the point of product testing. Technology innovators are encouraged to engage such collaborators even at early stages of feature or product definition. This manuscript highlights these concepts through exploration of challenging areas in CT imaging in an Emergency Department setting. Technique optimization for pulmonary embolus CT is described as an example of successful integration of multiple advances in radiation dose reduction and imaging speed. The typical workflow of a trauma “pan-scan” (incorporating scans from head through pelvis) is described to highlight workflow challenges and opportunities for improvement. Finally, Dual Energy CT is discussed to highlight the undeniable clinical value of the material characterization it provides, yet also its surprisingly slow integration into routine use beyond early adopters.
Dose comparison between CTDI and the AAPM Report No. 111 methodology in adult, adolescent, and child head phantom
Celina L. Li, Yogesh Thakur, Nancy L. Ford
The standard computed tomography dose index (CTDI) metric tends to underestimate scatter radiation in cone beam computed tomography (CBCT) acquisitions; therefore, the American Association of Physicists in Medicine (AAPM) Task Group 111 proposed a new dosimetry methodology that measures the equilibrium dose at the center of a phantom (z = 0) using a 2-cm thimble ionization chamber. In this study, we implement the CTDI and AAPM methods with a thimble chamber on adult, adolescent, and child head phantoms using the Toshiba Aquilion One CBCT and compare the results to the CTDI measured with a 10-cm pencil chamber. Following the AAPM protocol, the normalized (100 mAs) equilibrium doses (Deq) computed from dose measurements taken in the central hole of the phantom (Deq,c), in the peripheral hole of the phantom (Deq,p), and by the CTDIw equation (Deq,w) are 20.13 ± 0.19, 21.53 ± 0.48, and 20.93 ± 0.40 mGy for the adult; 21.55 ± 0.40, 21.14 ± 0.43, and 21.08 ± 0.45 mGy for the adolescent; and 24.58 ± 0.40, 24.92 ± 0.85, and 24.77 ± 0.72 mGy for the child, respectively. The CTDIw, which measured 17.70, 19.86, and 22.43 mGy for the adult, adolescent, and child respectively, is about 10% lower than the corresponding Deq values. The extended AAPM method proposed by Deman et al., which estimates the dose profile along the rotational (z) axis, demonstrated consistency between theoretical and experimental results for all phantoms. With the introduction of the child and adolescent head phantoms, we not only emphasize practical aspects, including the relative convenience of the CTDI method and the accuracy of the AAPM method, but also propose a method to approximate Deq for different-sized patients.
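The CTDIw-style weighting behind the Deq,w values above follows the standard combination of one-third central and two-thirds peripheral dose. A minimal sketch using the adult values reported above (the result differs slightly from the reported 20.93 mGy, presumably because the published Deq,p averages measurement details the abstract does not enumerate):

```python
def ctdi_w(d_center, d_periphery):
    """Weighted dose: one-third central plus two-thirds peripheral (standard
    CTDIw convention)."""
    return d_center / 3.0 + 2.0 * d_periphery / 3.0

# Adult head phantom equilibrium doses from the study above (mGy):
deq_c, deq_p = 20.13, 21.53
deq_w = ctdi_w(deq_c, deq_p)
print(f"Deq,w = {deq_w:.2f} mGy")
```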
Skin dose mapping for non-uniform x-ray fields using a backscatter point spread function
Beam shaping devices such as ROI attenuators and compensation filters modulate the intensity distribution of the x-ray beam incident on the patient. This results in a spatial variation of skin dose due to the variation of primary radiation, and also a variation in backscattered radiation from the patient. To determine the backscatter component, backscatter point spread functions (PSFs) are generated using EGS Monte Carlo software. For this study, PSFs were determined by simulating a 1 mm beam incident on the lateral surface of an anthropomorphic head phantom and on a 20 cm thick PMMA block phantom. The backscatter PSFs for the head phantom and PMMA phantom were fit with a Lorentzian function after being normalized to the primary dose intensity (PSFn). PSFn is convolved with the primary dose distribution to generate the scatter dose distribution, which is added to the primary to obtain the total dose distribution. The backscatter convolution technique is incorporated in the dose tracking system (DTS), which tracks skin dose during fluoroscopic procedures and provides a color map of the dose distribution on a 3D patient graphic model. A convolution technique is developed for backscatter dose determination on the non-uniformly spaced graphic-model surface vertices. A Gafchromic film validation was performed for shaped x-ray beams generated with an ROI attenuator and with two compensation filters inserted into the field. The total dose distribution calculated by the backscatter convolution technique closely agreed with that measured with the film.
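The convolution step can be sketched as follows. The Lorentzian amplitude and width are placeholders, not the fitted values from the study, and the beam shape is an invented ROI pattern:

```python
import numpy as np
from scipy.signal import fftconvolve

# Lorentzian backscatter kernel normalized to the primary dose (PSFn) on a
# 2D pixel grid; amplitude and half-width are illustrative assumptions.
x = np.arange(-50, 51)
X, Y = np.meshgrid(x, x)
gamma = 10.0                                  # half-width (pixels), assumed
psf_n = 2.5e-4 / (1.0 + (X**2 + Y**2) / gamma**2)   # scatter dose per unit primary

# Primary dose map for a shaped beam: full intensity inside an ROI,
# attenuated (e.g., by an ROI attenuator) outside.
primary = np.full((200, 200), 0.2)
primary[80:120, 80:120] = 1.0

# Scatter = PSFn (*) primary; total = primary + scatter.
scatter = fftconvolve(primary, psf_n, mode='same')
total = primary + scatter
print("peak scatter-to-primary ratio:", float(scatter.max() / primary.max()))
```

For the irregularly spaced surface vertices of the 3D patient graphic model, the same sum would be evaluated directly per vertex rather than on a regular grid as done here with the FFT.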
Photon Counting I: Instrumentation
icon_mobile_dropdown
Effect of spatio-energy correlation in PCD due to charge sharing, scatter, and secondary photons
Charge sharing, scatter, and fluorescence events in a photon counting detector (PCD) can result in multiple counting of a single incident photon in neighboring pixels. This causes energy distortion and correlation of data across energy bins in neighboring pixels (spatio-energy correlation). If a “macro-pixel” is formed by combining multiple small pixels, it will exhibit correlations across its energy bins. Charge sharing and fluorescence escape depend on pixel size and detector material. Accurately modeling these effects can be crucial for detector design and for model-based imaging applications. This study derives a correlation model for the multi-counting events and investigates the effect in virtual non-contrast and effective monoenergetic imaging. Three versions of a 1 mm² square CdTe macro-pixel were compared: a 4×4 grid, a 2×2 grid, or a single pixel, composed of pixels with side lengths of 250 μm, 500 μm, or 1 mm, respectively. The same flux was applied to each pixel, and pulse pile-up was ignored. The mean and covariance matrix of the measured photon counts are derived analytically using pre-computed spatio-energy response functions (SERF) estimated from Monte Carlo simulations. Based on the Cramér-Rao lower bound, a macro-pixel with 250×250 μm² sub-pixels shows ~2.2 times higher variance than a single 1 mm² pixel for spectral imaging, while its penalty for effective monoenergetic imaging is <10% compared to a single 1 mm² pixel.
Improving material separation of high-flux whole-body photon counting computed tomography by K-edge pre-filtration
C. Polster, R. Gutjahr, M. Berner, et al.
Photon-counting detectors in computed tomography (CT) allow measurement of the energy of the incident x-ray photons within certain energy windows. This information can be used to enhance contrast or to reconstruct CT images of different material bases. Compared to energy-integrating CT detectors, pixel dimensions have to be smaller to limit the negative effect of pulse pile-up at high x-ray fluxes. Unfortunately, reducing the pixel size leads to increased K-escape and charge-sharing effects. As a consequence, an incident x-ray may generate more than one detector signal, each with deteriorated energy information. Earlier simulation studies have shown that these limitations can be mitigated by optimizing the x-ray spectrum using K-edge pre-filtration. In the current study, we used a whole-body research CT scanner with a high-flux-capable photon-counting detector, in which a pre-patient hafnium filter was installed for the first time. Our measurement results demonstrate substantial improvement of the material decomposition capability at comparable dose levels. The results are in agreement with the predictions provided by simulations.
Nanoparticle imaging probes for molecular imaging with computed tomography and application to cancer imaging
Ryan K. Roeder, Tyler E. Curtis, Prakash D. Nallathamby, et al.
Precision imaging is needed to realize precision medicine in cancer detection and treatment. Molecular imaging offers the ability to target and identify tumors, associated abnormalities, and specific cell populations with overexpressed receptors. Nuclear imaging and radionuclide probes provide high sensitivity but subject the patient to a high radiation dose and provide limited spatiotemporal information, requiring combined computed tomography (CT) for anatomic imaging. Therefore, nanoparticle contrast agents have been designed to enable molecular imaging and improve detection in CT alone. Core-shell nanoparticles provide a powerful platform for designing tailored imaging probes. The composition of the core is chosen for enabling strong X-ray contrast, multi-agent imaging with photon-counting spectral CT, and multimodal imaging. A silica shell is used for protective, biocompatible encapsulation of the core composition, volume-loading fluorophores or radionuclides for multimodal imaging, and facile surface functionalization with antibodies or small molecules for targeted delivery. Multi-agent (k-edge) imaging and quantitative molecular imaging with spectral CT was demonstrated using current clinical agents (iodine and BaSO4) and a proposed spectral library of contrast agents (Gd2O3, HfO2, and Au). Bisphosphonate-functionalized Au nanoparticles were demonstrated to enhance sensitivity and specificity for the detection of breast microcalcifications by conventional radiography and CT in both normal and dense mammary tissue using murine models. Moreover, photon-counting spectral CT enabled quantitative material decomposition of the Au and calcium signals. Immunoconjugated Au@SiO2 nanoparticles enabled highly-specific targeting of CD133+ ovarian cancer stem cells for contrast-enhanced detection in model tumors.
Ultra-high spatial resolution multi-energy CT using photon counting detector technology
S. Leng, R. Gutjahr, A. Ferrero, et al.
Two ultra-high-resolution imaging modes, each with two energy thresholds and referred to as UHR and sharp, were implemented on a research whole-body photon-counting-detector (PCD) CT scanner. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial-resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used. This was due to excessive noise in the higher resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode with regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose or application of noise reduction techniques is needed.
Cone Beam CT II: Optimization and Reconstruction
icon_mobile_dropdown
Low signal correction scheme for low dose CBCT: the good, the bad, and the ugly
Daniel Gomez-Cardona, John Hayes, Ran Zhang, et al.
Reducing radiation dose in C-arm Cone-beam CT (CBCT) image-guided interventional procedures is of great importance. However, reducing radiation dose may increase noise magnitude and generate noise streaks in the reconstructed image. Several approaches, ranging from simple to highly complex methods, have been proposed in an attempt to reduce noise and mitigate artifacts caused by low detector counts. These approaches include apodizing the ramp kernel used before backprojection, using an adaptive trimmed mean filter based on local flux information, employing penalized-likelihood approaches or edge-preserving filters for sinogram smoothing, incorporating statistical models into the so-called model based iterative reconstruction framework, and more. This work presents a simple yet powerful scheme for low signal correction in low dose CBCT by applying local anisotropic diffusion filtration to the raw detector data prior to the logarithmic transform. It was found that low signal correction efficiently reduced noise magnitude and noise streaks without considerably sacrificing spatial resolution. Yet caution must be taken when selecting the parameters used for low signal correction so that no spurious information is enhanced and noise streaks are effectively reduced.
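A minimal version of the local anisotropic diffusion step, applied to raw detector counts before the logarithmic transform, might look like the following. This uses the classic Perona-Malik scheme as a stand-in; the parameters, phantom, and periodic border handling are simplifications, not the paper's implementation:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: strong smoothing where gradients are small
    (noise in low-signal regions), little diffusion across strong edges.
    Borders are handled periodically via np.roll for simplicity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four nearest neighbors.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance for each direction.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
raw = rng.poisson(50.0, size=(64, 64)).astype(float)   # photon-starved region
raw[:, 32:] = rng.poisson(500.0, size=(64, 32))        # well-exposed region
smoothed = anisotropic_diffusion(raw)
post_log = np.log(1.0e4 / np.maximum(smoothed, 1.0))   # log transform afterwards
print(raw[:, :30].std(), smoothed[:, :30].std())
```

Noise in the photon-starved half is strongly suppressed while the step between the two regions (a stand-in for an anatomical edge) is preserved, which is the behavior the abstract describes for noise streak reduction without a large resolution penalty.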
High-resolution extremity cone-beam CT with a CMOS detector: task-based optimization of scintillator thickness
Q. Cao, M. Brehler, A. Sisniega, et al.
Purpose: CMOS x-ray detectors offer small pixel sizes and low electronic noise that may support the development of novel high-resolution imaging applications of cone-beam CT (CBCT). We investigate the effects of CsI scintillator thickness on the performance of CMOS detectors in high-resolution imaging tasks, in particular quantitative imaging of bone microstructure in extremity CBCT. Methods: A scintillator-thickness-dependent cascaded systems model of CMOS x-ray detectors was developed. Detectability in low-, high-, and ultra-high-resolution imaging tasks (Gaussian with FWHM of ~250 μm, ~80 μm, and ~40 μm, respectively) was studied as a function of scintillator thickness using the theoretical model. Experimental studies were performed on a CBCT test bench equipped with DALSA Xineos3030 CMOS detectors (99 μm pixels) with CsI scintillator thicknesses of 400 μm and 700 μm, and a 0.3 FS compact rotating-anode x-ray source. The evaluation involved a radiographic resolution gauge (0.6-5.0 lp/mm), a 127 μm tungsten wire for assessment of 3D resolution, a contrast phantom with tissue-mimicking inserts, and an excised fragment of human tibia for visual assessment of fine trabecular detail. Results: Experimental studies show ~35% improvement in the frequency of 50% MTF modulation when using the 400 μm scintillator compared to the standard nominal CsI thickness of 700 μm. Even though the high-frequency DQE of the two detectors is comparable, theoretical studies show a 14% to 28% increase in detectability index (d′²) for the high- and ultra-high-resolution tasks, respectively, with the 400 μm CsI compared to 700 μm CsI. Experiments confirm the theoretical findings, showing improvements with the adoption of the 400 μm panel in the visibility of the radiographic pattern (2x improvement in peak-to-trough distance at 4.6 lp/mm) and a 12.5% decrease in the FWHM of the tungsten wire.
Reconstructions of the tibial plateau reveal enhanced visibility of trabecular structures with the CMOS detector with the 400 μm scintillator. Conclusion: Applications of CMOS detectors in high-resolution CBCT imaging of trabecular bone will benefit from a thinner scintillator than the current standard in general radiography. The results support the translation of the CMOS sensor with 400 μm scintillator.
Integration of prior CT into CBCT reconstruction for improved image quality via reconstruction of difference: first patient studies
Purpose: There are many clinical situations where diagnostic CT is used for an initial diagnosis or treatment planning, followed by one or more CBCT scans that are part of an image-guided intervention. Because the high-quality diagnostic CT scan is a rich source of patient-specific anatomical knowledge, this provides an opportunity to incorporate the prior CT image into subsequent CBCT reconstruction for improved image quality. We propose a penalized-likelihood method called reconstruction of difference (RoD), to directly reconstruct differences between the CBCT scan and the CT prior. In this work, we demonstrate the efficacy of RoD with clinical patient datasets. Methods: We introduce a data processing workflow using the RoD framework to reconstruct anatomical changes between the prior CT and current CBCT. This workflow includes processing steps to account for non-anatomical differences between the two scans including 1) scatter correction for CBCT datasets due to increased scatter fractions in CBCT data; 2) histogram matching for attenuation variations between CT and CBCT; and 3) registration for different patient positioning. CBCT projection data and CT planning volumes for two radiotherapy patients – one abdominal study and one head-and-neck study – were investigated. Results: In comparisons between the proposed RoD framework and more traditional FDK and penalized-likelihood reconstructions, we find a significant improvement in image quality when prior CT information is incorporated into the reconstruction. RoD is able to provide additional low-contrast details while correctly incorporating actual physical changes in patient anatomy. Conclusions: The proposed framework provides an opportunity to either improve image quality or relax data fidelity constraints for CBCT imaging when prior CT studies of the same patient are available. Possible clinical targets include CBCT image-guided radiotherapy and CBCT image-guided surgeries.
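The core RoD idea, reconstructing only the difference from the prior rather than the full volume, can be sketched with a toy least-squares analogue (ISTA with an l1 sparsity penalty on the difference). The paper's actual objective is penalized-likelihood with CBCT physics and its own regularizer; here the system matrix `A` is random, not a projector:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_meas = 64, 48                     # more unknowns than measurements
A = rng.standard_normal((n_meas, n_vox)) / np.sqrt(n_meas)

x_prior = rng.standard_normal(n_vox)       # stands in for the registered prior CT
d_true = np.zeros(n_vox)
d_true[[5, 20, 41]] = [2.0, -1.5, 1.0]     # sparse anatomical change
y = A @ (x_prior + d_true) + 0.01 * rng.standard_normal(n_meas)

# Reconstruct only the difference d, regularized for sparsity (ISTA).
r = y - A @ x_prior                        # residual attributable to the change
d = np.zeros(n_vox)
lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
for _ in range(500):
    z = d - A.T @ (A @ d - r) / L          # gradient step on the data term
    d = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

x_hat = x_prior + d                        # final image: prior plus change
support = np.argsort(-np.abs(d))[:3]
print(sorted(support.tolist()))
```

Because the unknown is the (sparse) change rather than the full image, far fewer measurements suffice, which mirrors the abstract's observation that RoD can relax data fidelity constraints when a prior CT is available.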
Brain perfusion imaging using a Reconstruction-of-Difference (RoD) approach for cone-beam computed tomography
M. Mow, W. Zbijewski, A. Sisniega, et al.
Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite rotation speeds that are slow compared to multi-detector CT (MDCT) systems. This work describes the development of a brain perfusion CBCT method using a reconstruction-of-difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD within a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemodynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc lengths as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with an RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom and translation to preclinical and clinical studies.
Deformable known component model-based reconstruction for coronary CT angiography
Purpose: Atherosclerosis detection remains challenging in coronary CT angiography for patients with cardiac implants. Pacing electrodes of a pacemaker or lead components of a defibrillator can create substantial blooming and streak artifacts in the heart region, severely hindering the visualization of a plaque of interest. We present a novel reconstruction method that incorporates a deformable model for metal leads to eliminate metal artifacts and improve anatomy visualization, even near the boundary of the component. Methods: The proposed reconstruction method, referred to as STF-dKCR, includes a novel parameterization of the component that integrates deformation, a 3D-2D preregistration process that estimates component shape and position, and a polyenergetic forward model for x-ray propagation through the component in which the spectral properties are jointly estimated. The methodology was tested on physical data of a cardiac phantom acquired on a CBCT test bench. The phantom included a simulated vessel, a metal wire emulating a pacing lead, and a small Teflon sphere attached to the vessel wall, mimicking a calcified plaque. The proposed method was also compared to traditional FBP reconstruction and an interpolation-based metal artifact correction method (FBP-MAR). Results: Metal artifacts present in the standard FBP reconstruction were significantly reduced in both FBP-MAR and STF-dKCR, yet only the STF-dKCR approach significantly improved the visibility of the small Teflon target (within 2 mm of the metal wire). The attenuation of the Teflon bead improved to 0.0481 mm⁻¹ with STF-dKCR, from 0.0166 mm⁻¹ with FBP and 0.0301 mm⁻¹ with FBP-MAR, much closer to the expected 0.0414 mm⁻¹. Conclusion: The proposed method has the potential to improve plaque visualization in coronary CT angiography in the presence of wire-shaped metal components.
Phase Contrast Imaging
Improving image quality in laboratory x-ray phase-contrast imaging
F. De Marco, M. Marschner, L. Birnbacher, et al.
Grating-based X-ray phase-contrast imaging (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data-processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes into the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution, and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction. First, a correction algorithm exploiting correlations between the artifacts and the differential-phase data was developed and tested; artifacts were reliably removed without compromising image data. Second, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to improved detection of very small sample features, thereby maximizing the setup's utility.
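The Richardson-Lucy step named above can be sketched in one dimension. This is a minimal illustration assuming a known, normalized blur kernel; real gbPC-CT projections are two-dimensional, but the multiplicative update is the same.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter):
    """Minimal 1-D Richardson-Lucy deconvolution with a known, normalized PSF."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())   # flat positive start
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)  # guard against divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

x = np.zeros(64)
x[30:34] = 1.0                         # sharp feature
psf = np.array([0.25, 0.5, 0.25])      # illustrative focal-spot blur
b = np.convolve(x, psf, mode="same")   # blurred projection
restored = richardson_lucy(b, psf, n_iter=200)
```

The multiplicative form keeps the estimate nonnegative at every iteration, which is one reason the algorithm is popular for photon-count-like data.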
First experience with x-ray dark-field radiography for human chest imaging (Conference Presentation)
Peter B. Noel, Konstantin Willer, Alexander A. Fingerle, et al.
Purpose: To evaluate the performance of an experimental X-ray dark-field radiography system for chest imaging in humans and to compare it with conventional diagnostic imaging. Materials and Methods: The study was institutional review board (IRB) approved. A single human cadaver (52 years, female, height: 173 cm, weight: 84 kg, chest circumference: 97 cm) was imaged within 24 hours post mortem on the experimental x-ray dark-field system. In addition, the cadaver was imaged on a clinical CT system to obtain a reference scan. The grating-based dark-field radiography setup was equipped with a set of three gratings to enable grating-based dark-field contrast x-ray imaging. The prototype operates at an acceleration voltage of up to 70 kVp and with a field of view large enough for clinical chest x-ray (>35 x 35 cm2). Results: We were able to extract the x-ray dark-field signal of the whole human thorax, clearly demonstrating that human x-ray dark-field chest radiography is feasible. Lung tissue produced strong scattering, reflected in a pronounced x-ray dark-field signal. The ribcage and the backbone are less prominent than the lung but are also distinguishable. Soft tissue, by contrast, is not visible in the dark-field radiograph. The regions of the lungs affected by edema, as verified by CT, showed less dark-field signal than healthy lung tissue. Conclusion: Our results reflect the current status of translating dark-field imaging from the micro (small animal) scale to the macro (patient) scale. The performance of the experimental x-ray dark-field radiography setup enables, for the first time, the acquisition of multi-contrast chest x-ray images (attenuation and dark-field signal) of a human cadaver.
A resolution-enhancing image reconstruction method for few-view differential phase-contrast tomography
It is well known that properly designed image reconstruction methods can facilitate reductions in imaging dose and data-acquisition time in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step subspace reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images with better-preserved high spatial frequency content than those produced by use of a conventional penalized least-squares (PLS) estimator.
A joint-reconstruction approach for single-shot edge illumination x-ray phase-contrast tomography
Yujia Chen, Huifeng Guan, Charlotte K. Hagen, et al.
Edge illumination X-ray phase-contrast tomography (EIXPCT) is an imaging technique that estimates the spatially variant X-ray refractive index and absorption distribution within an object while seeking to circumvent the limitations of previous benchtop implementations of X-ray phase-contrast tomography. As with grating- or analyzer-based methods, conventional image reconstruction methods for EIXPCT require that two or more images be acquired at each tomographic view angle. This requirement leads to increased data acquisition times, hindering in vivo applications. To circumvent these limitations, a joint reconstruction (JR) approach is proposed that concurrently produces estimates of the refractive index and absorption distributions from a tomographic data set containing only a single image per tomographic view angle. The JR reconstruction method solves a nonlinear optimization problem by use of a novel iterative gradient-based algorithm. The JR method is demonstrated in both computer-simulated and experimental EIXPCT studies.
Design of a sensitive grating-based phase contrast mammography prototype (Conference Presentation)
Carolina Arboleda Clavijo, Zhentian Wang, Thomas Köhler, et al.
Grating-based phase contrast mammography can help facilitate breast cancer diagnosis, as several research works have demonstrated. To translate this technique to the clinic, it has to be adapted to cover a large field of view within a limited exposure time and with a clinically acceptable radiation dose. A straightforward approach is therefore to install a grating interferometer (GI) in a commercial mammography device. We developed a wave-propagation-based optimization method to select the most suitable GI designs in terms of phase and dark-field sensitivity for the Philips Microdose Mammography (PMM) setup. The phase sensitivity was defined as the minimum detectable breast tissue electron density gradient, whereas the dark-field sensitivity was defined as its corresponding signal-to-noise ratio (SNR). To derive sample-dependent sensitivity metrics, a visibility-reduction model for breast tissue was formulated, based on previous research on the dark-field signal and utilizing available ultra-small-angle X-ray scattering (USAXS) data and the outcomes of measurements on formalin-fixed breast tissue specimens carried out in tube-based grating interferometers. The results of this optimization indicate that the optimal scenarios for each metric are different and depend fundamentally on the noise behavior of the signals and on the trend of the visibility reduction with respect to the system autocorrelation length. In addition, since the inter-grating distance is constrained by the space available between the breast support and the detector, the most effective way to improve sensitivity is to employ a small G2 pitch.
Potential bias in signal estimation for grating-based x-ray multi-contrast imaging
Xu Ji, Yongshuai Ge, Ran Zhang, et al.
In grating-based multi-contrast x-ray imaging, the signals of three contrast mechanisms, namely absorption contrast, differential phase contrast (DPC), and dark-field contrast, can be estimated from a single data acquisition with several phase steps. The extracted signals, N0 (related to absorption), N1 (related to dark-field), and φ (related to DPC), may be intrinsically biased. In this work, the biases of N0, N1, and φ extracted with the well-known least-squares fitting method were derived theoretically. Furthermore, numerical simulation experiments were used to validate the derived theoretical formulae for the signal bias of all three contrast mechanisms. The theoretical predictions were in good agreement with the results of the simulations. The bias of the absorption contrast is zero. The signal bias of N1 is inversely proportional to the number of phase steps and to the average fringe visibility of the grating interferometer. The bias of φ is related to several parameters, including the total exposure, the fringe visibility produced by the interferometer system, and the ground truth of φ. The larger the exposure and fringe visibility, the smaller the bias of φ.
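The least-squares extraction the authors analyze can be sketched as follows: the phase-stepping curve is a sinusoid in the stepping position, so fitting a constant plus cosine and sine terms recovers the mean (N0), the visibility term (which carries the dark-field signal), and φ. The values below are illustrative and noise is omitted, so the bias the paper derives does not appear in this noiseless sketch.

```python
import numpy as np

M = 8                                        # number of phase steps
theta = 2 * np.pi * np.arange(M) / M         # grating positions over one period
I0, V, phi = 1000.0, 0.2, 0.7                # ground-truth mean, visibility, phase
counts = I0 * (1 + V * np.cos(theta + phi))  # noiseless phase-stepping curve

# Linear least-squares fit of: counts = c0 + c1*cos(theta) + c2*sin(theta)
G = np.column_stack([np.ones(M), np.cos(theta), np.sin(theta)])
c0, c1, c2 = np.linalg.lstsq(G, counts, rcond=None)[0]

I0_hat = c0                        # absorption signal (N0)
V_hat = np.hypot(c1, c2) / c0      # fringe visibility; dark-field follows from its reduction
phi_hat = np.arctan2(-c2, c1)      # differential-phase signal
```

With Poisson noise added to `counts`, repeating this fit over many realizations is exactly the kind of simulation that exposes the nonzero bias of V_hat and phi_hat.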
Photon Counting II: Algorithms
Estimating basis line-integrals in spectral distortion-modeled photon counting CT: K-edge imaging using dictionary learning-based x-ray transmittance modeling
Okkyun Lee, Steffen Kappler, Christoph Polster, et al.
A photon counting detector (PCD) provides spectral information for estimating basis line-integrals; however, the recorded spectrum is distorted by the spectral response effect (SRE). One conventional approach to compensating for the SRE is to incorporate an SRE model in the forward imaging process. For this purpose, we recently developed a three-step algorithm as a fast (~1,500×) alternative to the maximum likelihood (ML) estimator, based on modeling the x-ray transmittance, exp(−∫ μa(r, E) dr), with low-order polynomials. However, it is limited to cases where no K-edge is present, due to the smoothness of low-order polynomials. In this paper, we propose dictionary learning-based x-ray transmittance modeling to address this limitation. More specifically, we design a dictionary consisting of several energy-dependent bases to model an unknown x-ray transmittance, training the dictionary on a variety of known x-ray transmittances. We show that the number of bases in the dictionary can be as large as the number of energy bins and that the modeling error is relatively small for a practical number of energy bins. Once the dictionary is trained, the three-step algorithm proceeds as follows: estimate the unknown coefficients of the dictionary, estimate the basis line-integrals, and then correct for bias. We validate the proposed method with various simulation studies of K-edge imaging with a gadolinium contrast agent, and show that both bias and computational time are substantially reduced compared to those of the ML estimator.
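A toy numerical sketch of the modeling assumption behind the polynomial approach (the attenuation curve below is an illustrative power law, not a measured spectrum): a smooth, K-edge-free x-ray transmittance over a diagnostic energy range is captured well by a low-order polynomial, which is the property the dictionary-based extension generalizes to K-edge materials.

```python
import numpy as np

E = np.linspace(40.0, 120.0, 50)        # keV grid
mu = 0.4 * (E / 40.0) ** -3             # toy smooth attenuation (1/cm), no K-edge
transmittance = np.exp(-mu * 10.0)      # exp(-integral of mu) for a 10 cm path

def poly_fit_err(deg):
    """Max absolute error of a degree-`deg` polynomial fit to the transmittance."""
    coeffs = np.polyfit(E, transmittance, deg)
    return np.max(np.abs(np.polyval(coeffs, E) - transmittance))

err1 = poly_fit_err(1)   # a straight line is a poor model
err3 = poly_fit_err(3)   # a low-order polynomial already fits closely
```

A K-edge inserts a discontinuity into `mu`, which no low-order polynomial can track; that is exactly the failure mode the learned, energy-dependent dictionary bases are designed to handle.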
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained ‘One-Step’ Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
A multi-step method for material decomposition in spectral computed tomography
When using a photon counting detector for material decomposition problems, a major issue is the low count rate per energy bin, which may lead to high image noise with compromised contrast and accuracy. A multi-step algorithmic method of material decomposition is proposed for spectral computed tomography (CT), in which the problem is formulated as a series of simpler and more dose-efficient decompositions rather than solved simultaneously. A simple domain of four materials, water, hydroxyapatite, iodine, and gold, was explored. The results showed an improvement in accuracy with low noise over a similar method in which the materials were decomposed simultaneously. In the multi-step approach, for the same acquired energy-bin data, the problem is reformulated in each step with a decreasing number of energy bins (resulting in higher count levels per bin) and fewer unknowns. This offers flexibility in the choice of energy bins for each material type. Our results are preliminary but show promise and the potential to tackle challenging decomposition tasks. The complete work will include a detailed analysis of this approach and experimental data with more complex mixtures.
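The building block that each step of such a method reduces to can be sketched as a small linear solve: with effective per-bin attenuation coefficients, the negative log of each bin's transmission is linear in the material thicknesses, so two bins determine two materials. The coefficients below are illustrative, not measured values.

```python
import numpy as np

# Effective per-bin attenuation coefficients (illustrative, 1/cm):
#              water  iodine
mu = np.array([[0.25, 0.45],    # low-energy bin
               [0.20, 0.90]])   # high-energy bin

t_true = np.array([10.0, 0.05])  # cm of water and iodine along the ray
log_meas = mu @ t_true           # -ln(I/I0) measured in each bin

t_hat = np.linalg.solve(mu, log_meas)  # decompose: two bins, two materials
```

Merging bins before such a solve raises the counts per bin at the price of fewer equations, which is the noise-versus-unknowns trade-off the multi-step reformulation exploits.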
Resolution improvement in x-ray imaging with an energy-resolving detector
Mats Persson, Mats Danielsson
In x-ray imaging, improving spatial resolution is an important goal, but developing detectors with smaller pixels is technically challenging. We demonstrate a technique for improving spatial resolution by exploiting the fact that the linear attenuation coefficients of all substances within the human body can be expressed, to a good approximation, as a linear combination of two basis functions, or three if iodine contrast is present in the image. When x-rays pass an interface parallel to the beam direction, the exponential attenuation law makes the linear attenuation coefficient measured by the detector a nonlinear combination of the linear attenuation coefficients on each side of the interface. This so-called nonlinear partial volume effect causes the spectral response to depend on the steepness of interfaces in the imaged volume. In this work, we show how this effect can be used to improve spatial resolution in spectral projection x-ray imaging and quantify the achievable resolution improvement. We simulate x-ray transmission imaging of sharp and gradual changes in the projected path length of iodine contrast with an ideal energy-resolving photon-counting detector and demonstrate that the slope of the transition can be determined from the registered spectrum. We simulate piecewise-linear transitions and show that the algorithm is able to reproduce the transition profile on a subpixel scale. The FWHM resolution of the method is 5–30% of the pixel width. The results show that an energy-resolving detector can be used to improve spatial resolution when imaging interfaces of highly attenuating objects.
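The nonlinear partial-volume effect at the heart of this method can be demonstrated in a few lines (the numbers are illustrative): averaging transmissions over a pixel is not the same as the transmission of the averaged path, and when the attenuation is known the coverage fraction can be recovered from the measured transmission.

```python
import numpy as np

mu_t = 3.0     # attenuation coefficient times object thickness (dimensionless)
f = 0.5        # fraction of the detector pixel covered by the object

# The pixel averages transmissions, not path lengths:
T_measured = f * np.exp(-mu_t) + (1 - f)   # what the pixel actually records
T_linear = np.exp(-f * mu_t)               # transmission of the *average* path

# With mu_t known (e.g. from spectral information), invert for the coverage:
f_recovered = (1 - T_measured) / (1 - np.exp(-mu_t))
```

Because T_measured differs from T_linear by an energy-dependent amount (mu_t varies with energy), the recorded spectrum encodes the subpixel geometry of the interface, which is the information the paper's method extracts.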
Classification of breast microcalcifications using spectral mammography
Purpose: To investigate the potential of spectral mammography to distinguish between type I calcifications, consisting of calcium oxalate dihydrate (weddellite) compounds that are more often associated with benign lesions, and type II calcifications, containing hydroxyapatite, which are predominantly associated with malignant tumors. Methods: Using a ray tracing algorithm, we simulated the total number of x-ray photons recorded by the detector at one pixel from a single pencil-beam projection through a breast of 50/50 (adipose/glandular) tissue with inserted microcalcifications of different types and sizes. Material decomposition using two energy bins was then applied to characterize the simulated calcifications as hydroxyapatite or weddellite using maximum-likelihood estimation, taking into account the polychromatic source, the detector response function, and the energy-dependent attenuation. Results: Simulation tests were carried out for different doses and calcification sizes over multiple realizations. The results were summarized using receiver operating characteristic (ROC) analysis, with the area under the curve (AUC) taken as an overall indicator of discrimination performance, showing high AUC values of up to 0.99. Conclusion: Our simulation results, obtained for a uniform breast imaging phantom, indicate that spectral mammography using two energy bins has the potential to be used as a non-invasive method for discrimination between type I and type II microcalcifications, to improve early breast cancer diagnosis and reduce the number of unnecessary breast biopsies.
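The ROC analysis used to summarize performance can be sketched with synthetic scores (stand-ins for the decomposition outputs, not the paper's data): the AUC equals the probability that a randomly drawn score from one class exceeds one from the other, which the pairwise (Mann-Whitney) estimate below computes directly.

```python
import numpy as np

rng = np.random.default_rng(1)
scores_benign = rng.normal(0.0, 1.0, 2000)     # e.g. "weddellite-like" decision scores
scores_malignant = rng.normal(2.0, 1.0, 2000)  # e.g. "hydroxyapatite-like" decision scores

# Pairwise (Mann-Whitney) estimate of the area under the ROC curve:
auc = np.mean(scores_malignant[:, None] > scores_benign[None, :])
```

For two unit-variance Gaussians separated by 2.0, the theoretical AUC is Φ(2/√2) ≈ 0.92; sweeping dose and calcification size shifts this separation and hence the AUC, which is the summary the abstract reports.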
Nuclear Medicine and Magnetic Resonance Imaging
MLAA-based RF surface coil attenuation estimation in hybrid PET/MR imaging
Thorsten Heußer, Christopher M. Rank, Martin T. Freitag, et al.
Attenuation correction (AC) for both patient and hardware attenuation of the 511 keV annihilation photons is required for accurate PET quantification. In hybrid PET/MR imaging, AC for stationary hardware components such as the patient table and MR head coil is performed using CT-derived attenuation templates. AC for flexible hardware components such as MR radiofrequency (RF) surface coils is more challenging. Registration-based approaches, aligning scaled CT-derived attenuation templates with the current patient position, have been proposed but are not used in clinical routine. Ignoring RF coil attenuation has been shown to result in regional activity underestimation of up to 18%. We propose to employ a modified version of the maximum-likelihood reconstruction of attenuation and activity (MLAA) algorithm to obtain an estimate of the RF coil attenuation. Starting with an initial attenuation map not including the RF coil, the attenuation update of MLAA is applied outside the body outline only, allowing RF coil attenuation to be estimated without changing the patient attenuation map. Hence, the proposed method is referred to as external MLAA (xMLAA). In this work, xMLAA for RF surface coil attenuation estimation is investigated using phantom and patient data acquired with a Siemens Biograph mMR. For the phantom data, the average activity error compared to the ground truth was reduced from -8.1% to +0.8% when using the proposed method. Patient data revealed an average activity underestimation of -6.1% for the abdominal region and -5.3% for the thoracic region when ignoring RF coil attenuation.
Nonlinear PET parametric image reconstruction with MRI information using kernel method
Kuang Gong, Guobao Wang, Kevin T. Chen, et al.
Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive but suffers from relatively poor spatial resolution compared with anatomical imaging modalities such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of the nonlinear parameters of a compartment model by using the alternating direction method of multipliers (ADMM). Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
Fast and accurate Monte Carlo-based system response modeling for a digital whole-body PET
Xiangyu Sun, Yanzhao Li, Lingli Yang, et al.
Recently, we have developed a digital whole-body PET scanner based on multi-voltage threshold (MVT) digitizers. To mitigate the impact of resolution-degrading factors, an accurate system response is calculated by Monte Carlo simulation, which is computationally expensive. To address this problem, we improve the symmetry-based method by simulating only an axial wedge region. This approach takes full advantage of the intrinsic symmetries of the cylindrical PET system without significantly increasing the computational cost of applying the symmetries. A total of 4224 symmetries are exploited. It took 17 days to generate the system matrix on 160 Xeon 2.5 GHz cores. Both simulation and experimental data are used to evaluate the accuracy of the system response modeling. The simulation studies show the full width at half maximum of a line source to be 2.1 mm at the center of the FOV and 3.8 mm at 200 mm from the center. Experimental results show that the 2.4 mm rods in the Derenzo phantom image can be well distinguished.
Improved attenuation correction for respiratory gated PET/CT with extended-duration cine CT: a simulation study
Ruoqiao Zhang, Adam M. Alessio, Larry A Pierce II, et al.
Due to the wide variability of intra-patient respiratory motion patterns, the traditional short-duration cine CT used in respiratory-gated PET/CT may be insufficient to match the PET scan data, resulting in suboptimal attenuation correction that ultimately compromises PET quantitative accuracy. Extending the duration of the cine CT can address this data mismatch. In this work, we propose a long-duration cine CT for respiratory-gated PET/CT whose cine acquisition time is ten times longer than that of a traditional short-duration cine CT. We compare the proposed long-duration cine CT with the traditional short-duration cine CT through numerous phantom simulations with 11 respiratory traces measured during patient PET/CT scans. Experimental results show that the long-duration cine CT reduces the motion mismatch between PET and CT by 41% and improves the overall reconstruction accuracy by 42% on average, compared to the traditional short-duration cine CT. The long-duration cine CT also reduces artifacts in PET images caused by misalignment and mismatch between adjacent slices in phase-gated CT images. The improvement in motion matching between PET and CT gained by extending the cine duration depends on the patient, with potentially greater benefits for patients with irregular breathing patterns or larger diaphragm movements.
Estimating posterior image variance with sparsity-based object priors for MRI
Yujia Chen, Yang Lou, Cihat Eldeniz, et al.
Point estimates, such as the maximum a posteriori (MAP) estimate, are commonly computed in image reconstruction tasks. However, such point estimates provide no information about the range of highly probable solutions, namely the uncertainty in the computed estimate. Bayesian inference methods that seek to compute the posterior probability distribution function (PDF) of the object can provide exactly this information, but are generally computationally intractable. Markov chain Monte Carlo (MCMC) methods, which avoid explicit posterior computation by directly sampling from the PDF, require considerable expertise to run properly. This work investigates a computationally efficient variational Bayesian inference approach for computing the posterior image variance, with application to MRI. The methodology employs a sparse object prior model that is consistent with the model assumed in most sparse reconstruction methods. The posterior variance map generated by the proposed method provides valuable information that reveals how data-acquisition parameters and the specification of the object prior affect the reliability of a reconstructed MAP image. The proposed method is demonstrated by use of computer-simulated MRI data.
Affordable CZT SPECT with dose-time minimization (Conference Presentation)
James W. Hugg, Brian W. Harris, Ian Radley
PURPOSE: Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS: Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because CZT was historically expensive, the first clinical applications were limited to small FOVs. Radiation doses were initially high and exam times long. Advances have significantly improved the efficiency of CZT-based molecular imaging systems, and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS: Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4% at 140 keV; maximum count rate is <1.5 times higher; non-detecting camera edges are reduced ~3-fold. Scattered photons are greatly reduced in the photopeak energy window, image contrast is improved, and the optimal FOV is increased to the entire camera area. CONCLUSION: Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator design and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.
New Systems and Technologies
3D-printed focused collimator for intra-operative gamma-ray detection
David W. Holdsworth, Hristo N. Nikolov, Steven I. Pollmann
Recent developments in targeted radiopharmaceutical labels have increased the need for sensitive, real-time gamma detection during cancer surgery and biopsy. Additive manufacturing (3D printing) in metal has now made it possible to design and fabricate complex metal collimators for compact gamma probes. We describe the design and implementation of a 3D-printed focused collimator that allows real-time detection of gamma radiation from within a small volume of interest, using a single-crystal, large-area detector. The collimator was fabricated by laser melting of powdered stainless steel (316L) on a commercial 3D metal printer (AM125, Renishaw plc). The prototype collimator is 20 mm thick, with hexagonally close-packed holes designed to focus to a point 35 mm below the surface of the collimator face. Tests were carried out with a low-activity (<1 μCi) 241Am source, using a conventional gamma-ray detector probe incorporating a 2.5 cm diameter, 2.5 cm thick NaI crystal coupled to a photomultiplier. The measured full width at half maximum (FWHM) was less than 5.6 mm, and the collimator detection efficiency was 44%. The ability to fabricate fine features in solid metal makes it possible to develop optimized designs for high-efficiency, focused gamma collimators for real-time intraoperative imaging applications.
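A FWHM figure like the one reported can be extracted from a sampled profile by locating the half-maximum crossings with linear interpolation; a minimal sketch on a synthetic Gaussian profile (not the measured detection data) follows.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked sampled profile."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linearly interpolate the left and right half-maximum crossings.
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-20.0, 20.0, 401)      # position across the focal spot (mm)
sigma = 2.0
y = np.exp(-x**2 / (2 * sigma**2))     # synthetic Gaussian detection profile
width = fwhm(x, y)                     # expect 2*sqrt(2*ln 2)*sigma, about 4.71 mm
```

Interpolating the crossings rather than counting samples above half maximum keeps the estimate accurate on a subsample scale, which matters when the profile is sampled coarsely relative to its width.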
Blood-pool contrast agent for pre-clinical computed tomography
Charmainne Cruje, Justin J. Tse, David W. Holdsworth, et al.
Advances in nanotechnology have led to the development of blood-pool contrast agents for micro-computed tomography (micro-CT). Although long-circulating nanoparticle-based agents exist for micro-CT, they are predominantly based on iodine, which has a low atomic number. Micro-CT contrast increases when using elements with higher atomic numbers (i.e. lanthanides), particularly at higher energies. The purpose of our work was to develop and evaluate a lanthanide-based blood-pool contrast agent suitable for in vivo micro-CT. We synthesized a contrast agent in the form of polymer-encapsulated Gd nanoparticles and evaluated its stability in vitro. The synthesized nanoparticles were shown to have an average diameter of 127 ± 6 nm, with good size dispersity. The particle size distribution, evaluated by dynamic light scattering over a period of two days, demonstrated no change in the size of the contrast agent in water or saline. Additionally, our contrast agent was stable in a mouse serum mimic for up to 30 minutes. CT images of the synthesized contrast agent (containing 27 mg/mL of Gd) demonstrated an attenuation of over 1000 Hounsfield units. This approach to synthesizing a Gd-based blood-pool contrast agent promises to enhance the capabilities of micro-CT imaging.
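For reference, the Hounsfield-unit scale behind the reported ">1000 HU" expresses linear attenuation relative to water; a minimal sketch follows (the water attenuation value used here is illustrative, and depends in practice on beam energy).

```python
def hounsfield(mu, mu_water=0.2):
    """Convert linear attenuation (1/cm) to Hounsfield units.

    mu_water = 0.2 /cm is an illustrative value; the true value is
    energy-dependent.
    """
    return 1000.0 * (mu - mu_water) / mu_water

hu_water = hounsfield(0.2)   # water defines 0 HU by construction
hu_double = hounsfield(0.4)  # twice water's attenuation maps to +1000 HU
```

So an agent measuring over 1000 HU attenuates the beam more than twice as strongly as water at the scanning energy.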
Automated 3D coronary sinus catheter detection using a scanning-beam digital x-ray system
David A. P. Dunkerley, Jordan M. Slagowski, Lindsay E. Bodart, et al.
Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D tracking of catheter electrodes concurrent with fluoroscopic display. To facilitate respiratory motion-compensated 3D catheter tracking, an automated coronary sinus (CS) catheter detection algorithm for SBDX was developed. The technique uses the 3D localization capability of SBDX and prior knowledge of the catheter shape. Candidate groups of points representing the CS catheter are obtained from a 3D shape-constrained search. A cost function is then minimized over the groups to select the most probable CS catheter candidate. The algorithm was implemented in MATLAB and tested offline using recorded image sequences of a chest phantom containing a CS catheter, ablation catheter, and fiducial clutter. Fiducial placement was varied to create challenging detection scenarios. Table panning and elevation was used to simulate motion. The CS catheter detection method had 98.1% true positive rate and 100% true negative rate in 2755 frames of imaging. Average processing time was 12.7 ms/frame on a PC with a 3.4 GHz CPU and 8 GB memory. Motion compensation based on 3D CS catheter tracking was demonstrated in a moving chest phantom with a fixed CS catheter and an ablation catheter pulled along a fixed trajectory. The RMS error in the tracked ablation catheter trajectory was 1.41 mm, versus 10.35 mm without motion compensation. A computationally efficient method of automated 3D CS catheter detection has been developed to assist with motion-compensated 3D catheter tracking and registration of 3D cardiac models to tracked catheters.
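As an illustrative aside, the shape-constrained selection step can be sketched as a cost minimization over candidate electrode groups. The cost below (squared deviation from an assumed nominal inter-electrode spacing) is a hypothetical stand-in; the paper's actual cost terms are not reproduced here:

```python
import numpy as np
from itertools import combinations

NOMINAL_SPACING_MM = 5.0  # assumed electrode spacing, for illustration only

def group_cost(points):
    """Squared deviation of consecutive inter-point gaps from nominal."""
    pts = np.asarray(points, dtype=float)
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return float(np.sum((gaps - NOMINAL_SPACING_MM) ** 2))

def detect_cs_catheter(electrodes, n_electrodes=4):
    """Return the lowest-cost group of electrode indices.

    A real implementation would also order points along the fitted
    catheter curve; index order suffices for this sketch.
    """
    best, best_cost = None, float("inf")
    for group in combinations(range(len(electrodes)), n_electrodes):
        cost = group_cost([electrodes[i] for i in group])
        if cost < best_cost:
            best, best_cost = group, cost
    return best, best_cost
```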
An x-ray-based capsule for colorectal cancer screening incorporating single photon counting technology
Ronen Lifshitz, Yoav Kimchy, Nir Gelbard, et al.
An ingestible capsule for colorectal cancer screening, based on ionizing-radiation imaging, has been developed and is in advanced stages of system stabilization and clinical evaluation. The imaging principle allows future patients using this technology to avoid bowel cleansing and to continue their normal life routine during the procedure. The imaging principle of the Check-Cap capsule, or C-Scan® Cap, is essentially based on reconstructing scattered radiation, while both the radiation source and the radiation detectors reside within the capsule. The radiation source is a custom-made radioisotope encased in a small canister, collimated into rotating beams. While traveling along the human colon, the capsule irradiates the colon wall from within. Scattering of radiation occurs both inside and outside the colon segment; some of this radiation is scattered back and detected by sensors onboard the capsule. During the procedure, the patient receives small amounts of contrast agent as an addition to his/her normal diet. The presence of contrast agent inside the colon causes the dominant physical processes to become Compton scattering and X-ray fluorescence (XRF), which differ mainly in the energy of the scattered photons. The detector readout electronics incorporates low-noise single photon counting channels, allowing separation between the products of these different physical processes. Separating the radiation energies essentially allows estimation of the distance from the capsule to the colon wall, and hence structural imaging of the intraluminal surface. This allows imaging of structural protrusions into the colon volume, especially focusing on adenomas that may develop into colorectal cancer.
Simulation of a compact analyzer-based imaging system with a regular x-ray source
Oriol Caudevilla, Wei Zhou, Stanislav Stoupin, et al.
Analyzer-based Imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray techniques. PC techniques measure the deflection of X-rays as they interact with a sample, and are known to provide higher-contrast images of soft tissue than other X-ray methods. This is of high interest in the medical field, in particular for mammography applications. This paper presents a simulation tool for table-top ABI systems using a conventional polychromatic X-ray source.
Modeling and Simulations I: CT
Airways, vasculature, and interstitial tissue: anatomically informed computational modeling of human lungs for virtual clinical trials
This study aimed to model virtual human lung phantoms including both non-parenchymal and parenchymal structures. Initial branches of the non-parenchymal structures (airways, arteries, and veins) were segmented from anatomical data in each lobe separately. A volume-filling branching algorithm was utilized to grow the higher generations of the airways and vessels to the level of terminal branches. The diameters of the airways and vessels were estimated using established relationships between flow rates and diameters. The parenchyma was modeled based on secondary pulmonary lobule units. Polyhedral shapes with variable sizes were modeled, and the borders were assigned to interlobular septa. A heterogeneous background was added inside these units using a non-parametric texture synthesis algorithm informed by a high-resolution CT lung specimen dataset. A voxel-based CT simulator was developed to create synthetic helical CT images of the phantom with different pitch values. Results showed progressive degradation in the depiction of lung details with increased pitch. Overall, the enhanced lung models combined with the XCAT phantoms promise to provide a powerful toolset for performing virtual clinical trials in the context of thoracic imaging. Such trials, not practical using clinical datasets or simplistic phantoms, can quantitatively evaluate and optimize advanced imaging techniques towards patient-based care.
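The flow-to-diameter step can be illustrated with a Murray's-law-type relation, in which a child branch's diameter scales with the cube root of its flow; whether the authors used exactly this form is an assumption:

```python
# Murray's-law-type diameter assignment (d proportional to flow^(1/3));
# the exact fitted flow-diameter relationship used in the paper is an
# assumption here.
def child_diameter(parent_diameter, parent_flow, child_flow):
    return parent_diameter * (child_flow / parent_flow) ** (1.0 / 3.0)
```

For example, a child branch carrying 1/8 of the parent's flow would be assigned half the parent's diameter.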
A virtual clinical trial using projection-based nodule insertion to determine radiologist reader performance in lung cancer screening CT
Lifeng Yu, Qiyuan Hu, Chi Wan Koo, et al.
Task-based image quality assessment using model observers promises to provide an efficient, quantitative, and objective approach to CT dose optimization. Before this approach can be reliably used in practice, its correlation with radiologist performance for the same clinical task needs to be established. Determining human observer performance for a well-defined clinical task, however, has always been a challenge due to the tremendous amount of effort needed to collect a large number of positive cases. To overcome this challenge, we developed an accurate projection-based insertion technique. In this study, we present a virtual clinical trial using this tool and a low-dose simulation tool to determine radiologist performance on lung-nodule detection as a function of radiation dose, nodule type, nodule size, and reconstruction methods. The lesion insertion and low-dose simulation tools together were demonstrated to provide flexibility to generate realistically-appearing clinical cases under well-defined conditions. The reader performance data obtained in this virtual clinical trial can be used as the basis to develop model observers for lung nodule detection, as well as for dose and protocol optimization in lung cancer screening CT.
Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT
The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in the Duke Lesion Tool (Duke University) and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; Syngo.via, Siemens Healthcare; and IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (± standard error) was -9.2±3.2% for real lesions versus -6.7±1.2% for virtual lesions with tool A, 3.9±2.5% and 5.0±0.9% with tool B, and 5.3±2.3% and 1.8±0.8% with tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (< 4% difference) with p > 0.05 in most cases. These results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.
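For reference, the reported statistics (average percent bias ± standard error) can be computed as in this minimal sketch:

```python
import math

def percent_bias(measured, true):
    """Per-lesion percent volume error relative to the reference volume."""
    return [100.0 * (m - t) / t for m, t in zip(measured, true)]

def mean_and_standard_error(errors):
    """Sample mean and standard error of the mean."""
    n = len(errors)
    mean = sum(errors) / n
    variance = sum((e - mean) ** 2 for e in errors) / (n - 1)
    return mean, math.sqrt(variance / n)
```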
Learning-based stochastic object models for use in optimizing imaging systems
Steven R. Dolly, Mark A. Anastasio, Lifeng Yu, et al.
It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
False dyssynchrony: problem with image-based cardiac functional analysis using x-ray computed tomography
We have developed a digitally synthesized patient, which we call the “Zach” (Zero millisecond Adjustable Clinical Heart) phantom, which provides access to the ground truth and allows assessment of image-based cardiac functional analysis (CFA) using CT images with clinically realistic settings. The study using the Zach phantom revealed a major problem with image-based CFA: "false dyssynchrony." Even though the true motion of wall segments is in synchrony, it may appear dyssynchronous in the reconstructed cardiac CT images. This is attributed to how cardiac images are reconstructed and how wall locations are updated over cardiac phases. The presence and the degree of false dyssynchrony may vary from scan to scan, which could degrade the accuracy and the repeatability (or precision) of image-based CT-CFA exams.
Reanimating patients: cardio-respiratory CT and MR motion phantoms based on clinical CT patient data
Johannes Mayer, Sebastian Sauppe, Christopher M. Rank, et al.
To date, several algorithms have been developed that reduce or avoid artifacts caused by cardiac and respiratory motion in computed tomography (CT). The motion information is converted into so-called motion vector fields (MVFs) and used for motion compensation (MoCo) during the image reconstruction. To analyze these algorithms quantitatively, there is a need for ground truth patient data displaying realistic motion. We developed a method to generate a digital ground truth displaying realistic cardiac and respiratory motion that can be used as a tool to assess MoCo algorithms. Using available MoCo methods, we measured the motion in CT scans with high spatial and temporal resolution and transferred the motion information onto patient data with different anatomy or imaging modality, thereby reanimating the patient virtually. In addition to these images, the ground truth motion information in the form of MVFs is available and can be used to benchmark the MVF estimation of MoCo algorithms. Here we applied the method to generate 20 CT volumes displaying detailed cardiac motion that can be used for cone-beam CT (CBCT) simulations, and a set of 8 MR volumes displaying respiratory motion. Our method is able to reanimate patient data virtually. In combination with the MVFs, it serves as a digital ground truth and provides an improved framework to assess MoCo algorithms.
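As a simplified illustration of applying an MVF to "reanimate" an image, the sketch below pulls each output pixel from the input at its motion-displaced position, using nearest-neighbour sampling; real MoCo pipelines operate on 3D (plus time) fields with higher-order interpolation:

```python
import numpy as np

# Nearest-neighbour 2D warp by a motion vector field (MVF): each output
# pixel is pulled from the input image at its displaced position,
# clipped at the image borders.
def warp(image, mvf_y, mvf_x):
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(yy + mvf_y).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + mvf_x).astype(int), 0, w - 1)
    return image[src_y, src_x]
```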
Modeling and Simulations II: Breast Imaging
High-resolution, anthropomorphic, computational breast phantom: fusion of rule-based structures with patient-based anatomy
Xinyuan Chen, Xiaolin Gong, Christian G. Graff, et al.
While patient-based breast phantoms are realistic, they are limited by low resolution due to the image acquisition and segmentation process. The purpose of this study is to restore the high-frequency components of patient-based phantoms by adding power law noise (PLN) and breast structures generated from mathematical models. First, 3D radially symmetric PLN with β=3 was added at the boundary between adipose and glandular tissue to connect broken tissue and create a high-frequency contour of the glandular tissue. Next, selected high-frequency features from the FDA rule-based computational phantom (Cooper's ligaments, ductal network, and blood vessels) were fused into the phantom. The effects of the enhancement were demonstrated by 2D mammography projections and digital breast tomosynthesis (DBT) reconstruction volumes. The addition of PLN and rule-based models leads to a continuous decrease in β. The new β is 2.76, which is similar to what is typically found for reconstructed DBT volumes. The new combined breast phantoms retain the realism from segmentation and gain higher resolution after restoration.
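The radially symmetric power-law noise step can be sketched by spectrally shaping white Gaussian noise so that its power spectrum falls off as f^(-β); this is a generic construction, not the authors' code:

```python
import numpy as np

# Generic 3D radially symmetric power-law (1/f^beta) noise: filter white
# Gaussian noise in the Fourier domain so its power spectrum ~ f^(-beta),
# then normalize to zero mean and unit standard deviation.
def power_law_noise_3d(shape, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    f = np.sqrt(sum(g ** 2 for g in freqs))
    f[0, 0, 0] = np.inf              # suppress the DC term
    amplitude = f ** (-beta / 2.0)   # amplitude filter -> power ~ f^(-beta)
    noise = np.fft.ifftn(np.fft.fftn(white) * amplitude).real
    return (noise - noise.mean()) / noise.std()
```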
Detectability of artificial lesions in anthropomorphic virtual breast phantoms of variable glandular fraction
Thomas J. Sauer, Christian G. Graff, Rongping Zeng, et al.
This work seeks to utilize a cohort of computational, patient-based breast phantoms and anthropomorphic lesions inserted therein to determine trends in breast lesion detectability as a function of several clinically relevant variables. One of the proposed measures of local density gives rise to a statistically significant trend in lesion detectability, and it is apparent that lesion type is also a predictor of relative detectability.
Third generation anthropomorphic physical phantom for mammography and DBT: incorporating voxelized 3D printing and uniform chest wall QC region
Physical breast phantoms provide a standard method to test, optimize, and develop clinical mammography systems, including new digital breast tomosynthesis (DBT) systems. In previous work, we produced an anthropomorphic phantom based on 500x500x500 μm breast CT data using commercial 3D printing. We now introduce an improved phantom based on a new cohort of virtual models with 155x155x155 μm voxels, fabricated through voxelized 3D printing and dithering, which confer higher resolution and greater control over contrast. This new generation includes a uniform chest wall extension for evaluating conventional QC metrics. The uniform region contains a grayscale step wedge, chest wall coverage markers, fiducial markers, spheres, and metal ink stickers of line pairs and edges to assess contrast, resolution, artifact spread function, MTF, and other criteria. We also experimented with doping the photopolymer material with calcium, iodine, and zinc to increase the current contrast. In particular, zinc was found to increase attenuation significantly, beyond 100% breast density, with a linear relationship between zinc concentration and attenuation (or breast density). This linear relationship was retained when the zinc-doped material was applied in conjunction with 3D printing. As we move towards our long-term goal of phantoms that are indistinguishable from patients, this new generation of anthropomorphic physical breast phantom validates our voxelized printing process, demonstrates the utility of a uniform QC region with features from 3D printing and metal ink stickers, and shows potential for improved contrast via doping.
A physical breast phantom for 2D and 3D x-ray imaging made through inkjet printing
Lynda C. Ikejimba, Christian G. Graff, Shani Rosenthal, et al.
Physical breast phantoms are used for imaging evaluation studies with 2D and 3D breast x-ray systems, serving as surrogates for human patients. However, there is presently a limited selection of available phantoms that are realistic in terms of containing the complex tissue architecture of the human breast. In addition, not all phantoms can be successfully utilized for both 2D and 3D breast imaging, and many are uniform or unrealistic in appearance, expensive, or difficult to obtain. The purpose of this work was to develop a new method to generate realistic physical breast phantoms using easy-to-obtain and inexpensive materials. First, analytical modeling was used to design a virtual model, which was then compressed using finite element modeling. Next, the physical phantom was realized through inkjet printing with a standard inkjet printer using parchment paper and specialized inks, formulated using silver nanoparticles and a bismuth salt. The printed phantom sheets were then aligned and held together using a custom-designed support plate made of PMMA, and imaged on clinical FFDM and DBT systems. Objects of interest were also placed within the phantom to simulate microcalcifications, pathologies that often occur in the breast. The linear attenuation coefficients of the inks and parchment were compared against tissue-equivalent samples and found to be similar to breast tissue. The phantom is promising for use in imaging studies and in developing QC protocols.
In silico imaging clinical trials for regulatory evaluation: initial considerations for VICTRE, a demonstration study
Expensive and lengthy clinical trials can delay regulatory evaluation and add significant burden that stifles innovation affecting patient access to novel, high-quality imaging technologies. In silico imaging holds promise for evaluating the safety and effectiveness of imaging technologies with much less burden than clinical trials. We define in silico imaging as a computer simulation of an entire imaging system (including source, object, task, and observer components) used for research, development, optimization, technology assessment, and regulatory evaluation of new technology. In this work we describe VICTRE (our study of virtual imaging clinical trials for regulatory evaluation) and the considerations for building an entire imaging pipeline in silico including device (physics), patient (anatomy, disease), and image interpretation models for regulatory evaluation using open-source tools.
Breast Imaging: Tomosynthesis
Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis
Lucas R. Borges, Predrag R. Bakic, Andrew D. A. Maidment, et al.
Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described in the literature, one common issue is the interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal difference to noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps, which were not implemented in the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.
Comparing the imaging performance of computed super resolution and magnification tomosynthesis
Tristan D. Maidment, Trevor L. Vent, William S. Ferris, et al.
Computed super-resolution (SR) is a method of reconstructing images with pixels that are smaller than the detector element size; superior spatial resolution is achieved through the elimination of aliasing and alteration of the sampling function imposed by the reconstructed pixel aperture. By comparison, magnification mammography is a method of projection imaging that uses geometric magnification to increase spatial resolution. This study explores the development and application of magnification digital breast tomosynthesis (MDBT). Four different acquisition geometries are compared in terms of various image metrics. High-contrast spatial resolution was measured in various axes using a lead star pattern. A modified Defrise phantom was used to determine the low-frequency spatial resolution. An anthropomorphic phantom was used to simulate clinical imaging. Each experiment was conducted at three different magnifications: contact (1.04x), MAG1 (1.3x), and MAG2 (1.6x). All images were taken on our next generation tomosynthesis system, an in-house solution designed to optimize SR. It is demonstrated that both computed SR and MDBT (MAG1 and MAG2) provide improved spatial resolution over non-SR contact imaging. To achieve the highest resolution, SR and MDBT should be combined. However, MDBT is adversely affected by patient motion at higher magnifications. In addition, MDBT requires a higher radiation dose and delays diagnosis, since MDBT would be conducted upon recall. By comparison, SR can be conducted with the original screening data. In conclusion, this study demonstrates that computed SR and MDBT are both viable methods of imaging the breast.
An alternate design for the Defrise phantom to quantify resolution in digital breast tomosynthesis
Raymond J. Acciavatti, William Mannherz, Margaret Nolan, et al.
Our previous work analyzed the Defrise phantom as a test object for evaluating image quality in digital breast tomosynthesis (DBT). The phantom is assembled from multiple plastic plates, which are arranged to form a square wave. In our previous work, there was no explicit analysis of how image quality varies with the thickness of the plates. To investigate this concept, a modified design of the phantom is now considered. For this purpose, each rectangular plate was laser-cut at an angle, creating a slope along which thickness varies continuously. The phantom was imaged using a clinical DBT system, and the relative modulation of the plastic-air separations was calculated in the reconstruction. In addition, a theoretical model was developed to determine whether modulation can be optimized by modifying the x-ray tube trajectory. It is demonstrated that modulation is dependent on the orientation of the frequency. Modulation is within detectable limits over a broad range of phantom thicknesses if frequency is parallel with the tube travel direction. Conversely, there is marked loss of modulation if frequency is oriented along the posteroanterior direction. In particular, as distance from the chest wall increases, there is a smaller range of thicknesses over which modulation is within detectable limits. Theoretical modeling suggests that this anisotropy is minimized by introducing tube motion along the posteroanterior direction. In conclusion, this paper demonstrates that the Defrise phantom is a tool for analyzing the limits of resolution in DBT systems.
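A common definition of the relative modulation of a square-wave (plate/air) profile, assumed here for illustration, is (max − min)/(max + min):

```python
def relative_modulation(profile):
    """(max - min) / (max + min) of a reconstructed plate/air profile."""
    hi, lo = max(profile), min(profile)
    return (hi - lo) / (hi + lo)
```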
Metal and calcification artifact reduction for digital breast tomosynthesis
Julia Wicklein, Anna Jerebko, Ludwig Ritschl, et al.
Tomosynthesis images of the breast suffer from artifacts caused by the presence of highly absorbing materials. These can be induced either by metal objects, such as needles or clips inserted during biopsy procedures, or by larger calcifications inside the examined breast. Mainly two different kinds of artifacts appear after the filtered backprojection procedure. The first type is undershooting artifacts near edges of high-contrast objects, caused by the filtering step. The second type is out-of-plane (ripple) artifacts that appear even in slices where the metal object or macrocalcification is not present. Due to the limited angular range of tomosynthesis systems, overlapping structures have a strong influence on neighboring regions. To overcome these problems, a segmentation of artifact-introducing objects is performed on the projection images. Both projection versions, with and without high-contrast objects, are filtered independently to avoid undershoot. During backprojection, a decision is made for each reconstructed voxel as to whether it belongs to an artifact or to a high-contrast object. This decision is based on a mask image obtained from the segmentation of high-contrast objects. This procedure avoids undershooting artifacts and additionally reduces out-of-plane ripple. Results are demonstrated for different kinds of artifact-inducing objects and calcifications.
Contrast enhanced imaging with a stationary digital breast tomosynthesis system
Connor Puett, Jabari Calliste, Gongting Wu, et al.
Digital breast tomosynthesis (DBT) captures some depth information and thereby improves the conspicuity of breast lesions, compared to standard mammography. Using contrast during DBT may also help distinguish malignant from benign sites. However, adequate visualization of the low iodine signal requires a subtraction step to remove background signal and increase lesion contrast. Additionally, attention to factors that limit contrast, including scatter, noise, and artifact, is important during the image acquisition and post-acquisition processing steps. Stationary DBT (sDBT) is an emerging technology that offers a higher spatial and temporal resolution than conventional DBT. This phantom-based study explored contrast-enhanced sDBT (CE sDBT) across a range of clinically-appropriate iodine concentrations, lesion sizes, and breast thicknesses. The protocol included an effective scatter correction method and an iterative reconstruction technique that is unique to the sDBT system. The study demonstrated the ability of this CE sDBT system to collect projection images adequate for both temporal subtraction (TS) and dual-energy subtraction (DES). Additionally, the reconstruction approach preserved the improved contrast-to-noise ratio (CNR) achieved in the subtraction step. Finally, scatter correction increased the iodine signal and CNR of iodine-containing regions in projection views and reconstructed image slices during both TS and DES. These findings support the ongoing study of sDBT as a potentially useful tool for contrast-enhanced breast imaging and also highlight the significant effect that scatter has on image quality during DBT.
Effects of detector blur and correlated noise on digital breast tomosynthesis reconstruction
To improve digital breast tomosynthesis (DBT) image quality, we are developing model-based iterative reconstruction methods. We developed the SQS-DBCN algorithm, which incorporated detector blur into the system model and correlation into the noise model under some simplifying assumptions. In this paper, we further improved the regularization in the SQS-DBCN method by incorporating neighbors along the diagonal directions. To further understand the role of the different components in the system model of the SQS-DBCN method, we reconstructed DBT images without modeling either the detector blur or noise correlation for comparison. Visual comparison of the reconstructed images showed that regularizing with diagonal directions reduced artifacts and the noise level. The SQS-DBCN reconstructed images had better image quality than reconstructions without models for detector blur or correlated noise, as indicated by the contrast-to-noise ratios (CNR) of microcalcifications (MCs) and textural artifacts. These results indicated that regularized DBT reconstruction with detector blur and correlated noise modeling, even with simplifying assumptions, can improve DBT image quality compared to that without system modeling.
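The CNR figure of merit used for the microcalcifications can be illustrated with the usual definition (mean signal minus mean background, divided by background noise); the exact ROI conventions in the paper are an assumption:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Mean signal-background difference over background standard deviation."""
    return (np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)
```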
Poster Session: Cone-Beam CT
Cone-beam CT image contrast and attenuation-map linearity improvement (CALI) for brain stereotactic radiosurgery procedures
Sayed Masoud Hashemi, Young Lee, Markus Eriksson, et al.
A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The CBCT system incorporated in the ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and, together with other physical imperfections, causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving shading and beam-hardening artifact corrections, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by the bowtie filter. Our shading correction algorithm relies solely on the acquired projection images (i.e., no prior information is required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images, and a smoothed version of the difference between the ideal and measured projections is used in the correction. The proposed beam-hardening and dome artifact corrections are segmentation-free. CALI was tested on the CatPhan phantom, as well as on patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft-tissue contrast visibility, revealing structures such as ventricles and lesions which were otherwise undetectable in FBP-reconstructed images. The linearity of the reconstructed attenuation map was also improved, resulting in more accurate CT numbers.
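The projection-domain shading correction idea (add back a smoothed version of the ideal-minus-measured difference) can be sketched in 1D; the moving-average smoother below is a stand-in for whatever low-pass filter the authors actually used:

```python
import numpy as np

def smooth(x, k=5):
    """Moving-average stand-in for a generic low-pass smoothing filter."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

def shading_correct(measured, ideal, k=5):
    """Add the smoothed (ideal - measured) difference as a low-frequency
    shading correction, as in the projection-domain scheme sketched above."""
    error = smooth(ideal - measured, k)
    return measured + error
```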
Dual energy approach for cone beam artifacts correction
Cone beam computed tomography systems generate 3D volumetric images, which provide further morphological information compared to radiography and tomosynthesis systems. However, images reconstructed by the FDK algorithm contain cone beam artifacts when the cone angle is large. To reduce the cone beam artifacts, a two-pass algorithm has been proposed. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by high-density materials, and provides an effective method to estimate the error images (i.e., cone beam artifact images) produced by those materials. While this approach is simple and effective for a small cone angle (i.e., 5–7 degrees), the correction performance degrades as the cone angle increases. In this work, we propose a new method to reduce the cone beam artifacts using a dual energy technique. The basic idea of the proposed method is to estimate the error images generated by the high-density materials more reliably. To do this, projection data of the high-density materials are extracted from dual energy CT projection data using a material decomposition technique, and then reconstructed by iterative reconstruction with total-variation regularization. The reconstructed high-density materials are used to estimate the error images from the original FDK images. The performance of the proposed method is compared with the two-pass algorithm using root mean square errors. The results show that the proposed method reduces the cone beam artifacts more effectively, especially at large cone angles.
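The projection-domain material decomposition step can be illustrated, under a simplified two-material model, as solving a 2x2 linear system whose coefficients are assumed effective attenuation values (the paper's actual decomposition method is not reproduced here):

```python
import numpy as np

# Simplified two-material decomposition: solve A @ x = p for material
# path lengths x, where A holds assumed effective attenuation
# coefficients at the low and high energies and p holds the measured
# low- and high-energy log projections.
def decompose(p_low, p_high, A):
    return np.linalg.solve(np.asarray(A, dtype=float),
                           np.array([p_low, p_high], dtype=float))
```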
A patch-based CBCT scatter artifact correction using prior CT
Xiaofeng Yang, Tian Liu, Xue Dong, et al.
We have developed a novel patch-based cone-beam CT (CBCT) artifact correction method based on prior CT images. First, we used image registration to align the planning CT with the CBCT to reduce the geometric difference between the two images. Then, we brought the planning-CT-based prior information into a Bayesian deconvolution framework to perform CBCT scatter artifact correction based on a patch-wise non-local means strategy. We evaluated the proposed correction method using a Catphan phantom with multiple inserts, based on contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and image spatial non-uniformity (ISN). All CNR, SNR, and ISN values in the corrected CBCT image were much closer to those in the planning CT images. The results demonstrate that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.
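As an illustration of the patch-wise guidance idea described above, the sketch below weights prior-CT pixels by the similarity of their surrounding patches, in the spirit of non-local means. It is a simplified 2D toy (the function names, patch and search sizes, and the filtering parameter `h` are all illustrative), not the authors' Bayesian deconvolution implementation:

```python
import numpy as np

def patch_weight(patch_a, patch_b, h):
    """Gaussian-weighted patch similarity, as in non-local means."""
    d2 = np.mean((patch_a - patch_b) ** 2)
    return np.exp(-d2 / (h ** 2))

def patch_nlm_correct(cbct, prior, patch=3, search=5, h=25.0):
    """Replace each CBCT pixel with a weighted average of prior-CT pixels
    whose surrounding patches resemble the local CBCT patch (a simplified
    sketch of patch-wise non-local means guidance)."""
    pad = patch // 2
    out = np.zeros_like(cbct, dtype=float)
    P = np.pad(cbct, pad, mode="edge")   # padded CBCT for patch extraction
    Q = np.pad(prior, pad, mode="edge")  # padded prior CT
    rows, cols = cbct.shape
    for i in range(rows):
        for j in range(cols):
            ref = P[i:i + patch, j:j + patch]
            wsum, vsum = 0.0, 0.0
            for di in range(-(search // 2), search // 2 + 1):
                ii = min(max(i + di, 0), rows - 1)
                for dj in range(-(search // 2), search // 2 + 1):
                    jj = min(max(j + dj, 0), cols - 1)
                    w = patch_weight(ref, Q[ii:ii + patch, jj:jj + patch], h)
                    wsum += w
                    vsum += w * prior[ii, jj]
            out[i, j] = vsum / wsum
    return out
```

With a spatially uniform prior, the weighted average returns the prior value exactly, which is a convenient sanity check.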
Shading correction algorithm for cone-beam CT in radiotherapy: extensive clinical validation of image quality improvement
K. D. Joshi, T. E. Marchant, C. J. Moore
A shading correction algorithm for the improvement of cone-beam CT (CBCT) images (Phys. Med. Biol. 53 5719–33) has been further developed, optimised and validated extensively using 135 clinical CBCT images of patients undergoing radiotherapy treatment of the pelvis, lungs and head and neck. An automated technique has been developed to efficiently analyse the large number of clinical images. Small regions of similar tissue (for example fat tissue) are automatically identified using CT images. The same regions on the corresponding CBCT image are analysed to ensure that they do not contain pixels representing multiple types of tissue. The mean value of all selected pixels and the non-uniformity, defined as the median absolute deviation of the mean values in each small region, are calculated. Comparisons between CT and raw and corrected CBCT images are then made. Analysis of fat regions in pelvis images shows an average difference in mean pixel value between CT and CBCT of 136.0 HU in raw CBCT images, which is reduced to 2.0 HU after the application of the shading correction algorithm. The average difference in non-uniformity of fat pixels is reduced from 33.7 in raw CBCT to 2.8 in shading-corrected CBCT images. Similar results are obtained in the analysis of lung and head and neck images.
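The abstract's uniformity metrics are easy to state in code. The sketch below (illustrative, not the authors' implementation) computes per-region mean pixel values and the non-uniformity as the median absolute deviation of those means, given boolean region masks:

```python
import numpy as np

def region_means(image, regions):
    """Mean pixel value inside each small tissue region (boolean masks)."""
    return np.array([image[m].mean() for m in regions])

def non_uniformity(image, regions):
    """Non-uniformity as defined above: the median absolute deviation
    of the per-region mean values."""
    means = region_means(image, regions)
    return np.median(np.abs(means - np.median(means)))
```

Comparing these two quantities between CT and raw/corrected CBCT reproduces the kind of tabulated comparison reported in the abstract.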
A biomechanical modeling guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction
Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time and dose of 4D-CBCT are substantially higher than those of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections, reducing the imaging time and dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structural details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons show that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.
4D DSA reconstruction using tomosynthesis projections
Marc Buehler, Jordan M. Slagowski, Charles A. Mistretta, et al.
We investigate the use of tomosynthesis in 4D DSA to improve the accuracy of reconstructed vessel time-attenuation curves (TACs). It is hypothesized that a narrow-angle tomosynthesis dataset for each time point can be exploited to reduce artifacts caused by vessel overlap in individual projections. 4D DSA reconstructs time-resolved 3D angiographic volumes from a typical 3D DSA scan consisting of mask and iodine-enhanced C-arm rotations. Tomosynthesis projections are obtained either from a conventional C-arm rotation, or from an inverse geometry scanning-beam digital x-ray (SBDX) system. In the proposed method, rays of the tomosynthesis dataset which pass through multiple vessels can be ignored, allowing the non-overlapped rays to impart temporal information to the 4D DSA. The technique was tested in simulated scans of 2 mm diameter vessels separated by 2 to 5 cm, with TACs following either early or late enhancement. In standard 4D DSA, overlap artifacts were clearly present. Use of tomosynthesis projections in 4D DSA reduced TAC artifacts caused by vessel overlap, when a sufficient fraction of non-overlapped rays was available in each time frame. In cases where full overlap between vessels occurred, information could be recovered via a proposed image space interpolation technique. SBDX provides a tomosynthesis scan for each frame period in a rotational acquisition, whereas a standard C-arm geometry requires the grouping of multiple frames.
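The core idea of discarding tomosynthesis rays that traverse more than one vessel can be sketched in a toy parallel-beam geometry with vessels modeled as circles (the geometry and all names below are illustrative, not the SBDX or C-arm implementation):

```python
import numpy as np

def ray_vessel_hits(ray_x, vessels):
    """Count how many vessels (circles given as (cx, cy, r)) a vertical
    ray at x = ray_x passes through in a toy parallel-beam geometry."""
    return sum(1 for cx, cy, r in vessels if abs(ray_x - cx) < r)

def usable_rays(ray_xs, vessels):
    """Keep only rays that traverse at most one vessel, so their
    measurements carry unambiguous temporal information for the TACs."""
    return [x for x in ray_xs if ray_vessel_hits(x, vessels) <= 1]
```

Rays rejected here would be the ones whose temporal information is ambiguous; in the paper, fully overlapped cases are instead recovered by the proposed image-space interpolation.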
Estimating 3D local noise power spectrum from a few FDK-reconstructed cone-beam CT scans
Because CT noise is nonstationary, a local NPS is often needed to characterize a system's noise properties. A good estimate of the local NPS for CT usually requires many repeated scans. To reduce this data demand, we previously developed a radial NPS method that estimates the 2D local NPS of FBP-reconstructed fan-beam CT from a few repeats by exploiting the polar separability of the CT NPS in polar coordinates [1]. In this work we extend the 2D approach to estimate the 3D local NPS of FDK-reconstructed cone-beam CT (CBCT) scans, since the CBCT NPS has similar separability in cylindrical coordinates. We evaluate the accuracy of the 3D local radial NPS method by comparing it to traditional local NPS estimates using simulated CBCT data. The results show that the 3D radial local NPS method with only 2 to 6 scans yields a mean squared error of less than 5% relative to the reference local NPS and can predict signal detectability accurately for evaluating system performance.
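For context, the traditional repeated-scan estimator that serves as the reference in this kind of comparison can be sketched as follows: detrend each ROI by the ensemble mean, average the squared DFT magnitudes, and normalize. This is a generic 2D periodogram estimator (not the authors' radial method), and the ROI convention and normalization are illustrative:

```python
import numpy as np

def local_nps_2d(scans, roi, pixel=1.0):
    """Traditional local 2D NPS estimate from repeated scans.
    roi = (r0, r1, c0, c1) selects the local region; `pixel` is the
    pixel size. The N/(N-1) factor compensates for ensemble-mean
    subtraction over N repeats."""
    r0, r1, c0, c1 = roi
    rois = np.stack([s[r0:r1, c0:c1] for s in scans]).astype(float)
    noise = rois - rois.mean(axis=0)     # remove deterministic structure
    n = noise.shape[1] * noise.shape[2]  # number of ROI pixels
    spectra = np.abs(np.fft.fft2(noise)) ** 2
    N = len(scans)
    return pixel ** 2 * spectra.mean(axis=0) / n * N / (N - 1)
```

For white noise, the mean of this estimate over frequency equals the pixel variance (times the pixel area), which is a standard consistency check.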
Motion vector field upsampling for improved 4D cone-beam CT motion compensation of the thorax
Sebastian Sauppe, Christopher M. Rank, Marcus Brehm, et al.
To improve the accuracy of the motion vector fields (MVFs) required for respiratory motion compensated (MoCo) CT image reconstruction, without increasing the computational complexity of the MVF estimation approach, we propose an MVF upsampling method that reduces motion blurring in reconstructed 4D images. While respiratory gating improves temporal resolution, it leads to sparse-view sampling artifacts. MoCo image reconstruction has the potential to remove all motion artifacts while simultaneously making use of 100% of the raw data. However, the temporal sampling of the MVFs is still coarser than the temporal resolution of the CBCT data acquisition. Increasing the number of motion bins would increase reconstruction time and amplify sparse-view artifacts, but not necessarily improve MVF accuracy. We therefore propose a new method to upsample estimated MVFs and use those for MoCo. To estimate the MVFs, a modified version of the Demons algorithm is used. Our proposed method can interpolate the original MVFs up to the point where each projection has its own individual MVF. To validate the method we use an artificially deformed clinical CT scan with the breathing pattern of a real patient, as well as patient data acquired with a TrueBeam™ 4D CBCT system (Varian Medical Systems). We evaluate our method for different numbers of respiratory bins, each with different upsampling factors. Employing our upsampling method, motion blurring in the reconstructed 4D images, induced by irregular breathing and the limited temporal resolution of phase-correlated images, is substantially reduced.
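A minimal sketch of temporal MVF upsampling, assuming simple linear interpolation between respiratory bins up to one MVF per projection (the paper's actual interpolation scheme may differ):

```python
import numpy as np

def upsample_mvfs(mvfs, n_out):
    """Linearly interpolate bin-wise motion vector fields
    (shape: [n_bins, ...]) to n_out per-projection MVFs spanning
    the same breathing cycle."""
    mvfs = np.asarray(mvfs, dtype=float)
    n_bins = mvfs.shape[0]
    t_out = np.linspace(0, n_bins - 1, n_out)  # output time points
    lo = np.floor(t_out).astype(int)           # lower neighboring bin
    hi = np.minimum(lo + 1, n_bins - 1)        # upper neighboring bin
    w = (t_out - lo).reshape((-1,) + (1,) * (mvfs.ndim - 1))
    return (1 - w) * mvfs[lo] + w * mvfs[hi]
```

Setting `n_out` to the number of projections gives each projection its own interpolated MVF, as described above.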
Automated framework for estimation of lung tumor locations in kV-CBCT images for tumor-based patient positioning in stereotactic lung body radiotherapy
Satoshi Yoshidome, Hidetaka Arimura, Koutarou Terashima, et al.
Recently, image-guided radiotherapy (IGRT) systems using kilovoltage cone-beam computed tomography (kV-CBCT) images have become more common for highly accurate patient positioning in stereotactic lung body radiotherapy (SLBRT). However, current IGRT procedures are based on bone structures and subjective correction. Therefore, the aim of this study was to evaluate the proposed framework for automated estimation of lung tumor locations in kV-CBCT images for tumor-based patient positioning in SLBRT. Twenty clinical cases were considered, involving solid, pure ground-glass opacity (GGO), mixed GGO, solitary, and non-solitary tumor types. The proposed framework consists of four steps: (1) determination of a search region for tumor location detection in a kV-CBCT image; (2) extraction of a tumor template from a planning CT image; (3) preprocessing for tumor region enhancement (edge and tumor enhancement using a Sobel filter and a blob structure enhancement (BSE) filter, respectively); and (4) tumor location estimation based on a template-matching technique. The location errors in the original, edge-, and tumor-enhanced images were 1.2 ± 0.7 mm, 4.2 ± 8.0 mm, and 2.7 ± 4.6 mm, respectively. The location errors in the original images of solid, pure GGO, mixed GGO, solitary, and non-solitary tumors were 1.2 ± 0.7 mm, 1.3 ± 0.9 mm, 0.4 ± 0.6 mm, 1.1 ± 0.8 mm and 1.0 ± 0.7 mm, respectively. These results suggest that the proposed framework is robust for the automated estimation of several types of tumor locations in kV-CBCT images for tumor-based patient positioning in SLBRT.
Poster Session: CTI: New Technologies and Corrections
Comparative study of bowtie and patient scatter in diagnostic CT
Prakhar Prakash, John M. Boudry
A fast, GPU-accelerated Monte Carlo engine for simulating the relevant photon interaction processes over the diagnostic energy range in third-generation CT systems was developed to study the relative contributions of bowtie and object scatter to the total scatter reaching an imaging detector. Primary and scattered projections of an elliptical water phantom (major axis set to 300 mm) with muscle and fat inserts were simulated for a typical diagnostic CT system as a function of anti-scatter grid (ASG) configuration. The ASG design space explored grid orientation, i.e. septa either a) parallel or b) parallel and perpendicular to the axis of rotation, as well as septa height. The septa material was tungsten. The resulting projections were reconstructed, and the scatter-induced image degradation was quantified using common CT image metrics (such as Hounsfield unit (HU) inaccuracy and loss of contrast), along with a qualitative review of image artifacts. Results indicate that object scatter dominates the total scatter in detector channels under the shadow of the imaged object, with the bowtie scatter fraction progressively increasing towards the edges of the object projection. Object scatter was shown to be the driving factor behind HU inaccuracy and contrast reduction in the simulated images, while shading artifacts and elevated loss of HU accuracy at the object boundary were largely attributed to bowtie scatter. Because the impact of bowtie scatter could not be sufficiently mitigated even with a large grid-ratio ASG, algorithmic correction may be necessary to further mitigate these artifacts.
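Given simulated primary, object-scatter and bowtie-scatter projections, the per-channel scatter breakdown underlying this kind of comparison can be computed as follows (a sketch; the function and variable names are illustrative):

```python
import numpy as np

def scatter_breakdown(primary, object_scatter, bowtie_scatter):
    """Per-detector-channel scatter fractions and scatter-to-primary
    ratio (SPR) from simulated primary and scatter projections."""
    primary = np.asarray(primary, dtype=float)
    object_scatter = np.asarray(object_scatter, dtype=float)
    bowtie_scatter = np.asarray(bowtie_scatter, dtype=float)
    total = primary + object_scatter + bowtie_scatter
    return {
        "object_fraction": object_scatter / total,  # share of detected signal
        "bowtie_fraction": bowtie_scatter / total,
        "SPR": (object_scatter + bowtie_scatter) / primary,
    }
```

Plotting the two fractions across detector channels would show the behavior described above: object scatter dominating under the object shadow and the bowtie fraction rising toward the projection edges.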
A deterministic integral spherical harmonics method for scatter simulation in computed tomography
Yujie Lu, Yu Zou, Xiaohui Zhan, et al.
Scatter is an important problem in computed tomography, especially as the x-ray illumination coverage in a single view increases. Poor scatter correction results in CT HU number inaccuracy, degrades low-contrast detectability, and introduces artifacts. Hardware methods can be used to handle the scatter problem; however, hardware design optimization and scatter correction improvement require an efficient scatter simulation tool. Although the Monte Carlo (MC) method can perform precise scatter simulation, statistical noise inherent in the method affects the simulation results. In this paper, a deterministic scatter simulation method based on the radiative transfer equation (RTE) is proposed. Compared with the MC method, the deterministic RTE method is free from statistical noise. To solve the RTE, a novel iterative spherical-harmonics integral formula is developed. Comparisons with the MC method demonstrate the accuracy of the proposed method.
Optimal sinogram sampling with temporally offset pixels in continuous rotation CT
Martin Sjölin, Mats Persson
Insufficient angular sampling in computed tomography can lead to aliasing artifacts that impair the quality of the reconstructed images. However, the angular sampling rate is often constrained by practical limitations, such as the bandwidth of the data read-out or read-out noise. In this work, we present a new sampling scheme that allows aliasing-free image reconstruction with fewer angular samples. This is achieved by introducing a temporal offset between the samples acquired by adjacent detector pixels in the detector array. The temporal shift implies that the positions where the detector pixels sample the 2D Radon transform are interleaved in the angular direction, and if the shift is carefully selected, an optimal (hexagonal) sampling grid can be obtained. Optimal sampling grids are particularly effective in tomographic imaging since the bowtie-shaped spectral support of the sinogram allows a close tiling of the replicated spectra. We derive the sampling requirements for the proposed method and demonstrate that the obtained sampling grid reduces aliasing artifacts compared to standard rectangular sampling for an equal number of angular samples, in both simulated and experimental images. It is shown that the required number of angular samples can be reduced by 25-40%. The method is robust and easy to implement, and can therefore be of practical use for CT imaging where the number of views is limited.
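The interleaving can be sketched by computing the angular positions at which each detector pixel samples the sinogram when alternate pixels are read out with a fractional-frame temporal offset during continuous rotation (a toy illustration; the optimal offset and geometry in the paper are derived, not assumed):

```python
import numpy as np

def sampling_positions(n_views, n_pixels, offset=0.5):
    """Angular sample positions per detector pixel when adjacent pixels
    are read out `offset` frame periods apart during continuous rotation.
    offset=0 reproduces the standard rectangular sampling grid;
    offset=0.5 interleaves odd and even pixels in the angular direction."""
    d_theta = 2 * np.pi / n_views      # angle swept per frame period
    views = np.arange(n_views)
    angles = np.empty((n_pixels, n_views))
    for p in range(n_pixels):
        shift = offset * (p % 2)       # alternate pixels sample mid-frame
        angles[p] = (views + shift) * d_theta
    return angles
```

With `offset=0.5`, neighboring pixels sample the Radon transform at angles staggered by half the angular increment, which is the interleaving that yields the hexagonal-like grid described above.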
Beam hardening correction using length linearization
Daejoong Oh, Sewon Kim, Doohyun Park, et al.
Computed tomography (CT) is used to obtain 3D data from an object or patient. However, most CT systems use polychromatic x-ray spectra, which results in beam-hardening artifacts, and many correction methods have been proposed. Linearization and post-reconstruction methods are the main categories of beam-hardening correction. Empirical approaches are commonly used for linearization; however, empirical methods do not guarantee the linearity of the projection data because they rely on the reconstructed image to judge linearity, so the corrected images are not monochromatic CT images and their energy cannot be specified. The proposed method uses linearization as its basic concept, but by considering the relationship between path length and projection data it linearizes the projection data directly, which makes it possible to specify the energy of the corrected images. Moreover, the computation time for producing the corrected sinogram is very short, so the method is practical.
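A classical single-material linearization of this kind can be sketched as follows: simulate polychromatic projections for known path lengths, then fit a polynomial that maps them onto monochromatic projections at a chosen reference attenuation. The spectral weights and attenuation values below are illustrative, and this is a generic water-correction sketch rather than the authors' exact method:

```python
import numpy as np

def build_linearization(lengths, spectrum_w, mus, mu_ref):
    """Fit a polynomial mapping polychromatic projection values onto
    equivalent monochromatic projections mu_ref * L.
    spectrum_w: normalized spectral weights per energy bin;
    mus: material attenuation coefficient at each bin (illustrative)."""
    # Polychromatic projection for each path length L (Beer-Lambert,
    # summed over the spectrum, then log-converted).
    p_poly = -np.log(np.sum(spectrum_w[None, :] *
                            np.exp(-np.outer(lengths, mus)), axis=1))
    p_mono = mu_ref * np.asarray(lengths)      # target monochromatic data
    coeffs = np.polyfit(p_poly, p_mono, deg=3)
    return lambda p: np.polyval(coeffs, p)     # correction function
```

Applying the returned function to every sinogram sample yields projection data that are linear in path length at the chosen reference energy, which is the property the abstract emphasizes.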
Fast frame rate rodent cardiac x-ray imaging using scintillator lens coupled to CMOS camera
Swathi Lakshmi B., M.K.N. Sai Varsha, Ashwin Kumar N., et al.
Micro-computed tomography (micro-CT) systems for small animal imaging play a critical role in monitoring disease progression and evaluating therapy. In this work, an in-house-built micro-CT system equipped with an x-ray scintillator lens coupled to a commercial CMOS camera was used to test the feasibility of its application to digital subtraction angiography (DSA). The literature reports such studies being done with clinical x-ray tubes that can be pulsed rapidly or with rotating gantry systems, increasing the cost and infrastructural requirements. The feasibility of DSA was evaluated by injecting an iodinated contrast agent (ICA) through the tail vein of a mouse. Projection images of the heart were acquired pre- and post-contrast using the high-frame-rate x-ray detector, and processing was performed to visualize the transit of the ICA through the heart.
Low-dose 4D myocardial perfusion with x-ray micro-CT
D. P. Clark, C. T. Badea
X-ray CT is widely used, both clinically and pre-clinically, for fast, high-resolution, anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, temporally-resolved CT data can detail cardiac motion and blood flow dynamics for one-stop cardiovascular CT imaging procedures. In previous work, we demonstrated efficient, low-dose projection acquisition and reconstruction strategies for cardiac micro-CT imaging and for multiple-injection micro-CT perfusion imaging. Here, we extend this previous work with regularization based on rank-sparse kernel regression and on filtration with the Karhunen-Loeve transform. Using a dual source, prospectively gated sampling strategy which produces an approximately uniform distribution of projections, we apply this revised algorithm to the assessment of both myocardial perfusion and cardiac functional metrics from the same set of projection data. We test the algorithm in simulations using a modified version of the MOBY mouse phantom which contains realistic perfusion and cardiac dynamics. The proposed algorithm reduces the reconstruction error by 81% relative to unregularized, algebraic reconstruction. The results confirm our ability to simultaneously solve for cardiac temporal motion and perfusion dynamics. In future work, we will apply the algorithm and sampling protocol to small animal cardiac studies.
An investigation of low-dose 3D scout scans for computed tomography
Juliana Gomes, Grace J. Gang, Aswin Mathews, et al.
Purpose: Commonly, 2D scouts or topograms are acquired prior to a CT scan. However, low-dose 3D scouts could potentially provide additional information for more effective patient positioning and selection of acquisition protocols. We propose using model-based iterative reconstruction to reconstruct low-exposure tomographic data to maintain image quality in both low-dose 3D scouts and topograms reprojected from those 3D scouts. Methods: We performed tomographic acquisitions on a CBCT test bench using a range of exposure settings from 16.6 to 231.9 total mAs. Both an anthropomorphic phantom and a 32 cm CTDI phantom were scanned. Penalized-likelihood reconstructions were computed using Matlab and CUDA libraries, and reconstruction parameters were tuned to determine the best regularization strength and delta parameter. RMS errors between each reconstruction and the highest-exposure reconstruction were computed, and CTDIw values were reported for each exposure setting. RMS errors for the reprojected topograms were also computed. Results: We find that we are able to produce low-dose (0.417 mGy) 3D scouts that show high-contrast and large anatomical features while maintaining the ability to produce traditional topograms. Conclusions: We demonstrated that iterative reconstruction can mitigate noise in very low-exposure CT acquisitions to enable 3D CT scouts. Such additional 3D information may lead to improved protocols for patient positioning and acquisition refinements, as well as a number of advanced dose-reduction strategies that require localization of anatomical features and quantities not provided by simple 2D topograms.
Adaptability index: quantifying CT tube current modulation performance from dose and quality informatics
F. Ria, J. M. Wilson, Y. Zhang, et al.
The balance between risk and benefit in modern CT scanners is governed by automatic adaptation mechanisms that adjust the x-ray flux to accommodate patient size and achieve certain image noise values. The effectiveness of this adaptation is an important aspect of CT performance and should ideally be characterized in the context of real patient cases. The objective of this study was to characterize CT performance with an index that includes image noise and radiation dose across a clinical patient population. The study included 1526 examinations performed on three scanners, from two vendors, used for two clinical protocols (abdominopelvic and chest). The dose-patient size and noise-patient size dependencies were linearized, and a 3D fit was performed for each protocol and each scanner with a planar function. From the fit residual plots, the root mean square error (RMSE) values were estimated as a metric of CT adaptability across the patient population. The RMSE values were between 0.0215 HU^(1/2) and 0.0344 HU^(1/2): different scanners offer varying degrees of reproducibility of noise and dose across the population. This analysis could be performed with phantoms, but phantom data would only provide information about specific exposure parameters for a scan; a general population comparison instead yields new information on the clinically relevant adaptability of scanner models. A theoretical relationship between image noise, CTDIvol and patient size was determined based on real patient data. This relationship may provide a new index characterizing a scanner's adaptability in terms of image quality and radiation dose across a patient population.
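The planar-fit residual metric can be sketched as follows, assuming the size and dose variables have already been linearized (the study's specific linearizing transforms are not reproduced here, and all names are illustrative):

```python
import numpy as np

def adaptability_rmse(size, dose, noise):
    """Fit a plane noise = a*size + b*dose + c by least squares across a
    patient population, and return the RMSE of the residuals as an
    adaptability metric: smaller RMSE means the scanner reproduces the
    same noise/dose trade-off more consistently across patients."""
    A = np.column_stack([size, dose, np.ones_like(size)])
    coef, *_ = np.linalg.lstsq(A, noise, rcond=None)
    resid = noise - A @ coef
    return np.sqrt(np.mean(resid ** 2))
```

A scanner whose tube-current modulation adapts perfectly to patient size would produce data lying exactly on such a plane, giving an RMSE near zero.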
Experimental evaluation of dual multiple aperture devices for fluence field modulated x-ray computed tomography
Acquisition of CT images with comparable diagnostic power can potentially be achieved with lower radiation exposure than the current standard of care through the adoption of hardware-based fluence-field modulation (e.g. dynamic bowtie filters). While modern CT scanners employ elements such as static bowtie filters and tube-current modulation, such solutions are limited in the fluence patterns that they can achieve, and thus are limited in their ability to adapt to broad classes of patient morphology. Fluence-field modulation also enables new applications such as region-of-interest imaging, task specific imaging, reducing measurement noise or improving image quality. The work presented in this paper leverages a novel fluence modulation strategy that uses “Multiple Aperture Devices” (MADs) which are, in essence, binary filters, blocking or passing x-rays on a fine scale. Utilizing two MAD devices in series provides the capability of generating a large number of fluence patterns via small relative motions between the MAD filters. We present the first experimental evaluation of fluence-field modulation using a dual-MAD system, and demonstrate the efficacy of this technique with a characterization of achievable fluence patterns and an investigation of experimental projection data.
Estimation of non-solid lung nodule volume with low-dose CT protocols: effect of reconstruction algorithm and measurement method
Marios A. Gavrielides, Gino DeFilippo, Benjamin P. Berman, et al.
Computed tomography is the primary modality of choice for assessing the stability of nonsolid pulmonary nodules (sometimes referred to as ground-glass opacity) over three or more years, with change in size being the primary factor to monitor. Since volume extracted from CT is being examined as a quantitative biomarker of lung nodule size, it is important to examine factors affecting the performance of volumetric CT for this task. More specifically, the effect of reconstruction algorithm and measurement method in the context of low-dose CT protocols has been an under-examined area of research. In this phantom study we assessed volumetric CT with two different measurement methods (model-based and segmentation-based) for nodules with radiodensities of both nonsolid (-800 HU and -630 HU) and solid (-10 HU) nodules, sizes of 5 mm and 10 mm, and two different shapes (spherical and spiculated). Imaging protocols included CTDIvol values typical of screening (1.7 mGy) and sub-screening (0.6 mGy) scans and different types of reconstruction algorithms across three scanners. Results showed that radiodensity was the factor contributing most to overall error based on ANOVA. The choice of reconstruction algorithm or measurement method did not substantially affect the accuracy of the measurements; however, the measurement method affected repeatability, with repeatability coefficients ranging from around 3-5% for the model-based estimator to around 20-30% across reconstruction algorithms for the segmentation-based method. The findings of the study can be valuable toward developing standardized protocols and performance claims for nonsolid nodules.
Novel method to calibrate CT scanners with a conic probe body
A new and simple object for calibrating tomographic scanners is proposed. Instead of a conventional high-density ball, we propose a high-density conic body as the calibration object. The cone is advantageous compared to the ball both because of its easy availability (uncomplicated manufacturing) and because of the more straightforward and less error-prone analysis needed to identify a reference point (the ball's center vs. the cone apex). Using a conic body instead of a ball as the calibration object substantially reduces calibration errors. Additionally, we propose an efficient way to determine the discrepancy between the ideal and misaligned positions of the detector, which may be crucial for reconstruction quality.
Poster Session: CTII: Image Reconstruction and Artifact Reduction
ROI reconstruction for model-based iterative reconstruction (MBIR) via a coupled dictionary learning
Dong Hye Ye, Somesh Srivastava, Jean-Baptiste Thibault, et al.
Model-based iterative reconstruction (MBIR) algorithms have shown significant improvement in CT image quality by increasing resolution as well as reducing noise and artifacts. In diagnostic protocols, radiologists often need a high-resolution reconstruction of a limited region of interest (ROI). ROI reconstruction is complicated for MBIR, which must reconstruct an image over the full field of view (FOV) given full sinogram measurements. Multi-resolution approaches are widely used for ROI reconstruction in MBIR: the full-FOV image is reconstructed at low resolution, and the forward projection of the non-ROI portion is subtracted from the original sinogram measurements for high-resolution ROI reconstruction. However, a low-resolution reconstruction of the full FOV can be susceptible to streaking and blurring artifacts, which can propagate into the subsequent high-resolution ROI reconstruction. To tackle this challenge, we use a coupled dictionary representation model between low- and high-resolution training datasets for artifact removal and super-resolution of the low-resolution full-FOV reconstruction. Experimental results on phantom data show that the restored full-FOV reconstruction via coupled dictionary learning significantly improves the image quality of the high-resolution ROI reconstruction for MBIR.
Accelerating separable footprint (SF) forward and back projection on GPU
Xiaobin Xie, Madison G. McGaffin, Yong Long, et al.
Statistical image reconstruction (SIR) methods for x-ray CT can improve image quality and reduce radiation dose compared with conventional reconstruction methods such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells; for the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method for the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs. The results show that the computation time is reduced approximately in proportion to the number of GPUs.
A new approach to solving the prior image constrained compressed sensing (PICCS) with applications in CT image reconstruction
Yuchao Tang, Chunxiang Zong
Reducing dose exposure in computed tomography (CT) scans has received much attention in recent years. One reasonable way to reduce dose is to reduce the number of projections. However, conventional CT image reconstruction methods lead to streaking artifacts with few-view data. Inspired by the theory of compressive sensing, total variation minimization has been widely studied for CT image reconstruction from few-view and limited-angle data; it takes full advantage of the sparsity of the image gradient magnitude. In this paper, we propose a general prior image constrained compressed sensing model and develop an efficient iterative algorithm to solve it. The main idea of our approach is to reformulate the optimization problem as an unconstrained problem with the sum of two convex functions, and then to derive the iterative algorithm using the primal-dual proximity method. The prior image is reconstructed by a conventional analytic algorithm such as filtered backprojection (FBP) or from a dynamic CT image sequence. We demonstrate the performance of the proposed iterative algorithm on few-view projection data amounting to just 3 percent of the reconstructed image size. The numerical simulation results show that the proposed reconstruction algorithm outperforms the commonly used total variation minimization method.
Computer simulation of low-dose CT with clinical lung image database: a preliminary study
Large samples of raw low-dose CT (LDCT) lung projections are needed for evaluating or designing novel, effective reconstruction algorithms for lung LDCT imaging. However, acquiring them through clinical CT scanning carries radiation risk. To avoid this problem, a new strategy for producing large samples of lung LDCT projections with computer simulations is proposed in this paper. In the simulation, clinical images from the publicly available Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database (LIDC/IDRI) are used as the projected object to form a noise-free sinogram. Then, by adding Poisson-distributed quantum noise plus Gaussian-distributed electronic noise to the projected transmission data calculated from the noise-free sinogram, LDCT projections at different noise levels are obtained. Finally, the LDCT projections are used to evaluate two reconstruction strategies: the conventional filtered back projection (FBP) algorithm, and FBP reconstruction from a sinogram filtered with a penalized weighted least squares criterion (PWLS-FBP). Images reconstructed from the LDCT simulations show that the PWLS-FBP algorithm performs better than FBP in reducing streaking artifacts and preserving resolution. These preliminary results indicate the feasibility of the proposed lung LDCT simulation strategy for helping to determine advanced reconstruction algorithms.
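The noise-injection model described above (Poisson quantum noise plus Gaussian electronic noise on the transmission data) can be sketched directly; the incident photon count `I0` and electronic noise level `sigma_e` below are illustrative parameters:

```python
import numpy as np

def simulate_ldct_projections(sinogram, I0, sigma_e, seed=0):
    """Turn a noise-free sinogram (line integrals) into noisy low-dose
    projections: convert to transmission counts, add Poisson quantum
    noise and Gaussian electronic noise, then log-convert back."""
    rng = np.random.default_rng(seed)
    transmission = I0 * np.exp(-np.asarray(sinogram, dtype=float))
    noisy = rng.poisson(transmission) + rng.normal(0.0, sigma_e,
                                                   sinogram.shape)
    noisy = np.clip(noisy, 1.0, None)  # avoid log of non-positive counts
    return -np.log(noisy / I0)
```

Lowering `I0` (fewer incident photons) or raising `sigma_e` produces the different noise levels of LDCT projections mentioned in the abstract.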
Reconstruction of four-dimensional computed tomography images during treatment time using electronic portal imaging device images based on a dynamic 2D/3D registration
T. Nakamoto, H. Arimura, T. A. Hirose, et al.
The goal of our study was to develop a computational framework for the reconstruction of four-dimensional computed tomography (4D-CT) images during treatment time using electronic portal imaging device (EPID) images, based on a dynamic 2D/3D registration. The 4D-CT images during treatment time ("treatment" 4D-CT images) were reconstructed by performing an affine-transformation-based dynamic 2D/3D registration between dynamic clinical portal dose images (PDIs) derived from the EPID images and the planning CT images, through planning PDIs, for all frames. The elements of the affine transformation matrices (transformation parameters) were optimized using a Levenberg-Marquardt (LM) algorithm so that the planning PDIs became similar to the dynamic clinical PDIs for all frames. Initial transformation parameters must be determined for each frame for the LM algorithm to find the optimum parameters; in this study, the optimum transformation parameters of one frame were employed as the initial parameters for optimizing the transformation parameters of the consecutive frame. Gamma pass rates (3 mm/3%) were calculated to evaluate the similarity of the dose distributions between the dynamic clinical PDIs and the "treatment" PDIs, which were calculated from the "treatment" 4D-CT images, for all frames. The framework was applied to eight lung cancer patients treated with stereotactic body radiation therapy (SBRT). The mean of the average gamma pass rates between the dynamic clinical PDIs and the "treatment" PDIs over all frames was 98.3±1.2% for the eight cases. In conclusion, the proposed framework makes it possible to dynamically monitor patient movement during treatment.
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
Lars Gjesteby, Qingsong Yang, Yan Xi, et al.
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
Choosing anisotropic voxel dimensions in optimization-based image reconstruction for limited angle CT
C. Sheng, R. Chaudhari, Sean D. Rose, et al.
Resolution of reconstructions in limited angle X-ray computed tomography (CT) is inherently anisotropic due to the limited angular range of acquired projections. This justifies the use of anisotropic voxels in limited angle image reconstruction. For analytic reconstruction algorithms, this only changes the intervals at which the reconstruction is sampled, but for optimization-based image reconstruction, changing the voxel dimensions redefines the reconstruction optimization problem and can have pronounced effects on the reconstructed image. In this work we investigate the choice of anisotropic voxel dimensions in optimization-based image reconstruction for limited angle CT. In particular, a 2D simulation study is performed to assess the optimal choice of pixel dimension in the longitudinal direction, the direction of lowest resolution. It is demonstrated that as this pixel dimension is decreased, deterioration of system matrix conditioning can lead to severe distortion in reconstructions performed with low regularization strength. This conditioning issue occurs at approximately the point where the number of pixels equals the number of measurements. While the distortion can be mitigated by increasing regularization, our results suggest that there are structures which are only resolvable by using even smaller voxel sizes.
A data-driven regularization strategy for statistical CT reconstruction
D. P. Clark, C. T. Badea
There is an unmet need for CT image reconstruction algorithms that reliably provide diagnostic image quality at reduced radiation dose. Toward this end, we integrate a state-of-the-art statistical reconstruction algorithm, ordered subsets, and separable quadratic surrogates (OS-SQS) accelerated with Nesterov’s method, with our own data-driven regularization strategy using the split Bregman method. The regularization enforces intensity-gradient sparsity by minimizing bilateral total variation through the application of bilateral filtration. Adding to the advantages of statistical reconstruction, our implementation of bilateral filtration dynamically varies the regularization strength based on the noise level algorithmically measured within the data, accommodating variations in patient size and photon flux. We refer to this modified form of OS-SQS as OS-SQS with bilateral filtration (OS-SQS-BF), and we apply it to reconstruct clinical, helical CT data provided to us as part of the Low Dose CT Grand Challenge. Specifically, we evaluate OS-SQS-BF for quarter-dose statistical reconstruction and compare its performance with quarter-dose and full-dose filtered backprojection reconstruction. We present results for both the American College of Radiology (ACR) phantom and an abdominal CT scan. Our algorithm reduces noise by approximately 52% relative to filtered backprojection in the ACR phantom, while maintaining contrast and spatial-resolution performance relative to commercial filtered backprojection reconstruction. The quarter-dose scan for the abdominal data set confirmed the identification of 3 liver lesions when using OS-SQS-BF. The reconstruction time is a limitation that we will address in the future.
Image quality improvement in MDCT cardiac imaging via SMART-RECON method
Yinsheng Li, Ximiao Cao, Zhanfeng Xing, et al.
Coronary CT angiography (CCTA) is a challenging imaging task currently limited by the achievable temporal resolution of modern multi-detector CT (MDCT) scanners. In this paper, the recently proposed SMART-RECON method has been applied to MDCT-based CCTA imaging to improve image quality without any prior knowledge of cardiac motion. After prospective ECG-gated data acquisition over a short-scan angular span, the acquired data were sorted into several view-angle sub-sectors, each corresponding to one quarter of the short-scan angular range. Information about the cardiac motion was thus encoded into the data in each view-angle sub-sector. The SMART-RECON algorithm was then applied to jointly reconstruct several image volumes, each of which is temporally consistent with the data acquired in the corresponding view-angle sub-sector. Extensive numerical simulations were performed to validate the proposed technique and investigate its performance dependence.
Projection-based motion estimation for cardiac functional analysis with high temporal resolution: a proof-of-concept study with digital phantom experiment
Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of future cardiac events. Current imaging modalities have limitations that can degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. An experiment using a synthesized digital phantom showed promising results for motion analysis.
Investigation into image quality difference between total variation and nonlinear sparsifying transform based compressed sensing
Jian Dong, Hiroyuki Kudo
Compressed sensing (CS) is attracting growing interest in sparse-view computed tomography (CT) image reconstruction. The most standard approach of CS is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in reconstruction of practical CT images, in the form of patchy artifacts, improperly serrated edges and loss of image textures. Most existing CS approaches, including TV, achieve image quality improvement by applying linear transforms to the object image, but linear transforms usually fail to take discontinuities such as edges and image textures into account, which is considered to be the key reason for these image distortions. Discussion of nonlinear-filter-based image processing has a long history, and it has clarified that nonlinear filters yield better results than linear filters in image processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains obtained. Subsequently, Zhang developed the application of nonlocal-means-based CS. It is gradually becoming clear that nonlinear transform based CS is superior to linear transform based CS in improving image quality; however, to our knowledge, this has not been clearly established in any previous paper. In this work, we investigated the image quality differences between conventional TV minimization and nonlinear sparsifying transform based CS, as well as the image quality differences among different nonlinear sparsifying transform based CS methods, in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear sparsifying transform based CS algorithm.
Image-based metal artifact reduction in x-ray computed tomography utilizing local anatomical similarity
Xue Dong, Xiaofeng Yang, Jonathan Rosenfield, et al.
X-ray computed tomography (CT) has been widely used in radiation therapy treatment planning in recent years. However, metal implants such as dental fillings and hip prostheses can cause severe bright and dark streaking artifacts in reconstructed CT images. These artifacts decrease image contrast and degrade HU accuracy, leading to inaccuracies in target delineation and dose calculation. In this work, a metal artifact reduction method is proposed based on the intrinsic anatomical similarity between neighboring CT slices. Neighboring CT slices from the same patient exhibit similar anatomical features. Exploiting this anatomical similarity, a gamma map is calculated as a weighted summation of relative HU error and distance error for each pixel in an artifact-corrupted CT image relative to a neighboring, artifact-free image. The minimum value in the gamma map for each pixel is used to identify an appropriate pixel from the artifact-free CT slice to replace the corresponding artifact-corrupted pixel. With the proposed method, the mean CT HU error was reduced from 360 HU and 460 HU to 24 HU and 34 HU on head and pelvis CT images, respectively. Dose calculation accuracy also improved, as the dose difference was reduced from greater than 20% to less than 4%. Using 3%/3 mm criteria, the gamma analysis failure rate was reduced from 23.25% to 0.02%. An image-based metal artifact reduction method is proposed that replaces corrupted image pixels with pixels from neighboring CT slices free of metal artifacts. This method is shown to be capable of suppressing streaking artifacts, thereby improving HU and dose calculation accuracy.
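The gamma-map replacement idea can be sketched as follows; the weights and search window below are hypothetical, since the abstract does not specify them.

```python
import numpy as np

def gamma_map_replace(corrupted, reference, w_hu=1.0, w_dist=0.1, search=5):
    """For each pixel in the artifact-corrupted slice, find the pixel in a
    neighbouring artifact-free slice minimizing a weighted sum of HU error
    and spatial distance, and copy its value (weights are illustrative)."""
    out = corrupted.astype(float).copy()
    ny, nx = corrupted.shape
    for iy in range(ny):
        for ix in range(nx):
            best, best_val = np.inf, corrupted[iy, ix]
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = iy + dy, ix + dx
                    if 0 <= y < ny and 0 <= x < nx:
                        hu_err = abs(reference[y, x] - corrupted[iy, ix])
                        g = w_hu * hu_err + w_dist * np.hypot(dy, dx)
                        if g < best:
                            best, best_val = g, reference[y, x]
            out[iy, ix] = best_val
    return out

# Sanity check: identical neighbouring slices leave the image unchanged.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 50.0, (8, 8))
restored = gamma_map_replace(clean, clean)
```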
Compressed sensing of sparsity-constrained total variation minimization for CT image reconstruction
Jian Dong, Hiroyuki Kudo, Essam A. Rashed
Sparse-view CT image reconstruction is becoming a potential strategy for radiation dose reduction in CT scans. Compressed sensing (CS) has been utilized to address this problem. Total variation (TV) minimization, which can reduce streak artifacts and preserve object boundaries well, is treated as the most standard approach of CS. However, TV minimization cannot be solved using classical differentiable optimization techniques such as the gradient method, because the expression of TV (the TV norm) is non-differentiable. Early approximate solving methods made the TV norm differentiable by adding a small constant to it so that gradient methods could be used, but this weakens the power of TV to preserve object boundaries accurately. Subsequently, approaches that can optimize the TV norm exactly were proposed based on convex optimization theory, such as the generalized iterative soft-thresholding (GIST) algorithm and the Chambolle-Pock algorithm. However, these are simultaneous-iterative-type algorithms, so their convergence is rather slow compared with row-action-type algorithms. The proposed method, called sparsity-constrained total variation (SCTV), is developed using the alternating direction method of multipliers (ADMM). With this method we succeeded in solving the main optimization problem by iteratively splitting it into a row-action-type algebraic reconstruction technique (ART) procedure and a TV minimization procedure that can be processed with Chambolle’s projection algorithm. Experimental results show that the convergence speed of the proposed method is much faster than that of conventional simultaneous iterative methods.
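The early smoothed-TV approach mentioned above (adding a small constant to make the TV norm differentiable) can be sketched as a simple gradient-descent denoiser; all parameter values here are illustrative.

```python
import numpy as np

def tv_denoise_smoothed(img, lam=0.1, eps=1e-3, step=0.1, iters=200):
    """Gradient-descent denoising with the smoothed TV norm
    sqrt(|grad u|^2 + eps^2); eps trades exact edge preservation for
    differentiability, as discussed in the abstract."""
    u = img.astype(float).copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # Divergence of the normalized gradient field (backward differences).
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        u -= step * ((u - img) - lam * div)
    return u

# Denoising a pure-noise image should reduce its total variation.
rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (32, 32))
smooth = tv_denoise_smoothed(noisy)
```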
FBP embedded iterative method to efficiently solve the low-dose CT
Ryosuke Ueda, Fukashi Yamazaki, Hiroyuki Kudo
In low-dose X-ray CT, images are reconstructed from data acquired at reduced X-ray intensity; in exchange for the dose reduction, the relative noise level increases. Statistical iterative reconstruction (SIR) methods are known to be effective in reducing the resulting image degradation. One SIR formulation is penalized weighted least squares (PWLS). Since PWLS carries a large computational cost, acceleration methods such as iterative FBP (IFBP) have been studied; however, IFBP cannot exactly minimize the PWLS cost function. This paper presents a new acceleration method that solves the PWLS problem efficiently. Based on the alternating projection proximal (APP) method, our approach solves the PWLS problem exactly. The design of the FBP filter is also presented. Reconstruction of a chest X-ray CT image is carried out, showing that the proposed method achieves both high acceleration and the exact solution.
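For reference, the PWLS cost function that such methods minimize has the generic form below; the quadratic roughness penalty is a stand-in, as the abstract does not specify the regularizer.

```python
import numpy as np

def pwls_cost(x, A, y, w, beta):
    """Penalized weighted least-squares cost:
    (1/2) (y - Ax)' W (y - Ax) + beta * R(x),
    with a simple quadratic roughness penalty R as an illustrative choice."""
    r = y - A @ x
    fidelity = 0.5 * r @ (w * r)          # w = statistical weights (1/variance)
    penalty = 0.5 * np.sum(np.diff(x) ** 2)
    return fidelity + beta * penalty

# A flat image consistent with the data has zero cost.
A = np.eye(4)
x = np.full(4, 3.0)
y = A @ x
w = np.ones(4)
c0 = pwls_cost(x, A, y, w, beta=0.5)
c1 = pwls_cost(x, A, y + 1.0, w, beta=0.5)   # unit residual everywhere
```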
Poster Session: Photon Counting: Spectral CT, Instrumentation, and Algorithms
Discrimination of clinically significant calcium salts using MARS spectral CT
T. E. Kirkbride, A. Raja, K. Mueller, et al.
Calcium compounds within tissues are usually a sign of pathology, and the calcium crystal type is often a pointer to the diagnosis. There are clinical advantages in being able to determine the quantity and type of calcifications non-invasively in cardiovascular, genitourinary and musculoskeletal disorders, and treatment differs depending on the crystal type and quantity. The problem arises when trying to distinguish between different calcium compounds within the same image due to their similar attenuation properties, although there are spectroscopic differences between calcium salts at very low energies. As calcium oxalate and calcium hydroxyapatite can co-exist in breast and musculoskeletal pathologies, we wished to determine whether spectral CT could distinguish between them in the same image at clinical X-ray energy ranges. Energy thresholds of 15, 22, 29 and 36 keV and tube voltages of 50, 80 and 110 kVp were chosen, and images were analysed to determine the percentage difference in the attenuation coefficients of calcium hydroxyapatite samples at concentrations of 54.3, 211.7, 808.5 and 1169.3 mg/ml, and calcium oxalate at a concentration of 2000 mg/ml. The two lower concentrations of calcium hydroxyapatite were distinguishable from calcium oxalate at all energies and all tube voltages, whereas the ability to discriminate oxalate from hydroxyapatite at higher concentrations was dependent on the threshold energy but only mildly dependent on the tube voltage used. Spectral CT shows promise for distinguishing clinically important calcium salts.
Response functions of multi-pixel-type CdTe detector: toward development of precise material identification on diagnostic x-ray images by means of photon counting
Hiroaki Hayashi, Takashi Asahara, Natsumi Kimoto, et al.
Currently, X-ray imaging systems that can produce information to identify various materials are being developed based on photon counting. It is important to estimate the response function of the detector in order to accomplish highly accurate material identification. Our aim is to simulate the response function of a CdTe detector using Monte Carlo simulation; here, the transport of incident and scattered photons and secondary electrons was precisely simulated, without taking into account the charge spread during collection of the produced charges (the charge sharing effect). First, for pixel sizes of 50-500 μm, the minimum irradiation fields that produce equilibrium conditions were determined. Then, peaks observed in the response function were analyzed with consideration of the interactions between incident X-rays and the detector components Cd and Te; the secondary characteristic X-rays play an important role. Accordingly, the ratios of the full energy peak (FEP), scattered X-rays and penetrating X-rays in the calculated response functions were analyzed. With a pixel size of 200 μm, the scattered X-rays reached equilibrium with relatively small fields and the FEP efficiency was kept at a high value (>50%). Finally, we demonstrated an X-ray spectrum folded with the response function. Even if the charge sharing effect is not completely corrected by the electronics, there is a possibility that distorted portions of the measured X-ray spectra can be corrected by proper calibration in which the above considerations are taken into account.
Dual energy CT kidney stone differentiation in photon counting computed tomography
R. Gutjahr, C. Polster, A. Henning, et al.
This study evaluates the capabilities of a whole-body photon counting CT system to differentiate between four common kidney stone materials, namely uric acid (UA), calcium oxalate monohydrate (COM), cystine (CYS), and apatite (APA), ex vivo. Two different x-ray spectra (120 kV and 140 kV) were applied and two acquisition modes were investigated. The macro-mode generates two energy-threshold-based image volumes and two energy-bin-based image volumes. In the chess-pattern mode four energy thresholds are applied; a virtual low energy image and a virtual high energy image are derived from the initial threshold-based images, taking their statistically correlated nature into account. The energy-bin-based images of the macro-mode, as well as the virtual low and high energy images of the chess-pattern mode, serve as input for our dual energy evaluation. The dual energy ratios of the individually segmented kidney stones were utilized to quantify the discriminability of the different materials. The dual energy ratios of the two acquisition modes showed high correlation for both applied spectra. Wilcoxon rank-sum tests and evaluation of the areas under the receiver operating characteristic curves suggest that UA kidney stones are best differentiable from all other materials (AUC = 1.0), followed by CYS (AUC ≈ 0.9 compared against COM and APA). COM and APA, however, are hardly distinguishable (AUC between 0.63 and 0.76). The results hold true for the measurements of both spectra and both acquisition modes.
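A dual-energy ratio of a segmented stone is commonly computed as the ratio of mean low- and high-energy CT numbers; the sketch below uses made-up values, not measurements from the study.

```python
import numpy as np

def dual_energy_ratio(low_hu, high_hu):
    """Mean dual-energy ratio of a segmented stone: mean low-energy HU over
    mean high-energy HU (a common definition; the paper may use a variant)."""
    return np.mean(low_hu) / np.mean(high_hu)

# Uric acid stones show ratios near 1 (little energy dependence), while
# calcium-bearing stones show noticeably higher ratios (values here are
# illustrative, not measurements from the paper).
ua = dual_energy_ratio(np.array([410.0, 395.0]), np.array([400.0, 405.0]))
com = dual_energy_ratio(np.array([820.0, 790.0]), np.array([560.0, 540.0]))
```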
Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model
Korbinian Mechlem, Sebastian Ehn, Thorsten Sellerer, et al.
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
Development of a novel method based on a photon counting technique with the aim of precise material identification in clinical x-ray diagnosis
Natsumi Kimoto, Hiroaki Hayashi, Takashi Asahara, et al.
A photon counting system has the ability of energy discrimination and is therefore expected to provide new X-ray information for material identification and more precise diagnosis. The aim of our study is to propose a novel method for material identification based on a photon counting technique. First, X-ray spectra at 40-60 kV were constructed using a published database. Second, X-ray spectra penetrating different materials with atomic numbers from 5-13 were calculated. These spectra were divided into two energy regions, and linear attenuation factors for these regions were obtained. In addition, to accomplish highly accurate material identification, a correction for beam hardening effects based on soft tissue was applied to each linear attenuation factor. Then, using the linear attenuation factors, a normalized linear attenuation coefficient was derived. Finally, an effective atomic number was determined using the theoretical relationship between the normalized linear attenuation coefficient and atomic number. To demonstrate our method, four different phantoms (acrylic, soft-tissue equivalent, bone equivalent, and aluminum) were measured using a single-probe-type CdTe detector under the assumption that the response of the single-probe-type CdTe detector is equal to the response of one pixel of a multi-pixel-type photon counting detector. Each of these phantoms can be completely separated using our method. Furthermore, we evaluated the limit of applicability of the beam hardening correction, and found that it depends on the mass thickness and atomic number. Our vision is to realize highly accurate identification of materials within a narrow range of atomic numbers.
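The final mapping step, from a normalized two-band attenuation ratio to an effective atomic number, can be sketched as interpolation on a calibration curve; all numbers below are hypothetical.

```python
import numpy as np

def effective_atomic_number(mu_low, mu_high, calib_ratio, calib_Z):
    """Map a two-band attenuation ratio to an effective atomic number by
    interpolating a monotone calibration curve (standing in for the
    theoretical relationship mentioned in the abstract)."""
    ratio = mu_low / mu_high
    return np.interp(ratio, calib_ratio, calib_Z)

# Hypothetical calibration: the ratio rises with Z as the photoelectric
# effect increasingly dominates in the low-energy band.
Z = effective_atomic_number(0.9, 0.5,
                            calib_ratio=[1.2, 1.5, 2.0, 2.6],
                            calib_Z=[5.0, 7.0, 10.0, 13.0])
```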
Material decomposition in an arbitrary number of dimensions using noise compensating projection
Thomas O'Donnell, Ahmed Halaweish, David Cormode, et al.
Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if, due to noise, a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed either by discarding those pixels or by projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small, and noise may significantly limit the number of pixels falling within it. Projection onto the boundary therefore becomes an important option. But projection in more than three dimensions is not possible with standard vector algebra, because the cross product is not defined there. Methods: We describe a technique that employs Clifford algebra to perform projection in an arbitrary number of dimensions. Clifford algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division, so that vectors may be operated on like scalars, forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. A comparison of the accuracy of different threshold combinations against ground truth is presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds, using Clifford algebra projection to mitigate noise.
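In n dimensions, the orthogonal projection onto the span of the basis-material vectors can also be obtained from the normal equations, without an explicit cross product; the sketch below illustrates this alternative route (it is not the authors' Clifford-algebra formulation, though the projected point is the same for subspace projection).

```python
import numpy as np

def project_onto_span(v, basis):
    """Orthogonally project an n-dimensional HU vector onto the subspace
    spanned by basis-material vectors via least squares; works in any
    number of dimensions."""
    B = np.asarray(basis, float).T            # columns = basis materials
    coeffs, *_ = np.linalg.lstsq(B, np.asarray(v, float), rcond=None)
    return B @ coeffs, coeffs

# A noisy 4-energy vector projected onto the plane of two (made-up)
# material signatures.
v = np.array([10.0, 8.0, 6.0, 5.0])
proj, c = project_onto_span(v, [[1.0, 1.0, 1.0, 1.0], [4.0, 2.0, 1.0, 0.5]])
```

The residual `v - proj` is orthogonal to both basis vectors, which is exactly the defining property of the projection.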
Theoretical characterization of performance effectiveness of photon-counting technique for digital radiography applications
Seungman Yun, Jaehyuk Kim, Yoonsuk Huh, et al.
The photon-counting (PC) technique has attracted attention in digital radiography applications due to its potential for low-dose operation and its multi-energy imaging capability. In this study, we theoretically investigate the performance gain in digital radiography when PC detectors are used instead of conventional energy-integrating (EI) detectors. We use the Monte Carlo technique to estimate energy-absorption distributions in detector materials such as CdTe for the PC detector and CsI for the EI detector. To estimate the signal and noise transfer through the two different detector-operation schemes, we use the cascaded linear-systems approach. In the Monte Carlo simulations, square and rectangular focal spots are considered to mimic advanced carbon nanotube (CNT) and conventional filament cathodes, respectively. From the simulation results, the modulation-transfer functions of the PC detector are more sensitive to asymmetric focal spot geometry than those of the EI detector. On the other hand, the PC detector shows a better image signal-to-noise ratio than the EI detector, and hence better dose efficiency. The dose-efficiency advantage of the PC detector over the EI detector is marginal for the filament x-ray beam, however, whereas it is not negligible for the CNT x-ray beam. The theoretical upper limits of the imaging performance of this advanced digital radiography technology are reported in this study.
Effects of dead time on quantitative dual-energy imaging using a position-sensitive spectroscopic detector
Louise M. Dummott, Giuseppe Schettino, Paul Seller, et al.
Dual energy (DE) imaging is a potential alternative to conventional mammography for patients with dense breasts. It requires intravenous injection of a contrast agent (CA) and subsequent acquisition of images at two different energies. Each pixel is treated as a vector and projected onto a two-material basis, e.g. water and CA, to form separate water-equivalent and CA-equivalent images. On conventional detectors this requires two separate exposures, whereas spectroscopic detectors allow multiple images from a single exposure by integrating appropriate energy bands. This work investigates the effects of high count rates on quantitative DE imaging using a CdTe spectroscopic detector. Because of its small pixel size (250 μm), a limitation of the detector is charge sharing between pixels, which must be corrected to avoid degradation of the detected spectrum. However, as charge sharing is identified by neighbouring pixels registering a count in a given readout frame, an effective maximum count rate (EMR) is imposed, above which linearity between incident and detected counts is lost. A simulation was used to model the detector response for a test object composed of water and iodine, with different EMRs and incident count rates. Using a known iodine thickness of 0.03 cm and an EMR of 10³ s⁻¹, the reconstructed iodine thickness was found to be 97%, 74% and 24% of the true value for incident count rates of 100, 1000 and 10000 photons/pixel/s, respectively. The simulation was validated by imaging a water-equivalent test phantom containing iodinated CA at different X-ray currents, to determine the optimum beam conditions.
X-ray spectral calibration from transmission measurements using Gaussian blur model
In recent years, there has been a resurgence of interest in spectral computed tomography (CT), driven by growing interest in photon-counting detectors. In spectral CT scanning, a practical issue is to accurately calibrate the spectral response of the X-ray imaging system: mis-calibrated detector elements can lead to strong ring artifacts in the reconstructed tomographic image. To model the spectral response, we propose a Gaussian blur model combined with prior information on the X-ray spectra that accurately predicts the transmission curve and at the same time recovers a realistic estimate of the spectra. The proposed method uses a low-dimensional representation of the X-ray spectra by enforcing a sparsity constraint on the parameters of the Gaussian blur model. These parameters are estimated by formulating a constrained optimization problem, and two algorithms are suggested to solve this problem efficiently. The effectiveness of the model is evaluated on simulated transmission measurements of known thicknesses of known materials. The performance of the two algorithms is also compared through the error between the estimated and model X-ray spectra and the error between the predicted and simulated transmission curves.
A TV-constrained decomposition method for spectral CT
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation minimization is imposed as a constraint on the coefficient images in our overall objective function with adjustable weights, and we solve this constrained optimization problem within the ADMM framework. Validation is performed on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as applied to cases with more than two energy channels.
A study of modeling x-ray transmittance for material decomposition without contrast agents
Okkyun Lee, Steffen Kappler, Christoph Polster, et al.
This study concerns how to model the x-ray transmittance, exp(−∫ μ(r, E) dr), of an object using a small number of energy-dependent bases, which plays an important role in estimating basis line-integrals in photon counting detector (PCD)-based computed tomography (CT). Recently, we found that low-order polynomials can model the smooth x-ray transmittance of objects without contrast agents with sufficient accuracy, and developed a computationally efficient three-step estimator that estimates the polynomial coefficients in the first step, estimates the basis line-integrals in the second step, and corrects for bias in the third step. We showed that the three-step estimator was ~1,500 times faster than the conventional maximum likelihood (ML) estimator while providing comparable bias and noise. Since the three-step estimator was derived from a model of the x-ray transmittance, accurate modeling of the transmittance is an important issue. For this purpose, we introduce a model of the x-ray transmittance based on a dictionary learning approach, and show that its relative modeling error is smaller than that of the low-order polynomials.
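The low-order-polynomial idea can be illustrated on a synthetic polychromatic beam: the log-transmittance through varying thicknesses of a material is smooth and is captured well by a cubic. The spectrum weights and attenuation values below are made up.

```python
import numpy as np

def fit_log_transmittance(L, w, mu, order=3):
    """Polychromatic transmittance t(L) = sum_i w_i exp(-mu_i L) / sum_i w_i,
    with log t(L) fitted by a low-order polynomial in the thickness L."""
    t = np.exp(-np.outer(L, mu)) @ w / w.sum()
    coeffs = np.polyfit(L, np.log(t), order)
    return coeffs, t

w = np.array([0.2, 0.5, 0.3])      # synthetic spectral weights
mu = np.array([0.5, 0.3, 0.2])     # synthetic attenuation coefficients (1/cm)
L = np.linspace(0.0, 10.0, 20)
coeffs, t = fit_log_transmittance(L, w, mu)
resid = np.log(t) - np.polyval(coeffs, L)
```

The residual of the cubic fit stays small across the whole thickness range, which is the smoothness property the three-step estimator exploits.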
A polychromatic adaption of the Beer-Lambert model for spectral decomposition
Thorsten Sellerer, Sebastian Ehn, Korbinian Mechlem, et al.
We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on minimal calibration effort, making the method applicable in routine clinical set-ups where periodic re-calibration is required. In this work we present an experimental verification of the proposed method, which uses an adapted Beer-Lambert model describing the energy-dependent attenuation of a polychromatic x-ray spectrum with additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model accurately predicts the registered counts for an attenuated polychromatic spectrum. Deviations between the model and the measured data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward model. The experimental data also show that the model can handle spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model make it a viable forward model for MLE-based spectral decomposition without the need for costly and time-consuming characterization of the system response.
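The shape of such a forward model can be sketched as follows: the expected counts in each energy bin are a weighted sum of exponentials in the basis material line integrals (all weights and effective coefficients below are hypothetical stand-ins for calibrated values):

```python
import numpy as np

def expected_counts(A, W, Mu):
    # Adapted Beer-Lambert forward model: counts per energy bin for basis
    # line integrals A, as a weighted sum of exponential terms.
    # W: (n_bins, n_terms) calibration weights; Mu: (n_terms, n_mat).
    return W @ np.exp(-Mu @ A)

# Illustrative values: 2 bins, 3 exponential terms, 2 basis materials.
W = np.array([[4e4, 2e4, 1e4],
              [3e4, 3e4, 2e4]])
Mu = np.array([[0.30, 1.20],
               [0.22, 0.70],
               [0.18, 0.45]])

counts0 = expected_counts(np.array([0.0, 0.0]), W, Mu)   # unattenuated
counts1 = expected_counts(np.array([10.0, 0.5]), W, Mu)  # attenuated object
```

Because the model is a smooth, explicit function of the line integrals, it plugs directly into a Poisson likelihood for MLE-based decomposition.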
Establishing a method to measure bone structure using spectral CT
M. Ramyar, C. Leary, A. Raja, et al.
Combining bone structure and density measurement in 3D is required to assess site-specific fracture risk. Spectral molecular imaging can measure bone structure in relation to bone density by measuring the macro- and microstructure of bone in 3D. This study aimed to optimize spectral CT methodology to measure bone structure in excised bone samples. A MARS CT scanner with a CdTe Medipix3RX detector was used with multiple energy bins to calibrate bone structure measurements. To calibrate thickness measurement, eight different thicknesses of aluminium (Al) sheet were scanned, once in air and once wrapped around a falcon tube, and then analysed. To test whether trabecular thickness measurements differ depending on scan plane, a bone sample from a sheep proximal tibia was scanned in two orthogonal directions. To assess the effect of air on thickness measurement, two parts of the same human femoral head were scanned in two conditions (in air and in PBS). The results showed that the MARS scanner (with 90 μm voxel size) can accurately measure Al thicknesses (in air) above 200 μm but underestimates thicknesses below 200 μm because of the partial volume effect at the Al-air interface. The Al thickness measured in the highest energy bin is overestimated at the Al-falcon tube interface. Bone scanning in two orthogonal directions gives the same trabecular thickness, and air in the bone structure reduced measurement accuracy. We have established a bone structure assessment protocol on the MARS scanner. The next step is to combine it with bone densitometry to assess bone strength.
Renal stone characterization using high resolution imaging mode on a photon counting detector CT system
A. Ferrero, R. Gutjahr, A. Henning, et al.
In addition to the standard-resolution (SR) acquisition mode, a high-resolution (HR) mode is available on a research photon-counting-detector (PCD) whole-body CT system. In the HR mode each detector pixel consists of a 2x2 array of 0.225 mm x 0.225 mm subpixel elements, in contrast to the SR mode, in which each pixel consists of a 4x4 array of the same subelements; this results in 0.25 mm isotropic resolution at iso-center for the HR mode. In this study, we quantified ex vivo the capabilities of the HR mode to characterize renal stones in terms of morphology and mineral composition. Forty pure stones - 10 uric acid (UA), 10 cystine (CYS), 10 calcium oxalate monohydrate (COM) and 10 apatite (APA) - and 14 mixed stones were placed in a 20 cm water phantom and scanned in HR mode at a radiation dose matched to that of routine dual-energy stone exams. Data from micro CT provided a reference for the quantification of morphology and mineral composition of the mixed stones. The area under the ROC curve was 1.0 for discriminating UA from CYS, 0.89 for CYS vs COM, and 0.84 for COM vs APA. The root mean square error (RMSE) of the percent UA in mixed stones was 11.0% with a medium-sharp kernel and 15.6% with the sharpest kernel. The HR mode showed qualitatively accurate characterization of stone morphology relative to micro CT.
A BVMF-B algorithm for nonconvex nonlinear regularized decomposition of spectral x-ray projection images
Spectral computed tomography (CT) exploits the measurements obtained by a photon counting detector to reconstruct the chemical composition of an object. In particular, spectral CT has shown a very good ability to image K-edge contrast agents. Spectral CT is an inverse problem that can be addressed by solving two subproblems, namely the basis material decomposition (BMD) problem and the tomographic reconstruction problem. In this work, we focus on the BMD problem, which is ill-posed and nonlinear. The BMD problem is classically either linearized, which enables reconstruction based on compressed sensing methods, or solved nonlinearly with no explicit regularization scheme. In a previous communication, we proposed a nonlinear regularized Gauss-Newton (GN) algorithm [1]. However, this algorithm can only be applied to convex regularization functionals. In particular, the ℓp (p < 1) norms or the ℓ0 quasi-norm, which are known to provide sparse solutions, cannot be considered. In order to better promote the sparsity of contrast agent images, we propose a nonlinear reconstruction framework that can handle nonconvex regularization terms; in particular, the ℓ1/ℓ2 norm ratio is considered [2]. The problem is solved iteratively using the block variable metric forward-backward (BVMF-B) algorithm [3], which can also enforce the positivity of the material images. The proposed method is validated on numerical data simulated in a thorax phantom made of soft tissue, bone and gadolinium, which is scanned with a 90-kV x-ray tube and a 3-bin photon counting detector.
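Why the ℓ1/ℓ2 ratio promotes sparsity can be seen in a two-line numpy example: the ratio equals 1 for a 1-sparse vector and grows to √n for a vector whose n entries all have equal magnitude, so minimizing it favors concentrated (e.g., contrast-agent) images:

```python
import numpy as np

def l1_over_l2(x, eps=1e-12):
    # Nonconvex sparsity measure: l1 norm divided by l2 norm.
    return np.abs(x).sum() / (np.sqrt((x**2).sum()) + eps)

sparse = np.array([0.0, 0.0, 5.0, 0.0])   # e.g. contrast agent in one voxel
dense = np.array([2.0, 2.0, 2.0, 2.0])    # same l1 mass, spread out
```

Unlike the ℓ1 norm, the ratio is scale-invariant, which is part of what makes it nonconvex and motivates a solver such as BVMF-B.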
Calibration methods influence quantitative material decomposition in photon-counting spectral CT
Tyler E. Curtis, Ryan K. Roeder
Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent K-edge. Material basis matrix values were determined using multiple linear regression models, and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrate that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
Sensitivity analysis of pulse pileup model parameter in photon counting detectors
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values may differ from the actual physical ones. As the incident flux increases and the corrections become more significant, accurate parameter values become more crucial. In this work, the sensitivity of the pileup model of Taguchi et al. to parameter inaccuracies is analyzed. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse are more important than most other parameters, and that they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
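The thickness-estimation step can be sketched for a single material with a Poisson negative log-likelihood (illustrative per-bin attenuation values and fluxes; a simple grid search stands in for a numerical optimizer, and no pileup distortion is modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative per-bin effective attenuation (1/cm) and unattenuated fluxes.
mu = np.array([0.60, 0.40, 0.25, 0.18])
flux = np.array([1e5, 2e5, 1.5e5, 8e4])

t_true = 3.0                                     # object thickness, cm
counts = rng.poisson(flux * np.exp(-mu * t_true))

# Poisson negative log-likelihood evaluated over a thickness grid.
ts = np.linspace(0.0, 10.0, 2001)
lam = flux[None, :] * np.exp(-mu[None, :] * ts[:, None])
nll = np.sum(lam - counts[None, :] * np.log(lam), axis=1)
t_hat = ts[np.argmin(nll)]
```

When the forward model's parameters (here, mu and flux) are wrong, the same minimization converges to a biased thickness, which is exactly the sensitivity the abstract investigates.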
Enhancement of weakly tagged fecal materials in dual-energy CT colonography using spectral-driven iterative reconstruction technique
Dual-energy computed tomography is used increasingly in CT colonography (CTC). The combination of computer-aided detection (CAD) and dual-energy CTC has high clinical value because it can automatically detect clinically significant colonic lesions in CTC images with higher accuracy than single-energy CTC. While CAD has demonstrated its ability to detect small polyps, its performance is highly dependent on the quality of the input images. The presence of artifacts such as beam hardening and image noise in ultra-low-dose CTC may severely degrade detection performance for small polyps. A further limitation to the effectiveness of CAD is the presence of weakly tagged fecal materials in the colon, which may cause false-positive detections. In this work, we developed a dual-energy method for enhancing the appearance of weakly tagged fecal materials in CTC images. The proposed method consists of two stages: 1) the detection of weakly tagged fecal materials by use of sinogram-based image decomposition and 2) the enhancement of the detected tagged fecal materials in the images using an iterative reconstruction method. In the first stage, the ultra-low-dose dual-energy projection data obtained from a CT scanner are decomposed into two basis materials - soft tissue and fecal-tagged material (iodine). Virtual monochromatic projection data are calculated from the material decomposition at a pre-determined energy. The iodine-decomposed sinogram and the virtual monochromatic projection data are then used as input to an iterative reconstruction method. In the second stage, virtual monochromatic images are reconstructed iteratively while the intensity of weakly tagged iodine in the images is enhanced. The performance of the proposed method was assessed qualitatively and quantitatively.
Preliminary results show that our method effectively enhances the visual appearance of weakly tagged fecal materials in the reconstructed CT images while reducing noise and improving the overall quality of the reconstructed images.
Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution
Changes in arterial wall perfusion are an indicator of early atherosclerosis, which is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited by contamination from the blooming effect of the contrast-enhanced lumen. We report the application of an image deconvolution technique, using a measured system point spread function, to CT data obtained from a photon-counting CT system to reduce blooming and improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images over an energy range of 25-120 keV were reconstructed. CT numbers were measured at multiple locations in the carotid walls and at multiple time points, pre- and post-contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid walls due to contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all time points, enabling detection of the increased VV in the artery wall.
Lung nodule volume quantification and shape differentiation with an ultra-high resolution technique on a photon counting detector CT system
W. Zhou, J. Montoya, R. Gutjahr, et al.
A new ultra-high-resolution (UHR) mode has been implemented on a whole-body photon-counting-detector (PCD) CT system. The UHR mode has a pixel size of 0.25 mm x 0.25 mm at the iso-center, while the conventional (macro) mode is limited to 0.5 mm x 0.5 mm. A set of synthetic lung nodules (two shapes, five sizes, and two radio-densities) was scanned using both the UHR and macro modes and reconstructed with two reconstruction kernels (four sets of images in total). Linear regression analysis was performed to compare nodule volumes measured from CT images to reference volumes. Surface curvature was calculated for each nodule, and the full width at half maximum (FWHM) of the curvature histogram was used as a shape index to differentiate sphere- and star-shaped nodules. Receiver operating characteristic (ROC) analysis was performed, and the area under the ROC curve (AUC) was used as a figure of merit for the differentiation task. Results showed a strong linear relationship between measured nodule volume and the reference standard for both UHR and macro modes. For all nodules, volume estimation was more accurate using the UHR mode with a sharp kernel (S80f), with lower mean absolute percent error (MAPE) (6.5%) compared with the macro mode (11.1% to 12.9%). The improvement in volume measurement from the UHR mode was particularly evident for small nodule sizes (3 mm, 5 mm) and star-shaped nodules. Images from the UHR mode with the sharp kernel (S80f) consistently demonstrated the best performance (AUC = 0.85) in separating star- from sphere-shaped nodules among all acquisition and reconstruction modes. Our results show the advantages of the UHR mode on a PCD CT scanner for lung nodule characterization. Various clinical applications, including quantitative imaging, can benefit substantially from this high-resolution mode.
Development of a photon counting detector response model using multiple transmission spectra
Dimple Modgil, Andrew Smith, Buxin Chen, et al.
Photon counting x-ray detectors (PCDs) offer great potential for energy-resolved imaging, enabling promising applications such as low-dose imaging, quantitative contrast-enhanced imaging, and spectral tissue decomposition. However, physical processes in photon counting detectors produce undesirable effects such as charge sharing and pulse pileup that can adversely affect the imaging application. Existing detector response models for photon counting detectors have mainly used either x-ray fluorescence or radionuclides to calibrate the detector and estimate the model parameters. The purpose of our work was to apply one such model to our photon counting detector and to determine the model parameters from transmission measurements. This model uses a polynomial fit to model the charge sharing response and energy resolution of the detector, as well as an aluminum filter to model the modification of the x-ray spectrum. Our experimental setup uses a Si-based photon counting detector to generate transmission spectra from multiple materials at varying thicknesses. Materials were selected so as to exhibit K-edges within the 15-35 keV region. We find that transmission measurements can be used to successfully model the detector response. Ultimately, this approach could be used for practical detector energy calibration. A fully validated detector response model will allow for exploration of imaging applications for a given detector.
Empirical neural network forward model for maximum likelihood material decomposition in spectral CT
Kevin C. Zimmerman, Adam Petschke
CT measurements using photon counting detectors provide spectral information that can be used to estimate a material's composition. This material decomposition task is complicated by pulse pileup and charge-sharing phenomena. Physics-based methods that use maximum likelihood to estimate a material's composition rely on accurate modeling of the forward spectral measurement process, including the source spectrum and detector response. An empirical projection-domain decomposition method is proposed that uses energy-bin measurements from known basis material path lengths. The known path lengths and energy-bin measurements are used to train a neural network to model the forward spectral measurement process. The neural network is then used with a maximum likelihood algorithm to estimate basis material path lengths with optimal noise properties. The method does not require a model of the source spectrum or detector response. Simulations of a step-wedge phantom containing 10 path lengths of polymethyl methacrylate and 10 path lengths of aluminum resulted in 100 calibration measurements for training. Path lengths not included in calibration were used to evaluate the estimator's performance. Projections of the test path lengths contained 1000 Poisson noise realizations, and the bias and variance of the estimated path lengths were used as evaluation metrics. The proposed method had less than 2% bias in the test path lengths and a variance that achieved the Cramér-Rao lower bound. The proposed method is thus an efficient estimator of basis material path lengths.
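The calibration idea can be sketched with a tiny numpy network. Everything below is illustrative: the "measurements" come from a toy linear log-count model standing in for a real spectral system, and the network size, learning rate, and grid of 100 (PMMA, Al) path-length pairs are hypothetical choices, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_bins(paths):
    # Toy spectral forward model standing in for the real measurement:
    # log expected counts in 3 energy bins for (PMMA, Al) path lengths.
    mu = np.array([[0.20, 0.17, 0.15],    # PMMA, 1/cm (illustrative)
                   [0.70, 0.45, 0.30]])   # Al, 1/cm (illustrative)
    return 10.0 - paths @ mu

# Calibration grid: 100 known path-length pairs and their measurements.
P = np.array([[p, a] for p in np.linspace(0, 20, 10)
                     for a in np.linspace(0, 2, 10)])
Y = true_bins(P)
Xn = P / P.max(axis=0)                    # normalized network inputs

# One-hidden-layer network trained with plain gradient descent.
W1 = rng.normal(0, 0.1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 3)); b2 = np.zeros(3)

def predict(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2

initial_mse = np.mean((predict(Xn) - Y) ** 2)
lr = 1e-2
for _ in range(3000):
    H = np.tanh(Xn @ W1 + b1)
    err = H @ W2 + b2 - Y                 # prediction residual
    dH = (err @ W2.T) * (1 - H ** 2)      # backprop through tanh
    W2 -= lr * H.T @ err / len(P); b2 -= lr * err.mean(axis=0)
    W1 -= lr * Xn.T @ dH / len(P); b1 -= lr * dH.mean(axis=0)
final_mse = np.mean((predict(Xn) - Y) ** 2)
```

Once trained, the network replaces the physics-based forward model inside the maximum-likelihood search over path lengths, which is how the method sidesteps explicit source-spectrum and detector-response models.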
Impact of Compton scatter on material decomposition using a photon counting spectral detector
Photon counting spectral detectors are being investigated to allow better discrimination of multiple materials by collecting spectral data for every detector pixel. The process of material decomposition or discrimination starts with an accurate estimate of the energy-dependent attenuation of the composite object. The photoelectric effect and Compton scattering are two important constituents of this attenuation. Compton scattering, while removing primary photons, also increases the photon counts in the lower energy bins via multiple orders of scatter. This contribution to each energy bin may change with material properties, thickness, and x-ray energy. There has been little investigation into the effect of this increase in counts at lower energies due to the presence of Compton scattered photons in photon counting detectors. Our investigations show that it is important to account for this effect in spectral decomposition problems.
Poster Session: Detectors
Modeling blur in various detector geometries for MeV radiography
Nicola M. Winch, Scott A. Watson, James F. Hunter
Monte Carlo transport codes have been used to model detector blur and energy deposition in various detector geometries for applications in MeV radiography. Segmented scintillating detectors, in which low-Z scintillators are combined with a high-Z metal matrix, can be designed such that the resolution increases with increasing metal fraction. The combination of various types of metal intensification screens and storage phosphor imaging plates has also been studied. A storage phosphor coated directly onto a metal intensification screen has superior performance to a commercial plate. Stacks of storage phosphor plates and tantalum intensification screens show an increase in energy deposited and detective quantum efficiency with increasing plate number, at the expense of resolution. Selected detector geometries were tested by comparing simulated and experimental modulation transfer functions to validate the approach.
High density scintillating glass proton imaging detector
C. J. Wilkinson, K. Goranson, A. Turney, et al.
In recent years, proton therapy has achieved remarkable precision in delivering doses to cancerous cells while avoiding healthy tissue. However, in order to utilize this high-precision treatment, greater accuracy in patient positioning is needed. An accepted approximate uncertainty of ±3% exists in the current practice of proton therapy due to conversions between x-ray and proton stopping power. The use of protons in imaging would eliminate this source of error and lessen the radiation exposure of the patient. To this end, this study focuses on developing a novel proton-imaging detector built with high-density glass scintillator. The model described herein contains a compact homogeneous proton calorimeter composed of scintillating, high-density glass as the active medium. The unique geometry of this detector allows for the measurement of both the position and residual energy of protons, eliminating the need for a separate set of position trackers in the system. The average position and energy of a pencil beam of 10⁶ protons are used to reconstruct the image rather than analyzing individual proton data. Simplicity and efficiency were major objectives in this model in order to present an imaging technique that is compact, cost-effective, and precise, as well as practical for a clinical setting with pencil-beam scanning proton therapy equipment. In this work, the development of the novel high-density glass scintillator and the unique conceptual design of the imager are discussed; a proof-of-principle Monte Carlo simulation study is performed; and preliminary two-dimensional images reconstructed from the Geant4 simulation are presented.
A CMOS-based high-resolution fluoroscope (HRF) detector prototype with 49.5 µm pixels for use in endovascular image guided interventions (EIGI)
X-ray detectors to meet the high-resolution requirements for endovascular image-guided interventions (EIGIs) are being developed and evaluated. A new 49.5-micron pixel prototype detector is being investigated and compared to the current suite of high-resolution fluoroscopic (HRF) detectors. This detector, featuring a 300-micron-thick CsI(Tl) scintillator and a low-electronic-noise CMOS readout, is designated the HRF-CMOS50. To compare the abilities of this detector with other existing high-resolution detectors, a standard performance metric analysis was applied, including the determination of the modulation transfer function (MTF), noise power spectra (NPS), noise equivalent quanta (NEQ), and detective quantum efficiency (DQE) for a range of energies and exposure levels. The advantage of the smaller pixel size and reduced blurring due to the thin phosphor was exemplified when the MTF of the HRF-CMOS50 was compared to the other high-resolution detectors, which utilize larger pixels, other optical designs, or thicker scintillators. However, the thinner scintillator has the disadvantage of a lower quantum detective efficiency (QDE) for higher diagnostic x-ray energies. The performance of the detector as part of an imaging chain was examined by employing the generalized metrics GMTF, GNEQ, and GDQE, taking standard focal spot size and clinical imaging parameters into consideration. As expected, the degrading effects of focal spot unsharpness, exacerbated by increasing magnification, reduced the higher-frequency performance of the HRF-CMOS50, while increasing scatter fraction diminished low-frequency performance. Nevertheless, the HRF-CMOS50 brings improved resolution capabilities for EIGIs, but would require increased sensitivity and dynamic range for future clinical application.
2x2 oversampling in digital radiography imaging for CsI-based scintillator detectors
Dong Sik Kim, Eun Kim, Eunae Lee, et al.
In order to efficiently perform anti-aliasing filtering in digital radiography imaging, this paper considers an oversampling scheme using an oversampling detector, whose sampling frequency is higher than that of the desired detector. Instead of using difficult analog anti-aliasing filters, digital anti-aliasing filters are applied to the oversampled data, and subsequent downsampling yields the desired x-ray images. Assuming ideal anti-aliasing filtering, the detective quantum efficiency (DQE) of the desired detector can approach that of the oversampling detector, since the overlap of adjacent noise aliases can be minimized while maintaining the frequency amplitude response over the fundamental frequency range. In this paper, 2 x 2 oversampling is performed for a desired pixel pitch of 152 μm, and various filters are tested for anti-aliasing. It is shown that securing a sufficiently wide transition band is important to avoid ringing artifacts, even though a wide transition band degrades the anti-aliasing performance. In an experiment using a CsI(Tl)-based detector, the aliasing artifact problem is alleviated, and oversampled radiography imaging achieves a DQE improvement of 0.1 at 2.5 lp/mm over the binning scheme.
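A one-dimensional numpy sketch of the scheme (illustrative tone frequencies and a hypothetical 21-tap windowed-sinc filter; the signal is chosen periodic so circular convolution has no edge effects): a tone above the target Nyquist frequency aliases under plain 2x binning but is suppressed when a digital anti-aliasing filter is applied before downsampling.

```python
import numpy as np

def windowed_sinc(cutoff, ntaps=21):
    # FIR low-pass; cutoff is a fraction of the sampling rate (0-0.5).
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(ntaps)
    return h / h.sum()

# 512 samples at the 2x oversampled pitch. The 0.40625 cycles/sample tone
# lies above the target Nyquist (0.25) and aliases after 2x downsampling.
n = np.arange(512)
sig = np.sin(2 * np.pi * 0.0625 * n) + 0.5 * np.sin(2 * np.pi * 0.40625 * n)

h = windowed_sinc(cutoff=0.2)              # passband below target Nyquist
# Circular convolution via FFT (signal is periodic over 512 samples).
filtered = np.real(np.fft.ifft(np.fft.fft(sig) * np.fft.fft(h, sig.size)))

down_filtered = filtered[::2]                  # anti-aliased 2x downsampling
down_binned = sig.reshape(-1, 2).mean(axis=1)  # plain 2x pixel binning
```

After downsampling, the high tone lands on bin 48 of the 256-point spectrum; binning only attenuates it by its sinc-like response, whereas the digital filter pushes it into the stopband.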
High spatial resolution performance of pixelated scintillators
Kazuki Shigeta, Nobuyasu Fujioka, Takahiro Murai, et al.
In indirect conversion flat panel detectors (FPDs) for digital x-ray imaging, scintillating materials such as Terbium-doped Gadolinium Oxysulfide (Gadox) convert x-rays into visible light, and an amorphous silicon (a-Si) photodiode array converts the light into electrons. However, improved detector spatial resolution is desired, because light spreading inside the scintillator causes crosstalk to neighboring a-Si photodiode pixels and degrades the resolution compared with direct conversion FPDs, which convert x-rays directly into electrons using a material such as amorphous selenium. In this study, the scintillator was pixelated with the same pixel pitch as the a-Si photodiode array using a barrier rib structure to limit light spreading, and the detector spatial resolution was improved. The FPD with pixelated scintillator was manufactured as follows. A barrier rib structure with 127 μm pitch was fabricated on a substrate by a photosensitive organic-inorganic paste method, a reflective layer was coated on the surface of the barrier rib, and the structure was then filled with Gadox particles. The pixelated scintillator was aligned with the 127 μm pixel pitch of the a-Si photodiode array and assembled as an FPD. The FPD with pixelated scintillator showed a high modulation transfer function (MTF): 0.94 at 1 cycle/mm and 0.88 at 2 cycles/mm. These MTF values are almost equal to the theoretical maximum achievable with a 127 μm pixel pitch a-Si photodiode array. Thus the FPD with pixelated scintillator has great potential for high-spatial-resolution applications such as mammography and nondestructive testing.
Comparison of high resolution x-ray detectors with conventional FPDs using experimental MTFs and apodized aperture pixel design for reduced aliasing
The Apodized Aperture Pixel (AAP) design, proposed by Ismailova et al., is an alternative to the conventional pixel design. The advantages of AAP processing with a sinc filter, in comparison with other filters, include non-degradation of MTF values and elimination of signal and noise aliasing, resulting in increased performance at higher frequencies approaching the Nyquist frequency. If high-resolution small field-of-view (FOV) detectors with small pixels, used during critical stages of Endovascular Image Guided Interventions (EIGIs), could also be extended to cover the full field-of-view typical of flat panel detectors (FPDs) and made to have larger effective pixels, then methods must be used to preserve the MTF over the frequency range up to the Nyquist frequency of the FPD while minimizing aliasing. In this work, we convolve the experimentally measured MTFs of a Microangiographic Fluoroscope (MAF) detector (the MAF-CCD with 35 μm pixels) and a High-Resolution Fluoroscope (HRF) detector (the HRF-CMOS50 with 49.5 μm pixels) with the AAP filter and show the superiority of the results compared to MTFs resulting from moving-average pixel binning and to the MTF of a standard FPD. The effect of using AAP is also shown in the spatial domain, when used to image an infinitely small point object. For detectors in neurovascular interventions, where high resolution is the priority during critical parts of the intervention but a full FOV with larger pixels is needed during less critical parts, the AAP design provides an alternative to simple pixel binning, effectively eliminating signal and noise aliasing while allowing small-FOV high-resolution imaging to be maintained during critical parts of the EIGI.
1D pixelated MV portal imager with structured privacy film: a feasibility study
Pavlo Baturin, Daniel Shedlock, Marios Myronakis, et al.
Modern amorphous silicon flat panel-based electronic portal imaging devices that utilize thin gadolinium oxysulfide scintillators suffer from low quantum efficiencies (QEs). Thick two-dimensionally (2D) pixelated scintillator arrays offer an effective but expensive option for increasing QE. To reduce costs, we have investigated the possibility of combining a thick one-dimensional (1D) pixelated scintillator (PS) with an orthogonally placed 1D structured optical filter to provide overall good 2D spatial resolution. In this work, we studied the potential for using a 1D video-screen privacy film (PF) to serve as a directional optical attenuator and filter. A Geant4 model of the PF was built based on reflection and transmission measurements taken with a laser-based optical reflectometer. This information was incorporated into a Geant4-based x-ray detector simulator to generate modulation transfer functions (MTFs), noise power spectra (NPS), and detective quantum efficiencies (DQEs) for various 1D and 2D configurations. It was found that the 1D array with PF can provide the MTFs and DQEs of 2D arrays. Although the PF significantly reduced the amount of optical photons detected by the flat panel, we anticipate that using a scintillator with an inherently high optical yield (e.g., cesium iodide) for MV imaging, where fluence rates are inherently high, will still provide adequate signal intensities for the imaging tasks associated with radiotherapy.
Poster Session: Radiation Dose
Dose conversion coefficients for partial-fan CBCT scans
Due to the increasing number of cone-beam CT (CBCT) devices on the market, reliable estimates of patient doses for this imaging modality are desired. Cone-beam CT devices differ from conventional CT not only by a larger collimation but also by different recording modes. In this work, it was investigated whether reliable patient doses can be obtained for CBCT devices in partial-fan mode using pre-computed slices. As an exemplary case, chest CBCT scans of the ICRP reference adult models were examined. By normalizing organ doses to CTDI100w, the resulting dose conversion coefficients for CBCT could be well reproduced by pre-computed slices, with a relative difference in the effective dose conversion coefficients of less than 10%.
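The normalization step amounts to a simple calculation: the effective dose is a tissue-weighted sum of organ doses, and the conversion coefficient is that sum divided by CTDI100w. A sketch with hypothetical organ doses and CTDI value (the tissue weighting factors shown are the ICRP 103 values for the organs listed; a full effective dose would sum over all tissues):

```python
# Hypothetical organ doses (mGy) for a chest scan; illustrative only.
organ_dose = {"lung": 12.0, "breast": 10.5, "stomach": 4.2,
              "liver": 3.8, "thyroid": 6.1, "oesophagus": 7.0}

# ICRP 103 tissue weighting factors for these organs.
w_T = {"lung": 0.12, "breast": 0.12, "stomach": 0.12,
       "liver": 0.04, "thyroid": 0.04, "oesophagus": 0.04}

# Partial effective dose from the listed organs (mSv).
E_partial = sum(w_T[o] * organ_dose[o] for o in organ_dose)

ctdi100w = 8.5                                    # mGy, hypothetical
conversion_coefficient = E_partial / ctdi100w     # mSv per mGy of CTDI100w
```

Reporting doses as coefficients normalized to CTDI100w is what allows the pre-computed slices to be compared directly against the full CBCT simulation.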
Organ and effective dose reduction for region-of-interest (ROI) CBCT and fluoroscopy
In some medical-imaging procedures using CBCT and fluoroscopy, it may be necessary to visualize only the center of the field-of-view (FOV) with optimal quality. To reduce the dose to the patient and increase contrast in the region of interest (ROI) during CBCT and fluoroscopy procedures, a 0.7 mm thick Cu ROI attenuator with a circular aperture covering 12% of the FOV was used. The aim of this study was to quantify the dose-reduction benefit of ROI imaging during typical CBCT and interventional fluoroscopy procedures in the head and torso. The Toshiba Infinix C-Arm System was modeled in BEAMnrc/EGSnrc with and without the ROI attenuator. Patient organ and effective doses were calculated with the DOSXYZnrc/EGSnrc Monte Carlo software for CBCT and interventional procedures. We first compared the entrance dose with and without the ROI attenuator on a 20 cm thick solid-water block. We then simulated a CBCT scan and an interventional fluoroscopy procedure on the head and torso with and without an ROI attenuator. The results showed that the entrance-surface dose reduction in the solid water was about 85.7% outside the ROI opening and 10.5% within it. The results also showed a reduction in most organ doses of 45%-70% and in effective dose of 46%-66% compared to CBCT scans and interventional procedures without the ROI attenuator. This work provides evidence of substantial reduction of organ and effective doses when an ROI attenuator is used during CBCT and fluoroscopic procedures.
Monte Carlo investigation of backscatter point spread function for x-ray imaging examinations
X-ray imaging examinations, especially complex interventions, may result in relatively high doses to the patient's skin, potentially inducing skin injuries. A method was developed to determine the skin-dose distribution for non-uniform x-ray beams by convolving the backscatter point spread function (PSF) with the primary-dose distribution to generate the backscatter distribution that, when added to the primary dose, gives the total-dose distribution. This technique was incorporated in the dose-tracking system (DTS), which provides real-time, color-coded 3D mapping of skin dose during fluoroscopic procedures. The aim of this work is to investigate the variation of the backscatter PSF with different parameters. The backscatter PSF of a 1 mm x-ray beam was generated with the EGSnrc Monte Carlo code for different x-ray beam energies, soft-tissue thicknesses above bone, bone thicknesses, and entrance-beam angles, as well as for different locations on the SK-150 anthropomorphic head phantom. The results show a 48% reduction of the peak scatter-to-primary dose ratio when the x-ray beam voltage is increased from 40 kVp to 120 kVp. The backscatter dose was reduced when bone lay beneath the soft-tissue layer, and this reduction increased with thinner soft-tissue and thicker bone layers. The backscatter factor increased by about 21% as the angle of incidence of the beam with the entrance surface decreased from 90° (perpendicular) to 30°. The backscatter PSF differed across locations on the SK-150 phantom by up to 15%. The results of this study can be used to improve the accuracy of dose calculation when using PSF convolution in the DTS.
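The convolution step described above can be sketched as follows (an illustrative stand-in for the DTS implementation; the dose map and PSF values are hypothetical placeholders):

```python
# Minimal sketch of the PSF-convolution approach described above:
# total skin dose = primary dose + (primary dose convolved with backscatter PSF).
# All numerical values are hypothetical.

def convolve2d(image, kernel):
    """Naive 'same'-size 2D convolution with zero padding."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for j in range(kh):
                for i in range(kw):
                    yy, xx = y + j - oy, x + i - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += image[yy][xx] * kernel[j][i]
            out[y][x] = s
    return out

# Hypothetical primary-dose map (mGy) for a small non-uniform beam.
primary = [[0, 0, 0, 0, 0],
           [0, 2, 2, 2, 0],
           [0, 2, 4, 2, 0],
           [0, 2, 2, 2, 0],
           [0, 0, 0, 0, 0]]
# Hypothetical normalized backscatter PSF (its integral is the backscatter factor).
psf = [[0.01, 0.02, 0.01],
       [0.02, 0.30, 0.02],
       [0.01, 0.02, 0.01]]

backscatter = convolve2d(primary, psf)
total = [[p + b for p, b in zip(pr, br)] for pr, br in zip(primary, backscatter)]
```

The total dose exceeds the primary dose everywhere the beam deposits energy, as the backscatter term is additive.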
Effects of sparse sampling in combination with iterative reconstruction on quantitative bone microstructure assessment
Kai Mei, Felix K. Kopp, Andreas Fehringer, et al.
The trabecular bone microstructure is key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible solution to reduce patient exposure is to sample fewer projection angles. This approach can be supported by advanced reconstruction algorithms, which can achieve better image quality from fewer projection angles or at high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.
Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures
Josh Kilian-Meneghin, Z. Xiong, S. Rudin, et al.
The purpose of this work is to evaluate methods for producing a library of 2D radiographic images to be correlated with clinical images obtained during a fluoroscopically guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D radiographic image, were correlated with a 2D fluoroscopic image of the phantom, which represented the clinical fluoroscopic image, using the corr2 function in MATLAB. The corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient model with the patient image. In this instance, it was determined that the localization algorithm will succeed when corr2 returns a correlation of at least 50%. The 3D-visualization-tool images returned 55-80% correlation relative to the fluoroscopic image, comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove sufficient for the localization algorithm and can produce images quickly; however, the DRR method produces more accurate grey levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
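MATLAB's corr2 metric used to score the library images is the 2D Pearson correlation coefficient; a minimal Python equivalent (with hypothetical toy images) might look like:

```python
# Sketch of a corr2-style metric for scoring library images against the
# fluoroscopic frame: the 2D Pearson correlation coefficient. The images
# below are tiny hypothetical placeholders.

def corr2(a, b):
    """2D correlation coefficient between two equal-size images."""
    n = len(a) * len(a[0])
    ma = sum(map(sum, a)) / n
    mb = sum(map(sum, b)) / n
    num = den_a = den_b = 0.0
    for ra, rb in zip(a, b):
        for va, vb in zip(ra, rb):
            num += (va - ma) * (vb - mb)
            den_a += (va - ma) ** 2
            den_b += (vb - mb) ** 2
    return num / (den_a * den_b) ** 0.5

library_image = [[10, 20], [30, 40]]
fluoro_image  = [[11, 19], [32, 41]]   # hypothetical clinical frame
score = corr2(library_image, fluoro_image)
aligned = score >= 0.5   # acceptance threshold used by the localization step
```

A perfectly matched pair scores 1.0; the 50% threshold in the abstract corresponds to score >= 0.5.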
Estimation of breast dose reduction potential for organ-based tube current modulated CT with wide dose reduction arc
This study aimed to estimate the organ dose reduction potential of organ-dose-based tube current modulated (ODM) thoracic CT with a wide dose reduction arc. Twenty-one computational anthropomorphic phantoms (XCAT, age range: 27-75 years, weight range: 52.0-105.8 kg) were used to create a virtual patient population with clinical anatomic variations. For each phantom, two breast tissue compositions were simulated: 50/50 and 20/80 (glandular-to-adipose ratio). A validated Monte Carlo program was used to estimate the organ dose for standard tube current modulation (TCM) (SmartmA, GE Healthcare) and ODM (GE Healthcare) for a commercial CT scanner (Revolution, GE Healthcare), with explicitly modeled tube current modulation profile, scanner geometry, bowtie filtration, and source spectrum. Organ dose was determined using a typical clinical thoracic CT protocol. Both organ dose and CTDIvol-to-organ-dose conversion coefficients (h factors) were compared between TCM and ODM. ODM significantly reduced all radiosensitive organ doses (p<0.01); the breast dose was reduced by 30±2%. For h factors, organs in the anterior region (e.g. thyroid, stomach) exhibited substantial decreases, whereas organs in the medial, distributed, and posterior regions showed either an increase or no significant change. Organ-dose-based tube current modulation significantly reduced organ doses, especially for radiosensitive superficial anterior organs such as the breasts.
Poster Session: Mammography and Breast Tomosynthesis
icon_mobile_dropdown
Detection of microcalcifications and tumor tissue in mammography using a CdTe-series photon-counting detector
In this study, we proposed a method for detecting microcalcifications and tumor tissue using a cadmium telluride (CdTe) series linear detector, used as an energy-resolved photon-counting (hereafter, photon-counting) mammography detector. The CdTe series linear detector and two types of phantom were modeled in a MATLAB simulation. Each phantom consisted of mammary gland and adipose tissue; one contained microcalcifications and the other contained tumor tissue. We varied the size of these structures and the mammary gland composition. We divided the spectrum of the x-rays transmitted through each phantom into three energy bins and calculated the corresponding linear attenuation coefficients from the numbers of input and output photons. Subsequently, the absorption vector length, which expresses the amount of absorption, was calculated. When the material composition differed between objects, for example mammary gland versus microcalcifications, the absorption vector length also differed. We compared the absorption vector lengths to detect the microcalcifications and tumor tissue. As the size of the microcalcifications and tumor tissue decreased and/or the mammary gland content increased, however, they became difficult to distinguish. They can still be distinguished, despite the reduction in size or increase in mammary gland content, by increasing the x-ray dose; it is therefore necessary to find a condition that optimally balances a low exposure dose with high detection sensitivity. This represents a new method of forming images using photon-counting technology.
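The per-bin attenuation and absorption-vector-length computation described above can be sketched as follows (the photon counts and phantom thickness are hypothetical placeholders, not the study's values):

```python
import math

# Sketch of the method described above: per-bin linear attenuation
# coefficients from input/output photon counts (Beer-Lambert), then the
# "absorption vector length" as the Euclidean norm over the three bins.

def attenuation_per_bin(n_in, n_out, thickness_cm):
    """Linear attenuation coefficient (1/cm) for each energy bin."""
    return [math.log(i / o) / thickness_cm for i, o in zip(n_in, n_out)]

def absorption_vector_length(mu):
    return math.sqrt(sum(m * m for m in mu))

n_in  = [1.0e5, 8.0e4, 6.0e4]          # counts entering, 3 energy bins
n_out = [2.0e4, 2.5e4, 2.6e4]          # counts transmitted through the phantom
mu = attenuation_per_bin(n_in, n_out, thickness_cm=4.0)
length = absorption_vector_length(mu)  # differs between, e.g., gland and calcification
```

Materials with different compositions yield different mu vectors and hence different vector lengths, which is the discriminating quantity used in the study.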
Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose
Seungwan Lee, Sooncheol Kang, Jisoo Eom
Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional single-exposure technique degrades the efficiency of tumor detection due to structure overlap. Dual-energy techniques with energy-integrating detectors (EIDs) also increase radiation dose and suffer inaccurate material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) can resolve the issues of the conventional technique and of EIDs using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved the quantitative accuracy and reduced radiation dose compared to the dual-energy technique with an EID. The quantitative accuracy of the PCD-based contrast-enhanced spectral mammography also improved slightly as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD can provide useful information for detecting breast tumors and improving diagnostic accuracy.
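A linearized, two-bin stand-in for the material decomposition underlying such dual-energy techniques can be sketched as follows (the polychromatic model in the study is more elaborate; all attenuation coefficients here are hypothetical):

```python
# Minimal two-material (tissue/iodine) decomposition sketch: a linearized,
# monoenergetic-bin stand-in for the polychromatic dual-energy model
# described above. All attenuation coefficients (1/cm) are hypothetical.

def decompose(a_low, a_high, mu):
    """Solve the 2x2 system mu @ [t_tissue, t_iodine] = [a_low, a_high]."""
    (m1l, m2l), (m1h, m2h) = mu
    det = m1l * m2h - m2l * m1h
    t1 = (a_low * m2h - m2l * a_high) / det
    t2 = (m1l * a_high - a_low * m1h) / det
    return t1, t2

mu = ((0.50, 5.00),   # low-energy bin:  (tissue, iodine)
      (0.30, 2.00))   # high-energy bin: (tissue, iodine)

# Forward-simulate line integrals for 4.0 cm tissue + 0.1 cm iodine:
a_low  = mu[0][0] * 4.0 + mu[0][1] * 0.1
a_high = mu[1][0] * 4.0 + mu[1][1] * 0.1

t_tissue, t_iodine = decompose(a_low, a_high, mu)
```

In this noiseless toy case the decomposition recovers the simulated thicknesses exactly; in practice, spectral distortion and noise in the measured bins limit the quantitative accuracy, which is what the study evaluates.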
An adaptive toolkit for image quality evaluation in system performance test of digital breast tomosynthesis
Digital breast tomosynthesis (DBT) is a relatively new diagnostic imaging modality for women. Currently, various models of DBT systems are available on the market, and the number of installations is rapidly increasing. EUREF, the European Reference Organization for Quality Assured Breast Screening and Diagnostic Services, has proposed a preliminary guideline protocol for the quality control of the physical and technical aspects of digital breast tomosynthesis systems, with the ultimate aim of providing limiting values that guarantee proper performance for different applications of DBT. In this work, we introduce an adaptive toolkit developed in accordance with this guideline to facilitate image quality evaluation in DBT performance tests. The toolkit implements robust algorithms to quantify various technical parameters of DBT images and provides a convenient user interface. Each test is built as a separate module with configurations set according to the European guideline, so it can easily be adapted to different settings and extended with additional tests. The toolkit greatly improves the efficiency of image quality evaluation for DBT, and it will continue to evolve with the development of quality control protocols for DBT systems.
Evaluation of effective detective quantum efficiency considering breast thickness and glandularity in prototype digital breast tomosynthesis system
The digital breast tomosynthesis (DBT) system is a novel imaging modality whose performance depends strongly on the detector. Recently, the effective detective quantum efficiency (eDQE) has been introduced to address the disadvantages of conventional DQE evaluations, which do not consider clinical operating conditions. For eDQE evaluation, the variability of patient breasts, especially in glandularity and thickness, needs to be studied to account for different patient populations. For these reasons, the eDQE of a prototype DBT system was evaluated for different breast thicknesses and glandularities. In this study, we used a prototype DBT system with a CsI(Tl) scintillator/CMOS flat-panel digital detector developed by the Korea Electrotechnology Research Institute (KERI). The scatter fraction, transmission factor, effective modulation transfer function (eMTF), and effective normalized noise power spectrum (eNNPS) were measured for different thicknesses and glandularities of breast-equivalent phantoms. As glandularity and thickness increased, the scatter fraction increased and the transmission fraction decreased by factors of 2.09 and 6.25, respectively. We also found that thinner breast phantoms presented higher eMTF and lower eNNPS. The eDQE of a 4 cm thick breast phantom showed only small changes between 30% and 70% glandularity (from 0.20 to 0.19 at 0.1 mm-1), whereas the eDQE at 50% glandularity increased relatively significantly between 3 cm and 5 cm thickness (from 0.16 to 0.20 at 0.1 mm-1 spatial frequency). These results indicate that eDQE was strongly affected by phantom thickness, whereas the effect of glandularity appeared to be trivial. Whole-system evaluation covering patient populations from standard to abnormal cases remains to be studied in future work.
Geometric calibration for a next-generation digital breast tomosynthesis system
William S. Ferris, Trevor L. Vent, Tristan D. Maidment, et al.
A method for geometric calibration of a next-generation tomosynthesis (NGT) system is proposed and tested. The NGT system incorporates additional geometric movements between projections beyond those of conventional DBT. These movements require precise geometric calibration to support magnification DBT and isotropic super-resolution (SR). A phantom was created to project small tungsten-carbide ball bearings (BBs) onto the detector at four different magnifications. Using a bandpass filter and template matching, a MATLAB program was written to identify the centroid locations of each BB projection on the images. An optimization algorithm calculated effective locations for the source and detector that mathematically project the BBs onto the same detector locations as found on the projection images. The average distance between the BB projections on the image and the mathematically computed projections was 0.11 mm. The effective locations of the source and detector were encoded in the DICOM file for each projection and then used by the reconstruction algorithm. Tomographic image reconstructions were performed for three acquisition modes of the NGT system; these successfully demonstrated isotropic SR, magnified SR, and oblique reconstruction.
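The calibration objective described above (find a source pose whose computed projections match the measured BB centroids) can be illustrated with a toy pinhole model; the geometry values are hypothetical, and the real system optimizes the detector pose as well:

```python
# Toy sketch of the calibration objective: project BB positions from a
# candidate source location onto the detector plane (z = 0) and measure the
# mean distance to the observed centroids. Geometry values are hypothetical.

def project(source, point):
    """Intersect the ray source -> point with the detector plane z = 0."""
    sx, sy, sz = source
    px, py, pz = point
    t = sz / (sz - pz)          # ray parameter where it crosses z = 0
    return (sx + t * (px - sx), sy + t * (py - sy))

def mean_residual(source, bbs, observed):
    """Mean distance between computed and observed BB projections (mm)."""
    d = 0.0
    for bb, obs in zip(bbs, observed):
        u, v = project(source, bb)
        d += ((u - obs[0]) ** 2 + (v - obs[1]) ** 2) ** 0.5
    return d / len(bbs)

true_source = (0.0, 0.0, 650.0)                    # mm above the detector
bbs = [(10.0, -5.0, 50.0), (-20.0, 15.0, 120.0), (5.0, 25.0, 300.0)]
observed = [project(true_source, p) for p in bbs]  # noiseless "measurements"

r_true = mean_residual(true_source, bbs, observed)        # ~0 at the optimum
r_perturbed = mean_residual((3.0, 0.0, 650.0), bbs, observed)
```

An optimizer would minimize this residual over the source (and detector) parameters; the 0.11 mm figure in the abstract is the analogous residual for the real system.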
Scatter reduction for grid-less mammography using the convolution-based image post-processing technique
Elena Marimón, Hammadi Nait-Charif, Asmar Khan, et al.
X-ray mammography examinations are strongly affected by scattered radiation, which degrades image quality and complicates diagnosis. Anti-scatter grids are currently the standard physical scatter-reduction technique in planar mammography examinations. This method is inefficient, as it increases the dose delivered to the patient, does not remove all of the scattered radiation, and increases the cost of the equipment. Alternative scatter-reduction methods, based on post-processing algorithms, are being investigated as substitutes for anti-scatter grids. Methods such as convolution-based scatter estimation have lately become attractive, as they are quicker and more flexible than pure Monte Carlo (MC) simulations. In this study we use this method, which is based on the premise that scatter in the system is spatially diffuse and can therefore be approximated by a two-dimensional low-pass convolution filter applied to the primary image. The algorithm uses the narrow pencil beam method to obtain the scatter kernel used to convolve an image acquired without an anti-scatter grid. The results show image quality comparable, in the worst case, to the grid image in terms of uniformity and contrast-to-noise ratio. Further improvement is expected when using clinically representative phantoms.
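The convolution-based correction can be sketched as follows, with a simple box blur standing in for the measured pencil-beam scatter kernel and a hypothetical scatter-to-primary ratio (SPR):

```python
# Sketch of convolution-based scatter estimation: the scatter field is
# approximated as a low-pass-filtered copy of the grid-less image scaled by
# a scatter-to-primary ratio, then subtracted. Kernel and SPR are hypothetical
# stand-ins for the measured pencil-beam kernel.

def box_blur(image, radius):
    """Simple low-pass filter standing in for the measured scatter kernel."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def scatter_correct(image, spr, radius=1):
    scatter = [[spr * v for v in row] for row in box_blur(image, radius)]
    return [[t - s for t, s in zip(tr, sr)] for tr, sr in zip(image, scatter)]

gridless = [[100, 100, 100], [100, 160, 100], [100, 100, 100]]
corrected = scatter_correct(gridless, spr=0.4)
```

Subtracting the diffuse estimate lowers the background while largely preserving sharp features, which is why the corrected image can approach the contrast of a grid image.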
Comparison of effects of dose on image quality in digital breast tomosynthesis across multiple vendors
Amy Zhao, Maira Santana, Ehsan Samei, et al.
In traditional radiography and computed tomography (CT), contrast is an important measure of image quality that, in theory, does not vary with dose. While increasing dose may increase the overall contrast-to-noise ratio (CNR), the contrast in an image should depend primarily on variation in tissue density and attenuation. We investigated the behavior of all three currently FDA-approved vendors' 3D DBT systems (Siemens, Hologic, and General Electric (GE)) using the Computerized Imaging Reference Systems (CIRS) Model 011A breast phantom and found that, for both the Siemens and Hologic systems, contrast increased with dose across multiple repeated trials. For these two systems, experimental CNR also appeared to increase above the expected CNR, which suggests that these systems introduce post-processing that manipulates contrast, and thus that DBT data cannot be used to reliably quantify tissue characteristics. Additional experimentation with both 2D mammography and 3D DBT systems from GE, in addition to the previously mentioned vendors, however, suggested that this relationship does not hold for all systems. An initial comparison of contrast versus dose showed no relationship between the two for 2D mammography, with the contrast remaining relatively constant over the dose range of 33% to 300% of the automatic exposure control (AEC) setting for all three vendors. The GE DBT system also did not exhibit increased contrast with increased dose, suggesting that the behavior of 3D DBT systems is vendor-specific.
Denoised ordered subset statistically penalized algebraic reconstruction technique (DOS-SPART) in digital breast tomosynthesis
Digital breast tomosynthesis (DBT) is a three-dimensional (3D) breast imaging modality in which projections are acquired over a limited angular span around the compressed breast and reconstructed into image slices parallel to the detector. DBT has been shown to help alleviate the tissue-overlap issues of two-dimensional (2D) mammography. Since overlapping tissues may simulate cancerous masses or obscure true cancers, this improvement is critically important for improved breast cancer screening and diagnosis. In this work, a model-based image reconstruction method is presented, and it is shown that spatial resolution in DBT volumes can be maintained at reduced dose compared with a state-of-the-art commercial reconstruction technique. Spatial resolution was measured quantitatively in phantom images and subjectively in a clinical dataset; noise characteristics were explored in a cadaver study. In both the quantitative and subjective results, image sharpness and overall image quality were maintained at reduced doses when the model-based iterative reconstruction was used to reconstruct the volumes.
Scattered radiation in DBT geometries with flexible breast compression paddles: a Monte Carlo simulation study
Scattered radiation is an undesired signal largely present in most digital breast tomosynthesis (DBT) projection images, as no physical rejection methods (i.e., anti-scatter grids) are regularly employed, in contrast to full-field digital mammography. This scatter signal can reduce the visibility of small objects in the image and potentially affect the detection of small breast lesions. Accurate scatter models are therefore needed to minimise the scattered radiation signal via post-processing algorithms. All prior work on scattered radiation estimation has assumed a rigid breast compression paddle (RP) and reported a large contribution of the paddle to the scatter signal at the detector. In this work, flexible paddles (FPs) tilting from 0° to 10° are studied using Monte Carlo simulations to analyse whether the scatter distribution differs from that of RP geometries. After reproducing the Hologic Selenia Dimensions geometry (narrow angle) with two (homogeneous and heterogeneous) compressed breast phantoms, the results illustrate that the scatter distribution recorded at the detector varies by up to 22% between RP and FP geometries (depending on the location), mainly due to the decrease in breast thickness observed for the FP. However, the relative contribution from the paddle itself (3-12% of the total scatter) remains approximately unchanged for both setups, and its magnitude depends on the distance to the breast edge.
Poster Session: New Systems and Technologies
icon_mobile_dropdown
New high-resolution imaging technology: application of advanced radar technology for medical imaging
Ashok Gorwara, Pavlo Molchanov
Image resolution for relatively long RF/microwave wavelengths is limited by diffraction (the Abbe diffraction limit). When the wavelength is longer than the object, each point of the reflecting object acts as a source of diffracted waves, but the phase front of the diffracted waves still carries information about the object's shape. An image can be recovered from a multi-frequency digital hologram by combining interferograms. In this case, the resolution of the recovered image is determined by the receiver bandwidth, the digitizing frequency, and the timing accuracy of the processor: the faster the processor, the better the image resolution. A low-frequency, non-scanning wide beam in a monopulse radar system can cover the whole object and at the same time provide high-resolution phase measurements relative to a reference beam.
Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography
Hongbo Guo, Xiaowei He, Muhan Liu, et al.
Cerenkov luminescence tomography (CLT), a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid remains a vital factor restricting the accuracy of the CLT reconstruction result. In this paper, we propose a multi-grid finite element method framework that is able to improve reconstruction accuracy. In addition, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, was applied to further improve reconstruction accuracy. The feasibility of the proposed method was evaluated in numerical simulation experiments. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately than the traditional single-grid FEM.
Accelerated x-ray scatter projection imaging using multiple continuously moving pencil beams
Coherent x-ray scatter varies with angle and photon energy in a manner dependent on the chemical composition of the scattering material, even for amorphous materials. Therefore, images generated from scattered photons can have much higher contrast than conventional projection radiographs. We are developing a scatter projection imaging prototype at the BioMedical Imaging and Therapy (BMIT) facility of the Canadian Light Source (CLS) synchrotron in Saskatoon, Canada. The best images are obtained using step-and-shoot scanning with a single pencil beam and an area detector to capture sequentially the scatter pattern for each primary-beam location on the sample. Primary x-ray transmission is recorded simultaneously using photodiodes. The technological challenge is to acquire the scatter data in a reasonable time. Using multiple pencil beams that produce partially overlapping scatter patterns reduces acquisition time but increases complexity, because a disentangling algorithm is needed to extract the data. Continuous sample motion, rather than step-and-shoot, also reduces acquisition time at the expense of introducing motion blur. With a five-beam (33.2 keV, 3.5 mm2 beam area) continuous-sample-motion configuration, a rectangular array of 12 x 100 pixels with 1 mm sampling width was acquired in 0.4 minutes (3000 pixels per minute), 38 times the speed of single-beam step-and-shoot. A system model has been developed to calculate detected scatter patterns given the material composition of the object to be imaged. Our prototype development, image acquisition of a plastic phantom, and modelling are described.
Coded aperture coherent scatter spectral imaging for assessment of breast cancers: an ex-vivo demonstration
James R. Spencer, Joshua E. Carter, Crystal K. Leung, et al.
A Coded Aperture Coherent Scatter Spectral Imaging (CACSSI) system was developed in our group to differentiate cancer and healthy tissue in the breast. The utility of the experimental system was previously demonstrated using anthropomorphic breast phantoms and breast biopsy specimens. Here we demonstrate CACSSI utility in identifying tumor margins in real time using breast lumpectomy specimens. Fresh lumpectomy specimens were obtained from Surgical Pathology with the suspected cancerous area designated on the specimen. The specimens were scanned using CACSSI to obtain spectral scatter signatures at multiple locations within the tumor and surrounding tissue. The spectral reconstructions were matched with literature form-factors to classify the tissue as cancerous or non-cancerous. The findings were then compared against pathology reports to confirm the presence and location of the tumor. The system was found to be capable of consistently differentiating cancerous and healthy regions in the breast with spatial resolution of 5 mm. Tissue classification results from the scanned specimens could be correlated with pathology results. We now aim to develop CACSSI as a clinical imaging tool to aid breast cancer assessment and other diagnostic purposes.
Mono-energy coronary angiography with a compact light source
Elena Eggl, Korbinian Mechlem, Eva Braig, et al.
While conventional x-ray tube sources reliably provide high-power x-ray beams for everyday clinical practice, the broad spectra inherent to these sources compromise diagnostic image quality. For a monochromatic x-ray source, on the other hand, the x-ray energy can be adjusted to optimal conditions with respect to contrast and dose. However, large-scale synchrotron sources impose high spatial and financial demands, making them unsuitable for clinical practice. Over the last decades, research has produced compact synchrotron sources based on inverse Compton scattering, which deliver a highly brilliant, quasi-monochromatic, tunable x-ray beam yet fit into a standard laboratory. One application that could benefit from the introduction of these sources into clinical practice is coronary angiography. Although angiography is an important and frequently applied diagnostic tool, a high number of its complications, such as renal failure, allergic reaction, or hyperthyroidism, are caused by the large amount of iodine-based contrast agent required to achieve sufficient image contrast. Here we demonstrate monochromatic angiography of a porcine heart acquired at the MuCLS, the first compact synchrotron source. By means of a simulation, the CNR in a coronary angiography image achieved with the quasi-mono-energetic MuCLS spectrum is analyzed and compared to a conventional x-ray tube spectrum. The results imply that the improved CNR achieved with a quasi-monochromatic spectrum can allow a significant reduction of iodine contrast material.
Full three-dimensional direction-dependent x-ray scattering tomography
Small-angle X-ray scattering (SAXS) detects the angular-dependent, coherently scattered X-ray photons, which provide improved contrast among different types of tissues or materials in medical diagnosis and material characterization. By combining SAXS with computed tomography (CT), coherent scattering computed tomography (CSCT) enables the detection of a spatially resolved, material-specific scattering profile inside an extended object. However, conventional CSCT cannot distinguish direction-dependent coherent scattering signals, because it assumes the materials are amorphous with isotropic scattering profiles. To overcome this issue, we propose a new CSCT imaging strategy that can resolve the three-dimensional scattering profile for each object pixel by incorporating detector movement into each CSCT projection measurement. The full reconstruction of the three-dimensional momentum transfer profile of a two-dimensional object has been successfully demonstrated. Our setup requires only a table-top X-ray source and a panel detector. The presented method demonstrates the potential to achieve low-cost, high-specificity X-ray tissue imaging and material characterization.
3D reconstruction of synapses with deep learning based on EM Images
Chi Xiao, Qiang Rao, Dandan Zhang, et al.
Recently, due to the rapid development of the electron microscope (EM) and its high resolution, stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since synaptic study is essential in neurobiology and synapses can be analyzed from EM stacks, automated routines for the reconstruction of synapses from EM images can become a very useful tool for analyzing large volumes of brain tissue and understanding the mechanisms of the brain. In this article, we propose a novel automated method for 3D reconstruction of synapses from Automated Tape-collecting Ultra-Microtome Scanning Electron Microscopy (ATUM-SEM) with deep learning. Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we utilize a deep learning method together with a segmentation algorithm to obtain synaptic clefts and improve the accuracy of the reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method that takes contextual cues of synapses into consideration to reduce the false positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, and (5) applying a FIJI plugin to show the final 3D visualization of the synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of our proposed method.
Estimating internal tissue temperature using microwave radiometry data and bioheat models
Jingyu Xu, Patrick Kelly
An ability to noninvasively measure the temperature of internal tissue regions would be valuable for applications including the detection of malignancy, inflammation, or ischemia. The output power of a microwave radiometer with an antenna at the skin surface is a weighted average of temperature in a tissue volume beneath the antenna. It is difficult, however, to translate radiometric measurements into temperature estimates for specific internal tissue regions. The chief difficulty is insufficient data: in a realistic system there are no more than a few measurements to characterize the entire volume. Efficient use must be made of available prior information together with the radiometric data in order to generate a useful temperature map. In this work we assume that we know the tissue configuration (obtained from another modality), along with arterial blood temperature, skin temperature, and nominal tissue-specific values for metabolic and blood perfusion rates, thermal conductivity, and dielectric constants. The Pennes bioheat equation can then be used to construct a nominal temperature map, and electromagnetic simulation software to construct the radiometric weighting functions for any given radiometer configuration. We show that deviations from the nominal conditions in localized regions (due, e.g., to the presence of a tumor) lead to changes in the tissue temperature that can also be approximated in terms of the nominal bioheat model. This enables the development of algorithms that use the nominal model along with radiometric data to detect areas of elevated temperature and estimate the temperature in specified tissue regions.
Optically tracked, single-coil, scanning magnetic induction tomography
Joe R. Feldkamp, Stephen Quirk
Recent work has shown the feasibility of single-coil magnetic induction tomography (MIT) for visualizing a 3D distribution of electrical conductivity in portions of the human body. Loss is measured in a single planar coil consisting of concentric circular loops while the coil is relocated to various non-redundant positions and orientations in the vicinity of the target. These loss values, together with the measured coil position and orientation, are processed by a quantitative mapping equation that enables reconstruction of an electrical conductivity image. Until now, the position of the coil had to be established by a template, which required assigning locations for the coil to visit without necessarily giving any prior consideration to target geometry. We have now added optical tracking to our existing single-coil device so that position and orientation are tracked automatically, allowing collection of coil loss data at arbitrary positions and orientations as needed. Optical tracking is accomplished via a set of IR-reflective spheres mounted on the same enclosure that supports the coil. The position of a selected sphere within the set, together with the four quaternion components specifying the orientation of the optical body, is fed to a laptop while coil loss data is streamed to the same laptop via Bluetooth. The coil center can be tracked with sub-millimeter accuracy, while the orientation angle is known to a fraction of a degree. This work illustrates the use of single-coil MIT in full position-orientation-tracked scan mode while imaging laboratory phantoms. The phantoms are based upon simple materials having biologic conductivity (< 5 S/m), including a cut of bone-in steak. The goal is not just to reconstruct an image that contains the features of the actual target, but also to return correct conductivity values for the various features within the image.
Quantitative 1D diffraction signatures during dual detector scatter VOI breast CBCT
Dual detector VOI scatter CBCT is similar to dual detector VOI CBCT except that during the high-resolution scan, the low-resolution flat panel detector is also used to capture the scattered photons. Simulations show a potential use of scatter to diagnose suspicious VOIs. Energy-integrated signals due to scatter (EISs) were computed for a specific imaging task involving a malignant lesion and labelled as a hypothetical experimental (expt) result. The signal was compared to predictions (pred) using benign and malignant lesions. The difference ΔEISs = EISs|expt - EISs|pred displayed eye-catching diffraction structure when the prediction calculation used a benign lesion. The structure occurred even when the phantom compositions differed between the prediction and experiment calculations. Since the diffraction structure is circularly symmetric because the tissues are amorphous in nature, the 2D ΔEISs patterns were transformed to 1D signals by calculating the mean ΔEISs in rings. The mean pixel values were a function of the momentum transfer argument q = 4π sin(θ/2)/λ, which ranged from 12 to 46 nm⁻¹. The 1D signals correlated well with the 2D profiles. Of particular interest were scatter signatures between q = 20 and 30 nm⁻¹, where malignant tissue is predicted to scatter more than benign fibroglandular tissue. The 1D diffraction signatures could provide a better method to diagnose a suspicious lesion during dual detector scatter VOI CBCT.
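The momentum transfer argument above can be computed directly from the scattering angle and photon energy via q = 4π sin(θ/2)/λ with λ = hc/E. A one-function sketch (the energy and angles in the test are illustrative assumptions, not values from the study):

```python
import math

HC_EV_NM = 1239.841984  # Planck constant times c, in eV*nm

def momentum_transfer(theta_deg, energy_kev):
    """Momentum transfer q = 4*pi*sin(theta/2)/lambda, in nm^-1,
    for scattering angle theta and photon energy E (lambda = hc/E)."""
    lam_nm = HC_EV_NM / (energy_kev * 1e3)
    return 4.0 * math.pi * math.sin(math.radians(theta_deg) / 2.0) / lam_nm
```

For example, at an assumed 60 keV a scattering angle of about 8 degrees already reaches q ≈ 42 nm⁻¹, consistent with the 12-46 nm⁻¹ range quoted above corresponding to small angles.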
Infrared microscopy imaging applied to obtain the index finger pad's thermoregulation curves
Laura A. Viafora, Sergio N. Torres, Wagner Ramírez, et al.
In this work, mid-wavelength infrared microscopy videos of the index finger pads of several volunteers were recorded to obtain their thermoregulation curves. The proposed non-invasive technique captures the spatial and temporal thermal information emitted from under-skin blood vessels and the finger pad irrigation system, making it possible to capture features that visual-spectrum microscopy cannot detect. Using a laboratory-prepared infrared method, several volunteers exposed their fingers to thermal stress while the infrared data were recorded. The thermoregulation curves were then estimated using standard infrared imaging and signal processing techniques. The cold/hot stress experiments showed infrared data with exponential trend curves, with different recovery slopes for each volunteer, and in some cases a two-step increasing slope in a volunteer's thermoregulation response.
Reconstruction method for x-ray imaging capsule
Daniel Rubin, Ronen Lifshitz, Omer Bar-Ilan, et al.
A colon imaging capsule has been developed by Check-Cap Ltd (C-Scan® Cap). For the procedure, the patient swallows a small amount of standard iodinated contrast agent. To create images, three rotating X-ray beams are emitted towards the colon wall. Some of the X-ray photons are backscattered from the contrast medium and the colon. These photons are collected by an omnidirectional array of energy-discriminating photon counting detectors (CdTe/CZT) within the capsule. X-ray fluorescence (XRF) and Compton backscattering photons have different energies and are counted separately by the detection electronics. The current work examines a new statistical approach for the algorithm that reconstructs the lining of the colon wall from the X-ray detector readings. The algorithm performs numerical optimization to find the solution to the inverse problem applied to a physical forward model reflecting the behavior of the system. The forward model accounts for the following major factors: the two mechanisms of dependence between the distance to the colon wall and the number of photons, directional scatter distributions, and the relative orientations between beams and detectors. A calibration procedure has been put in place to adjust the coefficients of the forward model for the specific capsule geometry, radiation source characteristics, and detector response. The performance of the algorithm was examined in phantom experiments and demonstrated high correlation between the actual phantom shape and the X-ray image reconstruction. Evaluation is underway to assess the algorithm's performance in a clinical setting.
Poster Session: Nuclear Medicine and Magnetic Resonance Imaging
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
Benjamin Spencer, Jinyi Qi, Ramsey D. Badawi, et al.
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve the reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction achieves a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than both the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
Stability of gradient field corrections for quantitative diffusion MRI
Baxter P. Rogers, Justin Blaber, E. Brian Welch, et al.
In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from the scanner isocenter, the predicted b-value differed from the intended value by 1-6% due to gradient nonlinearity, and the predicted gradient directions were in error by up to 1 degree. Over the course of one month, the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.
Attenuation correction in SPECT images using attenuation map estimation with its emission data
Meysam Tavakoli, Maryam Naji, Ali Abdollahi, et al.
Photon attenuation during SPECT imaging significantly degrades the diagnostic outcome and the quantitative accuracy of the final reconstructed images. It is well known that attenuation correction can be performed with iterative reconstruction methods if the attenuation map is available. Two approaches have been used to obtain the attenuation map: transmission-based and transmissionless techniques. In this phantom study, we evaluated the importance of attenuation correction by quantitative evaluation of the errors associated with each method. For the transmissionless approach, the attenuation map was estimated from the emission data only. An EM algorithm with an attenuation model was developed and used for attenuation correction during image reconstruction. Finally, a comparison was made between images reconstructed using our OSEM code and the analytical FBP method, before and after attenuation correction. The measurements showed that our programs are capable of reconstructing SPECT images and correcting attenuation effects. Moreover, to evaluate reconstructed image quality before and after attenuation correction, we applied a novel approach using the Image Quality Index. Attenuation correction increases the quality and quantitative accuracy of both methods; this increase is independent of activity for the quantity factor and decreases with activity for the quality factor. In the EM algorithm, regularization is necessary to obtain the true distribution of attenuation coefficients.
Evaluation of the clinical efficacy of the PeTrack motion tracking system for respiratory gating in cardiac PET imaging
Spencer Manwell, Marc J. P. Chamberland, Ran Klein, et al.
Respiratory gating is a common technique used to compensate for patient breathing motion and decrease the prevalence of image artifacts that can impact diagnoses. In this study a new data-driven respiratory gating method (PeTrack) was compared with a conventional optical tracking system. The performance of respiratory gating of the two systems was evaluated by comparing the number of respiratory triggers, patient breathing intervals and gross heart motion as measured in the respiratory-gated image reconstructions of rubidium-82 cardiac PET scans in test and control groups consisting of 15 and 8 scans, respectively. We found evidence suggesting that PeTrack is a robust patient motion tracking system that can be used to retrospectively assess patient motion in the event of failure of the conventional optical tracking system.
Poster Session: Observers, Modeling, and Phantoms
Comparison of detectability in step-and-shoot mode and continuous mode digital tomosynthesis systems
Digital tomosynthesis systems have been widely used in chest, dental, and breast imaging. Since a digital tomosynthesis system provides volumetric images from multiple projection data, the structural noise inherent in X-ray radiographs can be reduced, and signal detection performance is thus improved. Current tomosynthesis systems use two data acquisition modes: step-and-shoot mode and continuous mode. Several studies have compared the system performance of the two acquisition modes with respect to spatial resolution and contrast. In this work, we focus on signal detectability in step-and-shoot mode and continuous mode. For the evaluation, a uniform background is considered, and eight spherical objects with diameters of 0.5, 0.8, 1, 2, 3, 5, 8, and 10 mm are used as signals. Projection data with and without the spherical objects are acquired in both step-and-shoot and continuous modes, and quantum noise is added. The noisy projection data are then reconstructed with the FDK algorithm. To compare the detection performance of the two acquisition modes, we calculate the task signal-to-noise ratio (SNR) of a channelized Hotelling observer with Laguerre-Gauss channels for each spherical object. While the task-SNR values of the two acquisition modes are similar for spherical objects larger than 1 mm in diameter, step-and-shoot mode yields higher detectability for small signal sizes. The main reason for this behavior is that small signals are more affected by X-ray tube motion blur than large signals. Our results indicate that it is beneficial to use the step-and-shoot data acquisition mode to improve the detectability of small signals (i.e., less than 1 mm in diameter) in digital tomosynthesis systems.
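A channelized Hotelling observer with Laguerre-Gauss channels, as used above, can be sketched as follows. This is a generic illustration with an assumed channel width and synthetic white-noise backgrounds, not the authors' simulation; the CHO task SNR is SNR² = Δv̄ᵀ S⁻¹ Δv̄, where Δv̄ is the mean channel-output difference and S the average channel-output covariance:

```python
import numpy as np

def lg_channels(dim, a_pix, n_channels):
    """Laguerre-Gauss channel matrix (n_channels x dim*dim):
    C_j(r) = exp(-pi r^2 / a^2) * L_j(2 pi r^2 / a^2)."""
    y, x = np.mgrid[:dim, :dim] - (dim - 1) / 2.0
    u = 2.0 * np.pi * (x**2 + y**2) / a_pix**2
    gauss = np.exp(-0.5 * u)                    # exp(-pi r^2 / a^2)
    chans = []
    Lm1, Lc = np.zeros_like(u), np.ones_like(u)  # L_{-1}=0, L_0=1
    for n in range(n_channels):
        chans.append((gauss * Lc).ravel())
        # Laguerre recurrence: (n+1) L_{n+1} = (2n+1-u) L_n - n L_{n-1}
        Lm1, Lc = Lc, ((2 * n + 1 - u) * Lc - n * Lm1) / (n + 1)
    return np.array(chans)

def cho_snr(sig_present, sig_absent, channels):
    """Channelized Hotelling observer task SNR from two sample sets,
    each of shape (n_samples, n_pixels)."""
    v1 = sig_present @ channels.T
    v0 = sig_absent @ channels.T
    dv = v1.mean(0) - v0.mean(0)
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

On synthetic data (white-noise backgrounds plus a Gaussian signal), the SNR grows with signal amplitude, matching the qualitative use of task SNR as a detectability figure of merit.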
Improvements in low contrast detectability with iterative reconstruction and the effect of slice thickness
Iterative reconstruction has become a popular route for dose reduction in CT scans. One method for assessing the dose reduction of iterative reconstruction is to use a low contrast detectability phantom. The apparent improvement in detectability can be very large on these phantoms, with many studies showing dose reduction in excess of 50%. In this work, we show that much of the advantage of iterative reconstruction in this context can be explained by differences in slice thickness. After adjusting the effective reconstruction kernel by blurring filtered backprojection images to match the shape of the noise power spectrum of iterative reconstruction, we produce thick slices and compare the two reconstruction algorithms. The remaining improvement from iterative reconstruction, at least in scans with relatively uniform statistics in the raw data, is significantly reduced. Hence, the effective slice thickness in iterative reconstruction may be larger than that of filtered backprojection, explaining some of the improvement in image quality.
The effect of a finite focal spot size on location dependent detectability in a fan beam CT system
A finite focal spot size is one source of degraded resolution performance in a fan beam CT system. In this work, we investigated the effect of the finite focal spot size on signal detectability. For the evaluation, five spherical objects with diameters of 1 mm, 2 mm, 3 mm, 4 mm, and 5 mm were used. The optical focal spot size viewed at the iso-center was 1 mm (height) × 1 mm (width) with a target angle of 7 degrees, corresponding to an 8.21 mm (i.e., 1 mm / sin 7°) focal spot length. Simulated projection data were acquired using 8 × 8 sourcelets and reconstructed by Hanning-weighted filtered backprojection. For each spherical object, the detectability was calculated at (0 mm, 0 mm) and (0 mm, 200 mm) using two image quality metrics: pixel signal-to-noise ratio (SNR) and detection SNR. For all signal sizes, the pixel SNR is higher at the iso-center, since the noise variance at the off-center location is much higher than at the iso-center due to the backprojection weightings used in direct fan beam reconstruction. In contrast, the detection SNR shows similar values at both locations for the different spherical objects, except for the 1 mm and 2 mm diameter objects. Overall, the results indicate that the resolution loss caused by the finite focal spot size degrades detection performance, especially for small objects with diameters of less than 2 mm.
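The quoted 8.21 mm focal spot length follows from projecting the optical spot height through the anode target angle, L = h / sin(α). A one-line check (the function name is ours, for illustration):

```python
import math

def focal_spot_length(optical_height_mm, target_angle_deg):
    """Physical focal-spot length along the anode from the optical
    (projected) height and the anode target angle: L = h / sin(angle)."""
    return optical_height_mm / math.sin(math.radians(target_angle_deg))
```

With a 1 mm optical height and a 7-degree target angle this gives approximately 8.21 mm, matching the value above; a larger target angle shortens the physical spot for the same optical size.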
In-vivo detectability index: development and validation of an automated methodology
Taylor Brunton Smith, Justin Solomon, Ehsan Samei
The purpose of this study was to develop and validate a method to estimate patient-specific detectability indices directly from patients' CT images (i.e., "in vivo"). The method works by automatically extracting noise (NPS) and resolution (MTF) properties from each patient's CT series based on previously validated techniques. Patient images are thresholded at skin-air interfaces to form edge-spread functions, which are further binned, differentiated, and Fourier transformed to form the MTF. The NPS is likewise estimated from uniform areas of the image. These are combined with assumed task functions (reference task: a 10 mm disk lesion with contrast of -15 HU) to compute detectability indices for a non-prewhitening matched-filter model observer predicting observer performance. The results were compared to those from a previous human detection study of 105 subtle, hypo-attenuating liver lesions, using a two-alternative forced-choice (2AFC) method over 6 dose levels with 16 readers. The in vivo detectability indices estimated for all patient images were compared to the binary 2AFC outcomes with a generalized linear mixed-effects statistical model (probit link function, linear terms only, no interactions, random term for readers). The model showed that the in vivo detectability indices were strongly predictive of 2AFC outcomes (P < 0.05). A linear comparison between the human detection accuracy and the model-predicted detection accuracy (for like conditions) resulted in Pearson and Spearman correlation coefficients of 0.86 and 0.87, respectively. These data provide evidence that the in vivo detectability index could potentially be used to automatically estimate and track image quality in clinical operation.
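A non-prewhitening matched-filter detectability index of the kind used above can be sketched on a discrete frequency grid. With the observer template matched to the expected imaged signal (spectrum W·MTF), the index is d'² = (Σ W²MTF² Δf)² / Σ W²MTF²·NPS Δf; the task function and MTF in the usage below are illustrative stand-ins, not the study's measured quantities:

```python
import numpy as np

def npw_detectability(task_w, mtf, nps, df):
    """Non-prewhitening matched-filter detectability index on a 2D
    frequency grid. task_w, mtf, nps: 2D arrays on the same grid;
    df: area of one frequency bin. Template = expected imaged signal,
    so d'^2 = (sum W^2 MTF^2 df)^2 / sum W^2 MTF^2 NPS df."""
    s2 = (task_w * mtf) ** 2
    num = (s2.sum() * df) ** 2
    den = (s2 * nps).sum() * df
    return float(np.sqrt(num / den))
```

Two sanity properties follow directly: scaling the NPS by 4 halves d', and for white noise of unit magnitude d' reduces to the square root of the integrated squared signal spectrum.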
Using non-specialist observers in 4AFC human observer studies
Premkumar Elangovan, Alistair Mackenzie, David R. Dance, et al.
Virtual clinical trials (VCTs) are an emergent approach for rapid evaluation and comparison of various breast imaging technologies and techniques using computer-based modeling tools. Increasingly, four-alternative forced-choice (4AFC) virtual clinical trials are used to compare the detection performance of different breast imaging modalities. Most prior studies have used radiologists and/or physicists interchangeably as observers. However, large-scale use of statistically significant 4AFC observer studies is challenged by the individual time commitment and cost of such observers, often drawn from a limited local pool of specialists. This work investigates whether non-specialist observers can be used to supplement such studies. A team of five specialist observers (medical physicists) and five non-specialists participated in a 4AFC study containing simulated 2D-mammography and digital breast tomosynthesis (DBT) images produced using the OPTIMAM toolbox for VCTs. The images contained 4 mm irregular solid masses and 4 mm spherical targets at a range of contrast levels embedded in a realistic breast phantom background. There was no statistically significant difference between the detection performance of the medical physicists and the non-specialists (p>0.05). However, the non-specialists took longer to complete the study than their physicist counterparts, a difference that was statistically significant (p<0.05). Overall, the results from both observer groups indicate that DBT has a lower detectable threshold contrast than 2D-mammography for both masses and spheres, and both groups found spheres easier to detect than irregular solid masses.
Optimization of the simulation parameters for improving realism in anthropomorphic breast phantoms
Virtual clinical trials (VCTs) were introduced as a preclinical alternative to clinical imaging trials for the evaluation of breast imaging systems. Realism in computer models of breast anatomy (software phantoms), critical for VCT performance, can be improved by optimizing simulation parameters based on the analysis of clinical images. We optimized the simulation to improve the realism of simulated tissue compartments defined by the breast Cooper's ligaments. We utilized anonymized, previously acquired CT images of a mastectomy specimen to manually segment 205 adipose compartments. We generated 1,440 anthropomorphic breast phantoms based on octree recursive partitioning. These phantoms included variations of the simulation parameters: voxel size, number of compartments, percentage of dense tissue, and shape and orientation of the compartments. We compared the distributions of compartment volumes in the segmented CT images and the phantoms using the Kolmogorov-Smirnov (KS) distance, the Kullback-Leibler (KL) divergence, and a novel distance metric based on a weighted sum of differences in distribution descriptors. We identified the phantoms with size distributions closest to the CT images; for example, the KS distance selected the phantom with 1,000 compartments, a ligament thickness of 0.4 mm, and a skin thickness of 12 mm. We applied multilevel analysis of variance (ANOVAN) to these distance measures to identify the parameters that most significantly influence the simulated compartment size distribution. We have demonstrated an efficient method for optimizing phantom parameters to achieve a realistic distribution of adipose compartment sizes. The proposed methodology could be extended to other phantom parameters (e.g., ligament and skin thicknesses) to further improve the realism of the simulation and of VCTs.
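The KS distance and KL divergence used above to compare compartment-volume distributions can be computed from two samples as follows; this is a minimal sketch with assumed histogram binning, not the authors' exact metric:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the maximum gap
    between the two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side='right') / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side='right') / len(b)
    return float(np.abs(cdf_a - cdf_b).max())

def kl_divergence(a, b, bins=20, eps=1e-9):
    """KL divergence D(P_a || P_b) between histogram estimates of the
    two samples on shared bins (eps regularizes empty bins)."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, edges = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=edges)
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```

Both measures are zero for identical samples and grow as the phantom's compartment-size distribution drifts away from the segmented clinical one, which is what makes them usable as optimization objectives.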
Validation study of the thorax phantom Lungman for optimization purposes
Sunay Rodríguez Pérez, Nicholas W. Marshall, Lara Struelens, et al.
This work investigates the advantages and limitations of the Kyoto Kagaku thorax phantom Lungman for use in chest radiography optimization studies. First, patient survey data were gathered for chest posterior-anterior (PA) and lateral (LAT) examinations in a standard chest X-ray room over a period of one year, using a caesium iodide (CsI) based flat panel detector with automatic exposure control (AEC). The parameters surveyed included exposure index (EI), dose area product (DAP), and AEC exposure time. PA and LAT projections of the phantom were then compared to these values. Additionally, the equivalence in millimetres of poly(methyl methacrylate) (PMMA) was established for the different regions of the Lungman phantom (lungs and mediastinum). Finally, a voxel model of the Lungman phantom was developed by segmenting a volumetric dataset of the phantom acquired by CT scanning. The model was subsequently used in Monte Carlo simulations with the PENELOPE/penEasy code to calculate the energy deposited in the organs of the phantom. This enabled comparison of the phantom tissue-equivalent materials with the materials defined by ICRP 89 in terms of energy deposition. For the survey data, close agreement was found between the phantom and the median patient values (deviations ranged from 4% to 31%, with one outlier). The phantom lung region is equivalent to 89 mm to 106 mm of PMMA, depending on tube voltage. The energy deposited in the phantom materials differed from that in the ICRP-defined materials by at most 36% for AP irradiations and 49% for PA irradiations.
Method for decreasing CT simulation time of complex phantoms and systems through separation of material specific projection data
Sarah E. Divel, Soren Christensen, Max Wintermark, et al.
Computer simulation is a powerful tool in CT; however, long simulation times for complex phantoms and systems, especially when modeling many physical aspects (e.g., spectrum, finite detector and source size), hinder the ability to realistically and efficiently evaluate and optimize CT techniques. Long simulation times primarily result from tracing hundreds of line integrals through each of the hundreds of geometrical shapes defined within the phantom. When the goal is to perform dynamic simulations or test many scan protocols using a particular phantom, traditional simulation methods inefficiently and repeatedly calculate line integrals through the same set of structures although only a few parameters change in each new case. In this work, we have developed a new simulation framework that overcomes such inefficiencies by dividing the phantom into material-specific regions with the same time-attenuation profiles, acquiring and storing monoenergetic projections of the regions, and subsequently scaling and combining the projections to create equivalent polyenergetic sinograms. The simulation framework is especially efficient for the validation and optimization of CT perfusion, which requires analysis of many stroke cases and testing of hundreds of scan protocols on a realistic and complex numerical brain phantom. Using this updated framework to conduct a 31-time-point simulation with 80 mm of z-coverage of a brain phantom on two 16-core Linux servers, we reduced the simulation time from 62 hours to under 2.6 hours, a 95% reduction.
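The scaling-and-combining step described above can be sketched as follows: store one path-length sinogram per material region once, then form any polyenergetic sinogram as p = -ln(Σ_E S(E) exp(-Σ_m μ_m(E) L_m) / Σ_E S(E)). The data structures and names below are our assumptions for illustration, not the authors' framework:

```python
import numpy as np

def polyenergetic_sinogram(mono_paths, mu_tables, spectrum):
    """Combine stored material-specific path-length projections into a
    polyenergetic sinogram. mono_paths: dict material -> path-length
    sinogram [cm] (all the same shape); mu_tables: dict material ->
    attenuation coefficient per energy bin [1/cm]; spectrum: photon
    counts per energy bin."""
    shape = next(iter(mono_paths.values())).shape
    transmitted = np.zeros(shape)
    for e, s_e in enumerate(spectrum):
        line = np.zeros(shape)
        for mat, path in mono_paths.items():
            line += mu_tables[mat][e] * path
        transmitted += s_e * np.exp(-line)
    return -np.log(transmitted / np.sum(spectrum))
```

Because the path-length sinograms depend only on geometry, changing the spectrum, the contrast time-attenuation profile, or the material scaling requires only this cheap recombination, not new ray tracing; beam hardening emerges naturally (the polyenergetic signal never exceeds what the mean attenuation coefficient would predict).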
Phantom system for intraluminal x-ray imaging of the human colon
Ronen Lifshitz, Sivan Nawi-Srur, Batia Katz, et al.
The Check-Cap capsule, C-Scan Cap, performs intraluminal imaging of the human colon based on X-ray scatter processes. Basic performance of such a system can be demonstrated using various tube-like phantom objects, and from the perspective of capsule dynamics, actuators can be and have been used for capsule manipulation. Nevertheless, the actual situation of a capsule in use is extremely complex, both in terms of the imaging-target object itself and the capsule dynamics within it. To allow study of imaging system performance in a pseudo-clinical environment, a specialized phantom system has been developed. A tissue-equivalent material was developed in-house to allow simple usage and the flexibility to make a wide variety of phantoms, from simple tubes to extremely complex segments of the human colon that can demonstrate adenomas. The material itself is durable, flexible, and very similar to water in terms of X-ray scattering. Based on real abdominal CT images, real colon segments were extracted to produce 3D molds, which were used to manufacture a set of pseudo-clinical human colon segments. Regarding capsule and colon dynamics, capsule propulsion within these phantoms is based on the contents, i.e., the capsule is hydrodynamically propelled by the surrounding medium rather than by actuators. In addition, a system for generating peristaltic contractions along these colon segments has been developed; this system allows stimulation of the colon, and of the capsule within it, using arbitrary programmable contraction waves. This phantom system allows demonstration of pseudo-clinical imaging scenarios in the laboratory.
Validation of Cooper's ligament thickness in software breast phantoms
Anthropomorphic breast phantoms are important tools for a wide range of tasks, including pre-clinical validation of novel imaging techniques. In order to improve the realism of the phantoms, assessment of the simulated anatomical structures is crucial. The thickness of simulated Cooper's ligaments influences the percentage of dense tissue, as well as qualitative and quantitative properties of the simulated images. We introduce three methods (2-dimensional watershed, 3-dimensional watershed, and facet counting) to assess the thickness of the simulated Cooper's ligaments in breast phantoms. For the validation of the simulated phantoms, the thickness of the ligaments was measured and compared with the input thickness values. This included a total of 64 phantoms with nominal ligament thicknesses of 200, 400, 600, and 800 μm. The 2-dimensional and 3-dimensional watershed transformations were performed to obtain the medial skeleton of the ligaments. In the 2-dimensional watershed, the medial skeleton was found cross-section by cross-section, while in the 3-dimensional watershed the skeleton was found for the entire 3-dimensional volume. The thickness was calculated as the ratio of the total volume of the ligaments to the volume of the medial skeleton. In the facet counting method, the ligament thickness was estimated as the ratio between the estimated ligament volume and the average ligament surface area. We demonstrated that the 2-dimensional watershed technique overestimates the ligament thickness. Good agreement was found between the facet counting technique and the 3-dimensional watershed for assessing thickness. The proposed techniques are applicable to ligament thickness estimation in clinical breast images, provided segmentation of the Cooper's ligaments has been performed.
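The facet counting idea (thickness ≈ volume divided by mean surface area, since a thin sheet has two large faces) can be sketched for a voxelized sheet-like structure; this is a minimal illustration under assumed array conventions, not the authors' implementation:

```python
import numpy as np

def sheet_thickness(mask, voxel_mm):
    """Facet-counting thickness estimate for a thin sheet-like binary
    structure: thickness ~ volume / mean face area, where the mean face
    area is half the total exposed facet area (a sheet has two faces)."""
    vol = mask.sum() * voxel_mm**3
    facets = 0
    for axis in range(3):
        # interior facets: 0/1 transitions between neighboring voxels
        facets += np.abs(np.diff(mask.astype(np.int8), axis=axis)).sum()
        # facets on the array border are exposed too
        facets += mask.take(0, axis=axis).sum() + mask.take(-1, axis=axis).sum()
    area = facets * voxel_mm**2
    return float(vol / (area / 2.0))
```

For a synthetic slab of known thickness the estimate converges to the nominal value as the lateral extent grows (edge facets bias it slightly low), which mirrors the reported agreement between facet counting and the 3-dimensional watershed.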
Computer simulation of the breast subcutaneous and retromammary tissue for use in virtual clinical trials
Computer simulation of breast anatomy is an essential component of virtual clinical trials (VCTs), a preclinical approach to validating breast imaging systems. The realism of breast phantoms affects simulation studies and their acceptance among researchers. Previously, we developed a simulation of the tissue compartments defined by the hierarchy of Cooper's ligaments, based upon recursive partitioning using octrees. In this work, we optimize the simulation parameters to realistically represent the breast subcutaneous and retromammary tissue regions. As seen in clinical images, the subcutaneous and retromammary regions contain predominantly adipose tissue organized into relatively large compartments, as opposed to the predominantly glandular breast interior. To mimic this organization, we divided the phantom volume into "subcutaneous", "retromammary", and "interior" regions. Within each region, the parameters controlling the size and orientation of tissue compartments were selected separately. In this preliminary study, we varied the parameter values and calculated the corresponding average compartment volume in each region. The proposed method was evaluated using anatomic descriptors at both radiological and pathological spatial scales. We simulated the subcutaneous region as spanning 20% of the breast diameter, comparable to published analysis of breast CT images. We simulated tissue compartments with average volumes of 0.94 cm3, 0.89 cm3, and 0.31 cm3 in the subcutaneous, retromammary, and interior regions, respectively. These average volumes match the values reported from histological analysis to within 12%. Future evaluation will include a comparison of simulated and clinical parenchymal descriptors. The proposed method will be extended to automate the parameter optimization and to simulate detailed spatial variation, to further improve realism.
Improved virtual cardiac phantom with variable diastolic filling rates and coronary artery velocities
Gregory M. Sturgeon, Taylor W. Richards, E. Samei, et al.
To facilitate studies of measurement uncertainty in computed tomography angiography (CTA), we investigated the cardiac motion profile and resulting coronary artery motion utilizing innovative dynamic virtual and physical phantoms. The four-chamber cardiac finite element (FE) model developed in the Living Heart Project (LHP) served as the computational basis for our virtual cardiac phantom. This model provides deformation or strain information at high temporal and spatial resolution, exceeding that of speckle tracking echocardiography or tagged MRI. This model was extended by fitting its motion profile to left ventricular (LV) volume-time curves obtained from patient echocardiography data. By combining the dynamic patient variability from echo with the local strain information from the FE model, a series of virtual 4D cardiac phantoms were developed. Using the computational phantoms, we characterized the coronary motion and its effect on plaque imaging under a range of heart rates subject to variable diastolic function. The coronary artery motion was sampled at 248 spatial locations over 500 consecutive time frames. The coronary artery velocities were calculated as their average velocity during an acquisition window centered at each time frame, which minimized the discretization error. For the initial set of twelve patients, the diastatic coronary artery velocity ranged from 36.5 mm/s to 2.0 mm/s with a mean of 21.4 mm/s assuming an acquisition time of 75 ms. The developed phantoms have great potential in modeling cardiac imaging, providing a known truth and multiple realistic cardiac motion profiles to evaluate different image acquisition or reconstruction methods.
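The velocity definition used above (average velocity of a tracked coronary point over an acquisition window centered on a time frame) can be sketched as follows. This is an illustrative reconstruction under our own assumptions about the data layout (positions sampled per frame), not the authors' code.

```python
import numpy as np

def mean_speed_in_window(pos, t, t_center, window):
    """Mean speed (path length / elapsed time) of one tracked point over an
    acquisition window [t_center - window/2, t_center + window/2].
    pos: (n_frames, 3) positions in mm; t: (n_frames,) times in s."""
    m = (t >= t_center - window / 2) & (t <= t_center + window / 2)
    path = np.linalg.norm(np.diff(pos[m], axis=0), axis=1).sum()
    return path / (t[m][-1] - t[m][0])
```

Averaging over the window rather than differentiating frame-to-frame is what suppresses the temporal discretization error mentioned in the abstract.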
Quantification of the uncertainty in coronary CTA plaque measurements using a dynamic cardiac phantom and 3D-printed plaque models
The purpose of this study was to quantify the accuracy of coronary computed tomography angiography (CTA) stenosis measurements using newly developed physical coronary plaque models attached to a base dynamic cardiac phantom (Shelley Medical DHP-01). Coronary plaque models (5 mm diameter, 50% stenosis, and 32 mm long) were designed and 3D-printed with tissue-equivalent materials (calcified plaque with iodine-enhanced lumen). Realistic cardiac motion was achieved by fitting known cardiac motion vectors to left ventricle volume-time curves to create synchronized heart motion profiles executed by the base cardiac phantom. Realistic coronary CTA acquisition was accomplished by synthesizing corresponding ECG waveforms for gating and reconstruction purposes. All scans were acquired using a retrospective gating technique on a dual-source CT system (Siemens SOMATOM FLASH) with 75 ms temporal resolution. Multi-planar reformatted images were reconstructed along vessel centerlines and the enhanced lumens were manually segmented by 5 independent operators. On average, the stenosis measurement accuracy was a 0.9% positive bias for the motion-free condition (0 bpm). The measurement accuracy decreased monotonically to an 18.5% negative bias at 90 bpm. Contrast-to-noise ratio (CNR), vessel circularity, and segmentation conformity also decreased monotonically with increasing heart rate. These results demonstrate successful implementation of the base cardiac phantom with 3D-printed coronary plaque models, adjustable motion profiles, and coordinated ECG waveforms. They further show the utility of the model to ascertain metrics of coronary CT accuracy and image quality under a variety of plaque, motion, and acquisition conditions.
Accuracy and variability of texture-based radiomics features of lung lesions across CT imaging conditions
Texture analysis for lung lesions is sensitive to changing imaging conditions, but these effects are not well understood, in part due to a lack of ground-truth phantoms with realistic textures. The purpose of this study was to explore the accuracy and variability of texture features across imaging conditions by comparing imaged texture features to voxel-based 3D-printed textured lesions for which the true values are known. The seven features of interest were based on the Grey Level Co-Occurrence Matrix (GLCM). The lesion phantoms were designed with three shapes (spherical, lobulated, and spiculated), two textures (homogeneous and heterogeneous), and two sizes (diameter < 1.5 cm and 1.5 cm < diameter < 3 cm), resulting in 24 lesions (with a second replica of each). The lesions were inserted into an anthropomorphic thorax phantom (Multipurpose Chest Phantom N1, Kyoto Kagaku) and imaged using a commercial CT system (GE Revolution) at three CTDI levels (0.67, 1.42, and 5.80 mGy), three reconstruction algorithms (FBP, IR-2, IR-4), three reconstruction kernel types (standard, soft, and edge), and two slice thicknesses (0.6 mm and 5 mm). A repeat scan was also performed. Texture features from these images were extracted and compared to the ground-truth feature values by percent relative error. The variability across imaging conditions was calculated as the standard deviation across a given imaging condition for all heterogeneous lesions. The results indicated that the acquisition method has a significant influence on the accuracy and variability of extracted features and, as such, feature quantities are highly susceptible to imaging parameter choices. The most influential parameters were slice thickness and reconstruction kernel. A thin slice thickness and the edge reconstruction kernel overall produced more accurate and more repeatable results.
Some features (e.g., Contrast) were more accurately quantified under conditions that render higher spatial frequencies (e.g., thinner slices and sharp kernels), while others (e.g., Homogeneity) were more accurately quantified under conditions that render smoother images (e.g., higher dose and smoother kernels). Care should be exercised in relating texture features between cases with varied acquisition protocols, with the need for cross-calibration dependent on the feature of interest.
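As a concrete illustration of the GLCM features discussed above, the sketch below computes a symmetric, normalized co-occurrence matrix for a single horizontal pixel offset and two of the feature types (Contrast and Homogeneity). The offset choice, gray-level count, and normalization are our assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np

def glcm(img, levels):
    # symmetric, normalized GLCM for the (row, col+1) neighbor offset;
    # img must hold integer gray levels in [0, levels)
    P = np.zeros((levels, levels))
    np.add.at(P, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    P = P + P.T                      # make symmetric
    return P / P.sum()

def glcm_contrast(P):
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

def glcm_homogeneity(P):
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```

On a perfectly uniform ROI, Contrast is 0 and Homogeneity is 1; a noisy ROI moves both in the opposite direction, which is the qualitative behavior the accuracy comparison above relies on.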
Poster Session: Phase Contrast and Dark Field Imaging
icon_mobile_dropdown
Preclinical x-ray dark-field imaging: foreign body detection
Eva-Maria Braig, Daniela Muenzel, Alexander Fingerle, et al.
The purpose of this study was to evaluate the performance of X-ray dark-field imaging for the detection of retained foreign bodies in ex-vivo hands and feet. X-ray dark-field imaging, acquired with a three-grating Talbot-Lau interferometer, has proven to provide access to sub-resolution structures through small-angle scattering. The study was institutional review board (IRB) approved. Foreign bodies included pieces of wood and metal, which were placed in a formalin-fixated human ex-vivo hand. The samples were imaged with a grating-based interferometer consisting of a standard microfocus X-ray tube (60 kVp, 100 W) and a Varian 2520-DX detector (pixel size: 127 μm). The attenuation and dark-field signals provide complementary diagnostic information for this clinical task. With regard to the detection of wooden objects, which are clinically the most relevant, only the dark-field image revealed their locations. The signal is especially strong for dry wood, which in comparison is poorly visible or invisible in computed tomography. The simultaneous acquisition of the conventional attenuation and dark-field signals enables the detection of high-atomic-number or dense materials and wood-like or porous materials in a single X-ray scan. Our results reveal that this approach can reach a significantly improved sensitivity for the detection of foreign bodies, while easy implementation in the clinical arena is becoming feasible.
Advanced hyperspectral imaging system with edge enhancement
We developed an acousto-optic hyperspectral imaging system with edge enhancement capability. The system is an add-on to a standard light microscope. The edge-enhancement operation mode is aimed at the analysis of low-contrast microscopic samples, e.g., unstained cytological smears, histological samples, and live cells. The edge-enhancement imaging mode is based on the ability of acousto-optic tunable filters to perform band-pass spatial filtering when diffraction is detuned from the noncritical phase-matching geometry. Switching between the standard hyperspectral imaging and edge-enhancement modes is performed by means of a telecentric amplitude mask.
Weighted singular value decomposition (wSVD) to improve the radiation dose efficiency of grating-based x-ray phase contrast imaging with a photon counting detector
Xu Ji, Yongshuai Ge, Ran Zhang, et al.
The noise performance of a grating-based differential phase contrast (DPC) imaging system is strongly dependent on the fringe visibility of the grating interferometer. Since the grating interferometer is usually designed to be operated at a specific energy, deviation from that energy may lead to visibility loss and increased noise. By incorporating an energy-discriminating photon counting detector (PCD) into the system, photons with energies close to the operation energy of the interferometer can be selected, which offers the possibility of contrast-to-noise ratio (CNR) improvement. In our previous work, a singular value decomposition (SVD)-based rank-one approximation method was developed to improve the CNR of DPC imaging. However, as the noise level and energy sensitivity of the interferometer may vary significantly from one energy bin to another, the signal and noise may not be separated well using the previously proposed method, and therefore the full potential of the SVD method may not be achieved. This work presents a weighted SVD-based method, which maintains the noise reduction capability regardless of the similarity in noise level across energy bins. The optimal weighting scheme was theoretically derived, and experimental phantom studies were performed to validate the theory and demonstrate the improved radiation dose efficiency of the proposed weighted SVD method.
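The weighted rank-one SVD idea can be sketched as follows: weight each energy bin by its inverse noise level, take the rank-one (largest singular value) approximation of the stacked bin images, and unweight. This is a minimal sketch under our own assumptions (known per-bin noise, simple inverse-std weights), not the optimal weighting scheme derived in the paper.

```python
import numpy as np

def weighted_rank1_denoise(bin_images, noise_std):
    """bin_images: (n_bins, H, W) photon-counting bin images;
    noise_std: assumed-known per-bin noise levels."""
    n, H, W = bin_images.shape
    w = 1.0 / np.asarray(noise_std, dtype=float)   # down-weight noisy bins
    X = bin_images.reshape(n, -1) * w[:, None]     # weighted bin-by-pixel matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])        # keep only the dominant (signal) subspace
    return (rank1 / w[:, None]).reshape(n, H, W)
```

A noisy bin then inherits the spatial structure estimated mostly from the cleaner bins, which is the CNR-improvement mechanism the abstract describes.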
High resolution laboratory grating-based x-ray phase-contrast CT
Manuel P. Viermetz, Lorenz J. B. Birnbacher, Andreas Fehringer, et al.
Grating-based phase-contrast computed tomography (gbPC-CT) is a promising method for imaging soft tissue contrast without the need for any contrast agent. The focus of this study is an increase in spatial resolution without loss in sensitivity, to allow visualization of pathologies comparable to the convincing results obtained at the synchrotron. To improve the effective pixel size, a super-resolution reconstruction based on subpixel shifts, involving a deconvolution of the image, is applied to the differential phase-contrast data. In our study we achieved an effective pixel size of 28 μm without any drawback in terms of sensitivity or the ability to measure quantitative data.
First experiences with in-vivo x-ray dark-field imaging of lung cancer in mice
Lukas B. Gromann, Kai Scherer, Andre Yaroshenko, et al.
Purpose: The purpose of the present study was to evaluate whether x-ray dark-field imaging can help to visualize lung cancer in mice. Materials and Methods: The experiments were performed using mutant mice with high-grade adenocarcinomas. Eight animals with pulmonary carcinoma and eight control animals were imaged in radiography mode using a prototype small-animal x-ray dark-field scanner, and three of the cancerous animals additionally in CT mode. After imaging, the lungs were harvested for histological analysis. To determine their diagnostic value, x-ray dark-field and conventional attenuation images were analyzed by three experienced readers in a blinded assessment. Results, radiographic imaging: The lung nodules were visualized much more clearly on the dark-field radiographs than on conventional radiographs. The loss of air-tissue interfaces in the tumor leads to a significant loss of x-ray scattering, reflected in a strong dark-field signal change. The difference between tumor and healthy tissue in terms of x-ray attenuation is significantly less pronounced. Furthermore, the signal from overlying structures on conventional radiographs complicates the detection of pulmonary carcinoma. Results, CT imaging: The very first in-vivo CT imaging results are promising, as smaller tumors are often better visible in the dark-field images. However, the image quality is still quite low, especially in the attenuation images, due to unoptimized scanning parameters. Conclusion: We found a superior diagnostic performance of dark-field imaging compared to conventional attenuation-based imaging, especially for the detection of small lung nodules. These results support the motivation to further develop this technique and translate it toward a clinical environment.
Classification of the micromorphology of breast calcifications in x-ray dark-field mammography
Konstantin Willer, Kai Scherer, Eva Braig, et al.
The distant goal of this investigation is to reduce the number of invasive procedures associated with breast microcalcification biopsies by improving and refining conventional BI-RADS microcalcification assessment with x-ray dark-field mammography. The study was institutional review board (IRB) approved. A dedicated grating-based radiography setup (Mo target, 40 kVp, 70 mA) was used to investigate one breast mastectomy specimen and 31 biopsies with dark-field mammography. Comparing the absorption and scattering properties of microcalcification clusters gives access to information on their interior morphology at the micron scale, retrieved in a non-invasive manner. The insights into the micromorphological nature of breast calcifications were verified by comprehensive high-resolution micro-CT measurements. It was found that dark-field mammography allows a micro-structural classification of breast microcalcifications as ultra-fine, fine, pleomorphic, or coarse textured using conventional detectors; dark-field mammography is thereby highly sensitive to minor structural deviations. Finally, the determined micro-texture of the investigated microcalcifications was correlated with findings obtained from the histopathological workup. The presented results demonstrate that dark-field mammography has the potential to enhance the diagnostic validity of current microcalcification analysis - which is as yet limited to the exterior appearance of microcalcification clusters - and thereby reduce the number of invasive procedures.
Phase unwrapping with differential phase image
S. Lian, H. Kudo
Phase unwrapping is the procedure of recovering the true phase from the modulo-2π phase. It is needed for many applications, such as interferometric synthetic aperture radar, MRI, and X-ray phase imaging. Many phase unwrapping methods have been proposed for two-dimensional phase images. However, unlike conventional phase images, differential phase images are measured directly by many X-ray phase imaging systems, such as the Talbot interferometer, and these images are also wrapped. Compared with a phase image, an additional integration step is needed after unwrapping, and it is not obvious whether existing unwrapping methods yield correct results when applied directly; for example, the integration may propagate errors along the whole path. In this paper, we analyze various existing unwrapping methods on differential images, analyze how the integration method affects the final result, and then propose a technique to obtain better results, which is the first attempt at unwrapping with differential images. Several experimental results demonstrate that the proposed technique is effective.
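A minimal 1D sketch of the pipeline discussed above (unwrap the wrapped differential phase, then integrate) might look like this; real 2D methods and error-aware integration paths are more involved, and the function below is our illustration, not the authors' proposed technique.

```python
import numpy as np

def unwrap_then_integrate(dphi_wrapped, dx=1.0):
    # remove 2*pi jumps from the wrapped differential phase, then integrate
    # (cumulative sum); the result is the phase up to an additive constant
    dphi = np.unwrap(dphi_wrapped)
    return np.cumsum(dphi) * dx
```

This recovers the phase only if the true differential phase starts inside (-π, π] and varies slowly between samples; a single wrongly unwrapped sample propagates through the whole integration path, which is exactly the failure mode the paper analyzes.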
Poster Session: Radiography: X-Ray Imaging, Fluoroscopy, and Tomosynthesis
icon_mobile_dropdown
Focal spot size reduction using asymmetric collimation to enable reduced anode angles with a conventional angiographic x-ray tube for use with high resolution detectors
The high-resolution requirements for neuro-endovascular image-guided interventions (EIGIs) necessitate the use of a small focal-spot size; however, the maximum tube output limits for such small focal-spot sizes may not enable sufficient x-ray fluence after attenuation through the human head to support the desired image quality. This may necessitate the use of a larger focal spot, thus contributing to the overall reduction in resolution. A method for creating a higher-output small effective focal spot based on the line-focus principle has been demonstrated and characterized. By tilting the C-arm gantry, the anode-side of the x-ray field-of-view is accessible using a detector placed off-axis. This tilted central axis diminishes the resultant focal spot size in the anode-cathode direction by the tangent of the effective anode angle, allowing a medium focal spot to be used in place of a small focal spot with minimal losses in resolution but with increased tube output. Images were acquired of two different objects at the central axis, and with the C-arm tilted away from the central axis at 1° increments from 0°-7°. With standard collimation settings, only 6° was accessible, but using asymmetric extended collimation a maximum of 7° was accessed for enhanced comparisons. All objects were positioned perpendicular to the anode-cathode direction and images were compared qualitatively. The increasing advantage of the off-axis focal spots was quantitatively evidenced at each subsequent angle using the Generalized Measured-Relative Object Detectability metric (GM-ROD). This anode-tilt method is a simple and robust way of increasing tube output for a small field-of-view detector without diminishing the overall apparent resolution for neuro-EIGIs.
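The geometric effect described above (the effective focal-spot length in the anode-cathode direction shrinking with the tangent of the effective anode angle) follows the line-focus principle. Below is a hedged numeric sketch; the track length and angles are illustrative values of our own choosing, not the system's specifications.

```python
import numpy as np

def effective_spot_length(track_length_mm, anode_angle_deg, tilt_deg=0.0):
    # line-focus principle as stated in the abstract: the focal-spot extent in
    # the anode-cathode direction scales with tan(effective anode angle);
    # tilting the gantry toward the anode side reduces the effective angle
    theta_eff = np.radians(anode_angle_deg - tilt_deg)
    return track_length_mm * np.tan(theta_eff)
```

For example, with a nominal 8° anode angle, a 7° tilt shrinks the projected spot by the factor tan(1°)/tan(8°), roughly an eightfold reduction, at the cost of a reduced usable field of view.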
Experimental investigation of a HOPG crystal fan for x-ray fluorescence molecular imaging
Tanja Rosentreter, Bernhard Müller, Helmut Schlattl, et al.
Imaging x-ray fluorescence generally involves a conflict between the best image quality or highest sensitivity and the lowest possible radiation dose. Consequently, many experimental studies investigating the feasibility of this molecular imaging method either use monochromatic x-ray sources that are not practical in a clinical environment or accept high x-ray doses in order to maintain the advantage of high sensitivity and produce high-quality images. In this work we present an x-ray fluorescence imaging setup using a HOPG crystal fan construction consisting of a Bragg-reflecting analyzer array together with a scatter-reducing radial collimator. This method allows the use of polychromatic x-ray tubes, which are in general easily accessible, in contrast to monochromatic x-ray sources such as synchrotron facilities. Moreover, this energy-selecting device minimizes the amount of Compton-scattered photons while simultaneously increasing the fluorescence signal yield, thus significantly improving the signal-to-noise ratio. The aim is to show the feasibility of this approach by measuring the Bragg-reflected Kα fluorescence signal of an object containing an iodine solution using a large-area detector with moderate energy resolution. Considering the anisotropic energy distribution of background-scattered x-rays, we compare the detection sensitivity for two different detector angular configurations. Our results show that even for large-area detectors with limited energy resolution, iodine concentrations of 0.12% can be detected. However, the potentially long scan times, and therefore high radiation dose, need to be reduced in further investigations.
Real time implementation of anti-scatter grid artifact elimination method for high resolution x-ray imaging CMOS detectors using Graphics Processing Units (GPUs)
Scatter is one of the most important factors affecting image quality in radiography. One of the best scatter reduction methods in dynamic imaging is an anti-scatter grid. However, when used with high-resolution imaging detectors, these grids may leave grid-line artifacts whose severity increases as detector resolution improves. The presence of such artifacts can mask important details in the image and degrade image quality. We have previously demonstrated that, in order to remove these artifacts, one must first subtract the residual scatter that penetrates through the grid and then divide out a reference grid image; however, this correction must be done fast so that corrected images can be provided in real time to clinicians. In this study, a standard stationary Smit-Rontgen x-ray grid (line density: 70 lines/cm; grid ratio: 13:1) was used with a high-resolution CMOS detector, the Dexela 1207 (pixel size: 75 μm), to image anthropomorphic head phantoms. For a 15 x 15 cm field-of-view (FOV), scatter profiles of the anthropomorphic head phantoms were estimated and then iteratively modified to minimize the structured noise due to the varying grid-line artifacts across the FOV. Images of the head phantoms taken with the grid, before and after the corrections, were compared, demonstrating almost total elimination of the artifact over the full FOV. The correction is performed fast using Graphics Processing Units (GPUs): with 7-8 iterations, the total time taken to obtain the corrected image is only 87 ms, demonstrating a virtually real-time implementation of the grid-artifact correction technique.
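The two-step correction described above (subtract residual scatter, then divide out a reference grid image) can be written compactly. The sketch below is a simplified single-pass version with hypothetical inputs, omitting the iterative scatter estimation and the GPU implementation.

```python
import numpy as np

def correct_grid_lines(raw, grid_ref, scatter_est, eps=1e-6):
    """raw: image acquired with the grid in place;
    grid_ref: flat-field reference grid image (primary transmission pattern);
    scatter_est: estimated residual scatter that penetrated the grid."""
    # step 1: remove residual scatter; step 2: divide out the grid pattern
    return (raw - scatter_est) / np.maximum(grid_ref, eps)
```

If the scatter estimate is accurate, division by the reference pattern removes the grid lines exactly; an inaccurate scatter estimate leaves residual line structure, which is why the full method refines the estimate iteratively.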
Quantitative flow and velocity measurements of pulsatile blood flow with 4D-DSA
Gabe Shaughnessy, Carson Hoffman, Sebastian Schafer, et al.
Time-resolved 3D angiographic data from 4D DSA provide a unique environment to explore physical properties of blood flow. Utilizing the pulsatility of the contrast waveform, the Fourier components can be used to track the waveform motion through vessels, and areas of strong pulsatility are identified through the FFT power spectrum. Using this method, 4D-DSA flow measurements agree within 7.6% and 6.8% RMSE with ICA PCVIPR and phantom flow-probe validation measurements, respectively. The availability of velocity and flow information with a fast acquisition could provide a more quantitative approach to treatment planning and evaluation in interventional radiology.
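The power-spectrum screening step mentioned above can be sketched in a few lines: after removing the mean, the fraction of temporal power falling in a band around the cardiac frequency flags pulsatile voxels. The frequency band, data layout, and names here are our assumptions, not the authors' implementation.

```python
import numpy as np

def pulsatile_fraction(curves, fs, f_cardiac, half_bw=0.2):
    """curves: (n_voxels, n_frames) contrast time curves; fs: frame rate (Hz).
    Returns, per voxel, the fraction of power near the cardiac frequency."""
    sig = curves - curves.mean(axis=1, keepdims=True)   # drop the DC component
    power = np.abs(np.fft.rfft(sig, axis=1)) ** 2
    freqs = np.fft.rfftfreq(curves.shape[1], d=1.0 / fs)
    band = np.abs(freqs - f_cardiac) <= half_bw
    return power[:, band].sum(axis=1) / power.sum(axis=1)
```

Thresholding this fraction selects voxels whose contrast waveform carries a strong cardiac component, i.e., the regions where waveform tracking is reliable.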
Development of a prototype chest digital tomosynthesis R/F system
Digital tomosynthesis has the advantage of a low radiation dose compared to conventional computed tomography (CT) by utilizing a small number of projections (~80) acquired over a limited angular range. It can produce 3D volumetric data, although the data may contain some artifacts due to incomplete sampling. Based upon these attractive merits, we developed a prototype digital tomosynthesis R/F system, especially for applications in chest imaging. The prototype chest digital tomosynthesis (CDT) R/F system contains an X-ray tube with a high-power R/F pulse generator, a flat-panel detector, an R/F table, electromechanical radiographic subsystems including a precise motor controller, and a reconstruction server. For image reconstruction, users can select between analytic and iterative methods. Reconstructed images of the Catphan700 and LUNGMAN phantoms clearly and rapidly described the internal structures of the phantoms using graphics processing unit (GPU) programming. Contrast-to-noise ratio (CNR) values of the CTP682 module were higher in images using the simultaneous algebraic reconstruction technique (SART) than in those using filtered backprojection (FBP) for all materials, by factors of 2.60, 3.78, 5.50, 2.30, 3.70, and 2.52 for air, lung foam, low-density polyethylene (LDPE), Delrin (acetal homopolymer resin), bone 50% (hydroxyapatite), and Teflon, respectively. Total elapsed times for producing the 3D volume were 2.92 sec and 86.29 sec on average for FBP and SART (20 iterations), respectively. The times required for reconstruction were clinically feasible. Moreover, the total radiation dose from the system (5.68 mGy) demonstrates a significantly lower radiation dose compared to a conventional chest CT scan. Consequently, our prototype tomosynthesis R/F system represents an important advance in digital tomosynthesis applications.
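The CNR comparisons quoted above follow a standard definition (ROI/background mean difference over background noise); a minimal sketch is given below. Note that the exact ROI definitions used in the study may differ.

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    # contrast-to-noise ratio: |mean_ROI - mean_background| / std_background
    return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()
```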
Localization of cardiac volume and patient features in inverse geometry x-ray fluoroscopy
The scanning-beam digital x-ray (SBDX) system is an inverse geometry x-ray fluoroscopy technology that performs real-time tomosynthesis at planes perpendicular to the source-detector axis. The live display is a composite image which portrays sharp features (e.g. coronary arteries) extracted from a 16 cm thick reconstruction volume. We present a method for automatically determining the position of the cardiac volume prior to acquisition of a coronary angiogram. In the algorithm, a single non-contrast frame is reconstructed over a 44 cm thickness using shift-and-add digital tomosynthesis. Gradient filtering is applied to each plane to emphasize features such as the cardiomediastinal contour, diaphragm, and lung texture, and then sharpness vs. plane position curves are generated. Three sharpness metrics were investigated: average gradient in the bright field, maximum gradient, and the number of normalized gradients exceeding 0.5. A model correlating the peak sharpness in a non-contrast frame and the midplane of the coronary arteries in a contrast-enhanced frame was established using 37 SBDX angiographic loops (64-136 kg human subjects, 0-30° cranial-caudal). The average gradient in the bright field (primarily lung) and the number of normalized gradients >0.5 each yielded peaks correlated to the coronary midplane. The rms deviation between the predicted and true midplane was 1.57 cm. For a 16 cm reconstruction volume and the 5.5-11.5 cm thick cardiac volumes in this study, midplane estimation errors of 2.25-5.25 cm were tolerable. Tomosynthesis-based localization of cardiac volume is feasible. This technique could be applied prior to coronary angiography, or to assist in isocentering the patient for rotational angiography.
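The plane-sharpness localization described above can be illustrated with the simplest flavor of the three metrics (a mean gradient magnitude per plane); the demo stack, blur model, and names below are our own illustration, not SBDX processing code.

```python
import numpy as np

def sharpness_curve(volume):
    # mean in-plane gradient magnitude for each reconstructed plane
    gy, gx = np.gradient(volume, axis=(1, 2))
    return np.sqrt(gy ** 2 + gx ** 2).mean(axis=(1, 2))

def focus_plane(volume):
    # plane index where in-plane structures are sharpest (in focus)
    return int(np.argmax(sharpness_curve(volume)))

# demo: synthetic 7-plane stack, sharpest (unblurred) content at plane 3
rng = np.random.default_rng(5)
base = rng.random((64, 64))
def _blur(img, n):
    for _ in range(n):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img
stack = np.stack([_blur(base, 2 * abs(k - 3)) for k in range(7)])
```

The peak of the sharpness-vs-plane curve then serves as the anchor for the midplane correlation model described in the abstract.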
X-ray vector radiography of a human hand
Christoph Jud, Eva Braig, Martin Dierolf, et al.
Grating based x-ray phase-contrast reveals differential phase-contrast (DPC) and dark-field contrast (DFC) on top of the conventional absorption image. X-ray vector radiography (XVR) exploits the directional dependence of the DFC and yields the mean scattering strength, the degree of anisotropy and the orientation of scattering structures by combining several DFC-projections. Here, we perform an XVR of an ex vivo human hand specimen. Conventional attenuation images have a good contrast between the bones and the surrounding soft tissue. Within the bones, trabecular structures are visible. However, XVR detects subtler differences within the trabecular structure: there is isotropic scattering in the extremities of the phalanx in contrast to anisotropic scattering in its body. The orientation changes as well from relatively random in the extremities to an alignment along the longitudinal trabecular orientation in the body. In the other bones measured, a similar behavior was found. These findings indicate a deeper insight into the anatomical configuration using XVR compared to conventional radiography. Since microfractures cause a discontinuous trabecular structure, XVR could help to detect so-called radiographically occult fractures of the trabecular bones.
Performance evaluation of algebraic reconstruction technique (ART) for prototype chest digital tomosynthesis (CDT) system
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) under variations in key imaging parameters, quality metrics were evaluated using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were obtained from a total of 41 projection images over an angular range of ±20°. We evaluated the contrast-to-noise ratio (CNR) and artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections; in this study, the proper relaxation parameter values for the zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, the BP initial guess for the ART method could provide better image quality than the ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range of the CDT system.
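The ART update with a relaxation parameter and an initial guess, as studied above, has a compact generic (Kaczmarz-style) form. This is a small dense-matrix sketch under our own assumptions, not the CDT system's implementation.

```python
import numpy as np

def art(A, b, lam=0.4, n_iter=20, x0=None):
    """Relaxed ART for A x = b. lam is the relaxation parameter; x0 plays the
    role of the initial guess (zero image, or a backprojection-style start)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    row_sq = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):           # one sweep over all rays
            if row_sq[i] > 0:
                # relaxed projection of x onto the hyperplane A[i] . x = b[i]
                x += lam * (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x
```

lam = 1 is the classical full projection; values below 1 damp the per-ray correction, trading convergence speed for noise robustness, which mirrors the relaxation-parameter tuning reported in the study.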
Dental non-linear image registration and collection method with 3D reconstruction and change detection
Mark Rahmes, Dean Fagan, George Lemieux
The capability of a software algorithm to automatically align same-patient dental bitewing and panoramic x-rays over time is complicated by differences in collection perspectives. We successfully used image correlation with an affine transform for each pixel to discover common image borders, followed by a non-linear homography perspective adjustment to closely align the images. However, significant improvements in image registration could be realized if images were collected from the same perspective, thus facilitating change analysis. The perspective differences due to current dental image collection devices are so significant that straightforward change analysis is not possible. To address this, a new custom dental tray could be used to provide the standard reference needed for consistent positioning of a patient’s mouth. Similar to sports mouth guards, the dental tray could be fabricated in standard sizes from plastic and use integrated electronics that have been miniaturized. In addition, the x-ray source needs to be consistently positioned in order to collect images with similar angles and scales. Solving this pose correction is similar to solving for collection angle in aerial imagery for change detection. A standard collection system would provide a method for consistent source positioning using real-time sensor position feedback from a digital x-ray image reference. Automated, robotic sensor positioning could replace manual adjustments. Given an image set from a standard collection, a disparity map between images can be created using parallax from overlapping viewpoints to enable change detection. This perspective data can be rectified and used to create a three-dimensional dental model reconstruction.