Proceedings Volume 11595

Medical Imaging 2021: Physics of Medical Imaging


Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 29 March 2021
Contents: 25 Sessions, 171 Papers, 175 Presentations
Conference: SPIE Medical Imaging 2021
Volume Number: 11595

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 11595
  • Keynote: Radiomics
  • Welcome and Introduction
  • CT: Optimization and Image Quality
  • Photon Counting: Detectors and Systems
  • Machine Learning in Imaging Physics
  • Phantoms and Lesion Insertion
  • Image Guided Intervention
  • Molecular Imaging and MRI
  • Detector Physics
  • Spectral CT
  • Breast Tomosynthesis
  • Phase Contrast and Fluorescence Imaging
  • CT: Reconstruction
  • Photon Counting CT
  • X-ray Imaging: Dosimetry, Scatter, and Motion
  • Dual-energy: Optimization and Clinical Application
  • Posters: Algorithm Development
  • Posters: Computed Tomography
  • Posters: X-ray Imaging
  • Posters: Image Reconstruction
  • Posters: Machine Learning Applied to Imaging Physics
  • Posters: Photon Counting and Phase Contrast Imaging
  • Posters: Breast Imaging
  • Posters: Imaging Method
Front Matter: Volume 11595
Front Matter: Volume 11595
This PDF file contains the front matter associated with SPIE Proceedings Volume 11595, including the Title Page, Copyright Information, and Table of Contents.
Keynote: Radiomics
Radiomics: transforming standard imaging into mineable data for diagnostic and theragnostic applications
This Conference Presentation, “Radiomics: transforming standard imaging into mineable data for diagnostic and theragnostic applications,” was recorded for the Medical Imaging 2021 Digital Forum.
Welcome and Introduction
Welcome and Introduction to SPIE Conference 11595
Welcome and Introduction to SPIE Medical Imaging conference 11595: Physics of Medical Imaging
CT: Optimization and Image Quality
Multi-factorial optimization of imaging parameters for quantifying coronary stenosis in cardiac CT
The accuracy and variability of quantifications in computed tomography angiography (CTA) are affected by imaging parameters and patient attributes. While patient attributes usually cannot be altered for a scan, imaging parameters can be optimized to improve the accuracy and precision of the procedure. This study developed a mathematical approach to find the optimal controllable parameters that maximize an ideal estimator, namely the estimability index (e′), for quantifying coronary stenosis in cardiac CTA. We applied one-hot encoding to the categorical features and normalized the numerical features to the range 0 to 1. We applied a ridge regression model to the transformed data with polynomial feature transforms of degrees 1 to 5. A grid search identified the polynomial model with the highest accuracy in predicting the e′ value. We then evaluated the influence of each parameter, and of its permutation, on the accuracy of the model. We formulated the corresponding optimization problem as maximization of e′, where the decision parameters are subject to linear constraints defining the upper and lower bounds of each decision variable. We mathematically calculated the deterministic and probabilistic optimal controllable parameters across a range of deterministic and probabilistic uncontrollable parameters. The results showed that reduced noise (less than 17 HU) and a sharper MTF f50 (greater than 0.45 mm⁻¹) maximize e′. Moreover, cardiac motion velocity had a greater impact on the deviation of the optimal decision variables than percent stenosis, vessel radius, plaque material, or lumen contrast.
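As a rough illustration of the regression-and-optimization workflow this abstract describes, the sketch below fits a ridge model on polynomial features of degree 1 to 5 selected by grid search, then maximizes the predicted e′ over bounded controllable parameters. The feature set, parameter ranges, and synthetic data are assumptions for illustration only, not the study's actual pipeline or results.

```python
# Sketch: ridge regression on polynomial features + constrained maximization of e'.
# Feature names, ranges, and data are hypothetical.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures

# Toy data: columns = [noise_HU, mtf_f50, motion_velocity]; target = e'
rng = np.random.default_rng(0)
X = rng.uniform([5.0, 0.2, 0.0], [40.0, 0.7, 60.0], size=(200, 3))
e_prime = 2.0 - 0.03 * X[:, 0] + 1.5 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.05, 200)

# Ridge model on polynomial transforms of degree 1..5, chosen by grid search
pipe = Pipeline([("scale", MinMaxScaler()),
                 ("poly", PolynomialFeatures()),
                 ("ridge", Ridge())])
search = GridSearchCV(pipe, {"poly__degree": [1, 2, 3, 4, 5]}, cv=5)
search.fit(X, e_prime)

# Maximize predicted e' over the controllable parameters (noise, MTF f50)
# with box bounds, holding the uncontrollable motion velocity fixed.
motion = 20.0
res = minimize(lambda p: -search.predict([[p[0], p[1], motion]])[0],
               x0=[20.0, 0.4], bounds=[(5.0, 40.0), (0.2, 0.7)])
print("optimal [noise, MTF f50]:", res.x, "predicted e':", -res.fun)
```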
A method for optimizing the x-ray tube current in ROI imaging using a simulation framework for radiation dose and image quality calculation for arbitrary fluence distributions
Sascha Manuel Huck, George S. K. Fung, Katia Parodi, et al.
In previous work, we presented concepts for dynamic beam attenuators (DBAs) allowing for substantial dose reductions. So far, we have used a tube current modulation (TCM) scheme in which the tube current is proportional to the square root of the maximum object attenuation in each projection. As DBAs show particularly beneficial behavior in region-of-interest (ROI) imaging, the question arose whether the employed TCM method would still be the most meaningful in this case. We present a simulation framework calculating i) the dose distribution in the object and ii) the image quality in the ROI of the reconstructed images for a given primary fluence: the dose to every voxel was calculated from individual Monte Carlo simulations for every beamlet. From this we can approximate the effective dose, which incorporates tissue-specific weighting factors describing the stochastic health risk, for any fluence distribution. Using the same fluence distribution, the image variance according to the propagated fluence can be calculated for a given ROI. We employed a homogeneous phantom with a centered ROI and a female thorax phantom in which the ROI is defined by either the heart or the spine. Finally, we optimized the tube current according to the product of the mean image variance in the ROI and the patient dose. The tube current obtained was then compared with a heuristic square-root TCM (hsqTCM) method. The optimized TCM matches the hsqTCM well for the idealized case of a central ROI in an elliptical, homogeneous phantom. For a more complex case with an off-center ROI, the optimized TCM differs substantially from the hsqTCM rule, offering an additional dose reduction of up to 30%.
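The objective used here (product of mean ROI image variance and patient dose) lends itself to a compact numerical sketch. In the toy version below, each view's variance contribution is modeled as v_i / I_i and its dose as d_i * I_i; the weights v and d are hypothetical stand-ins for the per-beamlet Monte Carlo results.

```python
# Sketch: choose per-view tube currents I_i minimizing (mean ROI variance) x (dose).
# v_i, d_i are hypothetical per-view weights standing in for Monte Carlo results.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_views = 180
v = rng.uniform(0.5, 2.0, n_views)   # noise weight each view contributes to the ROI
d = rng.uniform(0.8, 1.2, n_views)   # dose per unit tube current for each view

def cost(I):
    variance = np.mean(v / I)        # mean image variance in the ROI
    dose = np.sum(d * I)             # total patient dose
    return variance * dose

res = minimize(cost, np.ones(n_views), bounds=[(0.1, 10.0)] * n_views)
tcm = res.x / res.x.mean()           # normalized modulation curve
# The optimum follows I_i ~ sqrt(v_i / d_i), a generalized square-root rule.
print("relative current range: %.2f to %.2f" % (tcm.min(), tcm.max()))
```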
A framework to simulate CT images with tube current modulation
Giavanna Jadick, Ehsan Abadi, Brian Harrawood, et al.
Tube current modulation (TCM) is routinely implemented in clinical CT imaging. By modifying the x-ray tube current as a function of patient attenuation, image quality can be made more consistent and radiation dose can be better managed. Optimal TCM settings depend on the scan protocol and the physical characteristics of the patient. This study was undertaken to develop a realistic TCM model and integrate it into a scanner-specific CT simulation platform, allowing for faithful emulation of CT scans with TCM and optimization of TCM methods. The developed model adjusts the mAs for each projection based on attenuation estimated from two localizers, a strength parameter similar to the implementation of one major CT manufacturer (Siemens CARE Dose 4D), and imposed tube power limits. To demonstrate the utility of this framework, a virtual imaging trial was conducted to characterize image quality as a function of TCM strength. Three XCAT phantoms (BMIs at the 25th, 50th, and 75th percentiles) were imaged at five TCM strengths, using a CT configuration based on the geometry and physics of a commercial scanner (Siemens Definition Flash). Noise magnitude was measured in each reconstructed CT slice, average noise magnitude was calculated in the lungs and bones, and RMSE, PSNR, SSIM, and contrast were computed in the lungs. The TCM strength that minimized noise increased with phantom BMI, suggesting that optimal TCM settings will differ for patients of different sizes. This study demonstrates the first incorporation of TCM into a scanner-specific virtual imaging trial platform and its application to patient-specific optimization.
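A minimal sketch of an attenuation-driven TCM rule of the kind described, with a strength exponent and tube-limit clipping; the exponent form, reference values, and limits are illustrative assumptions, not the vendor's algorithm.

```python
# Sketch: attenuation-driven TCM with a "strength" parameter and tube limits.
import numpy as np

def tcm_mAs(attenuation, ref_attenuation, ref_mAs, strength=0.5,
            mAs_min=20.0, mAs_max=600.0):
    """mAs per projection: scales as (A / A_ref)**strength, then clipped."""
    mAs = ref_mAs * (attenuation / ref_attenuation) ** strength
    return np.clip(mAs, mAs_min, mAs_max)

# Example: attenuation estimated from two localizers over one rotation
angles = np.linspace(0.0, 2.0 * np.pi, 360)
attenuation = 5.0 + 3.0 * np.abs(np.sin(angles))   # lateral views attenuate more
curve = tcm_mAs(attenuation, ref_attenuation=5.0, ref_mAs=200.0, strength=0.7)
print("mAs range over rotation: %.0f to %.0f" % (curve.min(), curve.max()))
```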
A web-based software platform for efficient and quantitative CT image quality assessment and protocol optimization
Mingdong Fan, Theodore Thayib, Liqiang Ren, et al.
The channelized Hotelling observer (CHO), which has been shown to correlate well with human observer performance in many clinical CT tasks, has great potential to become the method of choice for objective image quality assessment. However, the use of the CHO in clinical CT is still quite limited, mainly due to the complexity of its measurement and calculation in practice and the lack of access to an efficient, validated software tool for most clinical users. In this work, a web-based software platform for CT image quality assessment and protocol optimization (CTPro) is introduced. A validated CHO tool, along with other common image quality assessment tools, was made readily accessible through this web platform for clinical users and researchers without the need to install additional software. An example of its application to the evaluation of convolutional-neural-network (CNN)-based denoising is demonstrated.
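For readers unfamiliar with the CHO, the following compact sketch computes a channelized Hotelling detectability index from signal-present and signal-absent image ensembles; simple difference-of-Gaussian channels stand in for the Gabor or Laguerre-Gauss channels typically used, and this is not the validated CTPro tool itself.

```python
# Sketch: channelized Hotelling observer with difference-of-Gaussian channels.
import numpy as np

def dog_channels(size, n_channels=4):
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    r2 = x ** 2 + y ** 2
    chans = []
    for j in range(n_channels):
        s1, s2 = 2.0 * 1.67 ** j, 2.0 * 1.67 ** (j + 1)
        chans.append(np.exp(-r2 / (2 * s2 ** 2)) - np.exp(-r2 / (2 * s1 ** 2)))
    return np.stack([c.ravel() / np.linalg.norm(c) for c in chans])  # (C, N)

def cho_dprime(signal_imgs, noise_imgs):
    U = dog_channels(signal_imgs.shape[-1])
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T   # channel outputs
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))                # intra-class covariance
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))        # Hotelling template
    ts, tn = vs @ w, vn @ w
    return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))

rng = np.random.default_rng(2)
sig = np.zeros((64, 64)); sig[28:36, 28:36] = 1.0          # low-contrast square
noise = rng.normal(0.0, 1.0, (200, 64, 64))
print("d' =", cho_dprime(noise[:100] + 0.5 * sig, noise[100:]))
```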
A Bayesian approach for CT perfusion parameters estimation with imperfect measurement
Tao Sun, Roger Fulton
Perfusion CT imaging is commonly used for the rapid assessment of patients presenting with symptoms of acute stroke. Maps of perfusion parameters such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT) derived from the scan data provide crucial information for stroke diagnosis and treatment decisions. Most vendors implement singular value decomposition (SVD)-based methods on their scanners to calculate these parameters. However, SVD-based methods are known to handle imperfect scans poorly. For example, increasing the acquisition interval or decreasing the scan duration may introduce a bias in the estimated perfusion parameters. In this work, we propose a Bayesian inference algorithm that tolerates imperfect scan conditions better than the conventional method and is able to derive the uncertainty of a given perfusion parameter. We apply the variational technique to the inference problem, which becomes an expectation-maximization problem. The probability distribution (with Gaussian mean and variance) of each estimated parameter can be obtained. We performed evaluations in simulation studies with both full and incomplete data. The proposed method obtains much less bias in estimation than the conventional method, while additionally providing the degree of uncertainty in the measurement.
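For context, the conventional baseline mentioned here can be sketched as truncated-SVD deconvolution of the tissue curve by the arterial input function (AIF), with CBF taken from the peak of the scaled residue function. The curves and truncation threshold below are synthetic illustrations; the Bayesian method itself is not reproduced.

```python
# Sketch: truncated-SVD deconvolution (the conventional baseline) on synthetic curves.
import numpy as np

dt = 1.0                                 # s, acquisition interval
t = np.arange(0.0, 60.0, dt)
aif = (t / 8.0) ** 3 * np.exp(-t / 3.0)  # gamma-variate arterial input function
residue = np.exp(-t / 4.0)               # true residue function (MTT = 4 s)
cbf = 0.8                                # true flow (arbitrary units)
tissue = dt * np.convolve(aif, cbf * residue)[: len(t)]

# Lower-triangular convolution matrix A[i, j] = dt * aif[i - j]
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(len(t))]
                   for i in range(len(t))])
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.1 * s.max(), 1.0 / s, 0.0)   # truncation suppresses noise
k = Vt.T @ np.diag(s_inv) @ U.T @ tissue            # estimate of cbf * residue
print("estimated CBF: %.2f (true %.2f)" % (k.max(), cbf))
```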
End-to-end modeling for predicting and estimating radiomics: application to gray level co-occurrence matrices in CT
While radiomics models are finding increased use in computer-aided diagnostics and as imaging biomarkers for inference and discovery, their utility in computed tomography (CT) is limited by the variability of the image properties produced by different CT scanners, imaging protocols, patient anatomy, and an increasingly diverse range of reconstruction and post-processing software. While these effects can be mitigated with careful data curation and standardization of protocols, this is impractical for diverse sources of image data. In this work, we propose to generalize traditional end-to-end imaging system models to include radiomics calculation as an explicit stage. Such a model not only permits prediction of the undesirable variability of radiomics, but also forms a basis for inverting the process to estimate the true underlying radiomics. This framework has the potential to provide for standardization of radiomics across imaging conditions, permitting more widespread application of radiomics models; larger, more diverse image databases; and improved diagnoses and inferences based on those standardized metrics. We apply this framework to a large class of popular radiomics based on the gray level co-occurrence matrix, under imaging conditions that are well described by traditional linear systems approaches as well as nonlinear systems for which traditional analytic models do not apply.
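A hedged example of the radiomics family this abstract targets: gray level co-occurrence matrix (GLCM) features computed with scikit-image on a quantized patch. The end-to-end system model itself is not reproduced here.

```python
# Sketch: GLCM radiomics features on a synthetic, gray-level-quantized CT patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(12)
patch = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # quantized patch

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for feat in ("contrast", "homogeneity", "correlation"):
    print(feat, graycoprops(glcm, feat).mean())
```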
Photon Counting: Detectors and Systems
Photon counting CT in a C-arm interventional system: hardware development and artifact corrections
Xu Ji, Mang Feng, Ran Zhang, et al.
This work reports the development of a C-arm photon-counting detector (PCD) CT system for evaluating the potential clinical utility of PCD-CT for intraoperative imaging. A dual-threshold CdTe-based strip PCD was mounted on a Siemens Artis Zee C-arm interventional system. A new geometric calibration method was developed to correct for the geometric distortion of the C-arm system during rotation. Experimental results show that, under clinically relevant conditions, the C-arm PCD-CT system can reliably and reproducibly generate high-quality MDCT-like images without any noticeable geometric distortion or banding artifacts.
Accurate characterization of metal implants and human materials using novel proton counting detector for Monte Carlo dose calculation in proton therapy
Owing to poor characterization of implants and adjacent human tissues, the presence of metal implants has been shown to be a risk factor for poor clinical outcomes in proton therapy. In this project we developed a way of characterizing implant and human materials in terms of water-equivalent thickness (WET) and relative stopping power (RSP) using a novel proton counting detector. We tracked each proton using a fast spectral imaging camera (AdvaPIX-TPX3) which, operated in energy mode, measures the collected energy per voxel to derive the deposited energy along the particle track across the voxelated sensor. We considered three scenarios: sampling of the WET of a CIRS M701 Adult Phantom (CMAP) at different locations; measurements of energy perturbations in the CMAP implanted with metal rods; and sampling of the WET of a more complex spine phantom. WET and RSP information was extracted from energy spectra at positions along the central axis by using the shift in the most probable energy (MPE) from the reference energy (either the initial incident energy or the energy without a metal implant). Measurements were compared to TOPAS simulation results. The measured WET of the CMAP ranged from 18.63 to 25.23 cm depending on the sampling location, which agreed with TOPAS simulation results within 1.6%. The RSPs of metals from CMAP perturbation measurements were determined as 1.97, 2.98, and 5.44 for Al, Ti, and CoCr, respectively, which agreed with TOPAS within 2.3%. RSPs for the materials of the more complex spine phantom were 1.096, 1.309, and 1.001 for acrylic, PEEK, and PVC, respectively.
In summary, this work has shown a method to accurately characterize the RSPs of metal and human materials in a CMAP implanted with metals and in a complex spine phantom. Using the data obtained by the proposed method, it may be possible to validate RSP maps provided by conventional photon computed tomography techniques.
Compton coincidence in silicon photon-counting CT detectors
Compton interactions amount to a significant fraction of the registered counts in a silicon detector. In a Compton interaction, only part of the photon energy is deposited, and a single incident photon can result in multiple counts unless tungsten shielding is used. Silicon has proven to be a competitive material for photon-counting CT detectors, but to improve performance further, it is desirable to use coincidence techniques to identify Compton-scattered photons and reconstruct their energies. In a detector with no tungsten shielding, incident photons can interact through a series of interactions. By using information about the position and energy of each interaction, probability-based methods can be used to estimate the incident photon energy. In this work we present a framework of likelihood functions that can be used to estimate the incident photon energy in a silicon detector. For a low count-rate case with one incident photon per time frame, we show that the proposed likelihood framework can estimate the incident photon energy with a mean error of -0.26 keV and an RMS error of 1.14 keV for an ideal case with a perfect detector. For a non-ideal case including spatial and energy resolution, the corresponding results were -0.08 keV and 0.66 keV, respectively. The fraction of correctly identified interaction chains, with respect to the interaction order, was 99.2% in the ideal case and 97.3% in the non-ideal case.
High resolution, full field-of-view, whole body photon-counting detector CT: system assessment and initial experience
Kishore Rajendran, Jeff Marsh, Martin Petersilka, et al.
Computed tomography (CT) using photon-counting detectors (PCDs) offers dose-efficient ultra-high-resolution imaging, high iodine contrast-to-noise ratio, and multi-energy and material decomposition capabilities. We have previously demonstrated the potential benefits of PCD-CT using phantom, cadaver, and human studies on a prototype PCD-CT system. This system, however, had several limitations in terms of scan field of view (FOV) and longitudinal coverage. Recently, a full-FOV (50 cm) PCD-CT system with wider longitudinal coverage and higher spatial resolution (0.15 mm detector pixels), capable of human scanning at clinical dose and dose rate, was installed in our lab. In this work, we share our initial experience with the new PCD-CT system and compare its performance with that of a state-of-the-art third-generation dual-source CT scanner. Basic image quality was assessed using an ACR CT accreditation phantom, high-resolution performance using an anthropomorphic head phantom, and multi-energy and material decomposition performance using a multi-energy CT phantom containing various concentrations of iodine and hydroxyapatite. Finally, we demonstrate the feasibility of high-resolution, full-FOV PCD-CT imaging for improved delineation of anatomical and pathological features in a patient with pulmonary nodules.
Characterization of a GaAs photon counting detector for mammography
B. Ghammraoui, A. Makeev, S. Gkoumas, et al.
The purpose of this study was to evaluate the performance of mammography images acquired with a prototype gallium arsenide (GaAs) photon counting detector. The contrast-to-noise ratio (CNR) was measured using aluminum/PMMA phantoms of different thicknesses. In addition, microcalcification detection accuracy was evaluated in a receiver operating characteristic (ROC) study with three observers reading an ensemble of images for each case. For both studies, comparisons were made to a commercial mammography system. The CNR was estimated by imaging 18, 36, and 110 μm thick aluminum targets placed on top of 6 cm of PMMA plates and was found to be similar or better over a range of exposures. Similar task performance in detecting microcalcifications was observed between the systems over a range of clinically applicable dose levels, with a small advantage for the GaAs system at lower dose levels. The GaAs system was evaluated using a typical mammography x-ray spectrum provided by the automatic exposure control (AEC) of the commercial mammography system and using only one energy threshold (i.e., one energy window). Operating the GaAs detector with multiple energy windows (i.e., two energy windows) may provide improved performance for a given dose.
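The CNR measurement used in studies like this one reduces to a simple ROI computation, sketched below with synthetic target and background regions.

```python
# Sketch: contrast-to-noise ratio from target and background ROIs (synthetic data).
import numpy as np

def cnr(image, target_mask, background_mask):
    return (image[target_mask].mean() - image[background_mask].mean()) \
           / image[background_mask].std()

rng = np.random.default_rng(11)
img = rng.normal(100.0, 5.0, size=(128, 128))    # PMMA-like background
img[60:68, 60:68] += 12.0                        # thin aluminum target
target = np.zeros(img.shape, bool); target[60:68, 60:68] = True
background = np.zeros(img.shape, bool); background[20:50, 20:50] = True
print(f"CNR = {cnr(img, target, background):.1f}")
```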
Anomalous edge response of photon counting detectors induced by pulse pileup and anti-coincidence logic
Xu Ji, Ran Zhang, Ke Li
A peculiar edge-enhancement effect was observed when a PCD was operated in anti-coincidence mode and irradiated by high-flux x-rays causing pulse pileup. The severity of this effect increases with the input flux level, and the effect completely disappears when the anti-coincidence mode is turned off. A theoretical analysis shows that this edge enhancement was jointly caused by pulse pileup-induced count loss and the arbitration process used in the anti-coincidence logic. Compared with pixels blocked by an edge, pixels immediately outside the edge have a much higher likelihood of winning the arbitration when there are coincident x-rays.
Machine Learning in Imaging Physics
A novel physics-based data augmentation approach for improved robust deep learning in medical imaging: lung nodule CAD false positive reduction in low-dose CT environments
M. W. Wahi-Anwar, N. Emaminejad, Y. Choi, et al.
A novel physics-based data augmentation (PBDA) method is introduced to provide a representative approach to introducing variance during the training of a deep-learning model. Compared to traditional geometric-based data augmentation (GBDA), we hypothesize that PBDA provides more realistic variation, representative of imaging conditions that may be seen beyond the initial training data, and thereby trains a more robust model (particularly in the scope of medical imaging). PBDA is tested in the context of false-positive reduction in nodule detection in low-dose lung CT and is shown to exhibit superior performance and robustness across a wide range of imaging conditions.
Deep neural networks-based denoising models for CT imaging and their efficacy
Prabhat KC, Rongping Zeng, M. Mehdi Farhangi, et al.
Most of the deep neural network (DNN)-based CT image denoising literature shows that DNNs outperform traditional iterative methods in terms of metrics such as RMSE, PSNR, and SSIM. In many instances, using the same metrics, the DNN results from low-dose inputs are also shown to be comparable to their high-dose counterparts. However, these metrics do not reveal whether the DNN results preserve the visibility of subtle lesions or whether they alter CT image properties such as the noise texture. Accordingly, in this work, we examine the image quality of DNN results from a holistic viewpoint for low-dose CT image denoising. First, we build a library of advanced DNN denoising architectures, comprising architectures such as DnCNN, U-Net, RED-Net, and GANs. Next, each network is modeled and trained such that it yields its best performance in terms of PSNR and SSIM; data inputs (e.g., training patch size, reconstruction kernel) and numeric-optimizer inputs (e.g., minibatch size, learning rate, loss function) are tuned accordingly. Finally, outputs from the trained networks are subjected to a series of CT bench testing metrics, such as the contrast-dependent MTF, the NPS, and HU accuracy. These metrics are employed to perform a more nuanced study of the resolution of the DNN outputs' low-contrast features, their noise textures, and their CT number accuracy, to better understand the impact each DNN algorithm has on these underlying attributes of image quality.
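As an example of the bench metrics listed, the noise power spectrum (NPS) can be estimated from an ensemble of noise-only ROIs via an averaged periodogram; the normalization below is for 2D with a square pixel, and the white-noise test data are synthetic.

```python
# Sketch: 2D noise power spectrum (NPS) from an ensemble of noise-only ROIs.
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """noise_rois: (n, N, N) stack of noise ROIs; returns NPS in HU^2 * mm^2."""
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
    n, N, _ = rois.shape
    periodograms = np.abs(np.fft.fft2(rois)) ** 2
    return (pixel_mm ** 2 / (N * N)) * periodograms.mean(axis=0)

rng = np.random.default_rng(10)
rois = rng.normal(0.0, 10.0, size=(100, 64, 64))    # white noise, sigma = 10 HU
nps = nps_2d(rois)
# Sanity check: integrating the NPS over frequency recovers the variance (~100)
variance = nps.sum() / (64 * 0.5) ** 2
print("recovered variance:", round(variance, 1))
```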
Training a low-dose CT denoising network with only low-dose CT dataset: comparison of DDLN and Noise2Void
The radiation risk of x-ray CT has gained increasing concern in the past decades. Lowering the CT scan dose leads to noisy raw data as well as streak artifacts after reconstruction. Extensive studies have been conducted to reduce noise and artifacts in low-dose CT (LDCT). As deep learning has achieved great success in computer vision tasks, it has also become a powerful tool in LDCT denoising. Commonly used deep learning methods such as supervised learning and generative adversarial learning depend strongly on large normal-dose CT (NDCT) datasets. In real cases, however, an NDCT dataset is often expensive or inaccessible, which limits the implementation of deep learning. In recent studies, multiple deep learning methods have been proposed for LDCT denoising without NDCT data. Among them, a popular type of method is noisy label training (NLT), which uses LDCT data as labels for supervised network training. Noise2Void is an easily implementable and representative NLT method and has achieved great results in denoising pixel-independent noise. Another type is distribution learning methods, which reduce the LDCT noise level by learning the NDCT distribution. Deep distribution learning from noisy samples (DDLN) learns the NDCT distribution from LDCT data only and adopts MAP estimation for LDCT denoising with the learned NDCT distribution prior. It is effective for LDCT projection data denoising. In this work, the two representative methods are compared for LDCT projection data denoising under different noise levels, to identify their suitable application scenarios.
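The core blind-spot idea behind Noise2Void can be sketched without a training loop: mask random pixels, substitute neighborhood values, and compute the loss only at the masked positions against the original noisy values. The snippet below shows the target construction only; the network and optimizer are omitted.

```python
# Sketch: Noise2Void blind-spot target construction (no network/training loop).
import numpy as np

def n2v_masked_batch(noisy, n_mask=64, radius=2, seed=8):
    rng = np.random.default_rng(seed)
    inp = noisy.copy()
    ys = rng.integers(radius, noisy.shape[0] - radius, n_mask)
    xs = rng.integers(radius, noisy.shape[1] - radius, n_mask)
    for y, x in zip(ys, xs):
        # Replace the masked pixel with a random neighbor (a careful
        # implementation would exclude the center offset (0, 0) itself).
        dy, dx = rng.integers(-radius, radius + 1, size=2)
        inp[y, x] = noisy[y + dy, x + dx]
    mask = np.zeros(noisy.shape, dtype=bool)
    mask[ys, xs] = True
    return inp, mask

proj = np.random.default_rng(9).normal(0.0, 1.0, (64, 64))  # stand-in projection
inp, mask = n2v_masked_batch(proj)
# Training would minimize ((net(inp) - proj) ** 2)[mask].mean(),
# so the network never sees the pixel it must predict.
print("masked pixels:", int(mask.sum()))
```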
A constrained Bregman framework for unsupervised convolutional denoising of multi-channel x-ray CT data
D. P. Clark, C. T. Badea
Deep learning (DL) approaches to image denoising have advanced real-world performance in one of the most thoroughly studied areas of digital signal processing. Keys to the success of DL are data-driven approximation of an underlying signal model and non-linear domain mapping. By contrast, "classic" denoising approaches rely on prior assumptions about the structure of data and noise and generally penalize deviations through convex optimization. Despite these apparent differences, denoising with CNNs can be viewed as an extension of older sparse coding models, and even state-of-the-art DL models commonly employ cost terms like mean squared error and total variation. In this work, we adapt the split Bregman optimization method (SBM) for use with CNN-based regularizers. We show how the operator splitting of the SBM lends itself to splitting cost terms for CNN-based denoising, simplifying the selection of regularization hyperparameters and the hybridization of classic and DL-specific regularizers (bilateral TV, patch rank, discriminator loss). Furthermore, we demonstrate a simple, data-driven method to rebalance cost terms during network training, while limiting the potential to "invent" features. Using preclinical photon-counting micro-CT data of the mouse heart, our proposed framework performs 2D denoising of the time (PSNR, 27.7; SSIM, 0.62) and energy (PSNR, 35.0; SSIM, 0.81) dimensions, which compares favorably with both 3D multi-channel iterative reconstruction (reference) and the BM3D denoising algorithm (time: PSNR, 26.6; SSIM, 0.58; energy: PSNR, 27.2; SSIM, 0.76). The time and energy networks evaluate in under 2 minutes each, vs. ~1-2 hours for our iterative reconstructions using similar hardware.
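The operator-splitting structure referred to here can be illustrated with a plug-and-play variant in which a denoiser occupies the regularizer's proximal step; a Gaussian filter stands in for a trained CNN purely for illustration, and this is not the authors' framework.

```python
# Sketch: split Bregman iteration with a denoiser in the regularizer slot.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bregman_denoise(y, lam=0.5, mu=1.0, n_iter=20):
    """min_x 0.5||x - y||^2 + lam*R(x), with R handled by a denoiser "prox"."""
    x = y.copy()
    d = np.zeros_like(y)   # auxiliary split variable (d = x constraint)
    b = np.zeros_like(y)   # Bregman variable
    for _ in range(n_iter):
        # x-update: quadratic data term + coupling to the split variable
        x = (y + mu * (d - b)) / (1.0 + mu)
        # d-update: proximal step of the regularizer, replaced by a denoiser
        d = gaussian_filter(x + b, sigma=lam)
        # Bregman update
        b = b + x - d
    return x

rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
print("RMSE before/after: %.3f / %.3f" % (
    np.sqrt(((noisy - clean) ** 2).mean()),
    np.sqrt(((split_bregman_denoise(noisy) - clean) ** 2).mean())))
```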
A CT denoising neural network with image properties parameterization and control
A wide range of dose reduction strategies for x-ray computed tomography (CT) have been investigated. Recently, denoising strategies based on machine learning have been widely applied, often with impressive results, breaking free from traditional noise-resolution trade-offs. However, since typical machine learning strategies provide a single denoised image volume, there is no user-tunable control of a particular trade-off between noise reduction and the image properties (biases) of the denoised image. This is in contrast to traditional filtering and model-based processing, which permit tuning of parameters for a level of noise control appropriate for the specific diagnostic task. In this work, we propose a novel neural network that includes a spatial-resolution parameter as an additional input, permitting explicit control of the noise-bias trade-off. Preliminary results show the ability to control image properties through such parameterization, as well as the possibility of tuning such parameters for increased detectability in task-based evaluation.
Scatter distribution estimated and corrected by using convolutional neural network for multi-slice CT system
Yang Wang, Ruikang Zhang, Kai Chen, et al.
X-ray scatter is a major limitation on CT image quality. Apart from hardware approaches (e.g., anti-scatter grids), computational algorithms based on Monte Carlo simulation or convolution kernels have proven valid for compensating scatter effects. However, computational algorithms always have to balance complexity against efficiency, so their performance is limited when the scatter contribution is large. In this paper we propose a deep learning-based approach that adopts a convolutional neural network (CNN) to predict the scatter distribution in the projection domain. The performance of the CNN-based model is validated in both the projection domain and the reconstructed image domain. The results show that the learning-based scatter correction algorithm is able to compensate for artifacts from scattered radiation under various complicated scenarios, resulting in equivalent or even better image quality than a commercially used kernel-based scatter correction algorithm.
Estimating Compton scatter distributions with a regressional neural network for use in a real-time staff dose management system for fluoroscopic procedures
Staff dose management in fluoroscopic procedures is a continuing concern due to insufficient awareness of radiation dose levels. To keep dose as low as reasonably achievable (ALARA), we have developed a software system capable of monitoring the procedure-room scattered radiation and the dose to staff members in real time during fluoroscopic procedures. The scattered-radiation display system (SDS) acquires imaging-system signal inputs to update the technique and geometric parameters used to provide a color-coded mapping of room scatter. We have calculated a discrete look-up table (LUT) of scatter distributions using Monte Carlo (MC) software and developed an interpolation technique for the multiple parameters known to alter the spatial shape of the distribution. However, the file size of the LUTs can be large (~2 GB), leading to long SDS installation times in the clinic. Instead, this work investigated the speed and accuracy of a regressional neural network (RNN) that we developed to predict the scatter distribution from imaging-system inputs without the need for the LUT and interpolation. This method greatly reduces installation time while maintaining real-time performance. Results using error maps derived from the structural similarity index indicate high visual accuracy of the predicted matrices when compared to the MC-calculated distributions. Dose error is also acceptable, with a matrix-element-averaged percent error of 31%. This dose-monitoring system can lead to improved radiation safety through immediate visual feedback on high-dose regions in the room during the procedure, as well as enhanced reporting of individual doses post-procedure.
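The LUT-replacement idea can be sketched with a small multi-output regression network mapping imaging-system parameters to a coarse scatter map. The architecture, parameter set, and "Monte Carlo" data below are illustrative stand-ins, not the authors' RNN.

```python
# Sketch: regression network mapping system parameters -> coarse scatter map.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 500
params = rng.uniform([60, 5, 0], [120, 40, 360], size=(n, 3))  # kVp, FOV cm, angle

# Stand-in "Monte Carlo" scatter maps (8x8, flattened) with parameter dependence
def fake_scatter(p):
    y, x = np.mgrid[:8, :8]
    return np.exp(-((x - p[1] / 5) ** 2 + (y - p[2] / 45) ** 2) / 8).ravel() * p[0]

Y = np.stack([fake_scatter(p) for p in params])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(params, Y)

pred = net.predict([[90.0, 20.0, 180.0]]).reshape(8, 8)  # real-time "lookup"
print(pred.round(1))
```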
Phantoms and Lesion Insertion
COPD quantifications via CT imaging: ascertaining the effects of acquisition protocol using virtual imaging trial
COPD is the fourth leading cause of death in the United States. The structural and functional attributes of this disease can be assessed in vivo using computed tomography (CT). The value of quantitative CT has been demonstrated for the characterization and treatment evaluation of COPD. Although promising, the influence of imaging protocols on the accuracy and precision of these quantifications remains unknown. The purpose of this study was to build a novel and realistic virtual imaging platform and investigate the effects of CT imaging parameters on COPD quantifications. Ten COPD human models were created on the platform of the state-of-the-art XCAT human models. The shape, size, and distribution of emphysema, along with its material density, were modeled separately in each XCAT phantom, based on the CT images of confirmed COPD patients. The developed phantoms were imaged using an established scanner-specific CT simulator (DukeSim) at various dose levels, with and without tube current modulation. The projection images were then reconstructed using six different reconstructions from a commercial reconstruction toolbox. Established COPD imaging biomarkers were extracted from the simulated images and compared against their corresponding digital ground truth values. The relative bias ranged from -2.1% to 66.0%, and the variability ranged from 0.9% to 29.5%. The results showed that, with a careful choice of smooth reconstruction kernel, CT can be used as a reliable quantification tool for COPD, with higher mAs, fixed tube currents, and iterative reconstructions further enhancing accurate and consistent quantification of the disease. This study demonstrates the first development of a suite of computational COPD phantoms, enabling virtual imaging trials in the context of COPD imaging.
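The abstract does not list the specific biomarkers used, so as a representative example of quantitative-CT emphysema scoring, the snippet below computes LAA-950, the percentage of lung voxels below -950 HU.

```python
# Sketch: LAA-950 emphysema biomarker on a synthetic HU volume and lung mask.
import numpy as np

def laa950(hu_volume, lung_mask):
    lung_hu = hu_volume[lung_mask]
    return 100.0 * np.mean(lung_hu < -950)

rng = np.random.default_rng(6)
vol = rng.normal(-850.0, 80.0, size=(32, 32, 32))   # synthetic lung HU values
mask = np.ones(vol.shape, dtype=bool)               # stand-in lung segmentation
print(f"LAA-950 = {laa950(vol, mask):.1f}%")
```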
Generative adversarial networks and radiomics supervision for lung lesion synthesis
Realistic lesion generation is a useful tool for system evaluation and optimization. Generated lesions can serve as realistic imaging tasks for task-based image quality assessment, as well as targets in virtual clinical trials. In this work, we investigate a data-driven approach for categorical lung lesion synthesis using public lung CT databases. We propose a generative adversarial network with a Wasserstein discriminator and gradient penalty to stabilize training. We further included conditional inputs such that the network can generate user-specified lesion categories. Novel to our network, we directly incorporated radiomic features in an intermediate supervision step to encourage similar textures between generated and real lesions. We trained the network using lung lesions from the Lung Image Database Consortium (LIDC) database, divided into two categories: solid vs. non-solid. We performed quantitative evaluation of network performance based on four criteria: 1) overfitting, in terms of structural and morphological similarity to the training data; 2) diversity of generated lesions, in terms of similarity to other generated data; 3) similarity to real lesions, in terms of the distribution of example radiomics features; and 4) conditional consistency, in terms of classification accuracy using a classifier trained on the training lesions. We imposed a quantitative threshold for similarity based on visual inspection. The percentage of lesions that satisfy low overfitting and high diversity is 87.1% for non-solid and 70.2% for solid lesions. The distributions of example radiomics features are similar in the generated and real lesions, as indicated by low Kullback-Leibler divergence scores: 1.62 for non-solid lesions and 1.13 for solid lesions. Classification accuracy for the generated lesions is comparable with that for the real lesions. The proposed network presents a promising approach for generating realistic lesions with clinically relevant features crucial for the comprehensive assessment of novel medical imaging systems.
The creation of a breast cancer voxel model database for virtual clinical trials in digital breast tomosynthesis
Aim: To develop, validate, and apply a pipeline for breast cancer voxel model generation from patient digital breast tomosynthesis (DBT) cases, for cancer-type-specific virtual clinical trials (VCTs). Methods: Input cancer cases were retrieved from wide-angle DBT systems. Three aspects of the creation process were investigated: (1) the impact of the limited z-resolution of DBT on the shape of the voxel model, using circularity measurements (i.e., the ratio of diameters between input and result after a simulation test), the DICE coefficient, and the artefact spread function; (2) the possibility to speed up and automate lesion segmentation with a deep learning network; (3) the ultimate realism of the voxel models in a VCT application, visually scored by a radiologist and a medical physicist. Results: Deviations between ground truth and segmented voxel models due to the pseudo-3D characteristics of DBT were limited, with circularity changes smaller than 8%. A 4-layer U-net deep learning network, with the product of the DICE loss and the implemented boundary loss as loss function, is capable of producing segmentations within the variability of manual segmentations (DICE coefficient = 0.80). A reader study of the VCT application showed an average realism score of 3.4 on a scale of 1 to 5 for the manually segmented simulated lesions, compared to an average of 4.3 for the real lesions. An initial total of 25 invasive cancer models (9 non-spiculated, 16 spiculated masses) was successfully created and validated. Conclusion: Segmentation from an object with limited z-resolution induces an acceptable deformation. Voxel models created from DBT images can be used to mimic realistic DBT cancer cases. The use of AI techniques has facilitated the cumbersome manual segmentation task.
Computer model of mechanical imaging acquisition for virtual clinical trials
Rebecca Axelsson, Hanna Isaksson, Sophia Zackrisson, et al.
Malignant breast tumours can be distinguished from benign lesions and normal tissue based on their mechanical properties. Our pilot studies have demonstrated the potential of using mechanical imaging (MI) combined with mammography to reduce recalls and false positives in breast cancer screening by more accurately identifying benign lesions. To enable further optimization of MI, we propose a computer simulation of the MI acquisition for use in a virtual clinical trial (VCT) framework. VCTs are computer-simulated clinical trials used to efficiently evaluate clinical imaging systems. A linear elastic finite element (FE) model of the breast under dynamic compression was implemented using an open-source FE solver. A spherical tumour (15 mm in diameter) was inserted into the simulated, predominantly adipose breast, and its location and stiffness were varied. The average stress on the compressed breast surface was calculated and compared with the local average stress at the tumour location, and the Relative Mean Pressure over lesion Area (RMPA) was calculated. Preliminary results were within a realistic range, with an average stress on the breast (tumour) of 5.9-16.6 kPa, in agreement with published values of 1.0-22.5 kPa. This corresponds to RMPA values of 0.96-2.15, depending on the stiffness and location of the tumour. This work can lead to more detailed validation of various MI acquisition schemes through VCTs before their use in clinical studies.
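The RMPA metric reduces to a ratio of mean stresses, sketched below on a synthetic stress map standing in for the FE solution.

```python
# Sketch: Relative Mean Pressure over lesion Area (RMPA) on a synthetic stress map.
import numpy as np

rng = np.random.default_rng(5)
stress = rng.uniform(5.0, 10.0, size=(100, 100))       # kPa, surface stress map
yy, xx = np.mgrid[:100, :100]
lesion_mask = (yy - 50) ** 2 + (xx - 50) ** 2 <= 7.5 ** 2  # 15 mm tumour footprint
stress[lesion_mask] += 8.0                              # stiff inclusion raises stress

rmpa = stress[lesion_mask].mean() / stress.mean()       # local / global mean stress
print(f"RMPA = {rmpa:.2f}")
```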
3D printed cardiac phantom for cardiac CT imaging performance evaluation and examination strategy verification
A 3D printed heart chamber phantom was developed to work in combination with a commercially available phantom kit. Based on CT images combined with a traditional phantom and anatomic structures, the 3D model was generated and used as input for printing with selected materials. The 3D printed phantom achieves multi-dimensional motion and deformation similarity, improved HU behavior mimicking contrast-enhanced tissue, and a biologically faithful anthropomorphic structure including cardiac chambers and coronary arteries with contrast agents, as well as inserts for simulating anatomic or functional abnormalities. With these properties, the proposed 3D printed phantom could potentially be used for verification of either CCTA imaging performance or CCTA scan strategies, in combination with scanner and patient properties.
Deep-learning lesion and noise insertion for virtual clinical trial in chest CT
Hao Gong, Jeffrey F. Marsh, Jamison Thorne, et al.
Accurate and objective image quality assessment is essential for the task of radiation dose optimization in clinical CT. The standard method relies on multi-reader, multi-case (MRMC) studies in which radiologists are tasked with interpreting the diagnostic image quality of many carefully collected positive and negative cases. The efficiency of such MRMC studies is frequently challenged by the lengthy and expensive procedure of case collection and the establishment of a clinical reference standard. To address this challenge, multiple virtual clinical trial methods have been proposed to synthesize patient cases at different conditions. Projection-domain lesion- and noise-insertion methods require access to patient raw data and vendor-specific proprietary tools, which are frequently not accessible to most users. Conventional image-domain insertion methods are often challenged by over-simplified lesion and CT system models that may not represent conditions in real scans. In this work, we developed deep-learning lesion- and noise-insertion techniques that can synthesize chest CT images at different dose levels, with and without lung nodules, using existing patient cases. The proposed method involves a nodule-insertion convolutional neural network (CNN) and a noise-insertion CNN. Both CNNs demonstrated quality comparable to our previously validated projection-domain lesion- and noise-insertion techniques: mean structural similarity index (SSIM) of inserted nodules 0.94 (routine dose), and mean percent noise difference ~5% (50% of routine dose). The proposed deep-learning techniques for chest CT virtual clinical trials operate directly in the image domain, which is more widely applicable than projection-domain techniques.
Image Guided Intervention
Evaluation of methods to derive blood flow velocity from 1000 fps high-speed angiographic sequences (HSA) using optical flow (OF) and computational fluid dynamics (CFD)
Digital subtraction angiography (DSA) is considered the gold standard for imaging and guiding treatment of neurovascular lesions, such as cerebral aneurysms and carotid stenoses. Though DSA can show high-resolution morphology, it remains difficult to extract temporal physiological information, because higher frame rates are necessary to accurately quantify neurovascular flow details. Recent advances in photon-counting detector technology have led us to develop high-speed angiography (HSA), in which x-ray images are acquired at 1000 fps for more accurate visualization and quantification of blood flow. Blood flow was imaged using HSA under constant flow conditions within various 3D-printed patient-specific phantoms. Blood velocity was quantified using an open-source optical flow algorithm, OpenOpticalFlow, which performs velocity estimation based on the spatio-temporal intensity changes of iodinated contrast wavefronts. The results of this algorithm were then compared with computational fluid dynamics (CFD) simulations using the same inlet boundary conditions and model geometries. The performance of the algorithm at lower temporal resolution was also assessed by simulating lower frame rates from the acquired 1000 fps data. It is important to ascertain the hemodynamic effect of abnormal neurovascular conditions, as well as the effect of treating such conditions, during the actual clinical interventional procedure. While theoretical CFD results requiring considerable computing capability are delayed by hours or more, clinical results from multiple HSA sequences are expected to be available almost immediately, while the patient is still under treatment and even right after flow conditions are changed beneficially by the intervention.
A hybrid photon counting and flat panel detector system for periprocedural hemorrhage monitoring in the angio suite
Endovascular procedures performed in the angio suite have gained considerable popularity for the treatment of ischemic stroke as well as aneurysms. However, new intracranial hemorrhage (ICH) may develop during these procedures, and it is highly desirable to arm the angio suite with real-time and reliable ICH monitoring tools. Currently, angio suites are equipped with scintillator-based flat panel detector (FPD) imaging systems for both planar and cone-beam CT (CBCT) imaging applications. However, the reliability of CBCT for ICH imaging is insufficient due to its poor low-contrast detectability compared with MDCT and its lack of spectral imaging capability for differentiating between ICH, calcifications, and iodine staining from periprocedural contrast-enhanced imaging sequences. To preserve the benefits of the FPD for 2D imaging and certain high-contrast 3D imaging tasks, while adding a high-quality, quantitative, and affordable CT imaging capability to the angio room for intraoperative ICH monitoring, a hybrid detector system was developed that includes the existing FPD on the C-arm gantry and a strip photon-counting detector (PCD) that can be translated into the field of view for high-quality PCD-CT imaging at a given brain section of interest. The hybrid system maintains the openness and ease of use of the C-arm system, without the need to remodel the angio room and without installing a sliding-gantry MDCT (a.k.a. angio-CT) at orders-of-magnitude higher cost. Additionally, the cost of the strip PCD is much less than that of a large-area PCD. To demonstrate the feasibility and potential benefits of the hybrid PCD-FPD system, a series of physical phantom and human cadaver studies was performed at a gantry rotation speed (7 s) and radiation dose level that closely match those of clinical CBCT acquisitions. The experimental C-arm PCD-CT images demonstrated MDCT-equivalent low-contrast detectability and significantly reduced artifacts compared with FPD-based CBCT.
A novel approach to calculating dual energy in the angiography suite
Purpose: Medical management after endovascular treatment (EVT) for acute ischemic stroke involves blood pressure management and antithrombotic medications, which are strongly influenced by hemorrhagic complications of the procedure. In this study, our aim was to modify a commercially available C-arm angiographic system to assess the feasibility of distinguishing between blood and iodinated contrast using dual-energy CT technology. Methods: An angiographic C-arm system was modified with the addition of a flat sheet of tin (Sn) filtration, with corresponding calibration. Two flat-panel cone-beam CT (FP-CBCT) scans were acquired, at 70 kV and 125Sn kV. A multi-energy CT phantom with iodine inserts at concentrations ranging from 0.2 to 15 mg/cc was used to characterize the behavior (i.e., the dual-energy ratios, DERs) of the materials as a function of the kVp pair. The DER values were needed to adjust an existing material decomposition software. The accuracy of iodine quantification was assessed with inserts including solid water, iodine, blood, and mixed iodine and blood, all with concentration ranges relevant for stroke imaging. Results: Imaging indicated the ability to differentiate between blood and iodine with FP-CBCT. Pure blood and iodine inserts fell within the 95% confidence interval for the precision of the measured contrast concentration. For iodine inserts with concentrations above 2 mg/cc, the error from the expected measurement was 7.5% or less. Inserts containing a combination of blood and iodine consistently read within 1 mg/ml of the value specified by the manufacturer, which represented a 25-35% error relative to the expected value. Conclusion: These results establish the reproducibility of the phantom values for dual-energy calculations and suggest that this technology may be clinically useful after EVT for stroke. Improvements in the accuracy of the combined iodine/blood measurement under dual-energy evaluation are needed for applications beyond differentiating between hemorrhage and contrast staining.
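The DER-based decomposition amounts to a per-voxel linear solve once the material signatures at the two beams are calibrated. The sketch below uses invented enhancement coefficients, not the system's calibration.

```python
# Sketch: two-material (iodine/blood) decomposition from dual-energy measurements.
# Enhancement coefficients are illustrative stand-ins for a real calibration.
import numpy as np

# Columns: enhancement per unit concentration of [iodine, blood]
A = np.array([[26.0, 33.0],    # 70 kV row (HU per mg/cc, HU per unit)
              [12.0, 30.0]])   # 125Sn kV row: iodine signal falls faster

measured = np.array([120.0, 72.0])          # voxel HU at (70 kV, 125Sn kV)
iodine_mg_cc, blood_frac = np.linalg.solve(A, measured)
print(f"iodine: {iodine_mg_cc:.2f} mg/cc, blood: {blood_frac:.2f}")
```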
Deformable image-based motion compensation for interventional cone-beam CT with a learned autofocus metric
A. Sisniega, H. Huang, W. Zbijewski, et al.
Purpose: Cone-beam CT (CBCT) is a common approach for 3D guidance in interventional radiology (IR), but its long image acquisition time poses a limit to image quality due to complex deformable motion of soft-tissue structures. Multi-region autofocus optimization has shown successful compensation of deformable motion with gains in image quality. However, the use of sub-optimal conventional autofocus metrics, together with the high dimensionality and non-convexity of the optimization problem, results in challenged convergence. This work presents a learning-based approach to the design of autofocus metrics tailored to the specific problem of motion compensation in soft-tissue CBCT. Methods: A deep convolutional neural network (CNN) was used to automatically extract image features quantifying the local motion amplitude and principal direction in volumetric regions of interest (128 × 128 × 128 voxels) of a motion-contaminated volume. The estimated motion amplitude is then used as the core component of the cost function of the deformable autofocus method, complemented by a regularization term favoring similar motion for regions close in space. The network consists of a siamese arrangement of three branches acting on the three central orthogonal planes of the volumetric ROI. The network was trained with simulated data generated from publicly available CT datasets, including deformable motion fields from which volumetric ROIs with locally rigid motion were extracted. The performance of motion amplitude estimation and of the final CNN-based deformable autofocus was assessed on synthetic CBCT data generated similarly to the training dataset and featuring deformable motion fields with 1 to 5 components with random direction and random amplitude ranging from 0 mm to 50 mm. Results: Predicted local motion amplitude showed good agreement with the true values, with a linear relationship (R² = 0.96, slope 0.95) and slight underestimation of the motion amplitude. Absolute errors in total motion amplitude and individual components remained < 2 mm throughout the explored range. Relative errors were higher for low-amplitude motion, pointing to the need for a larger training cohort, more focused on low-motion-amplitude scenarios. Motion compensation with the learning-based metric showed successful compensation of motion artifacts in single-ROI environments, with a 40% reduction in median RMSE using the static, motion-free image as reference. Deformable motion compensation resulted in better visualization of soft-tissue structures and overall sharper image details, with slight residual motion artifacts for regions combining moderate motion amplitude with complex anatomical structures. Conclusion: Deformable motion compensation with automatically learned autofocus metrics was proven feasible, opening the way to the design of metrics with potential for more reliable performance and easier optimization than conventional autofocus metrics.
Cone-beam CT for neurosurgical guidance: high-fidelity artifacts correction for soft-tissue contrast resolution
Purpose: Intraoperative cone-beam CT (CBCT) plays an important role in neurosurgical guidance but is conventionally limited to high-contrast bone visualization. This work reports a high-fidelity artifact correction pipeline to advance image quality beyond conventional limits and achieve soft-tissue contrast resolution even in the presence of multiple metal objects, specifically a stereotactic head frame. Methods: A new metal artifact reduction (MAR) method was developed based on a convolutional neural network (CNN) that simultaneously estimates metal-induced bias and metal path length in the projection domain. To improve the generalizability of the network, a physics-based method was developed to generate highly accurate simulated, metal-contaminated projection training data. The MAR method was integrated with previously proposed artifact correction methods (lag, glare, scatter, and beam hardening) to form a high-fidelity artifact correction pipeline. The proposed methods were tested using an intraoperative CBCT system (O-arm, Medtronic) emulating a realistic setup in stereotactic neurosurgery, including nominal (20 cm) and extended (40 cm) field-of-view (FOV) protocols. Results: The physics-based data generation method provided accurate simulation of metal in projection data, including scatter, polyenergetic, quantum noise, and electronic noise effects. The artifact correction pipeline was able to accommodate both the 20 cm and 40 cm FOV protocols and demonstrated ~80% improvement in image uniformity and ~20% increase in contrast-to-noise ratio (CNR). Fully corrected images in the smaller FOV mode exhibited a ~32% increase in CNR compared to the 40 cm FOV mode, showing the method's ability to handle truncated metal objects outside the FOV. Conclusion: The image quality of intraoperative CBCT was greatly improved with the proposed artifact correction pipeline, with clear improvement in soft-tissue contrast resolution (e.g., cerebral ventricles) even in the presence of a complex metal stereotactic frame. Such capability gives clearer visualization of structures of interest for intracranial neurosurgery and provides an important basis for future work aiming to deformably register preoperative MRI to intraoperative CBCT. Ongoing work includes clinical studies now underway.
Toward intra-operative 4-dimensional soft tissue perfusion imaging using a standard x-ray system with no gantry rotation: a proof of concept
Katsuyuki Taguchi, Thomas J. Sauer, William P. Segars, et al.
Many interventional procedures aim at changing soft tissue perfusion or blood flow. One problem at present is that soft tissue perfusion and its changes cannot be assessed in an interventional suite because cone-beam computed tomography is too slow (it takes 4–10 s per volume scan). In order to address the problem, we propose a novel method called IPEN for Intra-operative 4-dimensional soft tissue PErfusion using a standard x-ray system with No gantry rotation. In this simulation study, we provide a proof-of-concept that the core ideas work.
Molecular Imaging and MRI
Positron emission tomography attenuation correction with dual energy CT electron density maps in the presence of iodinated contrast media
Coupling computed tomography with positron emission tomography (PET/CT) supplements tracer uptake with anatomical information for localization and improves PET quantification by using the CT images for attenuation correction. Iodinated contrast agents are used in CT scans to enhance vascularity, organs, and abnormalities and to characterize lesions. However, current attenuation correction methodologies overestimate standardized uptake values (SUVs) in the presence of materials with high atomic numbers, such as iodinated contrast agents. Utilizing electron density (ED) from dual-energy CT (DECT) may result in less biased attenuation correction, as ED is proportional to attenuation at the PET emission energy of 511 keV. To evaluate different methods of attenuation correction, five phantom configurations with varying iodine concentrations and constant concentrations of fluorine-18 were scanned using PET/CT and DECT at similar scanning parameters. The phantom configurations were scanned at CTDIvol values of 2, 4, 6, and 8 mGy with DECT to evaluate the effect of dose on ED and SUV. For attenuation correction, ED was transformed into attenuation at 511 keV using reported material compositions and electron densities. SUVs demonstrated less biased behavior in the presence of iodinated contrast media with ED-based correction (-1.3% to 1.4%, p=0.271) compared to nominal correction (1.5% to 8.6%, p=0.000). No interaction effect between dose and phantom configuration, or effect of dose on SUV, was present, which was also reflected in the stability of ED across different tissue mimics. The use of ED-based attenuation correction from DECT yielded less biased SUVs with increasing concentrations of iodinated contrast agent, indicating quantitative advantages of DECT coupled with PET.
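The ED-to-attenuation step has a simple form: at 511 keV, Compton interactions dominate, so the linear attenuation coefficient scales approximately with electron density. The sketch below applies this proportionality with a standard water reference value; the simple scaling (and the example ED values) are assumptions for illustration.

```python
# Sketch: relative electron density (from DECT) -> 511 keV attenuation map.
import numpy as np

MU_WATER_511 = 0.096  # cm^-1, linear attenuation coefficient of water at 511 keV

def mu_511_from_ed(ed_rel):
    """Map relative electron density (water = 1.0) to mu at 511 keV."""
    return MU_WATER_511 * np.asarray(ed_rel)

# Iodine-enhanced blood has strongly elevated HU but only modestly elevated ED,
# so ED-based AC avoids the SUV overestimate of HU-based scaling.
ed_map = np.array([0.0, 1.0, 1.05, 1.6])   # air, water, contrast blood, bone
print(mu_511_from_ed(ed_map))
```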
Back-end readout electronic design and initial results: a head-and-neck dedicated PET system based on CZT
Yuli Wang, Ryan Herbst, Shiva Abbaszadeh
Cadmium zinc telluride (CZT) radiation detectors are suitable for various applications due to their good energy performance at room temperature and simple pixelation to achieve high spatial resolution. Our group is developing a two-panel head-and-neck dedicated positron emission tomography (PET) system with CZT detectors. In this work, we present the back-end readout electronic design and the initial electronic noise results of our system. The back-end readout electronics incorporate RENA boards (a total of 150 RENA boards per panel) and 30:1 fan-in boards connecting 30 RENA boards each to a PicoZed 7010/7020 board (a total of 5 fan-in boards per panel). In each panel, 5:1 intermediate boards are used for biasing the CZT detectors. The RENA board and the PicoZed board are capable of data transmission at 50 Mbps and 6.6 Gbps, respectively. Electronic noise was quantified using a square-wave test pulse that injects charge into each channel of the RENA chip at 75 fC/V. The pulse amplitude was chosen to generate approximately the same amount of charge as a 511 keV photon would produce in each channel. The FWHM electronic noise at 511 keV was measured to be less than 1% (FWHM of 7.80 ± 1.47 ADC units, or 4.89 ± 0.92 keV after conversion).
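The test-pulse sizing can be checked with a back-of-envelope calculation; the CZT pair-creation energy W ≈ 4.6 eV used below is a nominal literature value, not a figure from the paper:

```python
# Back-of-envelope sizing of the test-pulse amplitude (a sketch; the
# pair-creation energy W ~ 4.6 eV per e-h pair for CZT is a nominal
# assumption, not a value from the paper).
E_PHOTON_EV = 511e3      # deposited energy (eV)
W_CZT_EV = 4.6           # eV per electron-hole pair (nominal)
E_CHARGE = 1.602e-19     # C
INJECTION = 75e-15       # C per volt of test-pulse amplitude (75 fC/V)

n_pairs = E_PHOTON_EV / W_CZT_EV
q_signal = n_pairs * E_CHARGE          # ~1.8e-14 C (~18 fC)
v_pulse = q_signal / INJECTION         # required pulse amplitude (V)
print(f"charge = {q_signal*1e15:.1f} fC, pulse amplitude = {v_pulse:.2f} V")
```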
Non-contact respiratory triggering for clinical MRI using frequency modulated continuous wave radar
Han Wang, Yiran Li, Xinyuan Xia, et al.
Abdominal MRI is susceptible to respiratory motion artifacts. The existing clinical solution uses a breathing belt to track movement of the abdomen and trigger MRI acquisition during the end-expiration phase. Attaching a respiratory belt to patients often slows down clinical workflow and affects patient comfort, especially for those with surgical wounds or respiratory disorders. Herein we propose, for the first time, a novel MRI-compatible frequency modulated continuous wave (FMCW) radar to track respiratory motion within the MRI bore in a non-contact fashion. The electromagnetic wave from the FMCW radar can penetrate clothing and MRI RF coils to achieve continuous monitoring of the patient's vital signs. The system consists of a front-end FMCW radar sensor and an FPGA-based power management/communication board that interfaces with a clinical MRI scanner. This design fully integrates the FMCW radar signal with the MRI control console to enable real-time respiratory-triggered MRI acquisition. Consistent respiratory waveforms were validated by comparing the FMCW signal with traditional breathing-belt measurements. Superior image quality from a clinical MRI pulse sequence was achieved using the developed system in healthy volunteers.
A physics and learning-based transmission-less attenuation compensation method for SPECT
Attenuation compensation (AC) is a prerequisite for reliable quantification and beneficial for visual interpretation tasks in single-photon emission computed tomography (SPECT). Typical AC methods require the availability of an attenuation map, which is obtained using a transmission scan, such as a CT scan. This has several disadvantages such as increased radiation dose, higher costs, and possible misalignment between SPECT and CT scans. Also, often a CT scan is unavailable. In this context, we and others are showing that scattered photons in SPECT contain information to estimate the attenuation distribution. To exploit this observation, we propose a physics and learning-based method that uses the SPECT emission data in the photopeak and scatter windows to perform transmission-less AC in SPECT. The proposed method uses data acquired in the scatter window to reconstruct an initial estimate of the attenuation map using a physics-based approach. A convolutional neural network is then trained to segment this initial estimate into different regions. Predefined attenuation coefficients are assigned to these regions, yielding the reconstructed attenuation map, which is then used to reconstruct the activity distribution using an ordered subsets expectation maximization (OSEM)-based reconstruction approach. We objectively evaluated the performance of this method using highly realistic simulation studies conducted on the clinically relevant task of detecting perfusion defects in myocardial perfusion SPECT. Our results showed no statistically significant differences between the performance achieved using the proposed method and that with the true attenuation maps. Visually, the images reconstructed using the proposed method looked similar to those with the true attenuation map. Overall, these results provide evidence of the capability of the proposed method to perform transmission-less AC and motivate further evaluation.
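The region-wise assignment step lends itself to a simple lookup; the labels and the 140 keV attenuation coefficients below are illustrative nominal values, not those used in the study:

```python
import numpy as np

# Minimal sketch of the region-wise attenuation assignment step: a CNN
# segmentation label map is converted to an attenuation map by lookup.
# Labels and coefficients (cm^-1 at 140 keV) are illustrative placeholders.
MU_140 = {0: 0.0,      # background (air)
          1: 0.15,     # soft tissue (nominal)
          2: 0.045,    # lung (nominal)
          3: 0.25}     # bone (nominal)

def labels_to_mu(seg):
    """Map an integer label volume to a 140 keV attenuation map."""
    lut = np.zeros(max(MU_140) + 1)
    for label, mu in MU_140.items():
        lut[label] = mu
    return lut[seg]

seg = np.array([[0, 1, 1], [2, 1, 3]])   # toy segmentation output
print(labels_to_mu(seg))
```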
Wavelet improved GAN for MRI reconstruction
Yutong Chen, David Firmin, Guang Yang
Background: Compressed sensing magnetic resonance imaging (CS-MRI) is an important technique for accelerating the acquisition of magnetic resonance (MR) images by undersampling. It has the potential to reduce MR scanning time and costs, thus minimising patient discomfort. Motivation: One of the successful CS-MRI techniques to recover the original image from undersampled images is the generative adversarial network (GAN). However, GAN-based techniques suffer from three key limitations: training instability, slow convergence, and input size constraints. Method and Result: In this study, we propose a novel GAN-based CS-MRI technique: WPD-DAGAN (Wavelet Packet Decomposition Improved De-aliasing GAN). We incorporate the Wasserstein loss function and a novel structure based on wavelet packet decomposition (WPD) into the de-aliasing GAN (DAGAN) architecture, a well-established GAN-based CS-MRI technique. We show that the proposed network architecture achieves a significant performance improvement over state-of-the-art CS-MRI techniques.
Correcting motion artifacts in MRI scans using a deep neural network with automatic motion timing detection
Michael Rotman, Rafi Brada, Israel Beniaminy, et al.
Artefacts created by patient motion during an MRI scan occur frequently in practice, often rendering the scans clinically unusable and requiring a re-scan. While many methods have been employed to ameliorate the effects of patient motion, these often fall short in practice. In this paper we propose a novel method for detecting and timing patient motion during an MR scan and correcting for the motion artefacts using a deep neural network. The deep neural network contains two input branches that discriminate between patient poses using the motion's timing. The first branch receives a subset of the k-space data collected during a single dominant patient pose, and the second branch receives the remaining part of the collected k-space data. The proposed method can be applied to artefacts generated by multiple movements of the patient. Furthermore, it can be used to correct motion for the case where k-space has been under-sampled to shorten the scan time, as is common when using methods such as parallel imaging or compressed sensing. Experimental results on both simulated and real MRI data show the efficacy of our approach.
Monte Carlo simulation of an amorphous selenium-based multi-layer photon-counting detector for SPECT applications
In this work, we have used Geant4 Monte Carlo simulations to evaluate the spatial resolution and detection efficiency of an amorphous selenium (a-Se) photon-counting detector for high-energy gamma-ray detection with 1 mm and 50 µm pixel pitches and a varying number of detector layers. A noiseless monochromatic particle gun with an energy of 140 keV is used to resemble the typical energy of single-photon emission computed tomography (SPECT). At high energies such as 140 keV, a-Se detection efficiency drops significantly due to its low absorption coefficient. By using this novel multilayer a-Se detector, the drop in detection efficiency can be compensated. The point spread function (PSF) is obtained by illuminating the detector with 10⁶ photons. The modulation transfer function (MTF) is calculated from the one-dimensional integral of the PSF, known as the line spread function (LSF), and is compared to the ideal pixel MTF. Spatial resolution is taken as the spatial frequency at which the MTF equals 0.5. The simulation results indicate that increasing the number of layers slightly degraded the MTF due to Compton scattering; however, it did not degrade spatial resolution for the 1 mm pixel size. At the same time, using more layers increases the detection efficiency, reaching 80% for 10 layers. This detection efficiency includes the noise counts (error counts) caused by Compton scattering. Leveraging the photon-counting energy threshold enables partial compensation for noise counts: using an energy threshold of 110 keV results in 52% efficiency for 10 layers and reduces the noise counts significantly. This increase in detection efficiency, along with the high intrinsic spatial resolution, makes a-Se a cost-effective candidate for large-area SPECT applications.
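The PSF-to-MTF recipe described above can be sketched in a few lines; here a Gaussian stands in for the Monte Carlo PSF, and the 50 µm pitch and MTF = 0.5 criterion follow the abstract:

```python
import numpy as np

# Sketch of the MTF-from-PSF calculation: the 2D PSF is integrated along
# one axis to give the LSF, whose Fourier transform (normalized to its
# zero-frequency value) is the MTF. A Gaussian PSF stands in for the
# Monte Carlo output.
pitch_mm = 0.05                                  # 50 um pixel pitch
x = (np.arange(-256, 256) + 0.5) * pitch_mm
psf = np.exp(-0.5 * (x[:, None]**2 + x[None, :]**2) / 0.08**2)

lsf = psf.sum(axis=0)                            # 1D integral of the PSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                    # normalize to MTF(0) = 1
f = np.fft.rfftfreq(lsf.size, d=pitch_mm)        # cycles/mm

# spatial resolution metric: frequency where MTF drops to 0.5
f50 = np.interp(0.5, mtf[::-1], f[::-1])
print(f"MTF = 0.5 at {f50:.2f} lp/mm")
```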
Detector Physics
Improved optical quantum efficiency and temporal performance of a flat-panel imager with avalanche gain
Corey Orlik, Adrian F. Howansky, Sébastien Léveillé, et al.
Active matrix flat panel imagers (AMFPIs) with thin film transistor (TFT) arrays are becoming the standard for digital x-ray imaging due to their high image quality and real-time readout capabilities. However, in low-dose applications their performance is degraded by electronic noise. A promising solution to this limitation is the Scintillator High-Gain Avalanche Rushing Photoconductor AMFPI (SHARP-AMFPI), an indirect detector that utilizes avalanche amorphous selenium (a-Se) to amplify the optical signal from the scintillator prior to readout. We previously demonstrated the feasibility of a large-area SHARP-AMFPI; however, several areas of improvement remain. In this work, we present a newly fabricated SHARP-AMFPI prototype detector with the following developments: a metal oxide hole blocking layer (HBL) with improved electron transport, a transparent bias electrode for increased optical coupling, and a detector assembly allowing for a back-irradiation (BI) geometry to improve the detective quantum efficiency of scintillators. Our measurements showed that the new prototype has improved temporal performance, with lag and ghosting below 1%. We also show an improvement in optical coupling from 25% to 90% for cesium iodide (CsI) scintillator emissions. The remaining challenge of the SHARP-AMFPI is to reduce the dark current to prevent dielectric breakdown under high bias and to further increase the optical quantum efficiency (OQE) to CsI scintillators. For future prototype detectors, we propose to use a newly developed quantum dot (QD) oxide layer, which has been shown to reduce the dark current by an order of magnitude, and tellurium doping, which could increase the OQE to CsI to 85% at avalanche fields.
Theoretical investigation of detector MTF of polycrystalline mercuric iodide x-ray converters incorporating Frisch grid structures for digital breast tomosynthesis
Liuxing Shen, Youcef El-Mohri, Albert K. Liang, et al.
The DQE performance of active matrix flat-panel imagers (AMFPIs) degrades under low-dose conditions, such as those encountered in digital breast tomosynthesis, where electronic additive noise becomes comparable to the imaging signal. Compared to commercially available x-ray converter materials such as CsI:Tl and a-Se, particle-in-binder polycrystalline mercuric iodide (PIB HgI2) offers an imaging signal 3 to 10 times larger. However, PIB HgI2 exhibits an unacceptably high degree of image lag, believed to originate from the trapping of holes. To suppress the hole signal contribution (with the expectation of decreasing image lag), a Frisch grid structure embedded within a PIB HgI2 detector is under investigation. In this theoretical study involving finite element analysis modeling, the effects of grid design and charge carrier lifetimes on the line spread function and modulation transfer function (MTF) were investigated. Two design parameters were considered: grid pitch (defined as the center-to-center distance between adjacent grid elements) and RGRID (defined as the ratio of grid element width to grid pitch). Results show that for a grid pitch comparable to the pixel pitch (i.e., 100 μm) and high RGRID, MTF is significantly degraded compared to a detector without a grid, while for a grid pitch of 20 μm, MTF is largely maintained and is almost independent of RGRID. The best identified design is a grid pitch of 20 μm and RGRID of 45%, providing an MTF similar to that of a detector without a grid while suppressing the hole signal by 78%. This work is supported by NIH grant R01-EB022028.
Spatial-frequency-dependent pulse height spectroscopy of x-ray scintillators using single x-ray imaging
Indirect flat panel imager (I-FPI) performance is limited by noise in the scintillator's response to single x-rays, i.e., random variations of gain and blur. Single x-ray imaging (SXI) is an experimental methodology to record the response of scintillators to single x-rays using an image intensifier that is lens-coupled to an EMCCD camera. In this work we developed a framework for using ensembles of SXI images to compute the spatial-frequency-dependent pulse height spectrum, Pr[g(f)], a novel detector performance metric whose moments may be used to compute scintillator MTF(f), qNNPS(f), and DQE(f). The approach is demonstrated using cesium iodide (CsI:Tl) samples of various thicknesses and screen-optical properties. SXI was conducted on each sample using ~60 keV γ-rays emitted by Am-241. Each image in an SXI ensemble was summed in 1D and Fourier transformed to form an ensemble of frequency-dependent responses. The histogram at each frequency f forms the Pr[g(f)]. To validate this approach against standard methods, each sample was also evaluated using the standard method by coupling it to a photodiode-TFT array as in a conventional I-FPI. MTF(f) and qNNPS(f) were measured using a slanted edge and flood field images, respectively, using an 85 kVp x-ray spectrum filtered to achieve an average energy of 60 keV. MTF(f) and qNNPS(f) results agreed well between both methods, and discrepancies were consistent with the differing experimental conditions. The Pr[g(f)] may be used to help optimize CsI:Tl in I-FPIs, and to improve the computational efficiency of Monte Carlo simulations of I-FPI performance.
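A minimal sketch of the Pr[g(f)] construction, with randomly perturbed Gaussian events standing in for measured single-x-ray images:

```python
import numpy as np

# Sketch of forming the spatial-frequency-dependent pulse height spectrum
# Pr[g(f)] from an ensemble of single-x-ray events, following the recipe
# above: sum each image to 1D, Fourier transform, then histogram the
# responses at each frequency. Random blobs stand in for measured events.
rng = np.random.default_rng(0)
n_events, n_pix, pitch = 500, 128, 0.01          # pitch in mm

x = (np.arange(n_pix) - n_pix / 2) * pitch
ensemble = []
for _ in range(n_events):
    gain = rng.gamma(shape=4.0, scale=250.0)     # random optical gain
    sigma = rng.uniform(0.02, 0.06)              # random blur (mm)
    img1d = gain * np.exp(-0.5 * (x / sigma)**2) # 1D-summed event image
    ensemble.append(np.abs(np.fft.rfft(img1d)))  # |g(f)| for this event

g_f = np.array(ensemble)                         # (events, frequencies)
f = np.fft.rfftfreq(n_pix, d=pitch)

# Pr[g(f)]: histogram of responses at each frequency bin
hist_at_f0, edges = np.histogram(g_f[:, 0], bins=32)
print("frequencies (lp/mm):", f[:4], "...")
print("pulse height histogram at f=0:", hist_at_f0[:8], "...")
```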
A method to determine spectral performance of image post-processing algorithms
Alex Salsali, Paul Picot, Jesse Tanguay, et al.
The MTF and DQE of an x-ray detector are important metrics of performance. They are generally measured following IEC guidelines, which require the use of raw (unprocessed) images. However, many detectors now incorporate processing algorithms that are deeply embedded in the imaging chain and may be difficult or impossible to disable, blurring the line between processed and unprocessed image data and calling into question the relevance of MTF and DQE testing. This study develops a framework to represent the spectral performance of linear and shift-invariant (LSI) digital-processing algorithms and examines their effects on MTF and DQE measurements with examples. Processing algorithms are represented as a cascade of operations where images are represented using an integrable impulse-sampled notation. This allows the use of Fourier transform theorems and relationships, which differ from a discrete Fourier transform notation, including a specific representation of signal and noise aliasing. It is concluded that: i) digital convolution of image data gives the same result, with the same aliasing artifacts, as a true convolution integral of presampling image data followed by sampling; ii) the slanted-edge method to measure MTF provides the presampling MTF even when processing algorithms operate on aliased image data; iii) the DQE is largely unaffected by LSI post-processing; however, spectral zero-crossings can suppress specific frequency content in both the MTF and DQE, and unsharp masking algorithms can decrease the DQE at low frequencies.
Spectral CT
Accurate physical density assessments from clinical spectral results
Matthew Hwang, Harold I. Litt, Peter B. Noël, et al.
Physical density assessments may provide valuable insights for a range of diagnostic purposes in abdominal, pulmonary, and breast imaging. These purposes include differentiating iso-attenuating cysts from lesions, aiding in functional assessment of viral pneumonia, and quantifying breast cancer risk. Physical density assessments also provide a natural and intuitive measurement of human tissue that may be useful for measuring global body mass distributions and comparing measurement techniques. In this work, we present a spectral physical density map generated from clinical dual-energy computed tomography (DECT) datasets. We utilized available DECT estimates of effective atomic number, monoenergetic attenuation, and electron density as inputs into the Alvarez-Macovski model and the relation between a material's physical and electron densities. To achieve higher-accuracy assessments, these underlying equations were parametrized and fit to published tissue composition and density data that had been supplemented with simulated iodine enhancements. Validating the fits with phantom experiments, we observed measurements within ±0.02 g/ml of their nominal values. Our assessments thus fall inside the margin of error for the ground-truth densities declared by the phantom manufacturer. Though we validated our density maps only on a dual-layer DECT implementation, the development and optimization of this new spectral result were independent of any particular spectral CT technology. Because of this independence and the high accuracy of the maps, we encourage future clinical trials testing the potential applications of this new result for diagnostic imaging.
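The physical/electron density relation invoked above can be made concrete as follows; the water-like soft-tissue composition is an illustrative assumption, whereas the paper fits parametrized models to published tissue data:

```python
# Sketch of the physical/electron density relation underpinning the map:
# rho_e = rho * N_A * sum_i(w_i * Z_i / A_i), so physical density follows
# from electron density once a (fitted) composition is assumed.
N_A = 6.022e23

def electrons_per_gram(comp):
    """comp: {element: (mass fraction w, Z, A)} -> electrons per gram."""
    return N_A * sum(w * Z / A for w, Z, A in comp.values())

water = {"H": (0.1119, 1, 1.008), "O": (0.8881, 8, 15.999)}
rho_e_water = 1.0 * electrons_per_gram(water)    # e/cm^3 at 1.0 g/ml

# Example: a DECT voxel with relative electron density 1.05, assumed to be
# soft tissue with a water-like composition (illustrative assumption):
soft_tissue = water
ed_rel = 1.05
rho = ed_rel * rho_e_water / electrons_per_gram(soft_tissue)
print(f"physical density ~= {rho:.3f} g/ml")
```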
Quantitative dual-energy imaging in the presence of metal implants using locally constrained model-based decomposition
Purpose: To mitigate the effects of metal artifacts in dual-energy (DE) CT imaging, we introduce a constrained optimization algorithm to enable simultaneous reconstruction-decomposition of three materials: two tissues of interest and the metal. Methods: The volume conservation principle and nonnegativity of volume fractions were incorporated as a pair of linear constraints into the Model-Based Material Decomposition (MBMD) algorithm. This enabled solving for three unknown material concentrations from DE projection data. A primal-dual Ordered Subsets Predictor-Corrector Interior-Point (OSPCIP) algorithm was derived to perform the optimization in the proposed constrained MBMD (CMBMD). To improve the computational efficiency and monotonicity of CMBMD, we investigated an approach where the constraint was applied locally onto a small region containing the metal (identified from a preliminary reconstruction) during initial iterations, followed by final iterations with the constraint applied globally. Validation studies involved simulations and test bench experiments to assess the quantitative accuracy of bone concentration measurements in the presence of fracture fixation hardware. In all studies, DE data was acquired using a kVp-switching protocol with a 60 kVp low-energy beam and a 140 kVp high-energy beam. The system geometry emulated an extremity cone-beam CT (CBCT). Simulation studies included: i) a cylindrical phantom (80 mm diameter) with a 30 mm long Ti screw and an insert of varying cortical bone concentrations (3–13%), and ii) a realistic tibia phantom created from patient CBCT data with Ti fixation hardware of increasing complexity. The test bench experiment involved a 100 mm diameter water bath containing four Ca inserts (6.5–39.1% bone concentration) and a Ti plate. Results: CMBMD substantially reduced artifacts in DE decompositions in the presence of metal. The sequential local-global constraint strategy resulted in more monotonic convergence than using a global constraint for all iterations. In the simulation studies, CMBMD achieved quantitative accuracy within ~12% of nominal bone concentration in areas adjacent to metal, and within ~5% in areas further away from the metal, compared to ~80% error for the two-material MBMD. In the test bench study, CMBMD generated ~40% reduction in the error of bone concentration estimates compared to MBMD for nominal insert concentrations of <250 mg/mL, and ~12% reduction for concentrations >250 mg/mL. Conclusion: The proposed CMBMD enables accurate DE decomposition in the presence of metal implants by incorporating the metal as an additional base material. The proposed method will be particularly useful in quantitative orthopedic imaging, which is often challenged by metal fracture fixation and joint replacement hardware.
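Under the stated principles, the constrained program can be written schematically as follows (a sketch of the constraint structure, not the authors' exact notation):

```latex
% Sketch of the constraint structure in CMBMD: per-voxel volume
% fractions v_k(j) of the three bases (tissue 1, tissue 2, metal)
% obey volume conservation and nonnegativity.
\begin{align}
  \hat{\mathbf{v}} &= \operatorname*{arg\,min}_{\mathbf{v}}
      \; L(\mathbf{y};\mathbf{v})
      && \text{(model-based dual-energy data-fit objective)} \\
  \text{subject to}\quad
      & \sum_{k=1}^{3} v_k(j) = 1,
      \qquad v_k(j) \ge 0
      && \text{for all voxels } j.
\end{align}
```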
Dual-contrast decomposition of dual-energy CT using convolutional neural networks
Liver lesion detection and characterization presents a longstanding challenge for radiologists. Since liver lesions are mainly characterized from information obtained at both arterial and portal venous circulatory phases, current hepatic computed tomography (CT) protocols involve intravenous contrast injection and subsequent multiple CT acquisitions. Because detection of lesions by CT often requires further investigation with MRI, improved CT differentiation capabilities are highly desirable. Recently developed imaging protocols for spectral photon-counting CT enable simultaneous mapping of arterial and portal-venous enhancements by injecting two different contrast agents sequentially, allowing robust pixel-to-pixel spatial alignment between the different contrast phases with a reduction of radiation exposure. Here we propose a method that quantitatively and reliably distinguishes between two contrast agents in a single dual-energy CT (DECT) acquisition by taking advantage of the unique abilities of modern self-learning algorithms for non-linear mapping, feature extraction, and feature representation. For this purpose, we designed a U-net architecture convolutional neural network (CNN). To overcome training data requirements, we utilized clinical DECT images to simulate dual-contrast spectral datasets. With the unique network architecture and training datasets, we demonstrate reliable dual-contrast quantifications from DECT. Our results demonstrate an ability to quantify densities of water, iodine, and gadolinium, with root mean square errors of 0.2 g/ml, 1.32 mg/ml, and 1.04 mg/ml, respectively. While some cross-material artifacts were observed, our model demonstrated high robustness to noise. With the rapid increase in DECT usage, our results pave the way for improved diagnostics and better patient outcomes with available hardware implementations.
Generation of virtual non-contrast (VNC) image from dual energy CT scans using deep learning
Dual energy CT acquisitions can produce two material basis images with high accuracy. From dual energy CT scans with contrast enhancement, one often attempts to generate the corresponding non-enhanced CT images for calcium scoring or for renal stone detection. However, to accurately generate the virtual non-contrast (VNC) images, three-material decomposition is needed. With dual energy CT images, one needs to introduce additional constraints, such as mass or volume conservation, to approximately perform three-material decomposition. In this work, we present a deep learning strategy to generate VNC images from DECT material basis images. In this strategy, the constraint needed for three-material decomposition is learned from the large amount of available training data. A supervised learning strategy requires matched training data, but this requirement is hard to satisfy in practice since the DECT and non-contrast CT images are acquired in two different CT scans, and mis-registration thus often plagues ordinary supervised learning strategies. In this paper, a new strategy was developed to enable training of the proposed VNC-Net without matched training data.
Investigation of the linearized spectral contribution in the simultaneous estimation of spectra and basis images in multispectral CT reconstruction
Simultaneous estimation of spectra and basis images in multispectral CT reconstruction employs a data model with unknown spectra. One approach is based on linearization of the data model, which leads to two linear terms, one with regard to the basis image and one with regard to the spectrum. The latter, i.e., the linearized matrix of spectral contribution, is new to the best of our knowledge and warrants investigation. In this work, we have characterized the conditioning of the linearized matrix of spectral contribution using singular value decomposition (SVD). We have also proposed an SVD-based preconditioner for the matrix and incorporated it in a constrained optimization problem for recovering the spectrum. The results showed improved conditioning of the matrix and accurate recovery of the spectrum by use of the SVD-based preconditioner.
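The role of an SVD-based preconditioner can be illustrated on a synthetic ill-conditioned matrix standing in for the linearized spectral-contribution matrix; the regularized right preconditioner below is one simple choice, not necessarily the one proposed:

```python
import numpy as np

# Sketch of SVD-based conditioning analysis and preconditioning for a
# linearized spectral-contribution matrix M (a random ill-conditioned
# stand-in here). A right preconditioner built from the SVD flattens the
# singular-value spectrum before the constrained spectrum recovery.
rng = np.random.default_rng(1)
U, _, Vt = np.linalg.svd(rng.normal(size=(40, 20)), full_matrices=False)
s = np.logspace(0, -6, 20)                 # strongly decaying spectrum
M = U @ np.diag(s) @ Vt

print(f"cond(M)   = {np.linalg.cond(M):.1e}")

# right preconditioner P = V diag(1/s) V^T; tiny singular values are
# regularized to avoid amplifying noise
s_reg = np.maximum(s, 1e-3 * s[0])
P = Vt.T @ np.diag(1.0 / s_reg) @ Vt
print(f"cond(M P) = {np.linalg.cond(M @ P):.1e}")
```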
Neural MLAA for PET-enabled dual-energy CT imaging
Siqi Li, Guobao Wang
The PET-enabled dual-energy CT method allows dual-energy CT imaging on PET/CT scanners without the need for a second x-ray CT scan. A 511 keV γ-ray attenuation image can be reconstructed from time-of-flight PET emission data using the maximum-likelihood attenuation and activity (MLAA) algorithm. However, the attenuation image reconstructed by standard MLAA is commonly noisy. To suppress noise, we propose a neural-network approach for MLAA reconstruction. The PET attenuation image is described as a function of kernelized neural networks with the flexibility of incorporating the available x-ray CT image as an anatomical prior. By applying optimization transfer, the complex optimization problem of the proposed neural MLAA reconstruction is solved by a modularized iterative algorithm, with each iteration decomposed into three steps: PET activity image update, attenuation image update, and neural-network learning using a weighted mean squared-error loss. The optimization algorithm is guaranteed to monotonically increase the data likelihood. The results from computer simulations demonstrated that the neural MLAA algorithm can achieve a significant improvement in γ-ray CT image quality as compared to other algorithms.
Breast Tomosynthesis
Strategies of improving lesion detection in wide-angle digital breast tomosynthesis (DBT) with angular dose distribution and detector design
Cancerous masses are more conspicuous in wide-angle digital breast tomosynthesis (DBT) due to better depth resolution and tissue separation, while the detection of subtle microcalcifications (MC) remains challenging. This study aims to provide guidance for a new DBT system design to enhance lesion detection through variable angular dose image acquisition and improved detector performance. Digital breast phantoms were generated, and projection images were simulated using the FDA VICTRE tool [1]. Simulated masses and clusters of MC were inserted at different locations of the digital breast phantom. Projection images were simulated with a 28 kVp W/Rh energy spectrum, a 200 μm thick amorphous selenium (a-Se) direct active matrix flat panel imager (AMFPI), and a 50° angular range with 25 views. The impact of a-Se detector performance, i.e., complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) versus AMFPI, with different electronic noise and pixel pitches, was investigated. Even and uneven angular dose distribution schemes (ADS) were designed to test the combined effect of image acquisition setting and detector performance on lesion detection. A 2D Filtered Channel observer (FCO) and a 3D Channelized Hotelling observer (CHO) were employed to evaluate lesion detectability under different scenarios. The results demonstrated that direct-conversion CMOS APS improves the detectability of small MC clusters by reducing FSM, improving DQE at lower dose, and improving the sharpness of reconstructed MC. The uneven dose distribution benefits the detection of MC without compromising mass detection in FBP-reconstructed volumes with a slice-thickness filter for wide-angle DBT.
Virtual clinical trial platforms for digital breast tomosynthesis: a local solution compared to the VICTRE platform
K. Houbrechts, L. Vancoillie, L. Cockmartin, et al.
Aim: To compare an in-house-developed hybrid simulation framework and the FDA's total simulation framework VICTRE in an exercise to simulate realistic DBT images. Methods: Three different set-ups were investigated in increasing order of difficulty: (1) a simple object insert simulated in homogeneous backgrounds; (2) the same simple test object in a breast phantom, to investigate the impact of a non-homogeneous background; (3) the simple test object replaced with clinically relevant lesions (spiculated and non-spiculated masses, calcification clusters), to test the frameworks in their entirety. In addition to a visual analysis, a quantitative comparison based on contrast and signal-difference-to-noise ratio (SDNR) measurements was performed. Results: Similar contrast and SDNR values in the 'for processing' images are obtained for a glandular test object when simulated with VICTRE and the Leuven platform, for both homogeneous and structured backgrounds (e.g., structured background; contrast: 0.13 and 0.12, SDNR: 8.99 and 8.56 for the Leuven platform and VICTRE, respectively). The reconstruction algorithms of the two frameworks differ, but feeding VICTRE images into an offline reconstruction tool, as in the Leuven framework, leads to similar results. In DBT reconstructed slices, the simulated mass models looked similar to the real lesions from which the model was derived. The simulated calcification clusters are more subtle when using the VICTRE framework, while all clusters appear realistic. This illustrates the need for full characterization of the methods. Conclusion: The step-by-step comparison of two very different frameworks was successful. Both frameworks are able to simulate objects with the same characteristics (contrast, SDNR, shape) and can create images with realistic lesions.
Development of magnification tomosynthesis for superior resolution in diagnostic mammography
Raymond J. Acciavatti, Trevor L. Vent, Chloe J. Choi, et al.
Our previous work showed that digital breast tomosynthesis (DBT) supports super-resolution (SR). Clinical systems are not yet designed to optimize SR; this can be demonstrated with a high-frequency line-resolution pattern. SR is achieved if frequencies are oriented laterally, but not if frequencies are oriented in the perpendicular direction; i.e., the posteroanterior (PA) direction. We are developing a next-generation tomosynthesis (NGT) prototype with new trajectories for the x-ray source. This system is being designed to optimize SR not just for screening but also for diagnostic mammography; specifically, for magnification DBT (M-DBT). SR is not achieved clinically in magnification mammography, since the acquisition is 2D. The aim of this study is to investigate SR in M-DBT and to analyze how anisotropies differ from screening DBT (S-DBT). We developed a theoretical model of a high-frequency sinusoidal test object. First, a conventional scanning motion (directed laterally) was simulated. In the PA direction, SR was not achieved in either S-DBT or M-DBT. Next, the scanning motion was angled relative to the lateral direction. This motion introduces submillimeter offsets in source positions in the PA direction. Theoretical modeling demonstrated that SR was then achieved in M-DBT, but not in S-DBT, in the PA direction. This work shows that, with the use of magnification, anisotropies in SR are more sensitive to small offsets in the source motion, leading to insights into how to design M-DBT systems.
Digital breast tomosynthesis denoising using deep convolutional neural network: effects of dose level of training target images
This paper investigates the training of deep convolutional neural networks (DCNNs) to denoise digital breast tomosynthesis (DBT) images. In our approach, the DCNN was trained with high dose (HD)/low dose (LD) image pairs, in which the HD image was used as reference to guide the DCNN to learn to reduce noise of a corresponding input LD image. In the current work, we studied the effect of the dose level of the HD target images on the effectiveness of the trained denoiser. We generated a set of simulated DBT data with different dose levels using CatSim, a dedicated x-ray imaging simulator, in combination with virtual anthropomorphic breast phantoms simulated by VICTRE. We found that a DCNN trained with higher dose level of HD target images led to less noisy denoised images. We also acquired DBT images of real physical phantoms containing simulated microcalcifications (MCs) for training and validation. The denoisers trained with either simulated or physical phantom data improved significantly (p < 0.0001) the contrast-to-noise ratio of MCs in the validation phantom images. In an independent test set of human subject DBTs, the MCs became more conspicuous, and the mass margins and spiculations were well preserved. The study showed that the denoising DCNN was robust in that the denoiser could be trained with either simulation or physical phantom data. Moreover, the denoiser trained with CatSim simulation data was directly applicable to human DBTs, allowing flexibility in terms of the training data preparation, especially the HD images.
Signal-to-noise ratio and contrast-to-noise ratio measurements for next generation tomosynthesis
David A. Martin, Trevor L. Vent, Chloe J. Choi, et al.
It is standard for the x-ray source in conventional digital breast tomosynthesis (DBT) acquisitions to move strictly along the chest wall of the patient. A prototype next-generation tomosynthesis (NGT) system has been developed that is capable of acquiring customized geometries with source motion parallel and perpendicular to the chest wall. One well-known consequence of acquiring projections with the x-ray source anterior to the chest wall is that a small volume of tissue adjacent to the chest wall is missed. Here we evaluate strategies in DBT to avoid missing tissue while improving overall image quality. Acquisition geometries tested in this study include the conventional (control), "T-shape," and "bowtie" geometries. To evaluate the impact of moving the x-ray source away from the chest wall, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured as a function of location within the reconstructed volume. Using simulations and physical experiments, the SNR and CNR were compared with conventional DBT. Simulations of two different phantoms were performed: a "tube" phantom and a "lattice" phantom. Experiments with uniform and textured phantoms were also conducted. While image quality was slightly reduced immediately adjacent to the chest wall, there was no missed tissue, and both the T-shape and bowtie geometries exhibited SNR and CNR improvement over the vast majority of the reconstruction volume; the overall result was an improvement in image quality with both the T-shape and bowtie geometries.
Phase Contrast and Fluorescence Imaging
Incorporating dark-field information in mesh-based x-ray phase imaging
Uttam Pyakurel, Desirée D'Moore, Pikting Cheung, et al.
X-ray phase imaging has found limited clinical use due to requirements on x-ray coherence that may not be easily translated to clinical practice. Instead, this work employs a conventional source to create structured illumination with a simple wire mesh. The system has been employed to produce high-contrast absorption images with simultaneous differential phase contrast images. In previous work we demonstrated accurate quantitative phase extraction. In this work, we have incorporated the dark-field information to successfully reveal additional structure in low-contrast objects.
Evaluation of edge-illumination and propagation-based x-ray phase contrast imaging methods for high resolution imaging application
The aim of this study is to investigate and reveal the potential of employing a direct-conversion amorphous selenium (a-Se) CMOS-based high-resolution x-ray detector in both propagation-based (PB) and edge-illumination (EI) x-ray phase contrast imaging (XPCi) techniques. Both PB-XPCi and EI-XPCi modalities are evaluated through a numerical model and are compared based on their contrast, edge-enhancement, visibility, and dose efficiency characteristics. It is demonstrated that the EI-XPCi configuration outperforms PB-XPCi when the same x-ray source and detector are used. After highlighting the strengths of the EI-XPCi system and reviewing today's XPCi technologies, absorption-mask grating fabrication is identified as the main challenge in upgrading EI-XPCi setups to higher-resolution detectors. Mammography is considered as a case study to elucidate the importance of employing a high-resolution EI-XPCi technique for microcalcification detection, through numerical simulation of a breast phantom.
Preclinical in vivo x-ray fluorescence computed tomography
Kian Shaker, Carmen Vogt, Yurika Katsu-Jimenéz, et al.
X-ray fluorescence computed tomography (XFCT) with nanoparticles (NPs) as contrast agents has reached technical maturity, allowing for in vivo preclinical imaging in the laboratory setting. We present the first in vivo longitudinal study with XFCT, in which mice were imaged 5 times each during an 8-week period. Imaging is performed with low radiation dose (<25 mGy) and high signal-to-background ratio for high-spatial-resolution imaging (200–400 µm) of molybdenum NP accumulations (down to ~50 µg/ml Mo). We further discuss our ongoing development of protein-coated NPs for actively targeting molecular markers (e.g., cancer), as well as potential clinical applications.
Single-kV multi-contrast x-ray imaging for gout detection and gout-pseudogout differentiation
Daniel H. Bushe, Ran Zhang, Xu Ji, et al.
Gout is the most prevalent inflammatory arthritis in men. Prompt diagnosis and early treatment of gout are crucial for preventing eventual functional impairment and reducing comorbidities. In this work, the quantitative material information provided by a multi-contrast x-ray (MCXR) imaging acquisition is leveraged to develop a rapid, non-invasive, and low-dose diagnostic method for gout detection and gout-pseudogout differentiation. This work establishes a theoretical foundation to demonstrate how a single-kV MCXR acquisition is capable of differentiating gout from pseudogout via a projection-domain two-material decomposition. Experimental results from a benchtop MCXR system are presented. The imaging performance of the proposed MCXR technique is compared to dual-energy radiography to further validate the method.
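A projection-domain two-material decomposition of the kind described reduces, per ray, to a small linear system once effective basis attenuation coefficients are calibrated; the coefficients below are illustrative placeholders:

```python
import numpy as np

# Sketch of a projection-domain two-material decomposition: with two
# spectral measurements and known effective attenuation coefficients of
# the two basis materials, the material pathlengths solve a 2x2 linear
# system per detector pixel. Coefficients are illustrative placeholders.
#            basis 1   basis 2      (cm^-1, at the two effective energies)
M = np.array([[0.40,    0.80],      # channel/energy 1
              [0.25,    0.30]])     # channel/energy 2

# log-normalized measurements L_j = -ln(I_j / I0_j) for one ray, built
# here from known pathlengths (1.5 cm, 0.2 cm) for verification
L = np.array([0.40 * 1.5 + 0.80 * 0.2,
              0.25 * 1.5 + 0.30 * 0.2])

t = np.linalg.solve(M, L)           # recovered pathlengths (cm)
print(f"basis pathlengths: {t[0]:.2f} cm, {t[1]:.2f} cm")
```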
CT: Reconstruction
DeepInterior: new pathway to address the interior tomographic reconstruction problem in CT via direct backprojecting divergent beam projection data
Chengzhu Zhang, Yinsheng Li, Ke Li, et al.
The interior tomographic reconstruction problem concerns reconstruction of a local region of interest (ROI) from projection data that are conformally collimated to illuminate only the target ROI. In the past decade or so, it has been proven that a stable solution to the interior tomographic reconstruction problem exists provided that a) the Tuy data sufficiency condition is satisfied and b) the image values of some pixels inside the ROI are known (though not all pixels, to avoid triviality), although no analytical reconstruction method seems to be available to reconstruct the image. In this work, we present a new pathway to address the interior tomographic reconstruction problem, referred to as DeepInterior. The new scheme consists of two steps: 1) direct backprojection of the acquired, fully truncated divergent-beam projection data to form a backprojection image B0, without the conventional differentiation operations; and 2) reconstruction of the true image from the blurred image B0 using a trained deep neural network architecture. We demonstrated that the trained DeepInterior is shift-invariant and can accurately reconstruct ROIs at arbitrary locations.
Manifold reconstruction of differences: a model-based iterative statistical estimation algorithm with a data-driven prior
Manifold learning using deep neural networks has been shown to be an effective tool for noise reduction in low-dose CT. We propose a new iterative CT reconstruction algorithm, called Manifold Reconstruction of Differences (MRoD), which combines physical models with a data-driven prior. The MRoD algorithm involves estimating a manifold component, which approximates features common to all patients, and a difference component, which has the freedom to fit the measured data. While the manifold captures typical patient features (e.g., healthy anatomy), the difference image highlights patient-specific elements (e.g., pathology). We present a framework that combines trained manifold-based modules with physical modules. In a simulation study using anthropomorphic lung data, we show that the MRoD algorithm can not only isolate differences between a particular patient and the typical distribution, but also provide significant noise reduction with relatively low bias.
Medical image reconstruction using compressible latent space invertible networks
Varun A. Kelkar, Sayantan Bhadra, Mark A. Anastasio
Many modern medical imaging systems are computed in nature and require computational reconstruction to form an image. When the acquired measurements are insufficient to uniquely determine the sought-after object, prior knowledge about the object must be utilized in order to successfully recover an image. An emerging line of research involves the use of deep generative models, such as generative adversarial networks (GANs), as priors in the image reconstruction procedure. However, when GANs are employed, reconstruction of images outside the range of the GAN leads to errors that result in realistic but false image estimates. To address this issue, an image reconstruction framework is proposed that is formulated in the latent space of an invertible generative model. A novel regularization strategy is introduced that takes advantage of the architecture of certain invertible neural networks (INNs). To evaluate the performance of the proposed method, numerical studies of reconstructing images from stylized MRI measurements are performed. The method outperforms classical reconstruction methods in terms of traditional image quality metrics and is comparable to a state-of-the-art adaptive GAN-based framework, while benefiting from a deterministic procedure and easier tuning of the regularization parameters.
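The latent-space formulation can be sketched with a toy linear "generator" in place of the invertible network (the actual method uses an INN and an architecture-specific regularizer):

```python
import numpy as np

# Toy sketch of image reconstruction in a generative model's latent space:
# minimize ||A G(z) - y||^2 + lam ||z||^2 over the latent code z by
# gradient descent. A fixed linear map stands in for the generator here.
rng = np.random.default_rng(2)
n, m, k = 64, 32, 16
G = rng.normal(size=(n, k)) / np.sqrt(k)      # stand-in "generator"
A = rng.normal(size=(m, n)) / np.sqrt(n)      # undersampled forward model

z_true = rng.normal(size=k)
y = A @ (G @ z_true)                          # stylized measurements

AG = A @ G
lam = 1e-6
step = 1.0 / (np.linalg.norm(AG, 2) ** 2 + lam)   # safe gradient step
z = np.zeros(k)
for _ in range(2000):
    z -= step * (AG.T @ (AG @ z - y) + lam * z)

x_hat, x_true = G @ z, G @ z_true             # reconstructed "image"
print(f"relative error: "
      f"{np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.2e}")
```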
tN-net: a spatiotemporal plus prior image-based convolutional neural network for 4D-CBCT reconstructions enhancement
Cone-Beam Computed Tomography (CBCT) usually suffers from motion-blurring artifacts when scanning the region of the thorax. Consequently, this may result in inaccuracy in localizing the treatment target and verifying the delivered dose in radiation therapy. Although 4D-CBCT reconstruction technology can alleviate the motion-blurring artifacts, it introduces severe streaking artifacts due to the under-sampled projections used for reconstruction. Aiming to improve the overall quality of 4D-CBCT images, we explored the performance of deep learning-based techniques on 4D-CBCT images. Inspired by the high correlation among 4D-CBCT images, we propose a spatiotemporal plus prior image-based CNN, a cascaded network of a spatial CNN and a temporal CNN. The spatial CNN follows an encoder-decoder architecture that utilizes a pair of channels, one prior image-based and one artifact-degraded, in the encoder stage for feature representation and fuses the feature maps for image restoration in the decoder stage. Next, three consecutive phases of images, each predicted individually by the spatial network (N-net), are stacked together for latent image restoration via the temporal CNN. In doing so, the temporal CNN learns the correlation among these images and constructs a residual map covering streaking artifacts and noise for further artifact reduction. Experimental results on both simulated and patient data indicated that the proposed method not only reduces the streaking artifacts but also restores the original anatomic features without introducing erroneous tomographic information.
Deep learning in image reconstruction: vulnerability under adversarial attacks and potential defense strategies
Chengzhu Zhang, Yinsheng Li, Guang-Hong Chen
In this work, we studied adversarial attacks and their corresponding defense strategies specifically for x-ray computed tomography image reconstruction tasks. When a small amount of imperceptible noise was added to the input image, this barely noticeable perturbation introduced artifactual false-positive structures into the output images of well-referenced, high-performance deep learning reconstruction methods. Since adversarial attacks often occur at a specific stage of the imaging chain, defense measures can be developed that incorporate the uncontaminated data in the imaging chain into the image reconstruction framework to eliminate the hazardous effects of adversarial attacks.
Photon Counting CT
Model-based inter- and intra-panel inconsistency correction for photon counting detector CT
Mang Feng, Xu Ji, Ran Zhang, et al.
Due to subtle variations of the energy response functions across photoconductive panels, photon counting detector CT is subject to severe banding artifacts. This work presents a physics-based method to correct these artifacts. It employs calibration objects of known thickness and composition to estimate the panel-specific response functions, which are used to convert the raw photon counting projection data of an arbitrary image object into acrylic- and aluminum-equivalent pathlengths. Experimental results show the method not only removes the banding artifacts but also addresses beam-hardening artifacts.
Assessment of multi-energy inter-pixel coincidence counters (MEICC) for photon counting detectors at the presence of both charge sharing and pulse pileup
Katsuyuki Taguchi, Jan S. Iwanczyk
The performance of photon counting detectors (PCDs) is limited by the spectral distortion caused by charge sharing and pulse pileup. Recently we proposed a new detector design called MEICC, for multi-energy inter-pixel coincidence counter. MEICC records the counting signals in the same way as current PCDs, while also recording the energy-dependent cross-talk with neighboring pixels. Previous studies have shown that MEICC decreases the effect of charge sharing significantly. Building on this success, the aim of this study was to assess the performance of MEICC when pulse pileup also causes spectral distortion (in addition to charge sharing). Monte Carlo simulations were performed to compare the performance of MEICC, the current PCD, and two different versions of real-time charge-sharing correction schemes. The study found that MEICC had the best performance when count rates were moderate or high.
Non-linear partial volume artifact reduction in spectral CT one step direct inversion reconstruction
Benjamin M. Rizzo, Emil Y. Sidky, Taly Gilat Schmidt
In a clinical setting, the accurate representation of pathologies and anatomical structures in CT reconstructions is important for diagnostic and therapeutic interventions. However, the presence of metal within the patient can cause significant streak artifacts within the reconstruction, possibly degrading or obscuring features within a region of interest to the imaging study. Spectral CT can be utilized to better resolve the attenuation properties of different materials and hence provide more accurate reconstructions. In addition, accurately modeling the polyenergetic transmission of x-ray photons, the noise properties of the data, and the undersampling caused by materials with high photon attenuation can address some of the artifacts caused by metal. Previous work using a one-step image reconstruction algorithm (cOSSCIR) demonstrated the potential to remove metal artifacts caused by beam hardening and photon starvation, but streaks due to the high gradient around the boundary of metal remained. Within this framework, an accurate model of the detector geometry can be leveraged to further address metal artifacts in the reconstructed image. In this study, we investigate errors caused by "partial-volume" effects using photon-counting CT phantom simulations. Noiseless 2D fan-beam data was first generated from a pelvic phantom with bilateral hip prostheses, where each detector element was modeled as having a finite aperture using multiple 'raylets.' The cOSSCIR framework was modified to incorporate the nonlinear partial volume (NLPV) model based on raylets. Basis images were reconstructed by cOSSCIR with and without the NLPV model. The simulation results demonstrated that cOSSCIR with the NLPV model was able to accurately reconstruct the phantom with metal, thereby removing metal artifacts caused by beam hardening and nonlinear partial volume effects.
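The nonlinearity that the raylet model captures is easy to demonstrate: averaging transmitted intensity over raylets differs from exponentiating the average pathlength when a metal edge partially covers a detector aperture. The values below are illustrative:

```python
import numpy as np

# Sketch of the nonlinear partial volume (NLPV) detector model: each
# element's reading averages the transmitted intensity over several
# "raylets" spanning its aperture, which differs from applying the
# exponential to the average pathlength at a high-gradient metal edge.
mu_metal = 3.0                          # cm^-1, illustrative
# pathlength through metal seen by 5 raylets across one detector element;
# the metal edge covers only part of the aperture:
t_raylets = np.array([0.0, 0.0, 0.5, 1.0, 1.0])

I_nlpv = np.mean(np.exp(-mu_metal * t_raylets))      # raylet (NLPV) model
I_single = np.exp(-mu_metal * t_raylets.mean())      # single-ray model

print(f"NLPV mean intensity:  {I_nlpv:.4f}")
print(f"single-ray intensity: {I_single:.4f}")       # underestimates signal
```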
Stability of a spectral calibration model with non-linear intensity correction for photon-counting detectors
This work extends a methodology for estimating the spectral response of photon-counting detectors (PCDs) from transmission measurements through an object of known composition and geometry. The extension incorporates a non-linear model that relates the photon counts incident on the PCD to the photon counts registered by the PCD. The model calibration is performed on an actual step-wedge phantom, and the calibrated model is used to predict x-ray transmission through a different material than that comprising the phantom. In addition, the stability of the determination of the model parameters is investigated, and this analysis is used to determine the regularization strength and basis materials for the test object.
Analytical model for pulse pileup in photon counting detectors with seminonparalyzable behavior
Photon counting x-ray detectors (PCDs) with energy discrimination capabilities provide spectral information, can eliminate electronic and Swank noise, and can image multiple contrast agents simultaneously with high resolution. However, at high flux on the detectors, pulse pileup leads to count loss and spectral distortion. Accurate description of the output count rate and spectral behavior of a PCD is crucial to the development of further applications, e.g., material decomposition. In this work, a fully analytical model of the pulse pileup spectrum and count statistics for a nonparalyzable detector with nonzero pulse length is proposed, which accounts for seminonparalyzable behavior, i.e., retriggering of counts by pulses incident during the dead time. We use a triangle to approximate a realistic pulse shape and recursively compute the output spectrum for different pulse pileup orders. A count-rate- and spectrum-dependent expression for the count statistics is further derived based on renewal theory. Our analytical model can then be used to predict material decomposition noise using the Cramér–Rao lower bound (CRLB). We compared our model predictions with Monte Carlo simulations. The results show that our model can accurately predict the count loss and spectral distortion resulting from pulse pileup and can be used to predict material decomposition noise over a large range of count rates.
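For orientation, the classic nonparalyzable dead-time relation, m = n/(1 + nτ), which the seminonparalyzable model refines with retriggering and spectral effects, already shows the scale of count loss (the τ below is illustrative):

```python
# Worked example of the classic nonparalyzable dead-time relation that the
# seminonparalyzable model extends: observed rate m = n / (1 + n*tau).
tau = 100e-9                 # dead time / pulse length (s), illustrative
for n in (1e5, 1e6, 1e7):    # true incident count rates (cps)
    m = n / (1.0 + n * tau)
    print(f"n = {n:.0e} cps -> recorded m = {m:.3e} cps "
          f"({100 * (n - m) / n:.1f}% count loss)")
```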
Deep learning based spectral distortion correction and decomposition for photon counting CT using calibration provided by an energy integrated detector
M. D. Holbrook, D. P. Clark, C. T. Badea
While the benefits of photon counting (PC) CT are significant, the performance of current PCDs is limited by physical effects distorting the spectral response. In this work, we examine a deep learning (DL) approach for spectral correction and material decomposition of PCCT data using multi-energy CT data sets acquired with an energy integrating detector (EID) for spectral calibration. We use a convolutional neural network (CNN) with a U-net structure and compare image domain and projection domain approaches to spectral correction and material decomposition. We trained our networks using noisy, spectrally distorted PCD projections as input data while the labels were derived from multi-energy EID data. For this study, we have scanned: 1) phantoms containing vials of iodine and calcium in water and 2) mice injected with an iodine-based liposomal contrast agent. Our results show that the image-based approach corrects best for spectral distortions and provides the lowest errors in material maps (RMSE in iodine: 0.34 mg/mL) compared with the projection-based approach (0.74 mg/mL) and image domain decomposition without correction (2.67 mg/mL). Both DL methods are, however, affected by a loss of spatial resolution (from 3 lp/mm in EID labels to ~2.2 lp/mm in corrected reconstructions). In summary, we demonstrate that multi-energy EID acquisitions can provide training data for DL-based spectral distortion correction. This data-driven correction does not require intimate knowledge of the spectral response or spectral distortions of the PCD.
X-ray Imaging: Dosimetry, Scatter, and Motion
iPhantom: an automated framework in generating personalized computational phantoms for organ-based radiation dosimetry
We propose an automated framework to generate 3D detailed person-specific computational phantoms directly from patient medical images. We investigate the feasibility of this framework in terms of accurately generating patient-specific phantoms and its clinical utility in estimating patient-specific organ dose for CT images. The proposed framework generates 3D volumetric phantoms with a comprehensive set of radiosensitive organs by fusing patient image data with prior anatomical knowledge from a library of computational phantoms in a two-stage approach. In the first stage, the framework segments a selected set of organs from patient medical images as anchors. In the second stage, conditioned on the segmented organs, the framework generates unsegmented anatomies through mappings between anchor and non-anchor organs learned from libraries of phantoms with rich anatomy. We applied this framework to clinical CT images and demonstrated its utility for patient-specific organ dosimetry. The results showed that the framework generates patient-specific phantoms in ~10 seconds and provides Monte Carlo-based organ dose estimation in ~30 seconds, with organ dose errors <10% for the majority of organs. The framework shows potential for large-scale and real-time clinical analysis, standardization, and optimization.
Comparison of skin dose calculated by the dose tracking system (DTS) with a beam angular correction factor and that calculated by Monte-Carlo
Skin dose depends on the incident beam angle, and corrections are needed for accurate estimation of the risk of deterministic effects to the skin. Angular-correction factors (ACFs) were calculated and incorporated into our skin-dose-tracking system (DTS), and the results were compared to Monte-Carlo simulations for a neuro-interventional procedure. To obtain the ACFs, EGSnrc Monte-Carlo (MC) software was used to calculate the dose averaged over 0.5, 1, 2, 3, 4 and 5 mm depth into the entrance surface of a water phantom at the center of the field, as a function of incident beam-to-skin angle from 90 to 10 degrees, for beam field sizes from 5 to 15 cm and beam energies from 60 to 120 kVp. These values were normalized to the incident primary dose to obtain the ACF. The angle of incidence at each mesh vertex in the beam on the surface of the DTS patient graphic was calculated as the complement of the angle between the normal vector and the vector of the intersecting ray from the tube focal spot; the skin dose at that vertex was calculated using the corresponding ACF. The skin-dose values with angular correction were compared to those calculated using MC with a matching voxelized phantom. The results show the ACF decreases with decreasing incident angle and skin thickness, and increases with increasing field size and kVp. Good agreement was obtained between the skin dose calculated by the angular-corrected DTS and MC, while use of the ACF allows the real-time performance of the DTS to be maintained.
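To make the per-vertex correction step concrete, here is a minimal Python sketch. All names (acf_table, acf_lookup, corrected_skin_dose) and the grid values are illustrative placeholders, not the authors' implementation; real ACF values would come from the EGSnrc simulations described above.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative ACF grid over (incident angle, field size, kVp); placeholder
# values only -- the real table is tabulated from Monte-Carlo simulations.
angles = np.arange(10.0, 100.0, 10.0)        # degrees (10-90)
fields = np.array([5.0, 10.0, 15.0])         # cm
kvps = np.array([60.0, 80.0, 100.0, 120.0])  # kVp
acf_table = np.ones((angles.size, fields.size, kvps.size))
acf_lookup = RegularGridInterpolator((angles, fields, kvps), acf_table)

def incidence_angle_deg(normal, focal_spot, vertex):
    """Beam-to-skin angle at a mesh vertex: the complement of the angle
    between the surface normal and the ray from the tube focal spot."""
    ray = (vertex - focal_spot) / np.linalg.norm(vertex - focal_spot)
    n = normal / np.linalg.norm(normal)
    theta = np.degrees(np.arccos(np.clip(abs(ray @ n), 0.0, 1.0)))
    return 90.0 - theta  # 90 degrees = perpendicular incidence

def corrected_skin_dose(primary_dose, angle_deg, field_cm, kvp):
    """Scale the incident primary dose by the interpolated ACF."""
    return primary_dose * float(acf_lookup([angle_deg, field_cm, kvp]))
```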
The effect of underlying bone on the beam angular correction in calculating the skin dose of the head in neuro-interventional imaging
EGSnrc Monte-Carlo software was used to calculate the “patient’s skin dose” as a function of incident beam angle for cylindrical water phantoms with underlying subcutaneous fat and various thicknesses of bone. Simulations were done for incident angles from 90 to 10 degrees, entrance beam sizes from 5 to 15 cm, and energies from 60 to 120 kVp. The depth-averaged scatter-plus-primary to incident-primary dose ratio decreases with decreasing skin incident angle and increasing underlying bone thickness, and increases with increasing field size and energy. Corrections for these factors improve the accuracy of skin-dose estimation for neuro-interventional procedures with our Dose-Tracking-System.
Preliminary in-vivo imaging evaluation of patient-specific scatter-corrected digital chest tomosynthesis
Christina R. Inscoe, Alex J. Billingsley, Connor Puett, et al.
Purpose: Scatter reduction remains a challenge for chest tomosynthesis. The purpose of this study was to validate a low-dose patient-specific method of scatter correction in a large animal model and implement the technique in a human imaging study in a population with known lung lesions. Method: The porcine and human subjects were imaged with an experimental stationary digital chest tomosynthesis system. Full-field projection images were acquired, along with projections acquired through a customized primary sampling device for sparse sampling of the primary signal. A primary sampling scatter correction (PSSC) algorithm was used to compute scatter from the primary beam information. The sparse scatter estimate was interpolated and used to correct projections prior to reconstruction. Reconstruction image quality was evaluated over multiple acquisitions in the animal subject to quantify the impact of lung volume discrepancies between scans. Results: Variations in lung volume between the full-field and primary-sample projection images, due to acquisitions during separate breath holds, induced mild variation in the computed scatter maps. Reconstruction slice images from scatter-corrected datasets including both similar and dissimilar breath holds were compared and found to have minimal differences. Initial human images are included. Conclusions: We have evaluated the prototype low-dose, patient-specific scatter correction in an in-vivo porcine model and incorporated it into a human imaging study. The PSSC technique was found to tolerate some lung volume variation between scans, as such variation has minimal impact on reconstruction image quality. A human imaging study has been initiated, and a reader comparison will determine clinical efficacy.
Image-domain cardiac motion compensation in multidirectional digital chest tomosynthesis
We investigate an image-based strategy to compensate for cardiac motion-induced artifacts in Digital Chest Tomosynthesis (DCT). We apply the compensation to conventional unidirectional vertical “↕” scan DCT and to a multidirectional circular trajectory “O” providing improved depth resolution. Propagation of heart motion into the lungs was simulated as a dynamic deformation. The studies investigated a range of motion propagation distances and scan times. Projection-domain retrospective gating was used to detect heart phases. Sparsely sampled reconstructions of each phase were deformably aligned to yield a motion-compensated image with reduced sampling artifacts. The proposed motion compensation mitigates artifacts and blurring in DCT images for both “↕” and “O” scan trajectories. Overall, the “O” orbit achieved the same or better nodule structural similarity index than the conventional “↕” orbit. Increasing the scan time improved the sampling of individual phase reconstructions.
Dual-energy: Optimization and Clinical Application
Theoretical optimization of dual-energy x-ray imaging of chronic obstructive pulmonary disease (COPD)
We propose two-dimensional (2D) dual-energy (DE) x-ray imaging of lung structure and function for the assessment of COPD, and investigate the resulting image quality theoretically using the human observer detectability index (d') as a figure of merit. We modeled the ability of human observers to detect ventilation defects in xenon-enhanced DE (XeDE) images and emphysema in unenhanced DE images. Our model of d' accounted for the extent of emphysematous destruction and functional impairment as a function of defect/lesion contrast, spatial resolution, x-ray scatter, quantum and background anatomical noise power spectrum (NPS), and the efficiency of human observers. The effect of x-ray spectrum and exposure allocation factor on d' was also explored. Our results suggest that detectability is maximized for exposure allocation factors that minimize the quantum NPS. The optimal tube voltage combination was found to be ~50/140 kV or 60/140 kV, depending on the task and patient, at an x-ray exposure equal to that of a standard chest x-ray. In 2D DE x-ray imaging of COPD, the detectability is primarily limited by low contrast, x-ray scatter, and anatomic noise, the latter two of which reduce the detectability of individual defects by ~30% and >90%, respectively.
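For reference, a commonly used non-prewhitening-with-eye-filter (NPWE) form of the detectability index, consistent with the factors the authors list, is sketched below; the paper's exact observer model may differ.

```latex
d'^2 \;=\;
\frac{\left[\iint |W(u,v)|^2\, T^2(u,v)\, E^2(u,v)\,\mathrm{d}u\,\mathrm{d}v\right]^2}
     {\iint |W(u,v)|^2\, T^2(u,v)\, E^4(u,v)\,
      \big[\mathrm{NPS}_Q(u,v)+\mathrm{NPS}_B(u,v)\big]\,\mathrm{d}u\,\mathrm{d}v}
```

Here W is the task (defect/lesion contrast) function, T the system MTF, E the eye filter, and NPS_Q and NPS_B the quantum and background anatomical noise power spectra.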
Single-shot quantitative x-ray imaging from simultaneous scatter and dual energy measurements: a simulation study
Conventional x-ray imaging struggles with quantitation due to several challenges, including scatter, beam hardening, and multiple overlaying materials. We propose a new single-shot quantitative imaging (SSQI) method, enabled by the combination of a primary modulator and a dual-layer detector, to quantify the area density of specific materials. The primary modulator enables scatter correction for the dual-layer detector, and the dual-layer detector enables beam hardening correction for the primary modulator. The dual-layer detector further provides motion-free dual-energy imaging from a single shot. When combined, the scatter-corrected dual-energy images can be used to create material-specific images, including soft tissue, bone, and iodine. A simulation study of SSQI was performed for chest x-ray imaging. We simulated projections of a digital chest phantom using a 120 kV spectrum, primary modulator, and dual-layer detector, with the addition of scatter and noise. Using the low-frequency property of scatter, a pre-calibrated material decomposition, and the known primary modulator pattern, we jointly recovered the scatter images in the dual-layer images and the material decomposition of the phantom. The resulting material decomposition accurately separated soft tissue from bone, reducing the RMSE in material-specific images by 66-84% as compared to no scatter correction. Through this simulation study, we have demonstrated the potential of SSQI for material quantification that is robust against scatter. The simplicity and broad applicability of SSQI give it the potential for widespread adoption, leading to improved quantitative imaging not only for chest x-ray but also for real-time image guidance and cone-beam CT.
Dual energy chest x-ray for improved COVID-19 detection using a dual-layer flat-panel detector: simulation and phantom studies
Linxi Shi, N. Robert Bennett, Minghui Lu, et al.
Chest radiography (CXR) plays an important role in triage, management, and monitoring of patients with COVID-19. In this work, we use a dual-layer (DL) flat-panel detector to perform artifact-free single-exposure DE CXR for COVID-19 detection. A simulation study was performed to generate DE CXRs from CT volumes of 13 patients diagnosed with COVID-19, which were compared with simulated conventional CXR by two chest radiologists. A phantom study was also conducted using an anthropomorphic chest phantom with synthetic opacifications. Improved visualization of opacities was observed for DE CXRs in both studies, indicating that the proposed DL detector could be a powerful new tool for COVID-19 diagnosis and patient management.
Assessment of reproducibility of volumetric breast density measurement using dual energy digital breast tomosynthesis
Breast density (BD) has been shown to be an independent risk factor for breast cancer, and it can be quantified by measuring the volumetric breast density (VBD). The longitudinal change in VBD has been used as a biomarker to assess the effect of chemoprevention drugs for breast cancer risk reduction. However, the reproducibility of VBD measurement is hindered by breast compression, which causes temporal changes in the distribution of breast tissues, making it difficult to assess small longitudinal changes in VBD. Dual-energy digital breast tomosynthesis (DE DBT) may provide more reproducible VBD measurements over different breast compressions. In this study, we aim to evaluate the reproducibility of VBD measurement using DE DBT. Digital breast phantoms were generated using a virtual clinical trial software (VICTRE) and were compressed multiple times into different clinically relevant compressed breast thicknesses to emulate the change in repeated imaging of the same breast. A compressed thickness difference of 10 mm exaggerates the variation seen in clinical practice. DE DBT projection images of the phantoms were generated by Monte-Carlo simulations and included quantum noise and scatter radiation. A previously developed scatter correction method and a DE material decomposition technique were applied to obtain the BD map and calculate VBD. Our results show that DE DBT can provide reproducible VBD measurements for breasts under different compressions, with an average absolute discrepancy of 2.3% ± 1.1% between two repeated measurements for all phantoms. The largest discrepancy is 3.9% for the extreme case with a 20 mm compressed thickness difference.
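The final VBD calculation from a decomposed BD map reduces, in essence, to a ratio of glandular volume to total breast volume. A minimal sketch (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def volumetric_breast_density(bd_map, voxel_volume_mm3, breast_mask):
    """VBD (%) from a per-voxel glandular-fraction map (bd_map in [0, 1])
    restricted to the segmented breast volume (boolean breast_mask)."""
    glandular_volume = np.sum(bd_map[breast_mask]) * voxel_volume_mm3
    breast_volume = np.count_nonzero(breast_mask) * voxel_volume_mm3
    return 100.0 * glandular_volume / breast_volume
```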
Effects of x-ray scatter in quantitative dual-energy imaging using dual-layer flat panel detectors
Purpose: We compare the effects of scatter on the accuracy of areal bone mineral density (BMD) measurements obtained using two flat-panel detector (FPD) dual-energy (DE) imaging configurations: a dual-kV acquisition and a dual-layer detector. Methods: Simulations of DE projection imaging were performed with realistic models of x-ray spectra, scatter, and detector response for dual-kV and dual-layer configurations. A digital body phantom with 4 cm Ca inserts in place of vertebrae (concentrations 50-400 mg/mL) was used. The dual-kV configuration involved an 80 kV low-energy (LE) and a 120 kV high-energy (HE) beam and a single-layer, 43 x 43 cm FPD with a 650 μm cesium iodide (CsI) scintillator. The dual-layer configuration involved a 120 kV beam and an FPD consisting of a 200 μm CsI layer (LE data), followed by a 1 mm Cu filter, and a 550 μm CsI layer (HE data). We investigated the effects of an anti-scatter grid (13:1 ratio) and scatter correction. For the correction, the sensitivity to scatter estimation error (varied ±10% of the true scatter distribution) was evaluated. Areal BMD was estimated from projection-domain DE decomposition. Results: In the gridless dual-kV setup, the scatter-to-primary ratio (SPR) was similar for the LE and HE projections, whereas in the gridless dual-layer setup, the SPR was ~26% higher in the LE channel (top CsI layer) than in the HE channel (bottom layer). Because of the resulting bias in LE measurements, the conventional projection-domain DE decomposition could not be directly applied to dual-layer data; this challenge persisted even in the presence of a grid. In contrast, DE decomposition of dual-kV data was possible both without and with the grid; the BMD error of the 400 mg/mL insert was -0.4 g/cm2 without the grid and +0.3 g/cm2 with the grid. The dual-layer FPD configuration required accurate scatter correction for DE decomposition: a -5% scatter estimation error resulted in a -0.1 g/cm2 BMD error for the 50 mg/mL insert and a -0.5 g/cm2 BMD error for the 400 mg/mL insert with a grid, compared to <0.1 g/cm2 for all inserts in a dual-kV setup with the same scatter estimation error. Conclusion: This comparative study of the quantitative performance of dual-layer and dual-kV FPD-based DE imaging indicates the need for accurate scatter correction in the dual-layer setup due to increased susceptibility to scatter errors in the LE channel.
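Under a linearizing (effective-energy) approximation, projection-domain DE decomposition reduces to solving a small linear system per ray; a sketch is given below. The attenuation coefficients are placeholders, and a real implementation also calibrates for beam hardening.

```python
import numpy as np

# Effective mass attenuation coefficients (cm^2/g) in the LE and HE
# channels; placeholder values, not calibrated quantities.
mu = np.array([[0.40, 0.20],   # [mu_Ca(LE), mu_water(LE)]
               [0.25, 0.18]])  # [mu_Ca(HE), mu_water(HE)]

def de_decompose(le_line_integrals, he_line_integrals):
    """Solve the 2x2 system per ray for areal densities (g/cm^2) of the
    two basis materials, given -log normalized LE/HE measurements."""
    l = np.stack([le_line_integrals, he_line_integrals])  # (2, n_rays)
    return np.linalg.solve(mu, l)  # row 0: Ca (for BMD), row 1: water
```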
Establishing a quality control protocol for dual-energy based contrast-enhanced digital mammography
L. Cockmartin, H. Bosmans, N. W. Marshall
This study explores candidate metrics for use in a vendor-neutral Quality Control (QC) protocol for dual-energy based ('spectral') contrast-enhanced digital mammography (DE-CEDM). The physical metrics were chosen for their anticipated link with clinical image quality. An additional criterion was that limiting values can be applied to these metrics. The following DE-CEDM characteristics were tested: (1) signal produced by test object inserts versus their iodine concentration for different breast tissue thicknesses and compositions, the possibility of iodine quantification, and iodine detectability thresholds; (2) normal breast tissue cancellation for different types of simulated tissue compositions; (3) artefacts; (4) image uniformity; (5) exposure time; and (6) mean glandular dose (MGD). The tests employed custom-made and commercially available phantoms based on breast-tissue-equivalent materials and iodine inserts. Initial test results on a specific DE-CEDM application (SenoBright HD, GE Healthcare) found a linear response to iodine concentration that was largely independent of background composition, with slopes in the range of 30-43 for iodine pixel value as a function of iodine concentration, making absolute iodine quantification possible. The residual signal due to normal tissue in the subtracted image was equivalent to less than 0.3 mg/cm2 of iodine. Artefacts were quantified and considered acceptable. For a 5 cm thick breast, the system operates at a total MGD of 1.92 mGy with a total exposure time of 4.25 s. The preliminary QC test results are ready for discussion in the broader community of radiologists, medical physicists and manufacturers.
Posters: Algorithm Development
Classification of round lesions in dual-energy FFDM using a convolutional neural network: simulation study
Brian Toner, Andrey Makeev, Marian Qian, et al.
The presence of round cystic and solid mass lesions identified at mammogram screenings accounts for a large number of recalls. These recalls can cause undue patient anxiety and increased healthcare costs. Since cystic masses are nearly always benign, accurate classification of these lesions would allow a significant reduction in recalls. This classification is very difficult using conventional mammogram screening data, but this study explores the possibility of performing the task on dual-energy full-field digital mammography (FFDM) data. Since clinical data of this type is not readily available, realistic simulated data with different sources of variation are used. With this data, a deep convolutional neural network (CNN) was trained and evaluated. It achieved an AUC of 0.980 and 42% specificity at the 99% sensitivity level. These promising results should motivate further development of such imaging systems.
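The operating-point metric quoted above (specificity at a fixed sensitivity level) can be read directly off the ROC curve. A minimal sketch, assuming scikit-learn and synthetic labels/scores:

```python
from sklearn.metrics import roc_curve, roc_auc_score

def specificity_at_sensitivity(y_true, y_score, sensitivity=0.99):
    """Best specificity achievable while keeping sensitivity (TPR) at or
    above the requested level, from the empirical ROC curve."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    feasible = tpr >= sensitivity
    return 1.0 - fpr[feasible].min()

# Usage: auc = roc_auc_score(y_true, y_score)
#        spec = specificity_at_sensitivity(y_true, y_score, 0.99)
```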
Statistically adaptive filtering for low signal correction in x-ray computed tomography
Low x-ray dose is desirable in x-ray computed tomographic (CT) imaging due to health concerns. But low dose comes at the cost of low-signal artifacts such as streaks and low-frequency bias in the reconstruction. As a result, low-signal correction is needed to help reduce artifacts while retaining relevant anatomical structures. Low signal can be encountered in cases where too few photons reach the detector to have confidence in the recorded data. X-ray photons, assumed to follow a Poisson distribution, have a signal-to-noise ratio proportional to the dose, with poorer SNR in low-signal areas. Electronic noise added by the data acquisition system further reduces the signal quality. In this paper we demonstrate a technique to combat low-signal artifacts through adaptive filtration. It entails statistics-based filtering of the uncorrected data, correcting the lower-signal areas more aggressively than the high-signal ones. We use local averages to decide how aggressive the filtering should be, and local standard deviation to decide how much detail preservation to apply. The implementation consists of a pre-correction step (a local linear minimum mean-squared-error correction), followed by a variance-stabilizing transform, and finally adaptive bilateral filtering. The coefficients of the bilateral filter are computed using local statistics. Results show improvements in terms of low-frequency bias, streaks, local average and standard deviation, modulation transfer function, and noise power spectrum.
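Building blocks for such a pipeline can be sketched as follows. This is not the authors' code: the Anscombe transform stands in for the variance-stabilizing step, and the adaptive-strength rule with its thresholds is an illustrative placeholder for the paper's statistics-based rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def anscombe(x):
    """Variance-stabilizing transform for (approximately) Poisson data."""
    return 2.0 * np.sqrt(np.maximum(x, 0.0) + 3.0 / 8.0)

def local_stats(x, size=7):
    """Local mean and standard deviation via box filtering; the std map
    can drive the detail-preservation (range) coefficient of a bilateral
    filter, the mean map its smoothing (domain) coefficient."""
    m = uniform_filter(x, size)
    v = uniform_filter(x * x, size) - m * m
    return m, np.sqrt(np.maximum(v, 0.0))

def adaptive_strength(local_mean, low=50.0, high=500.0):
    """Filter more aggressively where the local signal is low
    (illustrative thresholds; clipped linear ramp from 1 down to 0)."""
    return np.clip((high - local_mean) / (high - low), 0.0, 1.0)
```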
Accurate reconstruction of cross-section images from limited-angular-range data in human-limb imaging
In this work, we investigate and develop a method for cross-section image reconstruction from data collected over limited angular ranges in the context of human-limb imaging. We first design a convex optimization program with constraints on directional image total-variations (TVs), and then tailor a convex primal-dual algorithm, referred to as the directional TV (DTV) algorithm, for solving this program. Using the proposed DTV algorithm, we investigate image reconstructions in studies with data collected from numerical thigh phantoms over a limited angular range of 60°. The results of the numerical studies demonstrate that the proposed method can yield, from limited-angular-range data, cross-section images with significantly reduced artifacts relative to those observed in images obtained with existing algorithms.
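The general shape of such a DTV-constrained program is sketched below; this is an editor's sketch of the stated design, and the paper's exact formulation may differ.

```latex
\min_{\mathbf{u}\,\ge\,0}\;\tfrac{1}{2}\,\|\mathcal{H}\mathbf{u}-\mathbf{g}\|_2^2
\quad\text{s.t.}\quad
\|\mathcal{D}_x\mathbf{u}\|_1 \le t_x,\qquad
\|\mathcal{D}_y\mathbf{u}\|_1 \le t_y
```

Here g is the measured limited-angular-range sinogram, H the discrete x-ray transform, D_x and D_y directional finite-difference operators, and t_x, t_y the directional TV constraint bounds.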
Investigation of image reconstruction for digital breast tomosynthesis imaging by use of directional TV constraint
In digital breast tomosynthesis (DBT), in-plane images are of clinical utility, whereas images within transverse planes contain significant artifacts, largely because existing algorithms are not designed to accurately reconstruct images within transverse planes from extremely limited-angular-range data. In this work, we investigate and develop a convex primal-dual (CPD) algorithm that incorporates directional total-variation (DTV) constraints for yielding breast images within transverse planes with substantially reduced artifacts when images are reconstructed from DBT data. We have performed numerical studies demonstrating that the proposed algorithm has the potential to yield breast images with substantially reduced transverse-plane artifacts relative to those obtained with existing algorithms.
Improving presentation consistency of radiographic images using deep learning
In general X-ray radiography, inconsistency of brightness and contrast in the initial presentation is a common complaint from radiologists. Inconsistencies, which may result from variations in patient positioning, dose, protocol selection, and implants, can lead to additional workflow for technologists and radiologists to adjust the images. To tackle this challenge, posed by the conventional histogram-based display approach, an AI-Based Brightness Contrast (AI BC) algorithm is proposed to improve presentation consistency, using a residual neural network trained to classify X-ray images based on an N-by-M grid of brightness and contrast combinations. More than 30,000 unique images from sites in the US, Ireland, and Sweden, covering 31 anatomy/view combinations, were used for training. The model achieved an average test accuracy of 99.2% on a set of 2,700 images. The AI BC algorithm uses the model to classify and adjust images to achieve a reference look, and then further adjusts to achieve user preference. Quantitative evaluation using ROI-based metrics on a set of twelve wrist images showed a 53% reduction in mean pixel intensity variation and a 39% reduction in bone-tissue contrast variation. A study was performed with application specialists adjusting the image presentation of 30 images covering 3 anatomies (foot, abdomen, and knee). On average, the application specialists took ~20 minutes to adjust the conventional set, whereas they took ~10 minutes for the AI BC set. The proposed approach demonstrates the feasibility of using deep learning to reduce inconsistency in initial display presentation and improve user workflow.
A feasibility study of data redundancy based on-line geometric calibration without dedicated phantom on Varian OBI CBCT system
Changhwan Kim, Chiyoung Jeong, Min-jae Park, et al.
Since the conventional method of geometric calibration not only requires an extra scan but is also vulnerable to the dimensional accuracy of the dedicated phantom, we propose a new geometric calibration scheme based on data redundancy that uses projection data of any arbitrary patient already scanned, focusing mainly on the geometry of Varian's OBI system. Using fan-beam based data redundancy, which can be applied to cone-beam projection data of either full-fan or half-fan geometry, one can detect misalignments of the geometric parameters and estimate their values. Simulation studies using the XCAT numerical phantom were conducted to verify that the data-redundancy-based method can detect misaligned geometric parameters and recover their true values.
Effects of smartphone sensor characteristics on dermatoscopic images: a simulation study
Varun Vasudev, Lode De Paepe, Andrew D. A. Maidment, et al.
Dermatoscopes are commonly used to evaluate skin lesions. A wide array of medical imaging devices is entering the market, some of which allow patients to analyze skin lesions themselves. These devices usually come in the form of smartphone attachments that leverage smartphone optics to acquire images, and in some cases even give a preliminary diagnosis. While these attachments are useful, smartphone sensors are small, which limits the extent and detail of captured images compared to images from a professional camera. Our work focuses on the information lost due to the known limitations of smartphone sensors, and its effect on image appearance. This analysis has been performed using a virtual simulation pipeline for dermatology, VCT-Derma, which contains simulated skin and dermatoscope models, among others. We discuss the sensor parameters necessary to adapt the dermatoscope model to various sensors and, with the help of the skin model and a colorgauge chart, obtain images from the simulated sensors. Results indicate differences in image clarity as well as observed color fidelity between the reference dermatoscope and smartphone sensors. Results of imaging the skin model show improved feature clarity in the reference device image as compared to the two smartphone sensors. Results of imaging the colorgauge chart show average ΔE2000 values of ~12.5 across all color patches for the reference device and smartphone sensors. Under the same lighting, smartphone sensors showed areas with saturated pixels, as opposed to the reference device. Research is ongoing on the influence of multispectral illumination on these sensors.
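The ΔE2000 (CIEDE2000) color-difference metric quoted above is available in common image libraries; a minimal sketch using scikit-image (assuming float RGB patches in [0, 1]; not the authors' pipeline):

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_delta_e2000(ref_patches_rgb, test_patches_rgb):
    """Average CIEDE2000 difference between reference and captured
    color-chart patches, computed in CIELAB space."""
    lab_ref = rgb2lab(ref_patches_rgb)
    lab_test = rgb2lab(test_patches_rgb)
    return float(np.mean(deltaE_ciede2000(lab_ref, lab_test)))
```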
Quantifying the importance of spatial anatomical context in cadaveric, non-contrast enhanced organ segmentation
Volumetric segmentation using deep learning is a computationally expensive task, but one with great utility for medical image analysis in radiology. Deep learning uses the process of convolution to calculate voxel-level relationships and predict class membership of each voxel (i.e., segmentation). We hypothesize that (1) kidney segmentation in cadaveric, non-contrast-enhanced CT images is possible; (2) a volumetric UNet (VNet) architecture will outperform a 2D UNet architecture in kidney segmentation; and (3) increasing the anatomically relevant information present within the volumes will increase the system's ability to capture the relationships between anatomical structures, enabling more accurate segmentation. In this project we utilized a difficult dataset (cadaveric, non-contrast-enhanced CT data) to determine how much anatomical information is necessary to obtain a quantifiable segmentation with the lowest Hausdorff distance and highest Dice coefficient values between the output and the ground-truth mesh. We used a 70/20/10% training/testing/validation split with a total N of 30 specimens. To test the anatomical context required to properly segment structures, we evaluated and compared the performance of four separate segmentation models: (1) a 2D UNet model that pulled random cross sections from the volumes for training; (2) a 2D UNet model whose training samples were augmented with 3D perturbations for more anatomical context; (3) a 3D VNet with volumetric patching and a padded border to protect against edge artifacts; and (4) a 3D VNet with volumetric patching and image compression by ½ the volume, with the padded border. Our results show that as anatomical context in the image or volume increases, segmentation performance also improves.
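The two evaluation metrics named above have compact standard definitions; a minimal Python sketch (boolean masks for Dice, point sets such as mesh vertices for Hausdorff):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, truth):
    """Volumetric Dice overlap between two boolean masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets, taken as the
    maximum of the two directed distances."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```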
A calibration method for conformal x-ray transmission imaging system
The X-ray source using ZnO nanowires as the cold cathode is the first to densely integrate a large number of X-ray sources into a flat-panel device. Compared with traditional ray sources, X-ray source devices based on ZnO nanowires are small, respond instantaneously, and can be electronically controlled. We propose a new conformal transmission imaging system that uses this flat-panel ray source to irradiate objects from multiple points to acquire extremely low-dose projections. To obtain the final image, the original projection must be processed with an image restoration method to remove the projection aliasing caused by multiple point sources. Because the image restoration process is iterative, errors caused by the geometric parameters grow as the iterations progress, resulting in distortion of the final image. Therefore, geometric correction of the imaging system is required. The new imaging system has a unique geometric relationship, and based on this relationship, this paper proposes a geometric correction method for the new imaging scheme. In this method, the angle parameters of the new imaging system are estimated using a single point volume model. It requires only one projection of the phantom and derives analytical formulas for the three angle errors, which avoids falling into a local minimum. We present simulation results for the calibration method, which demonstrate its validity and accuracy.
A hybrid domain approach to reduce streak artifacts of sparse view CT image via convolutional neural network
In this study, we propose a method to reduce streak artifacts in sparse-view CT images via a convolutional neural network (CNN). The main idea of the proposed method is to utilize both image and sinogram domain data for CNN training. To generate datasets, projection data were acquired from 512 (128) views using Siddon's ray-driven algorithm, and full (sparse) view CT images were reconstructed by filtered back-projection with a Ram-Lak filter. We first trained the U-net based CNN_img, which was designed to reduce the streak artifacts of sparse-view CT in the image domain. Then, the output images of CNN_img were used as prior images to construct a pseudo full-view sinogram. Before upsampling, the sparse-view sinogram was normalized by the prior images, and linear interpolation was then employed to estimate the views missing relative to the full-view sinogram. The upsampled data were denormalized using the prior images. To reduce the residual errors in the pseudo full-view sinogram data, we trained CNN_hybrid with a residual encoder-decoder CNN, which is known to be effective in reducing residual errors while preserving structural details. To increase the learning efficiency, the dynamic range of the pseudo full-view sinogram data was converted via an exponential function. The results show that CNN_hybrid provides better streak artifact reduction than CNN_img, which is also confirmed by quantitative assessment.
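The normalize-interpolate-denormalize step can be sketched compactly. A minimal version, assuming the prior images have been forward-projected to a full-view prior sinogram and that sparse_views holds the view indices (names illustrative):

```python
import numpy as np
from scipy.interpolate import interp1d

def pseudo_full_view(sparse_sino, prior_sino, sparse_views, n_full_views,
                     eps=1e-6):
    """Divide the sparse-view sinogram by the prior sinogram at the same
    views, linearly interpolate the smooth ratio over all views, then
    multiply the prior back in (denormalize)."""
    ratio = sparse_sino / (prior_sino[sparse_views] + eps)
    interp = interp1d(sparse_views, ratio, axis=0, kind="linear",
                      fill_value="extrapolate")
    full_views = np.arange(n_full_views)
    return interp(full_views) * prior_sino
```

Normalizing by the prior first makes the interpolated quantity nearly flat across views, so simple linear interpolation introduces far less error than interpolating the raw sinogram.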
Posters: Computed Tomography
Optimization of CT angiography using physiologically-informed computational plaques, dynamic XCAT phantoms, and physics-based CT simulation
Thomas J. Sauer, Ehsan Abadi, W. Paul Segars, et al.
Cardiovascular disease is a leading cause of death worldwide. Among all cardiovascular diseases, coronary artery disease (CAD) is a major contributor to mortality, accounting for approximately 10% of deaths per annum. Imaging techniques such as computed tomography (CT) can provide information on the location, composition, and severity of disease within the coronary arteries. However, cardiac motion makes protocol optimization for coronary CT angiography (CCTA) image acquisition and reconstruction uniquely challenging. The clinical trials necessary to optimize these protocols require a cohort of diverse patients and clinical staff, and are time-consuming. Consequently, there is a need for safer, less expensive, and faster alternatives. A virtual imaging trial (VIT) enables studies on the optimization of CCTA protocol parameters in a controlled (simulated) environment. In this work, we developed a framework to integrate computational models of CAD with computational, anthropomorphic phantoms (XCAT) and a CT simulator (DukeSim) so that VITs can be performed with these tools to optimize CCTA protocols for clinical tasks. To demonstrate the framework, we included a calcification in the right coronary artery of a 50th percentile BMI male XCAT phantom. The calcification was subjected to cardiac motion (60, 90, 120 BPM) and underwent simulated imaging with and without contrast enhancement (simulating a 370 mgI/mL injection), with reconstruction via filtered back-projection (FBP) with two kernels (B26f, B70f). Measurements of the right coronary artery indicated that the small calcification was readily detected below 90 BPM, with no iodinated contrast media, when reconstructed via FBP with the B70f kernel. As demonstrated, this new VIT framework can provide an efficient means for CCTA protocol optimization.
Synthesizing high-resolution CT from low-resolution CT using self-learning
We propose a learning-based method to synthesize high-resolution (HR) CT images from low-resolution (LR) CT images. A self-super-resolution framework with a cycle-consistent generative adversarial network (CycleGAN) is proposed. As super-resolution is an ill-posed problem, recent methods rely on external/training atlases to learn the transform from LR images to HR images, which are often not available for CT imaging with high resolution in the slice-thickness direction. To circumvent the lack of HR training data along the z-axis, the network learns the mapping from LR 2D transverse-plane slices to HR 2D transverse-plane slices via CycleGAN, and infers HR 2D sagittal- and coronal-plane slices by feeding these sagittal and coronal slices into the trained CycleGAN. The 3D HR CT image is then reconstructed by collecting these HR 2D sagittal and coronal slices and performing image fusion. In addition, to force the ill-posed LR-to-HR mapping to be close to a one-to-one mapping, CycleGAN is used to model the mapping. To force the network to focus on learning the difference between LR and HR images, a residual network is integrated into the CycleGAN. To evaluate the proposed method, we retrospectively investigated 20 brain datasets. For each dataset, the original CT image volume served as ground truth and training target. Low-resolution CT volumes were simulated by downsampling the original CT images in the slice-thickness direction. The MAE is 17.9±2.9 HU and 25.4±3.7 HU for our results at downsampling factors of 3 and 5, respectively. The proposed method has great potential for improving image resolution for low-pitch scans without hardware modification.
GAN-based sinogram completion for slow triple kVp switching CT
The feasibility of acquiring multi-energy CT data through slow modulation of the kVp, as an alternative to photon-counting detectors (PCDs), is currently under exploration. A low kVp-switching rate can be enabled with a conventional CT system but raises challenges due to missing sinogram views. Our previous work used a CNN-based method for sinogram completion, generating fully sampled images from undersampled sinograms and providing acceptable image quality at a 22°/kVp switching rate. The purpose of this study was to investigate a GAN-based spectral sinogram completion method for enabling a lower kVp switching rate. A Pix2Pix GAN model was implemented and trained on paired undersampled sinograms of 45° or 120° projections/kVp and the corresponding fully sampled sinograms. The completed data were subsequently used to perform sinogram-domain material decomposition. Our results on a simulated FORBILD abdomen phantom dataset showed that the GAN-based method can lower the kVp switching rate to 45° projections/kVp. The proposed GAN-based sinogram completion method facilitates slow-kVp-switching acquisitions and thus further relaxes hardware requirements.
Estimation of in vivo noise in clinical CT images: comparison and validation of three different methods against ensemble noise gold-standard
Francesco Ria, Taylor B. Smith, Ehsan Abadi, et al.
Image quality estimation is crucial in modern CT, with noise magnitude playing a key role. Several methods have been proposed to estimate noise surrogates in vivo. This study aimed to ascertain the accuracy of three different noise-magnitude estimation methods, using ensemble noise as the ground truth. The most accurate way to assess ensemble noise is to scan a patient repeatedly and assess the noise for each pixel across the ensemble of images; this process is not ethically feasible with actual patients. In this study, we surmounted this impasse using Virtual Imaging Trials (VITs), which simulate clinical scenarios using computer-based simulations. XCAT phantoms were imaged 47 times using a scanner-specific simulator (DukeSim) and reconstructed with filtered back-projection (FBP) and iterative (IR) algorithms. Noise magnitudes were calculated in lung (ROIn), soft tissue (GNI), and the air surrounding the patient (AIRn), applying different HU thresholds and techniques. The results were compared with the ensemble noise magnitude within soft tissue (En). For the FBP-reconstructed images, median En was 30.6 HU; median ROIn was 46.6 HU (+52%), median GNI was 40.1 HU (+31%), and median AIRn was 25.1 HU (-18%). For the IR images, median En was 19.5 HU; median ROIn was 31.2 HU (+60%), median GNI was 25.1 HU (+29%), and median AIRn was 18.8 HU (-4%). Compared to ensemble noise, GNI and ROIn overestimate the tissue noise, while AIRn underestimates it. Air noise was least representative of variations in tissue noise across imaging conditions. These differences may be applied as adjustment or calibration factors to better represent clinical results.
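The contrast between the ground truth and a single-image surrogate can be sketched as follows. The GNI-style function below is an editor's sketch with illustrative HU window and kernel size, not the paper's exact recipe:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ensemble_noise(repeat_stack, mask):
    """Ground truth: per-pixel std across N repeated acquisitions
    (repeat_stack shape (n_repeats, H, W)), summarized over a mask."""
    pixel_std = repeat_stack.std(axis=0, ddof=1)
    return float(np.median(pixel_std[mask]))

def global_noise_index(image, hu_low=-50, hu_high=150, size=9):
    """Single-image surrogate: mode of the local-std histogram within a
    soft-tissue HU window (window and kernel size are illustrative)."""
    m = uniform_filter(image, size)
    v = uniform_filter(image * image, size) - m * m
    std_map = np.sqrt(np.maximum(v, 0.0))
    soft = (image > hu_low) & (image < hu_high)
    hist, edges = np.histogram(std_map[soft], bins=100)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])
```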
Ultra-high-resolution CT and multi-resolution visualization of calcified coronary artery plaques: an ex vivo human study
Thomas Wesley Holmes, Amir Pourmorteza
Background: With new advances in CT energy-integrating (EID) and photon-counting (PCD) detector technology, the spatial resolution of CT has significantly improved, from 0.50 mm to less than 0.25 mm isotropic. Our goal was to explore the image quality improvements associated with ultra-high-resolution (UHR) CT, specifically in imaging and characterization of atherosclerotic plaques. Methods: We imaged seven excised human hearts, with a known history of atherosclerosis, on an EID-CT scanner equipped with a UHR comb. The scans were performed at 120 kVp and 211 mAs, with 1-sec gantry rotation time. The images were reconstructed twice: first with a standard-resolution (SR) reconstruction kernel (Uq36u) at 0.5x0.5x3.0 mm3 voxel size, and second with a UHR kernel (Uq77u) at 0.2x0.2x1.5 mm3. We measured calcium volume, Agatston score, and the number of lesions with dense calcification (HU>1000). We propose a multi-resolution visualization scheme in which smooth soft-tissue features are derived from the SR image and the sharp hard-plaque features are blended in from the UHR images. Results: We detected a total of 105 lesions in the seven hearts. The UHR images had significantly reduced blooming artifacts, leading to higher average HU values and smaller lesion volumes. The calcium volume for UHR was 67% of the SR volume (r2 = 0.81). Agatston scores were also systematically lower in UHR, at 41% of SR (r2 = 0.81). In addition, we found 102 lesions with dense calcification in UHR compared to 4 in SR. Conclusion: UHR-CT can significantly reduce blooming artifacts and partial volume effects in atherosclerotic plaque imaging.
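For readers unfamiliar with the Agatston metric used above, a minimal sketch of the standard scoring rule follows (clinically it is defined on 3 mm axial slices; minimum-area and threshold values here follow the common convention):

```python
import numpy as np
from scipy import ndimage

def agatston_score(slices, pixel_area_mm2, hu_threshold=130.0,
                   min_area_mm2=1.0):
    """Per slice, each connected calcified region scores its area (mm^2)
    times a density weight from the region's peak HU:
    1: 130-199, 2: 200-299, 3: 300-399, 4: >=400."""
    total = 0.0
    for img in slices:
        labels, n = ndimage.label(img >= hu_threshold)
        for i in range(1, n + 1):
            region = labels == i
            area = region.sum() * pixel_area_mm2
            if area < min_area_mm2:
                continue  # ignore sub-millimeter specks (noise)
            weight = 1 + min(int(img[region].max() // 100) - 1, 3)
            total += area * weight
    return total
```

The score's dependence on peak HU and thresholded area is exactly why the reduced blooming of UHR reconstructions shifts Agatston scores downward, as reported above.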
Convolutional neural network based metal artifact reduction method in dental CT image
In dental CT, the presence of metal objects introduces various artifacts caused by photon starvation and beam hardening. Although several metal artifact reduction methods have been proposed, they still have limitations in how well they reduce the artifacts. In this work, we propose a method to reduce metal artifacts with a convolutional neural network (CNN). The proposed method comprises two steps. In STEP 1, we acquire a more accurate prior image, to be used in the normalized metal artifact reduction (NMAR) technique, through the CNN; the CNN training reduces the metal artifacts in the STEP 1 output image, providing more accurate prior images. In STEP 2, NMAR is conducted with the prior image acquired from the CNN result. To validate the proposed method, we used dental CT images with and without metal objects and evaluated whether the proposed method could significantly reduce metal artifacts compared to the NMAR method alone.
Quantitative stability with patient size, dose, and kVp in spectral detector CT for pediatric imaging
Pediatric imaging utilizes the quantitative capabilities of CT to guide clinical decision making and treatment, but image quality is heavily affected by variation in patient sizes and the need for lower-dose scans. Dual-energy CT generates spectral results such as virtual monoenergetic images (VMI), electron density (ED), and effective atomic number (Zeff) that enhance material characterization and quantification. Though it has not been extensively explored, application of DECT to pediatric imaging may allow increased stability of quantitative measures with varying patient size, dose, and tube voltage. To examine these dependencies, a phantom with tissue-mimicking inserts was scanned with a dual-layer spectral detector CT using different extension rings to simulate different pediatric patient sizes. Each size configuration was subsequently scanned at CTDIvol of 9, 6, and 3 mGy with 100 and 120 kVp to obtain conventional CT and spectral results. Overall, both VMI and ED values were accurately quantified. VMI at 70 keV and 9 mGy demonstrated smaller differences across patient size and kVp compared to conventional images. VMI also showed low dose dependency relative to 9 mGy. Similarly, ED and Zeff showed low dependency on patient size, dose, and kVp and maintained material differentiability. The stability of these spectral results with different patient sizes, doses, and tube voltages illustrates the potential of spectral detector CT for pediatric patients, not only to improve the consistency of quantitative measures across patient sizes but also to allow lower doses without impairing quantification.
Assessing the condition of spectral channelization (energy binning) in photon-counting CT: a singular value decomposition based approach
To further understand the fundamentals of photon-counting spectral CT and provide guidelines for its design and implementation, we propose a singular value decomposition (SVD) based approach to assess the conditioning of spectral channelization and its impact on the performance of spectral imaging under both ideal and realistic detector spectral response. The study covers two- and three-material decomposition based spectral imaging (material-specific imaging). A specially designed phantom that mimics the soft and bony tissues in the head is used to reveal the relationship between the conditioning of spectral channelization and imaging performance (noise and contrast-to-noise ratio). The study also covers cases with up to 50% spectral overlapping and gapping. Under ideal detector spectral response, the condition number of spectral channelization reaches its minimum when no overlapping occurs between spectral channels, and increases with increasing spectral overlap; the same holds when gapping occurs. The distortion of the detector's spectral response due to charge sharing and fluorescent escape inevitably leads to spectral overlapping and thus degrades the conditioning of spectral channelization. With increasing condition number, the noise increases and the contrast-to-noise ratio decreases. The proposed approach, especially its coverage of the situation wherein gapping occurs in spectral channelization, is novel and may provide insight into the fundamentals, and guidelines for the implementation, of spectral imaging in photon-counting and energy-integrating CT, as well as other x-ray imaging modalities such as radiography and tomosynthesis.
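The core SVD computation is compact; a minimal sketch with placeholder sensitivity matrices (rows = energy bins, columns = basis materials) illustrating how heavier spectral overlap between bins raises the condition number:

```python
import numpy as np

def channelization_condition(sensitivity):
    """Condition number of the spectral channelization matrix, computed
    from its singular values (largest over smallest)."""
    s = np.linalg.svd(sensitivity, compute_uv=False)
    return s.max() / s.min()

# Illustrative 2-bin / 2-material matrices (placeholder values):
well_separated = np.array([[1.0, 0.2],
                           [0.3, 1.0]])
overlapping = np.array([[1.0, 0.8],
                        [0.9, 1.0]])
print(channelization_condition(well_separated))  # modest condition number
print(channelization_condition(overlapping))     # much larger: ill-conditioned
```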
Parameter-free Bayesian reconstruction for dual energy computed tomography
Dual-energy CT (DECT) expands the applications of CT imaging through its capability to acquire two datasets, one at high and the other at low energy, and produce decomposed material images of the scanned objects. Bayesian theory applied to statistical DECT reconstruction has shown great potential for producing accurate decomposed material-fraction images directly from projection measurements. It provides a natural framework for including various kinds of prior information for improved image reconstruction, but its hyperparameter is typically selected by trial and error. To eliminate this cumbersome process, in this work we propose a parameter-free Bayesian reconstruction algorithm for DECT (PfBR-DE). In our approach, the physical meaning of the hyperparameter can be interpreted as the ratio of the data variance α to the prior tolerance σ, obtained by formulating the probability distribution functions of the data fidelity and prior expectation. With an alternating optimization scheme, the data variance, prior tolerance, and decomposed material images can be jointly estimated. Experimental results with an abdomen phantom demonstrate that the PfBR-DE method can obtain decomposed material images quantitatively comparable to conventional methods without a freely adjustable hyperparameter.
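A sketch of the stated interpretation in MAP form (the exact prior R and scaling constants are assumptions, not taken from the paper): with a Gaussian data model of variance α and a prior of tolerance σ,

```latex
p(\mathbf{x}\mid\mathbf{y}) \;\propto\;
\exp\!\left(-\frac{\|A\mathbf{x}-\mathbf{y}\|_2^2}{2\alpha}\right)
\exp\!\left(-\frac{R(\mathbf{x})}{2\sigma}\right)
\;\;\Longrightarrow\;\;
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
\|A\mathbf{x}-\mathbf{y}\|_2^2 \;+\; \frac{\alpha}{\sigma}\,R(\mathbf{x})
```

so the effective regularization weight is the ratio α/σ, which PfBR-DE estimates jointly with the images rather than tuning by hand.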
Limited-angle CT reconstruction via data-driven deep neural network
Dobin Yim, Burnyoung Kim, Seungwan Lee
The increasing use of computed tomography (CT) in modern medicine has raised radiation dose concerns. Strategies for low-dose CT imaging are necessary in order to prevent side effects. Among these strategies, limited-angle CT scans are used to reduce radiation dose. However, limited-angle scans cause severe artifacts in images reconstructed by conventional algorithms, such as filtered back-projection (FBP), due to insufficient data. To solve this issue, various methods have been proposed to replace conventional reconstruction algorithms. In this study, we propose a data-driven deep learning-based limited-angle CT reconstruction method. The proposed method, called Recon-NET, consists of 3 fully connected (FC) layers, 5 convolution layers, and 1 deconvolution layer, and learns the end-to-end mapping between projection data and reconstructed images. The FBP algorithm was implemented for comparison with the Recon-NET. We evaluated the performance of the Recon-NET in terms of image profile, mean-squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). The results showed that the proposed model could reduce artifacts and preserve image details compared to the conventional reconstruction algorithm. Therefore, the Recon-NET has the potential to provide high-quality CT images at low dose while avoiding the complexity of conventional techniques.
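A PyTorch sketch of an architecture matching the stated layer counts follows. All sizes, channel widths, and activations are illustrative assumptions; only the 3-FC / 5-conv / 1-deconv structure comes from the abstract.

```python
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Sketch: 3 FC layers map the flattened sinogram to an image-sized
    latent; 5 conv layers and 1 deconvolution refine it into an image."""
    def __init__(self, n_views=60, n_det=128, img=128):
        super().__init__()
        self.img = img
        self.fc = nn.Sequential(
            nn.Linear(n_views * n_det, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, img * img), nn.ReLU(),
        )
        layers, in_ch = [], 1
        for _ in range(5):
            layers += [nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU()]
            in_ch = 32
        self.conv = nn.Sequential(*layers)
        self.deconv = nn.ConvTranspose2d(32, 1, 3, padding=1)

    def forward(self, sino):
        # sino: (batch, n_views, n_det) -> image: (batch, 1, img, img)
        x = self.fc(sino.flatten(1)).view(-1, 1, self.img, self.img)
        return self.deconv(self.conv(x))
```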
Rotating projection based localizer radiograph: attenuation calculation optimization and patient automatic centering with parallel-beam projection scheme
Attenuation information from the Localizer Radiograph (LR) is the basis for Automatic Exposure Control. However, the achievable dose optimization can be significantly affected by both the LR-based attenuation calculation and patient positioning. To address these issues, we propose an integrated procedure for more robust and accurate attenuation calculation, as well as automatic patient centering, based on the Rotating Projection based Localizer Radiograph (RPLR). A 3D attenuation map with more accurate attenuation information and automatic patient centering can be obtained with a single pre-view scan, enabling a fully automatic workflow with optimized dose modulation.
3D VSHARP®, a general-purpose CBCT scatter correction tool that uses the linear Boltzmann transport equation
High-quality cone-beam computed tomography (CBCT) reconstruction requires accurately estimating and subtracting the (often) large amount of scatter from the raw projection data. Although considerable attention has been paid to scatter correction algorithm development over the past several years, there is still a need for a practical, general-purpose tool that is accurate, fast, and requires minimal calibration. Here, we introduce 3D VSHARP®, which utilizes a finite element solver of the Linear Boltzmann Transport Equation (LBTE) to accurately and rapidly simulate photon transport through a model of the object being scanned, and then scales and subtracts the estimated scatter from the raw projections. 3D VSHARP has been incorporated into the commercially available reconstruction software development toolkit CST (Varex Imaging, Salt Lake City, UT), enabling scatter correction to be applied to arbitrary scanner configurations and geometries as part of an entire reconstruction pipeline. To set parameters for 3D VSHARP, the user chooses from a library of files that describe key physical aspects of the CT system, including its x-ray spectrum, detector response, and, if they exist, bowtie filter and anti-scatter grid. The object model, which characterizes the spatial distribution of the atomic number and density of the scanned object, is automatically generated from the first-pass reconstruction, which may, if desired, include CST's existing kernel-based scatter correction 2D VSHARP®. We describe the new correction tool and show example reconstructions. High scatter-correction accuracy and excellent image quality were achieved, with total reconstruction times on the order of 1 minute.
Dose-length-product determination on cone beam computed tomography through experimental measurements and dose-area-product conversion
V. Fransson, A. Tingberg
The dosimetry of cone-beam computed tomography (CBCT) is not yet fully elaborated, and some of these systems present dose-area-product (DAP) values after an examination rather than, as in the case of traditional CT, the dose-length-product (DLP). The purpose of this study was to provide a reproducible and straightforward method for DLP measurements on CBCT, as well as to validate the accuracy of a tool for estimating DLP for a CBCT system. A prototype conversion tool for estimating DLP from the DAP value was provided by the vendor of a CBCT system that currently displays only DAP. The DAP-to-DLP conversion tool was validated using five protocols for extremity imaging. DLP was measured using a 30 cm ionization chamber and a 30 cm long cylindrical PMMA phantom. DLP, the integrated absorbed dose within the ionization chamber, was measured at central and peripheral positions in the phantom in order to calculate the weighted DLP, DLPW,CBCT. Comparisons between DLPW,CBCT and the estimated DLP showed that the conversion tool was accurate within 10%, with a mean average error of 6.1% over all measured protocols. The variation between repeated measurements was small, making the method highly reproducible. In conclusion, this study presents a simple method for determining DLP on CBCT and validates that the conversion tool can present the delivered dose in terms of DLP with high accuracy. The measured DLP, as well as the DLP estimated by the conversion tool, is suitable for quality control and relative dose comparisons between protocols, but its relation to the DLP of CT systems should be investigated further in order to relate it to patient dose.
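The abstract does not spell out the weighting of the central and peripheral measurements; if it follows the standard CTDI_w convention (an assumption on the editor's part), the combination would be:

```latex
\mathrm{DLP}_{W,\mathrm{CBCT}} \;=\; \tfrac{1}{3}\,\mathrm{DLP}_{\mathrm{center}}
\;+\; \tfrac{2}{3}\,\mathrm{DLP}_{\mathrm{periphery}}
```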
Sphere phantom approach to measure MTF of computed tomography system using convolutional autoencoder network
Image quality assessment is an important task for maintaining and improving imaging system performance, and the modulation transfer function (MTF) is widely used as a quantitative metric describing the spatial resolution of an image for a fan-beam computed tomography (CT) system. Although fine wire and edge objects are generally used for MTF measurement, precise phantom alignment is difficult. To overcome this limitation, a sphere object has been considered as an alternative phantom because of its spherical symmetry. However, a sphere phantom is more suitable for measuring the 1D MTF along a particular direction than the full 2D MTF of a fan-beam CT system. In this work, we propose a sphere phantom approach to measure the whole 2D MTF of a fan-beam CT system using a convolutional autoencoder network. We generated projection data of point and sphere objects and reconstructed them using filtered back-projection (FBP). The reconstructed point image was regarded as an ideal 2D point spread function (PSF), and the ideal 2D modulation transfer function (MTF) was calculated by taking the Fourier transform of the ideal 2D PSF. To measure the 2D MTF, we divided the Fourier transform of the reconstructed sphere phantom image by that of the ideal sphere object. The estimation errors caused by the inverse filtering were corrected using the proposed convolutional autoencoder network. After applying the network, the estimated 2D MTF shows good agreement with the ideal 2D MTF, indicating that the convolutional autoencoder network is effective for measuring the 2D MTF of a fan-beam CT system.
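The inverse-filtering step described above can be sketched in a few lines; the regularization (eps floor) is an editor's assumption standing in for the CNN-based error correction the authors actually use:

```python
import numpy as np

def mtf_from_sphere(recon_sphere, ideal_sphere, eps=1e-3):
    """2D MTF estimate: ratio of Fourier magnitudes of the reconstructed
    and ideal sphere images, normalized to unity at zero frequency."""
    F_recon = np.abs(np.fft.fftshift(np.fft.fft2(recon_sphere)))
    F_ideal = np.abs(np.fft.fftshift(np.fft.fft2(ideal_sphere)))
    mtf = F_recon / np.maximum(F_ideal, eps)  # regularized division
    dc = (mtf.shape[0] // 2, mtf.shape[1] // 2)
    return mtf / mtf[dc]
```

The division amplifies noise wherever the ideal sphere spectrum has near-zero lobes, which is precisely the residual error the proposed convolutional autoencoder is trained to correct.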
Automated patient-specific and organ-based image quality metrics on dual-energy CT datasets for large scale studies
Wanyi Fu, Juan Carlos Ramirez-Giraldo, Daniele Marin, et al.
The purpose of this study was to develop an automated patient-specific and organ-based image quality (IQ) assessment tool for dual-energy (DE) computed tomography (CT) images for large-scale clinical analysis. To demonstrate its utility, this tool was used to compare the image quality of virtual monoenergetic images (VMI) with that of mixed images. The tool combines an automated organ segmentation model, developed to segment key organs of interest, with a patient-based IQ assessment model. The organ segmentation model was reported in our previous study and used here to segment the liver; specifically, the model used a 3D Unet architecture trained on 200 manually labeled CT cases. We used task-based image quality assessment to define a spectral detectability index (ds'), which allows the task to be defined as a lesion with specific contrast properties depending on the DE reconstruction chosen. For testing of the tool, this study included 322 abdominopelvic DECT examinations acquired with dual-source CT. Within the segmented organ volumes, the IQ assessment tool automatically measures noise and calculates the spectral detectability index (ds') for a detection task (i.e., a liver lesion). This organ-based IQ tool was used to compare the image quality of DE images, including VMIs at 50 keV and 70 keV and mixed images. Compared to mixed images, the results showed that VMI at 70 keV had a better or equivalent spectral detectability index (difference 12.62±2.95%), while 50 keV images showed an improved detectability index (61.62±10.23%). The ability to automatically assess image quality on a patient-specific and organ-based level may facilitate large-scale clinical analysis, standardization, and optimization.
Evaluation of a new low dose CBCT imaging protocol for measuring circumferential bone levels around dental implants
The objective of this study was to evaluate whether linear accuracy and metallic scatter artifact generated by 360-degree and 180-degree Cone Beam Computed Tomography (CBCT) acquisition protocols were significantly different during evaluation of peri-implant bone levels. On ten dentate dry human skulls, dental implants were placed at two posterior mandibular implant sites in each skull, one on the left and one on the right, totaling 20 sites for each protocol. CBCT scans using both acquisition protocols were made of each site. The conventional 360-degree protocol used 90 kV, 10 mA, a 120 mm x 100 mm focused field of view (FOV), and 17.5 s exposure. The low-radiation-dose 180-degree protocol used 80 kV, 2 mA, a 120 mm x 100 mm FOV, and 9.0 s exposure. There was a significant difference in metallic scatter, assessed by pixel intensity values during bone density measurements, between the 360-degree and 180-degree acquisitions (Mesial Left p=0.00; Mesial Right p=0.00; Distal Left p=0.51; Distal Right p=0.02; Buccal Left p=0.00; Buccal Right p=0.43; Lingual Left p=0.00; Lingual Right p=0.03). There was no significant difference in linear measurement accuracy (Mesiodistal Left p=0.36; Mesiodistal Right p=0.13; Buccolingual Left p=0.70; Buccolingual Right p=0.92). Sensitivity and specificity of the two acquisition protocols were comparable. These results indicate that the low-dose acquisition protocol has linear measurement accuracy comparable to the conventional acquisition protocol. Additionally, there is a significant decrease in metallic scatter, making the low-dose 180-degree protocol significantly better for evaluating peri-implant bone levels following dental implant placement.
Stationary head CT scanner using CNT x-ray source arrays
Derrek Spronk, Yueting Luo, Christina R. Inscoe, et al.
X-ray Computed Tomography (CT) is an indispensable imaging modality in the diagnosis of traumatic brain injury and brain hemorrhage. While the technology and the associated system components have been refined over the last several decades, all modern CT systems still rely on the principle of rotating sources and detectors. The rotating gantry adds a high degree of complexity to the overall system design and could be eliminated in favor of a configuration of stationary x-ray sources and detectors. Such a change could make CT systems better suited for austere environments. Furthermore, the image acquisition speed would no longer be limited by the maximum rotating speed of the gantry. Unfortunately, due to the size and bulk of existing commercial x-ray sources, such a configuration is impossible to build with a sufficient number of focal spots. Recently, carbon nanotube (CNT) x-ray source arrays have been used in various stationary imaging configurations to generate diagnostic-quality tomosynthesis images in the fields of mammography, dentistry, and orthopedics. In this study, we present a potential stationary head CT (s-HCT) design which combines projection data from 3 separate but parallel imaging planes for a complete CT fan-beam reconstruction. The proposed scanner consists of 3 CNT x-ray source arrays, each with a large number of distributed focal spots, and an Electronic Control System (ECS) for high-speed control of the x-ray exposure from each focal spot. The projection data were collected by an array of multi-row detectors. For this unique imaging configuration, a customized geometry calibration procedure was developed. A linear collimator was designed and constructed to reduce cone-angle scatter. Finally, volumetric CT slice data were acquired through z-axis translation of the imaged object.
The impact of CT-data segmentation variation on the morphology of osteological structure
Matthew Wysocki, Scott Doyle
CT (computed tomography) scans have become indispensable tools for gross anatomy teaching and research [1-5]. Computational methods can create high-resolution 3D models of anatomical structures for education, research, and clinical applications [6-9]. However, data processing has a large influence on 3D model generation. Understanding how these differences in processing alter morphology, and whether such disparities impact the conclusions one may draw from the data, is imperative for interpreting radiology ground truth. Failure to account for these differences can lead to erroneous decisions regarding joint repair, joint replacement, and prosthetics design. In this work, we investigate how segmentation algorithms influence the morphology of 3D models of osteological structures (femurs) from human cadaveric CT scans. We measure the dissimilarity in 3D model morphology resulting from multiple different segmentation protocols. As CT scan-derived 3D models become more commonplace in gross anatomical research, it is critical to fully understand proper segmentation approaches and how much variation is acceptable for 3D anatomical models derived from radiological imaging.
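To make the morphology comparison concrete, one plausible dissimilarity measure is the symmetric Hausdorff distance between the surface point clouds of two models of the same femur; the paper does not specify its metric, so the sketch below is only an illustration under that assumption, with placeholder vertex arrays.

```python
# Symmetric Hausdorff distance between two segmented femur surfaces.
# The vertex arrays are placeholders for points extracted from the
# 3D models produced by two different segmentation protocols.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(1)
surface_a = rng.random((500, 3))                       # protocol A vertices (placeholder)
surface_b = surface_a + rng.normal(0, 0.01, (500, 3))  # protocol B vertices (placeholder)

d_ab = directed_hausdorff(surface_a, surface_b)[0]
d_ba = directed_hausdorff(surface_b, surface_a)[0]
hausdorff = max(d_ab, d_ba)   # symmetric Hausdorff dissimilarity
print(f"Hausdorff distance: {hausdorff:.4f}")
```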
Correlation of respiratory changes in lung density on dynamic chest radiographs with changes in the CT value: a computational phantom study
Pulmonary impairments are observed as decreased changes in lung density during respiration on dynamic chest radiographs (DCR). To facilitate pulmonary function evaluation based on DCR, the present study was conducted to correlate respiratory changes in pixel values (Δpixel values) on DCR images with those in computed tomography (CT) values (ΔCT values) using four-dimensional extended cardiac-torso (XCAT) phantoms. Twenty XCAT phantoms with forced breathing with/without chest wall motion were generated and then projected using an X-ray simulator. Δpixel values and ΔCT values over respiration were calculated on the simulated projection and CT images, respectively, to create a conversion table. The equation (regression line) of the relationship between the Δpixel values and ΔCT values in the lung field was calculated, and statistical analysis was performed to test whether there was a significant difference in the slope of the regression line (P < 0.05) in terms of physique, sex, and breathing manner. There was a significant difference in the slopes of the regression lines between males and females, and with and without chest wall motion. We developed a conversion table from Δpixel values to ΔCT values in the lung field. Our results demonstrated the feasibility of quantitative evaluation of pulmonary function based on Δpixel values on DCR. This should provide a better understanding of pulmonary function and conditions based on Δpixel values while considering ΔCT values.
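The slope estimation and significance testing described above can be sketched as follows; scipy's linregress is our assumption (the paper does not name its statistics package), and the arrays are placeholders for the simulated Δpixel/ΔCT measurements.

```python
# Regression of Delta-pixel value against Delta-CT value, with the
# slope, intercept, and slope-significance p-value from linregress.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
delta_ct = rng.uniform(-200.0, -50.0, 40)                # placeholder ΔCT values (HU)
delta_pixel = 0.8 * delta_ct + rng.normal(0.0, 5.0, 40)  # placeholder Δpixel values

fit = linregress(delta_ct, delta_pixel)
print(fit.slope, fit.intercept, fit.pvalue)
# Conversion from Δpixel back to ΔCT via the regression line:
# delta_ct_est = (delta_pixel - fit.intercept) / fit.slope
```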
Posters: X-ray Imaging
Mobile x-ray tomography system with intelligent sensing for 3D chest imaging
Yang Zhao, J. Eric Tkaczyk, Alex Chen, et al.
Tomography in the mobile setting has the potential to improve diagnostic outcomes by enabling 3D imaging at the patient's bedside. Using component testing and system simulations, we demonstrate the potential for limited-angle X-ray tomography on a mobile X-ray system. To enable mobile features such as low weight, small size, and low power consumption of system components, we have developed detector and patient anatomy tracking algorithms for accurately and automatically registering system geometry to patient anatomy during acquisition of individual projection views along a tube-motion trajectory. We evaluate the effects of acquisition parameters and registration inaccuracy on the image quality of reconstructed chest images using realistic X-ray simulation of an anthropomorphic numerical phantom of the thorax.
Stationary head CT with linear CNT x-ray source arrays: image quality assessment through simulation
Yueting Luo, Derrek Spronk, Christina R. Inscoe, et al.
Purpose: Carbon nanotube (CNT) based field-emission x-ray source arrays allow the development of robust stationary computed tomography (CT) imaging systems with no gantry movement. Many technical considerations constrain the optimal system design. The aim of this work is to assess the image quality of a proposed Stationary Head CT (sHCT) system through simulation. Methods: In our previous work, we defined a system design consisting of three parallel imaging planes. Each plane consists of a CNT x-ray source array with a large number of linearly distributed focal spots and three strip detector modules. Each imaging plane is rotated 120° with respect to the adjacent plane to provide maximum projection view coverage of the region of interest (ROI). An iterative reconstruction algorithm based on the ASTRA toolbox was developed for the specific sHCT system. The ACR 464 phantom and a set of clinical head CT data were used to assess the system design and image quality. Imaging performance was evaluated both quantitatively and qualitatively. Results: The simulation results suggest that the proposed sHCT design is feasible and that high-fidelity CT images can be obtained. The reconstructed image of the ACR 464 phantom reproduces accurate CT numbers. The reconstructed CT images of the human head confirm the capability of this prototype for identifying low-contrast pathologies. Conclusion: A three-plane sHCT system is evaluated in this work. The iterative reconstruction algorithm produces high image quality in terms of uniformity, signal-to-noise ratio, signal-to-contrast ratio, and structural information. Further work on optimizing the current sHCT system will focus on speeding up volumetric data collection in the system hardware and on further improving reconstructed image quality through regularization and the incorporation of machine learning techniques.
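For readers unfamiliar with the toolbox, the following is a minimal 2D fan-beam iterative reconstruction with ASTRA; the geometry values are illustrative only and do not reproduce the three-plane sHCT geometry or the authors' specific algorithm.

```python
# Minimal ASTRA example: fan-beam SIRT reconstruction of a toy phantom.
import astra
import numpy as np

vol_geom = astra.create_vol_geom(256, 256)
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
# fanflat args: detector pixel size, detector count, angles,
# source-to-origin distance, origin-to-detector distance
proj_geom = astra.create_proj_geom('fanflat', 1.0, 384, angles, 500.0, 500.0)
proj_id = astra.create_projector('line_fanflat', proj_geom, vol_geom)

phantom = np.zeros((256, 256), dtype=np.float32)
phantom[96:160, 96:160] = 1.0                      # simple square test object
sino_id, sino = astra.create_sino(phantom, proj_id)

rec_id = astra.data2d.create('-vol', vol_geom)
cfg = astra.astra_dict('SIRT')
cfg['ProjectorId'] = proj_id
cfg['ProjectionDataId'] = sino_id
cfg['ReconstructionDataId'] = rec_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 150)                   # 150 SIRT iterations
reconstruction = astra.data2d.get(rec_id)

astra.algorithm.delete(alg_id)                     # free ASTRA objects
astra.data2d.delete([sino_id, rec_id])
astra.projector.delete(proj_id)
```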
Radiographic imaging with a linear x-ray source and multi-row detector
Qinghao Chen, Shuang Zhou, Yuewen Tan, et al.
A tetrahedron beam CT (TBCT) benchtop system has been developed with a linear-array x-ray source with 48 focal spots at 4 mm spacing and a photon counting detector (PCD). The generated x-ray beams are collimated into fan beams by a multi-slot collimator and converge onto the 6 mm wide multi-row photon counting detector. Due to its scatter-rejecting geometry, the scatter-to-primary ratio (SPR) is significantly reduced, to 17% for the fan beam compared to 120% for a cone beam in the presence of a head phantom. We performed both 2D and 3D imaging studies with the TBCT system. 2D radiography images were obtained using the shift-and-add method. Both analytical and iterative reconstruction methods were used for 3D image reconstruction. A head phantom and an animal cadaver were scanned.
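The shift-and-add step can be sketched in a few lines; the per-focal-spot shifts are assumed to be computed elsewhere from the source spacing and magnification (this illustrates the general method, not the authors' code).

```python
# Shift-and-add: align projections from the 48 focal spots to a chosen
# image plane, then average them into a single 2D radiograph.
import numpy as np

def shift_and_add(projections, shifts_px):
    """projections: (n_spots, rows, cols) array, one image per focal spot.
    shifts_px: per-spot lateral shift (pixels) aligning the focal plane."""
    acc = np.zeros(projections.shape[1:], dtype=np.float64)
    for proj, s in zip(projections, shifts_px):
        acc += np.roll(proj, int(round(s)), axis=1)   # lateral shift
    return acc / len(projections)                     # average aligned views
```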
Investigation of the need for an x-ray scatter-reduction grid during neurointerventional procedures
X-ray guided neurointerventions are catheter-based treatments for cerebrovascular diseases such as strokes and aneurysms. During such procedures, visualization of treatment devices is the primary imaging task. In this work we investigate the necessity of x-ray scatter-reduction grids for performing those tasks. Various endovascular treatment devices such as stents, coils, and catheters, along with a low-contrast blood vessel phantom, were placed on a head-equivalent phantom. Images of the objects were acquired with and without a grid (15:1 grid ratio, 80 lines/cm, Al interspace). The x-ray field was set to the full 8 x 8 inch FOV to allow realistic scatter generation, and the detector was positioned close to the phantom to investigate maximal scatter conditions. Contrast and contrast-to-noise ratio (CNR) of the catheter tip and the blood vessel phantom were measured and compared for images obtained with and without the grid. The x-ray technique parameters were kept constant for all acquisitions. For the catheter tip, there was a 43% reduction in contrast upon removal of the grid, due to increased scatter reaching the detector; however, due to the increased primary signal, there was an 18% increase in CNR. For the blood vessel phantom, there was a 33% reduction in contrast and a 17% increase in CNR. All the devices and the blood vessels in the phantom remained visible despite the increased scatter without the grid. The results of the study indicate that the use of grids during neurointerventional procedures may not be necessary to perform the intervention.
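The contrast and CNR figures above follow the usual region-of-interest definitions; a small reference sketch (ROI coordinates are placeholders, not the phantom positions used in the study):

```python
# Contrast and contrast-to-noise ratio from object and background ROIs.
import numpy as np

def contrast_and_cnr(img, roi, bg):
    """roi, bg: (row_slice, col_slice) tuples for object and background."""
    m_roi = img[roi].mean()
    m_bg, sd_bg = img[bg].mean(), img[bg].std()
    contrast = abs(m_roi - m_bg) / m_bg    # relative contrast
    cnr = abs(m_roi - m_bg) / sd_bg        # contrast-to-noise ratio
    return contrast, cnr

# Example with placeholder ROIs:
# c, cnr = contrast_and_cnr(image, (slice(100, 120), slice(100, 120)),
#                                  (slice(10, 50), slice(10, 50)))
```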
X-ray tube based on carbon nanotube field emitter for low dose mini C-arm fluoroscopy
Jongmin Lim, Amar P. Gupta, Hangyeol Park, et al.
We designed and developed a vacuum-sealed x-ray tube based on a carbon nanotube (CNT) field emitter for mobile medical x-ray devices, and also designed a test bed for the CNT x-ray tube. The CNTs were synthesized by the chemical vapor deposition (CVD) method on a metal alloy substrate. The grown CNTs were assembled with a gate and a focuser and then combined into an electron gun (e-gun) through a brazing process. The e-gun then underwent an aging process inside a vacuum chamber. As a result of aging, the CNT e-gun was able to generate an anode current of 1.5 mA at an electric field of about 4 V/μm, and the field emission current was also stabilized. After the aging process, the e-gun was brazed into a ceramic X-ray tube inside a high-temperature furnace at a vacuum of 10^-6 Torr and vacuum sealed. The field emission characteristics measured using this X-ray tube were compared with those of the bare e-gun, and nearly identical results were obtained. In the case of the X-ray tube, we applied a higher electric field while controlling the current at 500 ms intervals through pulse driving. As a result, X-ray images of human teeth were successfully acquired using the CNT X-ray tube.
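For context, CNT emitters of this kind are governed by field emission, conventionally described by the Fowler-Nordheim relation (a textbook form quoted here for the reader, not taken from the paper):

$$ J \;\propto\; \frac{E^{2}}{\phi}\,\exp\!\left(-\frac{B\,\phi^{3/2}}{E}\right) $$

where J is the emission current density, E the local electric field, φ the work function, and B a constant; a straight line on a Fowler-Nordheim plot of ln(J/E²) versus 1/E is the usual check that a measured current is true field emission.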
Stent enhancement using marker response network and focus conversion learning in x-ray fluoroscopy sequences
Coronary heart disease is a common cause of death. To treat artery stenosis due to the accumulation of atheromatous plaques, stents are implanted to support the narrowing vessel. The relative position between the stent and the vascular wall is a critical factor for evaluating the treatment. However, the low signal-to-noise ratio (SNR) of fluoroscopy sequences makes it difficult for doctors to observe the stents clearly. To enhance stent clarity effectively, this paper describes a novel deep-neural-network algorithm for stent marker detection that enables time-domain stacking. In this step, a response map generation model with a weighted loss was designed to concentrate on small-object detection, where annotation is unbalanced between background and targets. In addition, a focus conversion learning algorithm based on a deblurring network was proposed to improve edge sharpness and spatial resolution, decreasing the influence of focal-spot size. The method locates marker pairs successfully in both phantom and clinical images, with an 84.04% correct rate in marker detection, and decreases the mean squared error in the focus conversion algorithm. Quantitative comparison and observation reveal that the proposed algorithm can effectively enhance stents without manual annotation, assisting accurate evaluation of the treatment.
Characterization of compact alumina vacuum sealed x-ray tube for medical imaging: interpretation with simulation program
We developed a compact vacuum X-ray tube using an alumina body instead of glass. A filament is implanted as the cathode, whose emission follows the Richardson-Dushman equation. After aging the filament to eliminate impurities and improve its performance before tubing, tube currents were obtained ranging from 3 mA at an anode voltage of 6 kV to 3.15 mA at 40 kV. A pulsed high-voltage generator was designed and developed to reduce stress on the tube. With the ceramic X-ray tube, X-ray images of human breast and teeth phantoms were successfully obtained, verifying the potential of the compact alumina vacuum-sealed X-ray tube for medical imaging applications.
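For reference, the Richardson-Dushman equation cited above for thermionic emission from the filament is the textbook relation

$$ J \;=\; A\,T^{2}\exp\!\left(-\frac{W}{k_{B}T}\right), \qquad A \approx 1.2\times10^{6}\ \mathrm{A\,m^{-2}\,K^{-2}}, $$

where J is the emission current density, T the filament temperature, W the work function, and k_B Boltzmann's constant.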
Theoretical comparison of the detective quantum efficiency of halide lead perovskite, cesium iodide and selenium x-ray imaging detectors
Halide lead perovskites have been proposed for direct-conversion x-ray imaging because of their high stopping power, high charge mobility, and high bulk resistivity. We modeled the detective quantum efficiency (DQE) of methylammonium lead iodide (MAPbI3) and compared it with that of amorphous selenium (a-Se) and columnar cesium iodide (CsI). For CsI, we calculated the DQE for RQA-5, RQA-7, and RQA-9 x-ray spectra for 200 µm detector elements; for a-Se, we calculated the DQE for the MMA 28 x-ray spectrum for 75 µm elements. Our DQE model included the quantum efficiency, x-ray fluorescence, fluorescence reabsorption, charge conversion, collection of secondary quanta (i.e., charges or optical photons), charge diffusion in MAPbI3, optical blur in CsI, noise aliasing, and electronic noise. The model DQE of CsI was compared with published data, and the model photoelectric noise power spectrum of lead was compared with published Monte Carlo data; there was excellent agreement in both cases. For fluoroscopic applications, the theoretical DQE of MAPbI3 was approximately equal to that of CsI at exposures of 10 µR per image, but was ~40% lower than CsI at an exposure of 0.1 µR per image. This result is due to the relatively high levels of electronic noise present in prototype MAPbI3 systems. For chest radiography applications, the theoretical DQE of MAPbI3 was 25% greater than that of CsI at typical exposure levels (i.e., 0.04 mR to 3 mR at the detector). For mammography, the theoretical DQE of MAPbI3 was ~5% greater than that of a-Se across all spatial frequencies and all exposures between 0.6 mR and 250 mR. These results suggest that halide lead perovskites may provide superior dose efficiency to CsI-based systems in chest radiography, but may offer little to no improvement in mammographic or fluoroscopic applications.
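For orientation, the frequency-dependent DQE underlying these comparisons has the standard measured form (a textbook relation; the paper's cascaded model adds the fluorescence, charge-transport, and aliasing stages needed to predict its terms):

$$ \mathrm{DQE}(f) \;=\; \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NPS}(f)}, $$

where d̄ is the mean detector signal, q̄ the incident photon fluence, MTF the modulation transfer function, and NPS the noise power spectrum.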
Thickness uniformity and stability of amorphous selenium films on flexible substrates for indirect conversion x-ray detection
Maryam Farahmandzadeh, Steven Marcinko, Davide Curreli, et al.
Amorphous selenium (a-Se) possesses unique features that have been leveraged for medical imaging applications. Previously, it was shown that a soft interface with a-Se reduces the stress generated by the creation of crystalline nuclei, prevents radiation-induced crystallization, and improves the long-term stability of the device. The aim of this study is to develop a uniform-thickness a-Se layer on a flexible substrate, polyethylene terephthalate coated with indium tin oxide (ITO-PET), for future investigations of the effect of the substrate on the stability of a-Se against radiation-induced defects and long-term storage. To fabricate the a-Se-based detector, we developed a dedicated thermal evaporator for a-Se deposition. The dependence of film uniformity on the source-to-substrate distance was investigated. Experimentally, following the modeling results, a-Se samples were deposited on ITO-PET over a 2-inch diameter, and the thickness uniformity was compared to simulation.
Development of a novel algorithm to improve image quality in chest digital tomosynthesis using convolutional neural network with super-resolution
Tsutomu Gomi
We developed a novel dual-energy (DE) virtual monochromatic (VM) very-deep super-resolution (VDSR) reconstruction algorithm (DVV) that uses projection data to improve nodule contrast and resolution in chest digital tomosynthesis (CDT). To estimate residual errors in high-resolution and multiscale VM images in projection space, the DVV algorithm employs a training network (mini-batch stochastic gradient descent with momentum) that involves subjectively reconstructed hybrid SR images [simultaneous algebraic reconstruction technique (SART) total variation (TV)-first iterative shrinkage-thresholding algorithm (FISTA); SART-TV-FISTA]. DE-DT imaging was accomplished using pulsed X-ray exposures rapidly switched between low and high tube potentials, followed by image comparisons using conventional polychromatic filtered back projection (FBP), SART-TV-FISTA, and DE-VM-SART-TV-FISTA algorithms. Improvements in contrast and resolution were compared using the signal-difference-to-noise ratio (SDNR) and the radial modulation transfer function (radial-MTF) of a chest phantom with simulated ground-glass opacity (GGO) nodules. The novel DVV algorithm improved overall performance in terms of SDNR and yielded high-quality images independent of the type of simulated GGO nodules in the chest phantom. The DVV algorithm also yielded superior resolution in the radial-MTF analysis compared with conventional reconstruction algorithms, with and without VM processing. Overall, the DVV algorithm improved both contrast and spatial resolution.
Stationary multi x-ray source system with carbon nanotube emitters for digital tomosynthesis
In order to diagnose diseases in complex areas such as the chest, an X-ray system of a suitable type is required. Chest tomosynthesis, which acquires a reconstructed 3D image by taking X-ray images from various angles, is one of the best image acquisition technologies in use. However, one major disadvantage of tomosynthesis systems with a single X-ray source is the motion blur that occurs when the source moves or rotates to change the acquisition angle. To overcome this, we report a stationary digital tomosynthesis system that uses 85 field-emission X-ray sources based on carbon nanotubes (CNTs). By using CNT-based electron emitters, it is possible to miniaturize and digitize the X-ray system. The system is designed such that a maximum of 120 kV can be applied to the anode to obtain chest X-ray images. The field emission characteristics of the CNT-based emitters were measured, and X-ray images were obtained using the stationary multi X-ray source system, confirming its applicability to chest tomosynthesis.
Estimation of lung volume changes from frontal and lateral views of dynamic chest radiography using a convolutional neural network model: a computational phantom study
Nozomi Ishihara, Rie Tanaka, William Paul Segars, et al.
We aimed to estimate respiratory changes in lung volumes (Δlung volume) using frontal and lateral dynamic chest radiography (DCR) by employing a convolutional neural network (CNN) learning approach trained and tested using the four-dimensional (4D) extended cardiac-torso (XCAT) phantom. Twenty XCAT phantoms of males (5 normal, 5 overweight, and 5 obese) and females (5 normal) were generated to obtain 4D computed tomography (CT) of a virtual patient. The XCAT phantoms were projected in the frontal and lateral directions. We estimated lung volumes of the XCAT phantoms using CNN learning techniques. One dataset consisted of a right- or left-half frontal view, a lateral view, and ground truth (GT) knowledge of each phantom in the same respiratory phase. Δlung volume was calculated by subtracting the lung volume estimated at maximum exhalation from that at maximum inhalation, and was compared with the Δlung volume calculated from the known GT. Δlung volume was successfully estimated from frontal and lateral DCR images of XCAT phantoms by the CNN learning approach. There was a correlation between GT and estimated Δlung volume in both lungs. There were no significant differences in the estimation error between the right and left lungs, between males and females, or between males of different physiques. We confirmed that DCR has potential for estimating Δlung volume, which corresponds to vital capacity (VC) in pulmonary function tests (PFT). Pulmonary function could thus be assessed by DCR even in patients with infectious diseases who cannot perform PFT using a spirometer.
An automatic approach to lung region segmentation in chest x-ray images using adapted U-Net architecture
Segmentation of the lung field is considered the first and most crucial stage in the diagnosis of pulmonary diseases. In clinical practice, computer-aided systems are used to segment the lung region from chest X-ray (CXR) or CT images. The segmentation task is challenging due to the presence of opacities or consolidation in CXR, typically produced by overlaps between the lung region and intense abnormalities caused by pulmonary diseases such as pneumonia, tuberculosis, or COVID-19. Recently, Convolutional Neural Networks (CNNs) have shown promise for segmentation and detection in digital images. In this paper, we propose a two-stage framework based on an adapted U-Net architecture for automatic lung segmentation. In the first stage, we extract CXR patches and train a modified U-Net architecture to generate an initial segmentation of the lung field. The second stage is a post-processing step, in which we deploy image processing techniques to obtain a clean final segmentation. The performance of the proposed method is evaluated on a set of 138 CXR images obtained from Montgomery County's Tuberculosis Control Program, producing an average Dice Coefficient (DC) of 94.21% and an average Intersection-Over-Union (IoU) of 91.37%.
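The two reported metrics have standard definitions for binary masks; a small reference implementation (not the authors' evaluation code):

```python
# Dice coefficient and intersection-over-union for binary lung masks.
import numpy as np

def dice_and_iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())   # DC = 2|A∩B| / (|A|+|B|)
    iou = inter / union                            # IoU = |A∩B| / |A∪B|
    return dice, iou
```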
Development of microfocus x-ray source based on CNT emitter for intraoperative specimen radiographic system
A microfocus X-ray source based on a carbon nanotube (CNT) emitter grown by chemical vapor deposition is presented in this paper. The microfocus X-ray source is developed for an intraoperative specimen radiographic system, which can be used inside the operating theater and helps reduce surgery time during breast-conserving surgery by confirming the extent of the margin on the specimen. The high focusing is realized by growing CNTs on pointed structures. The field emission characteristics show a maximum anode current of 1 mA, corresponding to a maximum emission current density of 500 mA/cm2 from the CNT-based point emitter. The optimized parameters for the electron-gun assembly were obtained using commercially available CST simulation software. Consequently, this microfocus X-ray tube could produce X-ray images of a multilayer printed circuit board showing the fine lines of integrated circuits.
An analysis of radiomics features in lung lesions in COVID-19
Radiomic features extracted from CT imaging can be used to quantitatively assess COVID-19. The objective of this work was to extract and analyze radiomics features in RT-PCR confirmed COVID-19 cases to identify relevant characteristics for COVID-19 diagnosis, prognosis, and treatment. We measured 29 morphology and second-order statistical radiomics features from 310 lung lesions extracted from 48 chest CT cases. Features were evaluated according to their coefficient of variation (CV). We calculated the CV for each feature under two statistical conditions: one with all lesions weighted equally and one with all cases weighted equally. In the patient data, there were 6.46 lesions per case on average, and in 81.25% of cases the lesions presented with bilateral lung involvement. For all radiomic features examined except 'energy', the CV was higher in the lesion distribution than in the case distribution. The CVs for morphological features were larger than those for second-order features in both distributions: 181% and 85% versus 50% and 42%, respectively. The most variable features were 'surface area', 'ellipsoid volume', 'ellipsoid surface area', 'volume', and 'approximate volume', which deviated from the mean by 173-255% in the lesion distribution and 119-176% in the case distribution. The features with the lowest CV were 'homogeneity', 'discrete compactness', 'texture entropy', 'sum average', and 'elongation', which deviated less than 31% by case and less than 25% by lesion. Future work will investigate integrating these data with similar studies and other diagnostic and prognostic criteria, enhancing the role of CT in detecting and managing COVID-19.
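The two weighting conditions can be made concrete with a short sketch: the per-lesion CV treats every lesion equally, while the per-case CV first averages lesions within each case (variable names are illustrative):

```python
# Coefficient of variation under lesion-weighted and case-weighted schemes.
import numpy as np

def cv(values):
    values = np.asarray(values, dtype=float)
    return values.std() / values.mean()

def cv_lesion_and_case(feature_by_lesion, case_ids):
    """feature_by_lesion: one feature value per lesion.
    case_ids: the case each lesion belongs to."""
    feature_by_lesion = np.asarray(feature_by_lesion, dtype=float)
    case_ids = np.asarray(case_ids)
    cv_lesion = cv(feature_by_lesion)                 # lesions weighted equally
    case_means = [feature_by_lesion[case_ids == c].mean()
                  for c in np.unique(case_ids)]
    cv_case = cv(case_means)                          # cases weighted equally
    return cv_lesion, cv_case
```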
A parametric fitting technique for rapid determination of a skin-dose correction factor for angle of beam incidence during image-guided endovascular procedures
Monte-Carlo software was used to calculate the patient's skin dose, averaged over 1 mm of skin thickness, as a function of incident beam-to-skin angle from 90 to 10 degrees, for entrance-beam sizes from 5 to 15 cm, energies from 60 to 120 kVp, and Cu beam-filter thicknesses from 0.2 to 0.5 mm in a water phantom, to obtain an angular correction factor (ACF). The Matlab tool 'cftool' was used to fit these ACFs to formulas as functions of incident beam angle and kVp, allowing the ACF to be quickly determined for accurate skin-dose calculation during fluoroscopically guided procedures.
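An equivalent fit can be sketched in Python with scipy's curve_fit; the functional form and the data below are placeholders, not the authors' MATLAB cftool formula or Monte-Carlo values.

```python
# Fit an angular-correction-factor (ACF) surface over beam angle and kVp,
# then evaluate it quickly for a real-time skin-dose correction.
import numpy as np
from scipy.optimize import curve_fit

def acf_model(X, a, b, c, d):
    theta, kvp = X                              # beam angle (deg), tube potential (kVp)
    s = np.sin(np.radians(theta))
    return a + b * s + c * kvp + d * kvp * s    # placeholder low-order surface

theta = np.repeat(np.arange(10, 91, 10), 4).astype(float)
kvp = np.tile([60.0, 80.0, 100.0, 120.0], 9)
acf_mc = 0.5 + 0.5 * np.sin(np.radians(theta))  # stand-in for Monte-Carlo ACF data

popt, _ = curve_fit(acf_model, (theta, kvp), acf_mc)
acf_fast = acf_model((np.array([35.0]), np.array([80.0])), *popt)  # fast lookup
```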
Posters: Image Reconstruction
Novel reconstruction algorithms that facilitate real time 4D tomosynthesis
Priyash Singh, Chloe J. Choi, Trevor L. Vent, et al.
Tomosynthesis has become a vital interventional tool for breast biopsy procedures. It is used to orient, advance, and confirm the biopsy needle's movement. However, at the end of a procedure, success is determined only after the biopsy sample shows the presence of the targeted lesion. Conversely, failures, such as a target miss, are realized only after healthy tissue has been incorrectly excised. If real-time 4D tomosynthesis were made possible, it could not only guide and confirm the needle advancement but also anticipate inadvertent target displacement and prevent healthy tissue damage. This study explores three classes of novel reconstruction algorithms that facilitate real-time 4D tomosynthesis-guided biopsy procedures: the Image-Processed algorithm, the Segmented algorithm, and the Difference-Exploiting algorithm. A conventional tomosynthesis reconstruction algorithm applied to an incrementally moving needle shows a blurred needle tip, a consequence of superimposing and averaging back-projections in which the tip lies at different positions. The Image-Processed algorithm contrast-enhances all the back-projections before reconstruction, thereby curbing the blurring and producing a more discernible needle tip. The pixel-based Segmented and Difference-Exploiting algorithms reconstruct individual pixels differently. The Segmented algorithm uses only the latest back-projection to reconstruct the pixels of the needle, thereby capturing its most recent position. The Difference-Exploiting algorithm utilizes the superimposed differences of back-projections, which helps selectively identify elements, like the moving needle, that vary over time. Reconstructing these elements differently from the static elements of the breast allows them to be captured in real time. This work details the formulation of the three algorithms.
MLEM reconstruction with specific initial image for cone-beam x-ray luminescence computed tomography
The reconstruction of cone-beam x-ray luminescence computed tomography (CB-XLCT) is an ill-posed inverse problem because of incomplete data and a lack of prior information. To alleviate the ill-posedness of the inverse problem, the data fidelity and regularization terms are the two key aspects of the reconstruction model. However, little research has considered the statistical characteristics of the data in XLCT reconstruction, although many regularizations have been studied. To make full use of the data noise model, a strategy combining the maximum-likelihood expectation-maximization (MLEM) algorithm and a regularization-type algorithm is proposed. In the MLEM algorithm, Poisson noise is considered for an accurate data model. The result of the regularization-type algorithm is used as a specific initial image for MLEM to improve its reconstruction quality and convergence speed. There are two main steps in the proposed strategy. First, the fast iterative shrinkage-thresholding algorithm (FISTA) with a large regularization parameter is used to obtain a sparse solution quickly. Second, the sparse solution is used as the initial iterate of MLEM. The proposed algorithm is named FISTA-MLEM. Through this stepwise strategy, image sparsity is guaranteed and reconstruction accuracy is maintained. Results of a phantom experiment show that FISTA-MLEM achieves better contrast-to-noise ratio and shape similarity than traditional methods such as ART, Tikhonov, FISTA, and TSVD.
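For reference, the classical MLEM update for Poisson-distributed data y with system matrix A = (a_ij) is the textbook iteration

$$ x_{j}^{(k+1)} \;=\; \frac{x_{j}^{(k)}}{\sum_{i} a_{ij}} \sum_{i} a_{ij}\, \frac{y_{i}}{\sum_{j'} a_{ij'}\,x_{j'}^{(k)}}, $$

and the strategy above amounts to setting the initial iterate x^{(0)} to the sparse FISTA solution rather than a uniform image.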
Quantitative texture analysis of normal and abnormal lung tissue for low dose CT reconstruction using the tissue-specific texture prior
Screening is an effective way to detect lung cancer early and can improve the survival rate significantly. Low-dose computed tomography (LdCT) is desirable for lung screening to keep the examination radiation as low as reasonably achievable. Statistical image reconstruction has shown great advantages in LdCT imaging, where many types of priors can be used as constraints for optimal images. The tissue-specific Markov random field (MRF) texture prior (MRFt) was proposed in our previous work to incorporate clinically relevant texture information. For chest scans, four tissue textures were extracted from regions of lung, bone, fat, and muscle, respectively. In this work, we focus on the region of interest for lung cancer screening, i.e., the lung. Quantitative texture analysis of normal and abnormal lung tissue was performed to address the following issues of the proposed MRFt model: (1) a more comprehensive understanding of lung tissue texture, and (2) what MRF prior should be used for abnormal lung tissue. Experimental results showed that normal lung tissue has texture similarity across different subjects. This robust similarity among humans supports the feasibility of building a lung tissue database for LdCT imaging when no previous FdCT scan is available. Abnormal lung tissue, in contrast, varies significantly; prior knowledge of lung nodules cannot be obtained until the CT exam is performed.
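As a rough rendering of the prior family in question (our generic form, not the paper's exact expression), the tissue-specific MRF texture prior penalizes the deviation of each voxel from a weighted prediction of its neighborhood,

$$ R(x) \;=\; \sum_{j} \Big( x_{j} - \sum_{k \in N_{j}} a_{jk}\, x_{k} \Big)^{2}, $$

with prediction coefficients a_{jk} estimated per tissue type from a previous full-dose scan.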
Estimation of statistical weights for model-based iterative CT reconstruction
V. Haase, K. Stierstorfer, K. Hahn, et al.
Since the introduction of model-based iterative reconstruction for computed tomography (CT) by Thibault et al. in 2007, statistical weights have played an important role in the problem formulation, with the objective of improving image quality. Statistical weights depend on the variance of the measurements. However, this variance is not known, and therefore the weights must be estimated. So far, the literature discusses neither how statistical weights should be estimated nor how accurate the estimation needs to be. Our work aims to fill this gap. Specifically, we propose an estimation procedure for statistical weights and assess this procedure with real CT data. The estimated weights are compared against (clinically impractical) sample weights obtained from repeated scans. The results show that the estimation procedure delivers reliable results for the rays that pass through the scanned object. Four imaging scenarios are considered; in each case, the root mean square difference between the estimated and sample weights is below 5% of the maximum statistical weight value. When used for reconstruction, these differences are seen to have little impact: all voxel values within soft tissue (low contrast) regions differ by less than 1 HU. Our results demonstrate that statistical weights can be estimated well enough to closely approach the result that would be obtained if the weights were known.
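For post-log line-integral data l_i, a common first-order model (quoted here as the standard form, ignoring electronic noise, and not necessarily the authors' exact estimator) is

$$ w_{i} \;=\; \frac{1}{\operatorname{Var}(l_{i})} \;\approx\; N_{i} \;\approx\; N_{0,i}\,e^{-l_{i}}, $$

where N_{0,i} is the unattenuated count for ray i; the practical difficulty addressed in the paper is that the detected count N_i is only available as a single noisy realization.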
Deep learning-based sinogram extension method for interior computed tomography
Juuso H. J. Ketola, Helinä Heino, Mikael A. K. Juntunen, et al.
X-ray computed tomography (CT) is widely used in diagnostic imaging. Due to the growing number of CT scans worldwide, the consequent increase in population dose is of concern, and strategies for dose reduction are being investigated. One strategy is interior computed tomography (iCT), in which X-ray attenuation data are collected only from an internal region of interest. The resulting incomplete measurement is called a truncated sinogram (TS). Missing data from the surrounding structures result in reconstruction artifacts with traditional methods. In this work, a deep learning framework for iCT is presented. The TS is extended with a U-Net convolutional neural network, and the extended sinogram is reconstructed with filtered backprojection (FBP). The U-Net was trained for 300 epochs with an L1 loss. Truncated and full sinograms were simulated from CT angiography slice images for training data; 1097/193/152 sinograms from 500 patients were used in the training, validation, and test sets, respectively. Our method was compared with FBP applied to the TS (TS-FBP), adaptive sinogram detruncation followed by FBP (ADT-FBP), total variation regularization applied to the TS, and FBPConvNet using TS-FBP as input. The best root-mean-square error (0.04±0.01, mean±SD) and peak signal-to-noise ratio (29.5±2.9 dB) in the test set were observed with the proposed method. However, slightly higher structural similarity indices were observed with FBPConvNet (0.97±0.01) and ADT-FBP (0.97±0.01) than with our method (0.96±0.01). This work suggests that extending truncated sinogram data with a U-Net is a feasible way to reconstruct iCT data without artifacts that would render image quality unacceptable for medical diagnostics.
Quantitative cone-beam x-ray luminescence computed tomography with 3D TV denoising based on Split Bregman method
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is a noninvasive molecular imaging technique that maps the distribution of fluorescent nanomaterials in the imaged object. Describing the quantitative relationship between the reconstruction and the concentration of the fluorescent nanomaterials is an important open problem. However, in the CB-XLCT field most research aims to improve imaging accuracy while ignoring further quantitative evaluation of the reconstructed intensity. In this work, quantitative evaluation for CB-XLCT is studied. In addition, to improve quantitative performance, a new strategy based on the fast iterative shrinkage-thresholding algorithm (FISTA) and 3D total variation (TV) denoising with the Split Bregman (SB) method, termed FISTA-TV, is proposed for CB-XLCT reconstruction. In FISTA-TV, FISTA is applied to obtain an L1-regularized sparse reconstruction, and the Split Bregman method is used to solve the TV denoising problem. The sparse solution yielded by FISTA, together with 3D TV denoising based on Split Bregman, alleviates the ill-posedness of the CB-XLCT inverse problem, making the relationship between the reconstructed intensity and the actual concentration of fluorescent nanomaterials more accurate. Computer simulations show that quantitative reconstruction and evaluation for CB-XLCT are improved with the proposed FISTA-TV algorithm compared to the algebraic reconstruction technique (ART), Tikhonov regularization, and FISTA.
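A two-stage sketch of the FISTA-plus-TV idea, under our simplifying assumptions (a dense system matrix, a 2D image, and skimage's Split Bregman TV denoiser standing in for the 3D TV stage):

```python
# Stage 1: L1-regularized FISTA for min ||Ax - y||^2 + lam * ||x||_1.
# Stage 2: Split Bregman total-variation denoising of the sparse result.
import numpy as np
from skimage.restoration import denoise_tv_bregman

def fista_l1(A, y, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        g = z - (A.T @ (A @ z - y)) / L            # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum
        x, t = x_new, t_new
    return x

# x_sparse = fista_l1(A, y, lam=0.01).reshape(n, n)
# x_final = denoise_tv_bregman(x_sparse, weight=10.0)  # TV stage (Split Bregman)
```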
3D image reconstruction for symmetric-geometry CT with linearly distributed source and detector in a stationary configuration
Tao Zhang, Hewei Gao, Li Zhang, et al.
In conventional computed tomography (CT) systems, the X-ray source generally moves along a circular or spiral trajectory to achieve volume coverage. However, the gantry rotation increases manufacturing complexity and is the dominant limit on temporal resolution. Recently, a new concept of symmetric-geometry computed tomography (SGCT) was explored, in which the sources and detectors are linearly distributed in a stationary configuration. No source or detector movement is needed during SGCT data acquisition, which has the advantages of increasing scanning speed and simplifying system construction. In this work, we investigate three-dimensional (3D) image reconstruction for SGCT, where the special scanning trajectory (i.e., a tilting straight-line scan) is of interest. Based on an analysis of the imaging geometry and the projection data representation, a tilting straight-line analytic reconstruction (TSLA) method is proposed for 3D tomography. Preliminary results on 3D simulated phantoms show that the TSLA algorithm for SGCT can reach a reconstruction accuracy comparable to that of helical multidetector CT using the PI-original method. Moreover, with no rotation involved, SGCT offers fast CT scanning and has potential in many 3D tomography applications where scanning speed is critical.
Deep-learning based joint estimation of dual-tracer PET image activity maps and clustering of time activity curves
Yiming Wan, Huihui Ye, Huafeng Liu
Dual-tracer positron emission tomography (PET) has emerged as a promising nuclear medical imaging technology. It reflects more information about biological function than a single tracer, which is of great value for disease research, clinical diagnosis, and treatment. Image reconstruction and segmentation are often treated as two sequential problems; this paper integrates them into a coherent framework. In recent years, deep learning has attracted growing interest in medical image processing due to its powerful feature extraction ability. Neural-network-based reconstruction of simultaneous dual-tracer activity maps has the advantage of not relying on the prior information and interval injections that conventional methods require. In this paper, we propose a joint deep neural network for reconstruction of the two individual tracers and segmentation by clustering of time activity curves (TACs). For reconstruction, a classic generative adversarial network (GAN) called pix2pix is introduced as the basic framework, with a 3D U-Net used as the generator to realize reconstruction. The TACs in the reconstruction results are then extracted and clustered to achieve segmentation. In return, the segmentation loss is used to guide the training of the reconstruction network. The performance of the segmentation network exceeds that of the previous joint method, and the performance of the reconstruction network is further improved.
A new method and system for fast and accurate 3D projections in medical imaging and iterative reconstruction
Ivan Bajic
This work proposes the highly-parallel HRGB method, and an associated HRGB Parallel Electronic Circuit (PEC), for complex-geometry medical image reconstruction and model-based iterative reconstruction (MBIR). 3D forward projection, particularly ray-driven projection, is discussed. A new approach is proposed, wherein the modeled beam is, in effect, sent through frequency space (not image space), and image space is modeled as a continuous (not discrete) region. This allows accuracy and speed, e.g. the HRGB PEC eliminates interpolation computations and slow data access. For even greater speed, the HRGB PEC can be implemented as an asynchronous (or semi-asynchronous) circuit, allowing fast computations that are not limited (or, are negligibly limited) by digital clock cycles. Thus, the HRGB's 3D projections can be computed with greater accuracy, and for arbitrary sources and arbitrary detectors in 3D space, with multiple beams/rays on each detector element for additional accuracy.
Model-based reconstruction algorithm for x-ray induced acoustic tomography
We present a time-domain, model-based scheme for X-ray induced acoustic tomography (XACT). In comparison to the commonly used analytic schemes, the model-based reconstructions exhibit fewer limited-view and noise artifacts and enable the incorporation of detector sizes and sound-speed variations. In this paper, we briefly compare the performance of the proposed model-based scheme with universal back-projection and model back-projection reconstructions obtained for full- and limited-view test cases in a spherical detection geometry with noisy acoustic measurements. The algorithm has also been validated on experimental ring-array XACT data.
Posters: Machine Learning Applied to Imaging Physics
A deep learning approach to correctly identify the sequence of coincidences in cross-strip CZT detectors
Intra-detector scatters (IRS) and inter-detector scatters (IDS) are events that often occur in positron emission tomography (PET) due to Compton scattering of an annihilation photon inside one detector block or from one detector block to another. One challenge in PET systems based on cadmium zinc telluride (CZT) detectors is the high mass attenuation coefficient for Compton scattering at 511 keV, which causes a considerable fraction of multiple-interaction photon events (MIPEs). Moreover, in a cross-strip CZT detector, there is additional ambiguity in pairing an anode with its corresponding cathode for MIPEs in IRS. This study utilizes state-of-the-art deep learning to correctly identify interaction sequences in cross-strip CZT detectors. The approach promises to improve system sensitivity by identifying true lines of response (LORs) among the possible LORs arising from IRS events, IDS events, and intra-detector ambiguity, which are usually discarded.
Learning a projection operator onto the null space of a linear imaging operator
Image reconstruction algorithms seek to reconstruct a sought-after object from a collection of measurements. However, complete measurements such that an object can be uniquely reconstructed are seldom available. Analysis of the null components of the imaging system can guide both physical design of the imaging system and algorithmic design of reconstruction algorithms to more closely reconstruct the true object. Characterizing the null space of an imaging operator is a computationally demanding task. While computationally efficient methods have been proposed to iteratively estimate the null space components of a single or a small number of images, full characterization of the null space remains intractable for large images using existing methods. This work proposes a novel learning-based framework for constructing a null space projection operator of linear imaging operators utilizing an artificial neural network autoencoder. To illustrate the approach, a stylized 2D accelerated MRI reconstruction problem (for which an analytical representation of the null space is known) was considered. The proposed method was compared to state-of-the-art randomized linear algebra techniques in terms of accuracy, computational cost, and memory requirements. Numerical results show that the proposed framework achieves comparable or better accuracy than randomized singular value decomposition. It also has lower computational cost and memory requirements in many practical scenarios, such as when the dimension of the null space is small compared to the dimension of the object.
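For small problems, the exact projector that the network is trained to approximate can be computed directly from the pseudoinverse; the brute-force sketch below is precisely the route that becomes intractable at scale, which motivates the learned operator.

```python
# Exact null-space projector P = I - pinv(A) @ A for a toy linear
# imaging operator A; P @ f is the component of f invisible to A.
import numpy as np

def null_space_projector(A, tol=1e-10):
    A_pinv = np.linalg.pinv(A, rcond=tol)
    return np.eye(A.shape[1]) - A_pinv @ A

A = np.random.randn(30, 100)        # toy underdetermined imaging operator
P = null_space_projector(A)
f = np.random.randn(100)
f_null = P @ f
print(np.linalg.norm(A @ f_null))   # ~0: null component produces no data
```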
A probabilistic conditional adversarial neural network to reduce imaging variation in radiography
Medical images can vary due to differences in imaging equipment and conditions. This variability can negatively impact the consistency and accuracy of diagnostic processes. Hence, it is critical to decrease variability in image acquisition to achieve consistent analysis, both visually and computationally. Three main categories contribute to image variability: equipment, acquisition protocol, and image processing. The purpose of this study was to employ a deep neural network (DNN) method to reduce variability in radiography due to these factors. Given radiography images acquired with different settings, the network was set up to return harmonized images targeting a reference standard. This was implemented via a virtual imaging trial platform utilizing an X-ray simulator (DukeSim) and 77 anthropomorphic computational phantoms (XCAT). The phantoms were imaged at 120 kV at four different dose levels, with DukeSim emulating a typical flat-panel radiography system. The raw radiography images were then post-processed using a commercial algorithm at eight different settings, resulting in a total of 2464 radiographs. For each XCAT, the reference standard was defined as the noise-less and scatter-less radiography image with image processing parameters based on a radiologist's preference. The simulated images were then used to train and test the DNN. The test set yielded an average structural similarity index greater than 0.84 and an L1 error less than 0.02, indicating that the harmonized images were visually and analytically more consistent and closer to the desired reference appearance. The proposed method has great potential to enable effective and uniform interpretation of radiographic images.
Self-supervised learning for CT deconvolution
Images produced by CT systems with larger detector pixels often suffer from lower z resolution due to their wider slice sensitivity profile (SSP). Reducing the effect of SSP blur results in the resolution of finer structures and enables better clinical diagnosis. Hardware solutions, such as dicing the detector cells smaller or dynamically deflecting the X-ray focal spot, do exist to improve resolution, but they are expensive. In the past, algorithmic solutions such as deconvolution techniques have also been used to reduce SSP blur. These model-based approaches are iterative in nature and time consuming. Recently, supervised data-driven deep learning methods have become popular in computer vision for deblurring/deconvolution applications. Though most of these methods need corresponding pairs of blurred (LR) and sharp (HR) images, they are non-iterative during inference and hence computationally efficient. However, unlike model-based approaches, these methods do not explicitly model the physics of degradation. In this work, we propose Resolution Amelioration using Machine Adaptive Network (RAMAN), a self-supervised deep learning framework that explicitly combines the best of both learning- and model-based approaches. The framework explicitly accounts for the physics of degradation and appropriately regularizes the learning process. Also, in contrast to supervised deblurring methods that need paired LR and HR images, the RAMAN framework requires only LR images and SSP information for training, making it self-supervised. Validation of the proposed framework with images obtained from larger detector systems shows marked improvement in image sharpness while maintaining HU integrity.
Generation of contrast-enhanced CT with residual cycle-consistent generative adversarial network (Res-CycleGAN)
Contrast-enhanced computed tomography (CECT) is commonly used in clinical radiotherapy practice for enhanced delineation of tumors and organs at risk (OARs), since it provides additional visualization of soft tissue and vessel anatomy. However, the additional CECT scan leads to increased radiation dose, prolonged scan time, the risk of contrast-induced nephropathy (CIN), the potential need for image registration to the non-contrast simulation CT, and elevated cost. Hypothesizing that the non-contrast simulation CT contains sufficient features to differentiate blood from other soft tissues, we propose in this study a novel deep learning-based method for generating CECT images from non-contrast CT. The method exploits a cycle-consistent generative adversarial network (CycleGAN) framework to learn a mapping from non-contrast CT to CECT. A residual U-Net was employed as the generator of the CycleGAN to force the model to learn the specific differences between non-contrast CT and CECT. The proposed algorithm was evaluated on 20 sets of abdominal patient data using five-fold cross-validation. Each patient was scanned at the same position with both non-contrast simulation CT and CECT. The CECT images served as the training target during training and as ground truth during testing; the non-contrast simulation CT served as the input. Preliminary visual and quantitative inspections suggest that the proposed method can effectively generate CECT images from non-contrast CT. This method could improve anatomy definition and contouring in radiotherapy without the additional clinical effort of CECT scanning.
Mitigating unknown biases in CT data using machine learning
CT reconstruction requires an accurate physical model. Mismatches between model and data represent unmodeled biases and can induce artifacts and systematic quantitation errors. Bias effects are dependent on bias structure and data processing (e.g. model-based iterative reconstruction versus filtered-backprojection). In this work, we illustrate this sensitivity and develop a strategy to estimate unmodeled biases for situations where the underlying source is unknown or difficult to estimate directly. We develop a CNN framework for projection-domain de-biasing using a ResUNet architecture and spatial-frequency loss function. We demonstrate a reduction in reconstruction errors across bias conditions and reconstruction methods.
Investigation of the efficacy of a data-driven CT artifact correction scheme for sparse and truncated projection data for intracranial hemorrhage diagnosis
Data-driven CT image reconstruction techniques for truncated or sparsely acquired projection data have been proposed to reduce radiation dose, iodine volume, and patient motion artifacts. These approaches have shown good performance and preservation of image quality metrics. To continue these efforts, we investigated whether these techniques affect the performance of a machine learning algorithm in identifying the presence of intracranial hemorrhage (ICH). Ten thousand head CT scans were collected from the 2019 RSNA Intracranial Hemorrhage Detection and Classification Challenge dataset. Sinograms were simulated and then resampled in both a one-third truncated and a one-third sparse manner. GANs were tasked with correcting the incomplete projection data in two ways: first, in the sinogram domain, where the incomplete sinogram was filled in by the GAN and then reconstructed; and second, in the reconstruction domain, where the incomplete data were first reconstructed and the sparse or truncation artifacts were then corrected by the GAN. Eighty-five hundred images were used for artifact-correction network training, and 1500 were withheld for network assessment via an already trained machine learning algorithm tasked with diagnosing ICH presence. Fully sampled reconstructions were compared with the sparse and truncated reconstructions for classification accuracy. The difference in classification accuracy between the fully sampled (83.4%), sparse (82.0%), and truncated (82.3%) reconstructions was minimal, demonstrating that diagnostic performance is essentially unaffected by a two-thirds reduction of projection data. This work indicates that data-driven reconstructions from sparse or truncated projection datasets can provide high diagnostic performance for ICH detection at a fraction of the typical radiation dose.
Estimation of patient eye-lens dose during neuro-interventional procedures using a dense neural network (DNN)
The patient's eye-lens dose changes with each projection view during fluoroscopically guided neuro-interventional procedures. Monte-Carlo (MC) simulation can estimate lens dose, but MC cannot run in real time to give feedback to the interventionalist. Deep learning (DL) models were therefore investigated to estimate the patient lens dose for given exposure conditions and provide real-time updates. MC simulations were done using a Zubal computational phantom to create a dataset of eye-lens dose values for training the DL models. Six geometric parameters (entrance-field size; LAO gantry angulation; patient x, y, z head position relative to the beam isocenter; and whether the patient's right or left eye) were varied in the simulations. The dose for each combination of parameters was expressed as lens dose per entrance air kerma (mGy/Gy). Parameter combinations associated with high dose values were sampled more finely to generate more high-dose values for training purposes. Additionally, doses at intermediate parameter values were calculated by MC in order to validate the interpolation capabilities of the DL models. The data were split into training, validation, and testing sets. Stacked models and median algorithms were implemented to create more robust models. Model performance was evaluated using the mean absolute percentage error (MAPE). The goal is for this DL model to be implemented in the Dose Tracking System (DTS) developed by our group; this would allow the DTS to infer the patient's eye-lens dose for real-time feedback and eliminate the need for a large database of pre-calculated values with interpolation capabilities.
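For reference, the evaluation metric has the standard definition (with our notation D^MC for the Monte-Carlo reference dose and D^DL for the network estimate):

$$ \mathrm{MAPE} \;=\; \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{D_{i}^{\mathrm{MC}} - D_{i}^{\mathrm{DL}}}{D_{i}^{\mathrm{MC}}}\right|. $$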
Low-dose CT denoising via CNN with an observer loss function
Convolutional neural network (CNN) based low-dose CT denoising is effective at dealing with complex CT noise. However, CNN denoisers trained with pixel-level loss functions (e.g., mean squared error (MSE) and mean absolute error (MAE)) often produce image blur. To overcome this limitation, a perceptual loss function (e.g., VGG loss) can be adopted to train the CNN denoiser; a CNN denoiser with VGG loss better preserves structural details in denoised images. However, because the VGG network is trained on natural RGB images, which have different image properties from CT images, the extracted features may not be relevant to diagnosis. In addition, a CNN denoiser with VGG loss introduces a bias in the CT numbers of denoised images. In this work, we propose an observer loss to train the CNN denoiser. The observer network (i.e., the feature extractor in the observer loss) is trained on CT images to classify lesion-present and lesion-absent cases. We conduct two binary classification tasks: signal-known-exactly (SKE) and signal-known-statistically (SKS). Because labeled CT images are hard to obtain, we insert simulated lesions. The CNN denoiser with observer loss preserves small structures and edges in denoised images without introducing a bias in CT number.
CNN-based CT denoising with an accurate image domain noise insertion technique
Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNNs require a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image-domain noise insertion technique and investigate its effect on the denoising performance of a CNN. Fan-beam CT images were reconstructed from extended cardiac-torso phantoms with Poisson noise added to the projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from an NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image-domain noise by filtering white Gaussian noise with the local noise power spectra and scaling it with the variance map. The CNN architecture was a U-Net, and the loss function was a weighted sum of mean squared error, perceptual loss, and adversarial loss. The CNN was trained with NDCT and LDCT (CNN-Ideal) or NDCT and synthesized LDCT (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed show no significant difference in any metric, with both providing favorable RMSE and SSIM relative to NDCT and NPS shapes similar to that of NDCT.
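The described noise synthesis, filtering white Gaussian noise with the noise power spectrum and scaling with the variance map, can be sketched as follows (normalization details are simplified, and a single global NPS is used here in place of the local spectra):

```python
# Synthesize image-domain CT noise with a target NPS and variance map.
import numpy as np

def synthesize_noise(nps, var_map, rng=None):
    """nps: 2D target noise power spectrum (same shape as the image).
    var_map: per-pixel target noise variance."""
    rng = rng or np.random.default_rng()
    white = rng.standard_normal(nps.shape)
    # Shape the white noise by filtering with sqrt(NPS) in frequency space.
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)).real
    shaped /= shaped.std()                  # normalize to unit variance
    return shaped * np.sqrt(var_map)        # impose the local variance

# ldct = ndct + synthesize_noise(local_nps, variance_map)
```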
A deep learning post-processing to enhance the maximum likelihood estimate of three material decomposition in photon counting spectral CT
Alma Eguizabal, Mats U. Persson, Fredrik Grönberg
Photon counting detectors in x-ray computed tomography (CT) are a major technological advancement: they provide additional energy information and improve the decomposition of the CT image into material images. This material decomposition is, however, a non-linear inverse problem that is difficult to solve, in terms of both computational expense and accuracy. The most accepted solution consists of defining an optimization problem based on a maximum likelihood (ML) estimate with Poisson statistics, a model-based approach that depends strongly on the assumed forward model and the chosen optimization solver. This may make the material decomposition result noisy and slow to compute. To incorporate data-driven enhancement into the ML estimate, we propose a deep learning post-processing technique. Our approach is based on convolutional residual blocks that mimic the updates of an iterative optimization process and take the ML estimate as input. The architecture therefore implicitly considers the physical models of the problem and consequently needs less training data and fewer parameters than the standard convolutional networks typically used in medical imaging. We have studied our deep learning post-processing in simulation, first on a set of 350 Shepp-Logan-based phantoms and then on 600 human numerical phantoms. Our approach shows denoising enhancement over two different ray-wise decomposition methods: one based on Newton's method for solving the ML estimation, and one based on a linear least-squares approximation of the ML expression. We believe this deep learning post-processing approach is a promising technique for denoising material-decomposed sinograms in photon-counting CT.
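For reference, the ML estimate under Poisson statistics corresponds to minimizing the negative log-likelihood (the standard form for photon-counting data; the notation is ours):

$$ \hat{A} \;=\; \arg\min_{A} \sum_{i} \Big[ \lambda_{i}(A) - m_{i} \ln \lambda_{i}(A) \Big], $$

where m_i are the measured counts in energy bin i and λ_i(A) the expected counts given the basis-material line integrals A; the proposed network post-processes the resulting estimate.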
Synthetic dual energy CT imaging from single energy CT using deep attention neural network
This work presents a learning-based method to synthesize dual energy CT (DECT) images from conventional single energy CT (SECT). The proposed method uses a residual attention generative adversarial network. Residual blocks with attention gates were used to force the model to focus on the difference between DECT maps and SECT images. To evaluate the accuracy of the method, we retrospectively investigated 20 head-and-neck cancer patients with both DECT and SECT scans available. The high and low energy CT images acquired from DECT acted as learning targets in the training process for SECT datasets and were evaluated against results from the proposed method using a leave-one-out cross-validation strategy. The synthesized DECT images showed an average mean absolute error around 30 Hounsfield units (HU) across the whole-body volume. These results strongly indicate the high accuracy of the DECT images synthesized by our machine-learning-based method.
Image reconstruction from projections of digital breast tomosynthesis using deep learning
Davi D. de Paula, Denis H. P. Salvadeo, Darlan M. N. de Araújo
The Filtered Backprojection (FBP) algorithm for Computed Tomography (CT) reconstruction can be mapped entirely onto an Artificial Neural Network (ANN), with the backprojection (BP) operation computed analytically in one layer and the Ram-Lak filter implemented as a convolutional layer. In this work, we adapt the BP layer for Digital Breast Tomosynthesis (DBT) reconstruction, making it possible to use FBP simulated as an ANN to reconstruct DBT images. We showed that by making the Ram-Lak layer trainable, the reconstructed image can be improved in terms of noise reduction. Finally, this study enables additional proposals of ANNs with deep learning models for DBT reconstruction and denoising.
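To illustrate the trainable-filter idea, here is a minimal PyTorch sketch of a 1D frequency-domain filtering layer initialized to the Ram-Lak ramp; the exact layer formulation in the paper may differ.

```python
import torch
import torch.nn as nn

class TrainableRamLak(nn.Module):
    def __init__(self, n_detectors):
        super().__init__()
        # Initialize learnable weights to the ramp |f| of the Ram-Lak filter.
        freqs = torch.fft.rfftfreq(n_detectors)
        self.weights = nn.Parameter(freqs.abs())

    def forward(self, sinogram):
        # sinogram: (..., n_angles, n_detectors); filter each projection row.
        spec = torch.fft.rfft(sinogram, dim=-1)
        return torch.fft.irfft(spec * self.weights,
                               n=sinogram.shape[-1], dim=-1)
```

Training this layer jointly with a fixed analytic backprojection layer is what lets the network adapt the filter to the limited-angle DBT geometry.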
Transfer learning-based synthetic CT generation for MR-only proton therapy planning in children with pelvic sarcomas
Proton therapy planning requires Hounsfield unit (HU) data from CT images to calculate dose and, in the case of pelvic sarcomas, accurate registration of the CT to MRI to delineate tumor. MR-only proton therapy planning would eliminate the uncertainty associated with CT/MR image registration and the need for CT, reducing exposure to radiation and anesthesia in children. We determined whether MR-only proton therapy planning is feasible by introducing a transfer learning-based cycleGAN (TL-cycleGAN) method to convert pelvic MRI to synthetic CT (sCT) for dose calculation and to specifically address the challenge of a small training dataset, commonly associated with pediatric studies. The TL-cycleGAN was designed to transfer knowledge gained from converting a large number (n=125) of pediatric brain MRI studies to sCT and to fine-tune the well-trained model on pelvic data. Sixteen patients (aged 1.1–21.3 years, 7 female) who received proton therapy to the pelvis were randomly divided into training (n=11) and testing (n=5) groups. sCT generated from T1- and T2-weighted fat-suppression MR images was compared to the real CT in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, mean error (ME), and mean absolute error (MAE) in HU. The mean ± standard deviation of PSNR, SSIM, ME, and MAE were 30.6±3.0, 0.93±0.04, -3.4±10.2 HU, and 52.4±17.6 HU, respectively, for T1W MRI; and 29.2±1.5, 0.93±0.02, -6.6±24.8 HU, and 85.4±18.8 HU, respectively, for T2W MRI. Transfer learning facilitates MR-only pediatric pelvic proton therapy planning by generating highly accurate sCT despite a small training dataset and large anatomical variation.
PET image resolution uniformity improvements using deep learning
Jing Lin, Yingying Li, Huihui Ye, et al.
The position-coding errors of the lines of response (LORs) caused by depth of interaction (DOI) effects, especially the radial DOI effect, lead to nonuniform resolution across the field of view (FOV) in positron emission tomography (PET), so that severe "tailing" appears in the reconstructed image. At present, the most common approach to the radial DOI problem is a hardware one, mainly involving the design of new detectors based on different principles. However, those approaches rely on more complex detector structures and signal processing methods. Inspired by systems that employ machine learning at the hardware level, we propose a new deep learning based approach dedicated to improving the uniformity of radial resolution in PET imaging. Its basic idea is that images with high resolution at the center of the FOV are fed as labels into ISTA-Net, which is trained on sinograms of the same objects at the edge of the FOV. The network makes use of the nonuniformity itself without requiring new detectors or any additional information, such as DOI information from specialized detectors or point spread function (PSF) information. Qualitative assessment and quantitative analysis, based on a real rat dataset and on Monte Carlo simulations in which a four-layered DOI coding system was designed to obtain DOI information, demonstrate that the proposed reconstruction method greatly improves image resolution at the edge of the FOV and enhances the uniformity of radial resolution across the detection space.
Enhanced PET/MRI reconstruction via dichromatic interpolation of domain-translated zero-dose PET
We present a new strategy for incorporating high resolution structural information from MRI into the reconstruction of PET imagery via deep domain-translated image priors. The strategy involves two steps: (1) predicting a PET uptake volume directly from MRI without requiring a radiation dose, and (2) using the predicted dose-free PET volume to impose sparsity constraints on the PET reconstruction from measured sinograms. The key idea of our approach is that domain-translated PET imagery can capture the true spatial and sparsity patterns of PET imagery, which can be used to guide the convergence of the statistics-limited inverse problem. This scheme can be superior to joint-sparsity reconstruction, among other methods, since the mismatch between PET and MRI features is significantly reduced by using the domain-translated zero-dose PET as the prior instead. We evaluate this technique on a whole-body 18F-FDG-PET dataset, demonstrating that dichromatic interpolation can recover high quality PET imagery from noisy and low dose PET/MRI, with no observed failure cases.
A topological approach for the pattern analysis on chest x-ray images of COVID-19 patients
Motivated by the global outbreak of COVID-19, this work used anterior-posterior (AP) and posterior-anterior (PA) chest X-ray images as input data for computational image processing, with the goal of approximating a range of luminescence values that isolates the anatomical region of the lungs by comparing local maxima in the luminescence histograms obtained from an open dataset of chest X-ray images stored in a public GitHub repository at https://github.com/ieee8023/covid-chestxray-dataset. Luminescence masks were obtained from the approximated luminescence values in the image histograms that correspond to the anatomical region of the lungs in the original AP and PA chest X-ray images. The luminescence masks were used to segment the regions of interest containing the lungs, which were stored in separate images. The luminescence histograms from the segmented images were given as inputs to the K-means algorithm, an unsupervised learning algorithm applied as part of the pipeline of the mapper algorithm to cluster the data into groups. The mapper algorithm provides a visual representation of the patterns found in the clusters obtained from the luminescence frequency values in the images, through interconnected nodes in a simplicial complex. A simplicial complex is a mathematical object that allows topological features to be observed in a graph formed by nodes connected by edges. Where the mapper algorithm closely connects regions of nodes in the simplicial complex, it indicates ranges of luminescence values in the input images that provide helpful information for the analysis of chest X-ray images.
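A small sketch of the first two steps, assuming 8-bit grayscale images and illustrative luminance bounds, is given below: masking the lung luminance range and clustering the histogram with K-means before the mapper construction.

```python
import numpy as np
from sklearn.cluster import KMeans

def lung_mask(img, lo, hi):
    # Keep pixels whose luminance falls in the lung range found from
    # local maxima of the histogram (lo/hi chosen per dataset).
    return np.where((img >= lo) & (img <= hi), img, 0)

def cluster_histogram(img, k=4):
    # Histogram of the segmented (nonzero) region, 8-bit assumed.
    hist, _ = np.histogram(img[img > 0], bins=256, range=(0, 255))
    # Cluster (bin, frequency) pairs; labels feed the mapper construction.
    feats = np.column_stack([np.arange(256), hist])
    return KMeans(n_clusters=k, n_init=10).fit_predict(feats)
```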
Deep learning based similarity-consistency abnormality detection (SCAD) model for classification of MRI patterns of multiple myeloma (MM) infiltration
With IRB approval, a total of 132 sagittal views of the T1-weighted (T1W) sequence in spinal MRI scans were collected from 67 patients at our institution and used in this study. We developed a similarity-consistency abnormality detection (SCAD) model for classification of MRI patterns associated with low and high risk of multiple myeloma (MM). Our SCAD model consisted of five CNN structures: a generator, an encoder, and three discriminators. The generator was used to capture the distribution of training samples by mapping given distributions to the distribution of training samples. The encoder mapped the distribution of training samples back to the given distribution to speed up inference. The three discriminators were designed to force the generator and the encoder to meet a cycle-consistency constraint. The five components were trained together following the min-max game of GAN training. The MRI pattern of each vertebra (normal, focal, variegated, or diffuse) was provided by an experienced radiologist as the reference standard. We trained our SCAD model using the vertebrae with the normal pattern, and deployed the trained SCAD model on the three abnormal patterns. The results showed that our SCAD model achieved test AUCs of 0.71, 0.79, and 0.88 in distinguishing the focal, variegated, and diffuse patterns from the normal pattern, respectively.
Using a convolutional neural network for human recognition in a staff dose management software for fluoroscopic interventional procedures
J. Troville, R. S. Dhonde, S. Rudin, et al.
Staff dose management is a continuing concern in fluoroscopically-guided interventional (FGI) procedures. Unawareness of radiation scatter levels can lead to unnecessarily high stochastic and deterministic risks from dose absorbed by staff members. Our group has developed a scattered-radiation display system (SDS) capable of monitoring system parameters in real-time using a controller-area network (CAN) bus interface and displaying a color-coded mapping of the Compton-scatter distribution. This system additionally uses a time-of-flight depth sensing camera to track staff member positional information for dose rate updates. The current work capitalizes on our body tracking methodology to facilitate individualized dose recording via human recognition using 16-bit grayscale depth maps acquired with a Microsoft Kinect V2. Background features are removed from the images using a depth threshold technique and connected component analysis, which results in a body silhouette binary mask. The masks are then fed into a convolutional neural network (CNN) for identification of unique body shape features. The CNN was trained using 144 binary masks for each of four individuals (576 images in total). Initial results indicate high-fidelity prediction (97.3% testing accuracy) from the CNN irrespective of obstructing objects (face masks and lead aprons). Body tracking is still maintained when protective attire is introduced, albeit with a slight increase in positional data error. Dose reports can then be produced that contain cumulative dose to each staff member at the eye-lens, waist, and collar levels. Individualized cumulative dose reporting through the use of a CNN, in addition to real-time feedback in the clinic, will lead to improved radiation dose management.
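The silhouette-extraction step can be sketched as follows, assuming a 16-bit depth map in millimetres and illustrative near/far gates; the authors' exact thresholds and connected-component rules may differ.

```python
import numpy as np
from scipy import ndimage

def body_silhouette(depth_map, near=500, far=2500):
    fg = (depth_map > near) & (depth_map < far)   # depth gate in mm
    labels, n = ndimage.label(fg)                 # connected components
    if n == 0:
        return np.zeros_like(fg)
    # Keep the largest connected component as the body mask.
    sizes = ndimage.sum(fg, labels, index=np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```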
Improving proton CT beyond iterative methods with a convolutional neural network
Relative stopping power (RSP) values of tissues in patients are needed to plan proton beam therapy accurately. Proton CT (pCT) is an alternative imaging method for obtaining more accurate RSP values than X-ray CT. This imaging modality gives mostly accurate RSP values, but the images are blurred due to multiple Coulomb scattering. To reduce the blurriness of reconstructed pCT images, we investigated a denoising convolutional neural network trained on known ground-truth RSP values of a digital phantom. In our initial results, with the denoising network receiving pCT images reconstructed with an iterative method as input, we observed improved spatial resolution and better RSP accuracy in the output images. The improved images had a higher peak signal-to-noise ratio (PSNR) and significantly improved structural similarity index measure (SSIM). More accurate RSP values with better spatial resolution will pave the way for more widespread adoption of pCT for proton treatment planning.
Posters: Photon Counting and Phase Contrast Imaging
Low-dose photon-counting CT with penalized-likelihood basis-image reconstruction: image quality evaluation
Mats U. Persson
Photon-counting CT scanners promise improvements in terms of noise performance, spatial resolution and material-discrimination capabilities, and their ability to reject electronic noise gives them a particularly large advantage for low-dose imaging compared to energy-integrating CT. Since filtered backprojection is suboptimal for highly noisy image data, model-based iterative reconstruction can be expected to give improved image quality for low-dose CT imaging. Several "one-step" algorithms have been proposed that combine material decomposition and image reconstruction in a single optimization problem. The purpose of this simulation study is to evaluate the image quality that can be achieved with a one-step model-based iterative reconstruction for photon-counting low-dose CT. To this end, a penalized Poisson-likelihood model is used to reconstruct material basis images and virtual monoenergetic images from simulated measurements with a silicon-based photon-counting CT scanner and study the resulting image quality in terms of the edge-spread function, contrast-to-noise ratio and noise power spectrum, so that the tradeoff between noise and spatial resolution can be studied. The results are compared with a two-step method where projection-space material decomposition is followed by filtered backprojection. Our results show that the unconstrained one-step method can give good image quality even for low-dose images where the unconstrained two-step method fails. These results demonstrate the potential of photon-counting CT for low-dose imaging applications.
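As a sketch of the objective behind such one-step methods, the penalized Poisson negative log-likelihood can be written down in a few lines; the forward operator below is a placeholder assumption standing in for the full spectral system model.

```python
import numpy as np

def penalized_nll(basis_images, counts, forward, penalty, beta):
    """forward: maps basis images to expected photon counts (system model);
    penalty: roughness functional; beta: regularization strength."""
    lam = forward(basis_images)                  # expected photon counts
    lam = np.clip(lam, 1e-12, None)              # avoid log(0)
    nll = np.sum(lam - counts * np.log(lam))     # Poisson NLL (up to const.)
    return nll + beta * penalty(basis_images)

def quad_roughness(x):
    # Simple quadratic penalty on nearest-neighbor differences.
    return np.sum(np.diff(x, axis=-1) ** 2) + np.sum(np.diff(x, axis=-2) ** 2)
```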
Maximum A Posteriori material decomposition for spectral photon-counting CT: application to human blood iron level estimation
Adam P. Harrison, Stefan Ulzheimer, Amir Pourmorteza
Purpose: To develop a dose-efficient image-based material decomposition technique for spectral photon-counting computed tomography (PCCT) data and investigate estimating human blood iron concentration from contrast-enhanced scans. Methods: We adapt a maximum a posteriori (MAP) approach to decomposition, formulating spectral material decomposition as maximizing a posterior likelihood that incorporates both the standard linear generative model of decomposition and a smoothness prior. Our approach employs numeric tensor algebra and software, which can naturally handle the high-dimensional nature of decomposition. To ensure accurate priors, we compute smoothness weights using the image created from all detected photons. Our MAP approach only requires solving a large, sparse linear system, with one tuning parameter. Results: We test the algorithm on 4-energy-threshold spectral PCCT scans of a human subject pre- and post-contrast. MAP estimates remain stable while reducing noise standard deviation by 80.1% and 75.4% for iron and iodine, respectively, which suggests an over 4x decrease in radiation dose. Aortic iron concentration measured from MAP had a small bias post-contrast, but with a noise reduction of roughly 80%. This small bias (-5%) in iron content may be attributed to the blood volume increase after contrast injection. Conclusion: The dose-efficient MAP decomposition method shows improved precision over the standard approach in estimating blood-iron concentration. Future work will include additional human studies to determine the optimal trade-off between precision and algorithmic bias.
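A minimal sketch of the described solve, assuming the problem has been assembled into a sparse regularized system (A^T A + λ L^T L) x = A^T y with a smoothness operator L; operator construction and the photon-weighted priors are omitted here.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def map_decompose(A, y, L, lam):
    """A: (n_measurements x n_unknowns) linear spectral model,
    y: measured data, L: sparse smoothness operator, lam: tuning parameter."""
    A = sp.csr_matrix(A)
    # Normal equations of the MAP objective with a quadratic smoothness prior.
    lhs = (A.T @ A + lam * (L.T @ L)).tocsc()
    return spsolve(lhs, A.T @ y)
```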
Multivariate SNR in spectral computed tomography
Jayasai R. Rajagopal, Faraz Farhadi, Ayele H. Negussie, et al.
In this work, we define a theoretical approach to characterizing the signal-to-noise ratio (SNR) of multi-channeled systems such as spectral computed tomography image series. Spectral image datasets encompass multiple near-simultaneous acquisitions that share information. The conventional definition of SNR is applicable to a single image and thus does not account for the interaction of information between images in a series. We propose an extension of the conventional SNR definition into a multivariate space where each image in the series is treated as a separate information channel thus defining a spectral SNR matrix. We apply this to the specific case of contrast-to-noise ratio (CNR). This matrix is able to account for the conventional CNR of each image in the series as well as a covariance weighted CNR (Cov-CNR), which accounts for the covariance between two images in the series. We evaluate this experimentally with data from an investigational photon-counting CT scanner (Siemens).
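To make the matrix idea concrete, here is a small sketch, assuming co-registered ROI samples from each image channel, of a multivariate (Mahalanobis-type) CNR that accounts for inter-channel covariance; the exact normalization used by the authors may differ.

```python
import numpy as np

def multivariate_cnr(signal_rois, background_rois):
    """signal_rois, background_rois: lists of 1D pixel-sample arrays,
    one per image in the spectral series."""
    d = np.array([s.mean() - b.mean()
                  for s, b in zip(signal_rois, background_rois)])
    cov = np.cov(np.stack([b.ravel() for b in background_rois]))
    cov = np.atleast_2d(cov)                # handle the single-channel case
    # Mahalanobis-type CNR: sqrt(d^T C^-1 d); off-diagonal covariance terms
    # capture noise shared between near-simultaneous acquisitions.
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

With a single channel this reduces to the conventional CNR, |Δmean|/σ, which is the sense in which the matrix formulation extends the scalar definition.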
Theoretical comparison of energy-resolved and temporal-subtraction angiography
Energy-resolving x-ray detectors may enable producing iodine-specific images of the coronary arteries without the presence of motion artifacts. We refer to this approach as energy-resolved angiography (ERA), which uses basis material decomposition to produce iodine-specific images. We compared the theoretical iodine pixel signal-to-noise ratio (SNR) and the zero-frequency SNR of ERA with those of conventional digital subtraction angiography (DSA), the latter of which produces iodine-specific images by subtracting images acquired before and after iodine injection. For both ERA and DSA, we modeled iodine SNR with and without the response of realistic x-ray detectors. For ERA, we used a validated model of the energy response of a cadmium zinc telluride (CZT) spectroscopic x-ray detector to account for spectral degradation and spatio-energetic cross talk due to charge sharing. For DSA, we modeled the response of a cesium-iodide (CsI)-based detector and validated our model by comparison with published data. Incorporating a realistic energy response for spectroscopic x-ray detectors decreased the pixel SNR and zero-frequency SNR by more than a factor of two. In the case of DSA, optical blur in the scintillator increased iodine SNR relative to ideal systems, a result attributable to reduced high-frequency noise in the presence of optical blur. Our results suggest that, for the same patient x-ray exposure, the pixel SNR and zero-frequency SNR of ERA will be ~1/6 and ~1/3 of those of DSA, respectively.
An experimental evaluation of material separability in photon-counting CT
Jayasai R. Rajagopal, Faraz Farhadi, Ayele H. Negussie, et al.
Signal separability is an important factor in the differentiation of materials in spectral computed tomography. In this work, we evaluated the separability of two such materials, iodine and gadolinium, with k-edges of 33.1 keV and 50.2 keV, respectively, on an investigational photon-counting CT scanner (Siemens, Germany). A 20 cm water-equivalent phantom containing vials of iodine and gadolinium was imaged. Two datasets were generated by either varying the amount of contrast (iodine: 0.125–10 mg/mL; gadolinium: 0.125–12 mg/mL) or by varying the tube current (50–300 mAs). Regions of interest were drawn within vials and then used to construct multivariate Gaussian models of signal. We evaluated three separation metrics using the Gaussian models: the area under the curve (AUC) of the receiver operating characteristic curve, the mean Mahalanobis distance, and the Jaccard index. For the dataset with varying contrast, all three metrics showed similar trends, indicating higher separability when there was a large difference in signal magnitude between iodine and gadolinium. For the dataset with varying tube current, AUC showed the least variation due to changing noise conditions and had a higher coefficient of determination (0.99, 0.97) than either the mean Mahalanobis distance (0.69, 0.62) or the Jaccard index (0.80, 0.75) when compared to material decomposition results for iodine and gadolinium, respectively.
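The following sketch, under the assumption that each material's ROI signal is summarized by a multivariate Gaussian, computes two of the three metrics named above: a Monte-Carlo AUC based on likelihood-ratio scores (one reasonable estimator, not necessarily the authors') and the Mahalanobis distance under a pooled covariance.

```python
import numpy as np
from scipy.stats import multivariate_normal

def separability(mu_i, cov_i, mu_g, cov_g, n=2000, seed=0):
    gi = multivariate_normal(mu_i, cov_i)
    gg = multivariate_normal(mu_g, cov_g)
    xi = gi.rvs(n, random_state=seed)        # samples from each model
    xg = gg.rvs(n, random_state=seed + 1)
    # Likelihood-ratio scores; AUC = P(score of class i > score of class g).
    si = gi.logpdf(xi) - gg.logpdf(xi)
    sg = gi.logpdf(xg) - gg.logpdf(xg)
    auc = float((si[:, None] > sg[None, :]).mean())
    # Mahalanobis distance of the mean difference, pooled covariance.
    d = np.asarray(mu_i, float) - np.asarray(mu_g, float)
    pooled = (np.asarray(cov_i) + np.asarray(cov_g)) / 2.0
    maha = float(np.sqrt(d @ np.linalg.solve(pooled, d)))
    return auc, maha
```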
Simulation strategy for using large realistic phantoms in propagation-based phase contrast x-ray imaging
Ilian Häggmark, Kian Shaker, Hans M. Hertz
We present a straightforward strategy for processing anthropomorphic phantoms to enable simulations of propagation-based phase-contrast X-ray imaging (PBI). PBI requires sampling on the order of a few micrometers, which results in several terabytes of storage for a centimeter-sized model. At long propagation distances, the thickness of each material along the X-ray direction is sufficient for accurate simulations. By using a boundary-preserving upsampling procedure on a coarsely voxelated phantom, followed by a summation along the X-ray direction, highly sampled thickness maps can be built. This opens the door to improved imaging optimization and virtual clinical trials.
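The following sketch, with an assumed upsampling factor and voxel size, illustrates the two described steps: boundary-preserving (nearest-neighbor) upsampling of a coarse label volume, then summation along the X-ray direction into a per-material thickness map.

```python
import numpy as np

def thickness_map(labels, material_id, upsample=4, voxel_mm=0.05, axis=0):
    # Nearest-neighbor upsampling keeps material boundaries sharp.
    fine = labels.repeat(upsample, 0).repeat(upsample, 1).repeat(upsample, 2)
    # Projected thickness of one material along the propagation axis.
    return (fine == material_id).sum(axis=axis) * (voxel_mm / upsample)
```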
Towards x-ray phase contrast tomography in clinical conditions: simulation and phase retrieval development
Laurène Quénot, Ludovic Broche, Clément Tavakoli, et al.
X-ray phase contrast imaging has proven to be of great interest for the diagnosis and study of many different pathologies, especially osteoarticular diseases, as it allows every type of joint tissue to be visualized within a single image. For the time being, phase contrast tomography has been confined to synchrotrons, and its clinical transfer has therefore become a major challenge over the past decades. Different phase contrast imaging techniques are currently being studied for that purpose: grating interferometry, edge illumination and, more recently, speckle-based imaging. Because of its simplicity, in this work we study the possibility of transferring speckle-based imaging to conventional x-ray sources. The main challenges we face are the loss of spatial and temporal coherence of conventional sources and the loss in resolution compared to synchrotrons. We present a numerical simulation code that we use to study the influence of different experimental parameters. We also introduce a new phase retrieval algorithm for low-coherence systems and compare it to existing ones, showing that it already performs well, even for conventional sources.
Tracking cells in the brain of small animals using synchrotron multi-spectral phase contrast imaging
Clément Tavakoli, Elisa Cuccione, Chloé Dumot, et al.
Synchrotron X-ray multi-spectral imaging is a novel imaging modality that may allow cells to be tracked at high resolution in small animal models. The data volume generated by such a technique can reach hundreds of gigabytes per animal. An automatic, robust and rapid pipeline is therefore of paramount importance for large-scale studies. The goal of this article is to present a full image analysis pipeline, ranging from CT reconstruction up to the segmentation of nanoparticle-labeled cells. Experimentally, rats that had received an intracerebral transplantation of gold nanoparticle-labeled cells were imaged in vivo in phase contrast mode (propagation-based imaging) at two energies strategically chosen around the k-edge of gold. We apply a dedicated phase retrieval technique on each projection (out of 2000 for a complete 2π rotation) before CT reconstruction. A rigid registration is then performed between the images below and above the k-edge for accurate subtraction of the two data sets, leading to gold concentration maps. Due to the large number of specimens, the registration is based on automatic segmentation of the cranial skull. Finally, an automatic segmentation of gold-labeled cells within the brain is performed based on high spots of gold concentration. An example of an in-vivo data set for stroke cell therapy is presented.
Challenges of employing a high resolution x-ray detector in a coded-aperture x-ray phase-contrast imaging system
In this work, we have evaluated the performance of a coded-aperture (edge-illumination) x-ray phase contrast imaging (CA-XPCi) system employing a high resolution x-ray detector with a pixel pitch and size of 7.8 µm. Two of the main challenges of employing a high resolution x-ray detector in a CA-XPCi system, namely the fabrication of high resolution x-ray absorption masks and environmental vibration, are addressed in this paper. We have investigated the effect of both absorption mask thickness and mechanical vibration on the performance of a high resolution CA-XPCi system using a simulation tool based on a wave-optics model. It is demonstrated how the thickness of the absorption masks affects the behavior of the CA-XPCi system when more than 30% of the incident x-rays are transmitted through the masks; the behavior of the CA-XPCi system changes to that of a propagation-based (PB) XPCi system when the transmitted portion exceeds 60% of the incident beam. It is also highlighted that mechanical (environmental) vibration has only a minor effect on CA-XPCi systems with large pixel sizes, whereas it has a considerable impact on a CA-XPCi system when a high resolution detector is employed. Although a high resolution CA-XPCi system has not yet been realized due to technological bottlenecks in high resolution mask fabrication, this study provides a comprehensive analysis of the challenges of using a high resolution x-ray detector with current technology, so that they can be considered in future designs.
Quantitative phase retrieval simulation for mesh-based x-ray phase imaging systems
Laila Hassan, Weiyuan Sun, Carolyn A. MacDonald, et al.
A system that employs a stainless steel wire mesh to produce a high-contrast structured illumination pattern reduces the need for source coherence and complex alignments in x-ray phase imaging. Phase is reconstructed from distortions in this pattern due to phase-based x-ray deflection. A computational model of this system has been developed. Accurately assessing detector performance significantly improves the accuracy of the phase reconstruction.
Posters: Breast Imaging
Assessment of a tumour growth model for virtual clinical trials of breast cancer screening
Hanna Tomic, Anna Bjerkén, Gustav Hellgren, et al.
Image-based analysis of breast tumour growth rate may help optimize breast cancer screening and diagnosis. It may improve the identification of aggressive tumours and suggest optimal screening intervals. A virtual clinical trial (VCT) is a simulation-based method used to evaluate and optimize medical imaging systems and design clinical trials. Our work is motivated by the desire to simulate multiple screening rounds with growing tumours. We have developed a model to simulate tumours with various growth rates; this study aims at evaluating that model. We used clinical data on tumour volume doubling times (TVDT) from our previous study to fit a probability distribution ("clinical fit"). Growing tumours were inserted into 30 virtual breasts ("simulated cohort"). Based on the clinical fit, we simulated two successive screening rounds for each virtual breast. TVDTs from clinical and simulated images were compared. Tumour size was measured from simulated mammograms by a radiologist in three repeated sessions, to estimate TVDT ("estimated TVDT"). Reproducibility of measured sizes decreased slightly for small tumours. The mean TVDT from the clinical fit was 297±169 days, whereas the simulated cohort had 322±217 days, and the average estimated TVDT was 340±287 days. The median difference between the simulated and estimated TVDT was 12 days (4% of the mean clinical TVDT). Comparisons between the other data sets suggest no significant difference (p>0.5). The proposed tumour growth model showed close agreement with clinical results, supporting its potential use in VCTs of temporal breast imaging.
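For reference, the exponential growth model implied by a tumour volume doubling time (TVDT), and its inversion from two measured volumes, can be written compactly; this is the generic formulation, not necessarily the exact model used in the study.

```python
import numpy as np

def tumour_volume(v0_mm3, days, tvdt_days):
    # Exponential growth: volume doubles every TVDT days.
    return v0_mm3 * 2.0 ** (days / tvdt_days)

def tvdt_from_volumes(v1, v2, dt_days):
    # Invert the model: TVDT = dt * ln 2 / ln(V2 / V1).
    return dt_days * np.log(2.0) / np.log(v2 / v1)
```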
Iterative method to achieve noise variance stabilization in single raw digital breast tomosynthesis
Renann F. Brandão, Lucas R. Borges, Bruno Barufaldi, et al.
The majority of the denoising algorithms available in the literature are designed to treat signal-independent Gaussian noise. However, in digital breast tomosynthesis (DBT) systems, the noise model seldom presents signal-independence. In this scenario, variance-stabilizing transforms (VSTs) may be used to convert the signal-dependent noise into approximately signal-independent noise, enabling the use of ‘off-the-shelf’ denoising techniques. The accurate stabilization of the noise variance requires a robust estimation of the system’s noise coefficients, usually obtained using calibration data. However, practical issues often arise when calibration data are required, impairing the clinical deployment of algorithms that rely on variance stabilization. In this work, we present a practical method to achieve variance stabilization by approaching it as an optimization task, with the stabilized noise variance dictating the cost function. An iterative method is used to implicitly optimize the coefficients used in the variance stabilization, leveraging a single set of raw DBT projections. The variance stabilization achieved using the proposed method is compared against the stabilization achieved using noise coefficients estimated from calibration data, considering two commercially available DBT systems and a prototype DBT system. The results showed that the average error for variance stabilization achieved using the proposed method is comparable to the error achieved through calibration data. Thus, the proposed method can be a viable alternative for achieving variance stabilization when calibration data are not easily accessible, facilitating the clinical deployment of algorithms that rely on variance stabilization.
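One way to realize such an optimization, sketched below under the assumption of a generalized Anscombe transform with unknown gain and readout-noise coefficients, is to score candidate coefficients by how far the stabilized local variance departs from unity; the smoothing window and optimizer are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.optimize import minimize

def gat(x, alpha, sigma2):
    # Generalized Anscombe transform for Poisson-Gaussian noise.
    return (2.0 / alpha) * np.sqrt(
        np.maximum(alpha * x + 0.375 * alpha**2 + sigma2, 0.0))

def stabilization_cost(params, raw, win=15):
    alpha, sigma2 = params
    t = gat(raw, alpha, sigma2)
    local_mean = uniform_filter(t, win)
    local_var = uniform_filter(t * t, win) - local_mean**2
    # Perfect stabilization gives unit variance everywhere.
    return np.mean((local_var - 1.0) ** 2)

# res = minimize(stabilization_cost, x0=[1.0, 1.0], args=(projection,),
#                method="Nelder-Mead")
```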
Computational model of tumor growth for in silico trials
Digital breast tomosynthesis (DBT) can improve the detectability of breast cancer by eliminating the overlap of breast tissue that affects the performance of digital mammography (DM) systems. It is yet to be established whether DBT can detect lesions at earlier progression stages than DM. To pursue this investigation using in-silico methods, it is necessary to develop computational models that mimic the growth of cancerous lesions. We report on a novel computational model that mimics the progression of breast tumors based on underlying biological and physiological phenomena. Our model includes anisotropic growth and irregularly shaped lesions commonly seen in breast cancer. Our method relies on the assumption that tumor shape is ultimately determined by pressure fields exerted by surrounding anatomical structures, causing the lesion to preferentially proliferate in certain directions. By varying the direction of tumor growth via pressure maps, we simulated various anisotropic lesions seen in clinical cases. We used the open-source, freely available VICTRE imaging pipeline to obtain DM images of growing lesions within breast models and depict several time points in the growth of the tumor as seen by this imaging modality.
Accuracy and precision of a structured light scanning system for acquiring 3D information of the breast under compression
F. Schampers, M. C. Pinto, K. Michielsen
Realistic three-dimensional information about the compressed breast shape is required for the development of some image processing techniques and improvement of image quality in digital mammography and breast tomosynthesis (DBT). Our aim is to evaluate the precision and accuracy of the breast shape recorded by a structured light (SL) surface scanning system and to examine the similarity between these breast surface scans (BSS) and the depth information contained in the DBT images acquired concurrently, in a cranio-caudal view. Phantom tests were performed to examine the agreement between left- and right-side scans by measuring the estimated length and thickness. To evaluate accuracy, the Hausdorff distance was computed between the scanned phantom and the phantom’s known shape, and also between several measurements performed over time. The agreement between the BSS and the corresponding DBT data was assessed by comparing the width and chest-wall to nipple distance (CND) from both measurements. The phantom analysis found no significant difference between the left- and right-side scans over time for length (p=0.224) and for thickness (p=0.314), and an accuracy of 99.58% and 89.04%, respectively. When comparing the phantom scans over time, a consistent Hausdorff distance of 3.2 mm was found between phantom scans. A significant difference was found for the estimated width when comparing the patient DBT to SL data (p=0.004), but this was not the case for CND (p=0.110). To conclude, the SL scanning system has a high precision and is accurate to within approximately 3 mm compared to the ground truth.
Clinical study for evaluation of an inflatable air cushion to reduce patient discomfort during tomosynthesis compression
In a clinical pilot study, we evaluated the impact of a radiolucent, inflatable air cushion during tomosynthesis breast imaging. 101 patients were included to quantify the degree of reduction in discomfort as well as the impact on image quality, patient positioning and applied compression force. All patients underwent tomosynthesis examination in two different settings, routine compression and compression including the cushion, without being exposed to additional acquisitions. The cushion had the same size for all breasts and was placed directly on the patient support table of the mammography unit. In the study, the cushion was inflated with air after the standard compression to a breast-individual level. Due to inflation of the cushion, the contact area between breast and compression paddle increased and additional force was therefore added. We expected a decrease in the peak pressure and, due to the increased contact area, an increase in the desirable spreading of the breast tissue. After examination, patients were asked to complete a questionnaire rating the tolerability of compression with and without the cushion. Deployment of the cushion decreased the negative perception significantly, lowering it by 18.4%, with only 2.0% of patients (p < 0.001, α = 0.05) still experiencing discomfort during compression. When comparing the two compression settings, the increase in comfort did not negatively affect image quality, positioning, or the ability to detect all pertinent anatomy. The design and usability of the cushion, as well as more sophisticated compression routines, will be further investigated, analyzed, and discussed.
Next generation tomosynthesis image acquisition optimization for dedicated PET-DBT attenuation corrections
Trevor L. Vent, Bruno Barufaldi, Raymond J. Acciavatti, et al.
A next generation tomosynthesis (NGT) prototype is under development to investigate alternative acquisition geometries for digital breast tomosynthesis (DBT). A positron emission tomography (PET) device will be integrated into the NGT prototype to facilitate DBT acquisition followed immediately by PET acquisition (PET-DBT). The aim of this study was to identify custom acquisition geometries that (1) improve dense/adipose tissue classification and (2) improve breast outline segmentation. Our lab’s virtual clinical trial framework (OpenVCT) was used to simulate various NGT acquisitions of anthropomorphic breast phantoms. Five custom acquisition geometries of the NGT prototype, with posteroanterior (PA) x-ray source motion ranging from 40-200 mm in 40 mm steps, were simulated for five phantoms. These acquisition geometries were compared against the simulation of a conventional DBT acquisition geometry. Signal in the reconstruction was compared against the ground truth on a voxel-by-voxel basis. The segmentation of breast from air is performed during reconstruction. Within the breast, we use a threshold-based classification of glandular tissue. The threshold was varied to produce a receiver operating characteristic (ROC) curve, representing the proportion of true fibroglandular classification as a function of the proportion of false fibroglandular classification at each threshold. The area under the ROC curve (AUC) was the figure-of-merit used to quantify adipose-glandular classification performance. Reconstructed breast volume estimation and sensitivity index (d’) were calculated for all image reconstructions. Volume overestimation is highest for conventional DBT and decreases with increasing PA source motion. AUC and d’ increase with increasing PA source motion. These results suggest that NGT can improve PET-DBT attenuation corrections over conventional DBT.
Analysis of digital breast tomosynthesis acquisition geometries in sampling Fourier space
Chloe J. Choi, Trevor L. Vent, Raymond J. Acciavatti, et al.
Tomosynthesis acquires projections over a limited angular range and thus samples an incomplete projection set of the object. For a given acquisition geometry, the extent of tomosynthesis sampling can be measured in the frequency domain based on the Fourier Slice Theorem (FST). In this paper we propose a term, “sampling comprehensiveness”, to describe how comprehensively an acquisition geometry samples the Fourier domain, and we propose two measurements to assess the sampling comprehensiveness: the volume of the null space and the nearest sampled plane. Four acquisition geometries, conventional (linear), T-shape, bowtie, and circular geometries, were compared on their comprehensiveness. The volume of the null space was estimated as the percentage of voxels subtended by zero slices in the sampled Fourier space. For each voxel in the frequency space, the nearest sampled plane and the distance to that plane were recorded. Among the four, the circular geometry was determined to be the most comprehensive based on the two measurements. We review tomosynthesis sampling with a finite number of projections and discuss how the sampling comprehensiveness should be interpreted. We further suggest that the decision on a system geometry should consider multiple factors including the sampling comprehensiveness, the task to be performed, the thickness of the imaged object, system specifications, and reconstruction algorithm.
Realism of mammography tissue patches simulated using Perlin noise: a forced choice reading study
Magnus Dustler, Predrag Bakic, Debra M. Ikeda, et al.
Software breast phantoms are central to the optimization of breast imaging, where in many cases the use of real images would be inefficient, or impossible. Establishing the realism of such phantoms is critical. For this study, patches of simulated breast tissue with different composition (fatty, scattered, heterogeneous and dense tissue) were generated using a method based on Perlin noise. The composition of the patches is controlled by numerical parameters derived from input by radiologists and medical physicists with experience of breast imaging. Separate Perlin noise-based methods were used to simulate skin pores, high-frequency noise (representing quantum and electronic noise), and ligaments and vascular structures. In a forced choice reading study, the realism of the simulated tissue patches compared to patches from real mammograms was determined. Patches of 200-500 pixels were extracted from radiolucent, linear, nodular or homogeneous (10 per category) mammograms randomly selected from a previously acquired dataset. Eighteen simulated patches in the same size range were added. Four readers (two radiologists and two medical physicists) were shown the images in random order and asked to rate them as real or simulated. All readers accepted a substantial fraction of simulated images as real, ranging from 22% to 72%. Only two readers showed a significant difference in the number of images rated real in the real and simulated groups, 22% vs 73% (P=.0003) and 33% vs 63% (P=.04), respectively. These results suggest that the method employed can create images that are almost indistinguishable from patches of real mammograms.
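For readers unfamiliar with the technique, a simplified stand-in for Perlin-style texture generation is sketched below using fractal value noise (random lattices upsampled and summed over octaves); this is an illustrative approximation, not the authors' generator.

```python
import numpy as np
from scipy.ndimage import zoom

def fractal_noise(shape, octaves=5, persistence=0.5, seed=0):
    """Fractal value noise: not true gradient Perlin noise, but it shares
    the multi-scale character used to mimic tissue texture."""
    rng = np.random.default_rng(seed)
    out = np.zeros(shape)
    amp = 1.0
    cells = (4, 4)                          # coarsest lattice
    for _ in range(octaves):
        coarse = rng.standard_normal(cells)
        out += amp * zoom(coarse, (shape[0] / cells[0],
                                   shape[1] / cells[1]), order=3)
        cells = (cells[0] * 2, cells[1] * 2)   # double the frequency
        amp *= persistence                      # shrink the amplitude
    return out / np.abs(out).max()
```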
Posters: Imaging Method
Estimation of the dynamic volume of each lung via rapid limited-slice dynamic MRI
You Hao, Jayaram K. Udupa, Yubing Tong, et al.
Dynamic lung volumetric parameters are useful for clinical assessment of many thoracic disorders, given that respiration is a dynamic process. Estimation of such parameters based on imaging and analysis is an important goal if implementation in routine clinical practice is to become a reality. Compared to CT, dynamic thoracic MRI has several advantages, including better soft tissue contrast, lack of ionizing radiation, and flexibility in selecting scanning planes. 4D dynamic MRI seems to be the best choice for some clinical applications, notwithstanding the major limitation of a long image acquisition time (~45 minutes). Therefore, approaches to acquire images and estimate volumetric parameters rapidly are highly desirable in dynamic MRI-based clinical applications. In this paper, we present a technique for estimating lung volumetric parameters from limited-slice dynamic thoracic MRI, greatly reducing the number of slices to be scanned and therefore the time required for image acquisition. We demonstrate a relative RMS error of predicted lung volumes of less than 5% by utilizing only 5 sagittal MRI slices through each lung, compared to the current full scan involving about 20 slices per lung. As such, this approach can save scan acquisition time and therefore increase patient comfort and convenience for practical real-world clinical applications. It may also improve image quality and usability by reducing the patient motion and abnormal breathing patterns that ensue from discomfort and long scan durations.
Internal layer illumination method and apparatus for medical imaging by multiple beam interference
A novel medical imaging method has been developed and its feasibility demonstrated theoretically. The method uses multi-beam interference to create destructive interference along the propagation path, reducing the composite light intensity and thereby light absorption and scattering, and to create constructive interference at a designed position to produce a composite light intensity maximum, which forms an internal light layer that illuminates the target tissue. This method can enhance signal strength by more than 600 dB. The imaging depth can exceed 5 cm in the human body, with imaging resolutions of less than 1 μm in the object plane and approximately 1 μm along the depth direction.
Electrode displacement elastography for differentiating metastatic liver cancer microwave ablation procedures
Robert M. Pohlman, James L. Hinshaw, Timothy J. Ziemlewicz, et al.
Primary liver cancers and liver metastases are malignancies resulting in high mortality rates [1, 2]. As a treatment method, microwave ablation (MWA) offers treatment success similar to surgical resection while significantly reducing morbidity and mortality for patients, requiring ablative margins of 0.5–1.0 cm verified using medical imaging. Unfortunately, the traditional imaging modalities used, namely contrast-enhanced computed tomography and B-mode ultrasound, have limitations in providing visualization for validating MWA efficacy. Alternatively, electrode displacement elastography (EDE) has shown potential in visualizing post-ablation regions. However, EDE has not previously been used for comparing pre- and post-ablation cross-sectional area measurements for ablative margin estimation in the treatment of liver cancers. This work shows that EDE can achieve ablative margin estimation by comparing pre- and post-ablation EDE strain images for use in determining MWA efficacy.
Proton imaging with machine learning
G. M. Finneman, N. Meskell, T. Caplice, et al.
The purpose of this study is to introduce a compact calorimeter that can offer an additional imaging tool for proton therapy centers. The tungsten-, gadolinium-, and lanthanide-based high-density scintillating glass designed for this purpose can stop 200 MeV protons within a thickness of less than 60 mm, which allows us to model a compact detector that can be attached to a gantry. The details of the glass development and preliminary imaging efforts with this detector were previously reported. This study summarizes the artificial-neural-network-based imaging efforts with this novel proton imager. A library of proton cone-beam CT (CBCT) scans of 800 tumors was created via GATE simulations. This tumor library was used for training with two different machine learning tools, Flux and PyTorch. Here, the proof-of-concept machine learning imaging study is reported. The novel material development, compact detector design, and machine-learning-based imaging can make this approach useful for clinical applications.
Ring artifact and non-uniformity correction method for improving XACT imaging
X-ray induced acoustic computed tomography (XACT) is a novel imaging modality that has shown great potential in applications ranging from biomedical imaging to nondestructive testing. Improving the signal-to-noise ratio and removing artifacts are major challenges in XACT imaging. We introduce an efficient non-uniformity correction method for the ultrasound ring transducer. Non-uniformity appears as ring artifacts in the reconstructed image when all sensor elements have a simultaneous time-variant response during acquisition. Non-uniformity also causes a contrast-change effect in the reconstructed image when some defective sensor elements respond differently from their neighbors over the whole scan time. These two types of non-uniformity appear as horizontal or vertical stripe patterns in the sinogram domain, depending on the cause. The proposed method is a sinogram-based algorithm that estimates a correction vector and localizes the abnormalities to compensate for the non-uniform response. We applied the proposed algorithm to simulation data. The results show that the proposed method can greatly reduce the ring artifacts and also correct the distortion arising from the non-uniform response of sensor elements. Despite the great reduction of artifacts, the proposed method does not compromise the original spatial resolution and contrast after interpolation. Quantitative analysis shows a great improvement in the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), while the normalized absolute error (NAE) is reduced by 77–80%.
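As an illustration of the sinogram-domain idea, the sketch below estimates a per-element correction vector from column statistics and removes its high-frequency part; the smoothing scale is an assumption, and the full method described above additionally localizes time-variant abnormalities.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def correct_stripes(sinogram, smooth=21):
    """Remove fixed vertical stripes (defective-element non-uniformity):
    sinogram rows = views, columns = transducer elements."""
    col_mean = sinogram.mean(axis=0)                 # per-element response
    baseline = uniform_filter1d(col_mean, smooth)    # true slow variation
    correction = col_mean - baseline                 # non-uniformity vector
    return sinogram - correction[None, :]
```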
Thermal interpretation procedure for the adjunct detection of thyroid pathologies
Thyroid cancer is the most common type of neoplasm in the head and neck region. Its incidence rate has grown steadily, and it is estimated that it will become the fourth most common type of cancer by 2030. Thermal imaging is a noninvasive, non-contact technique that can detect temperature patterns of the skin; it has been used successfully for medical diagnostic purposes in areas such as breast cancer, wound care, vascular diseases, skin cancer and eye diseases. The thyroid is a richly vascularized gland located close to the skin; its hyper- or hypo-activity modifies the temperature pattern of the neck, making thermography a good candidate for evaluating possible pathologies by digital infrared thermography. Thyroid nodules are a common pathology whose differentiation from thyroid cancer is becoming an important issue. One of the reasons infrared thermography has not become mainstream in medical diagnosis is that standard interpretation techniques for infrared thermograms do not exist. The objective of the present study is to develop a thermal interpretation procedure that could eventually become a standard for the detection of thermal anomalies in the neck that could point towards thyroid pathologies.
Realistic head modeling of electromagnetic brain activity: an integrated Brainstorm-DUNEuro pipeline from MRI data to the FEM solutions
Human brain activity generates scalp potentials (electroencephalography – EEG), intracranial potentials (iEEG), and external magnetic fields (magnetoencephalography – MEG), all capable of being recorded, often simultaneously, for use in research and clinical purposes. The so-called forward problem is modeling these fields at their sensors for a given putative neural source configuration. While early approaches modeled the head as a simple set of isotropic spheres, today’s ubiquitous magnetic resonance imaging (MRI) data allows detailed descriptions of head compartments with assigned isotropic and anisotropic conductivities. In this paper, we present a complete pipeline, integrated into the Brainstorm software, that allows users to generate an individual and accurate head model from the MRI and then calculate the electromagnetic forward solution using the finite element method (FEM). The head model generation is performed by the integration of the latest tools for MRI segmentation and FEM mesh generation. The final head model is divided into five main compartments: white matter, gray matter, cerebrospinal fluid (CSF), skull, and scalp. For the isotropic compartments, widely-used default conductivity values are assigned. For the brain tissues, we use the effective medium approach (EMA) to estimate anisotropic conductivity tensors from diffusion-weighted imaging (DWI) data. The FEM electromagnetic calculations are performed by the DUNEuro library, integrated into Brainstorm and accessible with a user-friendly graphical interface. This integrated pipeline, with full tutorials and example data sets freely available on the Brainstorm website, gives the neuroscience community easy access to advanced tools for electromagnetic modeling using FEM.
Modified sensitivity, noise equivalent count rate performance, and scatter fraction measurements of asymmetrical and dedicated brain positron emission tomographs
C. Ross Schmidtlein, Eric S. Harmon, Michael Thompson, et al.
Most positron emission tomography (PET) systems use an (almost) cylindrically symmetric detector geometry that acquires data in step-wise or continuous fashion. The National Electrical Manufacturers Association (NEMA) has developed performance standards (NEMA NU2-2018) to evaluate the performance of these systems. However, many Brain PET scanners no longer use a cylindrically symmetric detector arrangement; instead favoring unconventional, asymmetric spatial distributions of detectors to improve the geometric efficiency. The comparison of these systems with cylindrical devices is difficult because the NEMA standards may not be directly compatible with these non-cylindrical detector geometries. The incompatibility is due to both the source geometry and use of single-slice-rebinning (SSR). In this study, we extended the standard cylindrical polyethylene phantom used for the noise equivalent count rate (NECR) and scatter fraction (SF) measurements in NEMA NU2-2018 by adding a 20 cm diameter polyethylene sphere with a line-source channel. To avoid the use of SSR in NECR, SF and sensitivity tests, which can incorrectly assign slice locations in non-cylindrical tomographs, we instead propose a different method that uses the known positions of the line-source and the detection points of the line-of-response (LOR). Axial position can be determined from the minimum of the distance between the LOR and the line-source. These correctly binned counts were compared to cylindrical and spherical cap PET geometries using the well-validated GATE Monte Carlo code to estimate performance. The results show that our proposed modifications provide a means to estimate a non-cylindrical tomograph’s NECR, SF,and sensitivity that is consistent with the NEMA methodology.