Proceedings Volume 10948

Medical Imaging 2019: Physics of Medical Imaging


Purchase the printed version of this volume at proceedings.com or access the digital version at the SPIE Digital Library.

Volume Details

Date Published: 9 July 2019
Contents: 17 Sessions, 194 Papers, 64 Presentations
Conference: SPIE Medical Imaging 2019
Volume Number: 10948

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10948
  • X-ray Imaging
  • Tomosynthesis Imaging
  • Detector Physics I
  • Quantitative Image Quality Assessment
  • Machine Learning I
  • Imaging Physics: Pushing the Boundary
  • Image Reconstruction
  • Detector Physics II
  • Spectral Imaging
  • Breast Imaging
  • Cone Beam CT
  • X-ray Phase Contrast Imaging
  • Photon Counting Imaging
  • Algorithm
  • Machine Learning II
  • Poster Session
Front Matter: Volume 10948
Front Matter: Volume 10948
This PDF file contains the front matter associated with SPIE Proceedings Volume 10948, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
X-ray Imaging
Single-exposure contrast enhanced spectral mammography
Contrast-enhanced spectral mammography (CESM) is being implemented to overcome the limitations of conventional mammography, where tumor visualization is obstructed by overlapping glandular tissue. CESM exploits the spectral properties of a contrast agent by subtracting two images, one obtained above and the other below the K-edge energy. The most common approach requires dual exposure, where two images are obtained with different incident spectra. However, this comes at the expense of increased patient dose and susceptibility to motion artifacts. We propose the use of photon counting spectral detectors to simultaneously obtain multiple images with a single exposure. This is demonstrated using a wide-area CdTe Medipix3RX detector to acquire images of iodine contrast agent in an anthropomorphic breast imaging phantom. The electronic thresholds in the detector replace the traditional physical filters. Our results show single-exposure CESM detection of iodine at concentrations as low as 2.5 mg/mL in a 10 mm diameter target within a 5 cm thick heterogeneous background. These results demonstrate the viability of photon counting detectors for low dose contrast-enhanced mammography.
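The threshold-based subtraction described above can be sketched as a weighted log subtraction of the two energy-bin images acquired in one exposure. The weight and the toy phantom numbers below are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

def cesm_log_subtraction(low_bin, high_bin, w):
    """Weighted log subtraction of two photon-counting energy bins.

    low_bin / high_bin: counts recorded below / above the iodine
    K-edge (33.2 keV) in a single exposure. w is a tissue-cancellation
    weight chosen so that glandular/adipose contrast vanishes in the
    difference (the value used here is hypothetical, not calibrated).
    """
    return np.log(high_bin) - w * np.log(low_bin)

# Toy phantom: uniform background, with an iodine insert attenuating
# the above-K-edge bin more strongly than the background does.
rng = np.random.default_rng(0)
low = rng.poisson(1000.0, (64, 64)).astype(float)
high = rng.poisson(800.0, (64, 64)).astype(float)
high[24:40, 24:40] *= 0.8  # extra attenuation from the iodine target
img = cesm_log_subtraction(low, high, w=0.8)
print(img[24:40, 24:40].mean() < img[:16, :16].mean())  # iodine region stands out
```

In a real system the weight would be tuned on calibration images so that the background tissue contrast cancels, leaving iodine as the dominant signal.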
Model of malignant breast biopsies and predictions of their WAXS energy integrated signals
A numerical study of the wide angle x-ray scatter (WAXS) energy integrated signals (EISs) from breast biopsies was conducted. A benign biopsy was modeled as fibroglandular (fib) tissue, whereas biopsies with cancer were approximated as consisting of fib tissue and a cluster of epithelial cells; the grouping of cells represented the malignant portion. The EISs due to scatter were computed for biopsies of thickness dbio = 2 mm, 5 mm, 10 mm and 20 mm. For the malignant biopsies, the fractional volumes of cells (νcell) studied ranged from 0.01 to 0.1. Incident 2 mm diameter beams of 30 kV, 50 kV, 80 kV, and 140 kV were considered. The tube current and exposure time were 3 mA and 1 minute, respectively. The WAXS signals were computed by adding the signals from annular detectors subtending scattering angles θ = 2°, 3°, ..., 23° and solid angles Ωθ = 2π[cos(θ − ∆θ/2) − cos(θ + ∆θ/2)]cos θ, where ∆θ = 1°. Let the EIS due to a malignant biopsy be EISms and that of a benign one EISbs. With these signals, values of SNR were computed. The 30 kV beam provided SNRs < 5 for the lowest entrance exposure, X = 0.0015 C/kg. For biopsies with dbio = 2, 5, 10, 20 mm and νcell = 0.1, the SNRs were 28.0, 38.2, 43.0, and 40.0, respectively. The findings suggest that there is potential to use WAXS EISs to diagnose malignancy in breast biopsies.
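The annular solid-angle expression above can be evaluated directly; a small sketch summing Ωθ over the stated detector geometry (θ = 2°–23°, ∆θ = 1°):

```python
import math

def annulus_solid_angle(theta_deg, dtheta_deg=1.0):
    """Solid angle (sr) of an annular detector at scattering angle theta,
    per the abstract: 2*pi*[cos(t - d/2) - cos(t + d/2)]*cos(t)."""
    t = math.radians(theta_deg)
    d = math.radians(dtheta_deg)
    return 2.0 * math.pi * (math.cos(t - d / 2) - math.cos(t + d / 2)) * math.cos(t)

# Sum over the 22 annuli used in the study (theta = 2 deg ... 23 deg).
omegas = [annulus_solid_angle(th) for th in range(2, 24)]
total = sum(omegas)
print(total)  # total solid angle subtended by the detector stack, in sr
```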
Deep learning framework for digital breast tomosynthesis reconstruction
Nikita Moriakov, Koen Michielsen, Jonas Adler, et al.
Digital breast tomosynthesis is rapidly replacing digital mammography as the basic x-ray technique for evaluation of the breasts. However, the sparse sampling and limited angular range give rise to different artifacts, which manufacturers try to solve in several ways. In this study we propose an extension of the Learned Primal-Dual algorithm for digital breast tomosynthesis. The Learned Primal-Dual algorithm is a deep neural network consisting of several ‘reconstruction blocks’, which take in raw sinogram data as the initial input, perform a forward and a backward pass by taking projections and back-projections, and use a convolutional neural network to produce an intermediate reconstruction result which is then improved further by the successive reconstruction block. We extend the architecture by providing breast thickness measurements as a mask to the neural network and allow it to learn how to use this thickness mask. We trained the algorithm on digital phantoms and the corresponding noise-free/noisy projections, and then tested the algorithm on digital phantoms for varying levels of noise. Reconstruction performance of the algorithms was compared visually, using MSE loss and the Structural Similarity Index. Results indicate that the proposed algorithm outperforms the baseline iterative reconstruction algorithm in terms of reconstruction quality for both breast edges and internal structures and is robust to noise.
Initial study of the radiomics of intracranial aneurysms using Angiographic Parametric Imaging (API) to evaluate contrast flow changes
Purpose: The purpose of this study is to apply targeted parametric imaging to aneurysms to quantitatively investigate contrast flow changes at pre-treatment, post-treatment, and follow-up with outcome scoring. Methods: Angiograms were acquired for 50 patients, 25 treated with coil embolization and 25 treated using a flow diverter. API was performed by synthesizing the time density curve (TDC) at every pixel. Based on the TDCs, we calculated various parameters for the quantitative characterization of contrast flow through the vascular network and aneurysms and displayed them using color-encoded maps. The parameters included were: Time to Peak (TTP), Mean Transit Time (MTT), Time of Arrival (TTA), Peak Height (PH), and Area Under the Curve (AUC). Two regions of interest (ROIs) were manually marked over the aneurysm dome and the main artery. Average aneurysm parameter values were normalized to those recorded in the main artery, recorded at pre-/post-treatment and follow-up, and compared to Raymond-Roy scores and flow diverter stent scoring. Results: The normalized mean values were as follows (pre- and post-treatment): TTP (1.09±0.14, 1.55±1.36), MTT (1.07±0.23, 1.27±0.42), TTA (0.14±0.15, 0.26±0.23), PH (1.2±0.54, 0.95±0.83), and AUC (1.29±0.69, 1.44±1.92). The neural network gave a validation accuracy of 0.8036 with a loss of 0.0927. A receiver operating characteristic curve with an AUC of 0.866 was obtained. Conclusions: API can quantitatively describe the flow in the aneurysm for initial investigation of the radiomics of intracranial aneurysms. It also shows a clear demarcation between pre- and post-treatment. Statistical modelling and a machine learning network were used to demonstrate the success of our model.
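Extracting the listed parameters from a single pixel's time-density curve might be sketched as below. The synthetic Gaussian bolus and the MTT approximation (AUC/PH rather than a moment-based definition) are assumptions for illustration, not necessarily the authors' definitions:

```python
import numpy as np

def tdc_parameters(t, c, arrival_frac=0.1):
    """Extract API-style parameters from a time-density curve c(t).

    PH: peak height; TTP: time to peak; AUC: trapezoidal area under
    the curve; TTA: first time c exceeds arrival_frac * PH;
    MTT: approximated here as AUC / PH (an illustrative assumption).
    """
    ph = float(c.max())
    ttp = float(t[np.argmax(c)])
    auc = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))
    tta = float(t[np.argmax(c >= arrival_frac * ph)])
    return {"PH": ph, "TTP": ttp, "AUC": auc, "TTA": tta, "MTT": auc / ph}

t = np.linspace(0, 10, 101)        # seconds
c = np.exp(-0.5 * (t - 4.0) ** 2)  # synthetic bolus passage
p = tdc_parameters(t, c)
print(p["TTP"], p["PH"])
```

In API these per-pixel values are rendered as color-encoded parameter maps and normalized to the values in the feeding artery.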
Anatomically- and computationally-informed hepatic contrast perfusion simulations for use in virtual clinical trials
Thomas J. Sauer, Ehsan Abadi, Paul Segars, et al.
This study modeled a framework for virtual human liver phantoms, focusing primarily on the intricate vascular networks that comprise the liver. Large vasculature was segmented from clinical liver perfusion images to ascertain a general starting point for the vascular networks of the liver that would be common among a healthy population. Clinical imaging methods cannot currently resolve the vast majority of the vasculature of the liver, and at the limiting resolution, modeling techniques continued the structure of the existing vasculature according to empirically known properties of blood vessel formation. Such advances in virtual phantom modeling enable simulation work in CT liver imaging, as clinical CT liver imaging is not ideally performed without contrast and multi-phasic acquisitions taking place over the course of the contrast's perfusion. The total amount of contrast in each organ in the body as a function of time is known from prior work, and the complete vascular network of the liver allows this information to be translated into an organ-specific contrast-concentration as a function of time. The ability to simulate this physiology is necessary for liver perfusion imaging, as pathologies typically impede or otherwise alter healthy perfusion patterns. The perfusion simulated here was in good agreement with known patterns of perfusion. Thus, virtual clinical trials can be performed with a dynamic model of the liver containing a fully integrated and realistic vascular network.
Tomosynthesis Imaging
Generating synthetic mammograms for stationary 3D mammography
Connor Puett, Christina Inscoe, Jianping Lu, et al.
Purpose. Investigate synthetic mammography approaches for carbon nanotube (CNT)-enabled stationary digital breast tomosynthesis (sDBT). Methods. Projection images of breast-mimicking phantoms containing soft-tissue masses and microcalcification clusters collected by sDBT were used to develop weighted-intensity forward-projection algorithms that generated a synthetic mammogram from the reconstructed 3D image space. Reconstruction was accomplished by an adapted fan-volume modification of the simultaneous iterative reconstruction technique. Detectability indices were used to quantify mass and calcification visibility. The image processing chain was then applied to projection views collected by sDBT on women with “suspicious” breast lesions detected by standard screening 2D digital mammography. Results. Quantifying detectability allowed correlation between the visibility of clinically-important image features and the order of the polynomial weighting function used during forward projection. The range of weighting functions exists between the extremes of an average-intensity projection (zero-order) and maximum-intensity projection (infinite-order), with lower-order weights emphasizing soft-tissue features and higher-order weights emphasizing calcifications. Application of these algorithms to patient images collected by sDBT, coupled with dense-artifact reduction and background equalization steps, produced synthetic mammograms on which additional post-processing approaches can be explored, with the actual mammogram providing a reference for comparison. Conclusions. An image-processing chain with adjustable weighting during forward projection, dense-artifact reduction, and background equalization can yield a range of sDBT-based synthetic mammograms that display clinically-important features differently, potentially improving the ability to appreciate the association of masses and calcifications.
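A minimal sketch of the weighted-intensity forward-projection idea: raising voxel intensities to a polynomial order blends between an average-intensity projection (order 0) and a maximum-intensity-like projection (high order). The weighting form below is one plausible reading of the abstract, not the authors' exact algorithm:

```python
import numpy as np

def weighted_intensity_projection(volume, order, axis=0):
    """Intensity-weighted ray sum: each voxel along the ray is weighted
    by its own intensity raised to `order`. order=0 reduces to the mean
    (average-intensity projection); large orders approach the maximum-
    intensity projection."""
    v = volume.astype(float)
    w = np.power(v, order)
    return (w * v).sum(axis=axis) / np.clip(w.sum(axis=axis), 1e-12, None)

vol = np.zeros((8, 4, 4))
vol[:, 1, 1] = np.linspace(0.1, 1.0, 8)  # a column containing one bright "calcification"
aip = weighted_intensity_projection(vol, order=0)
wip = weighted_intensity_projection(vol, order=6)
print(aip[1, 1], wip[1, 1])  # the higher order emphasizes the brightest voxel
```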
Adaptively-weighted total-variation (AwTV) in a prototype 4D digital tomosynthesis system for fast and low-dose target localization
Sunghoon Choi, Sooyeul Lee, Young-Nam Kang, et al.
On-board 4D cone-beam CT (CBCT) imaging using a linear accelerator (LINAC) has recently become a favored scanning protocol in image-guided radiotherapy, but it raises the problem of excessive radiation dose. Alternatively, 4D digital tomosynthesis (DTS) has been introduced for localization of small targets, such as the pancreas, prostate, or partial breast, which do not require full 3D information. However, the conventional filtered back-projection (FBP) reconstruction method produces severe aliasing artifacts due to the sparsely sampled projections measured in each respiration phase within a limited angle range. This effect is even more severe when the LINAC gantry sweep speed is too fast to sufficiently cover the respiratory gating phase. Total-variation (TV) minimization-based reconstruction frameworks from previous studies offer an alternative approach to this problem; however, they suffer a loss of sharpness during the iterations. In this study, we adopted an adaptively-weighted TV (AwTV) scheme which penalizes the images after the TV optimization. We introduced a look-up table containing all possible weighting parameters for each iteration step. As a result, the proposed AwTV method provided better image quality than the conventional FBP and adaptive steepest descent-projection onto convex sets (ASD-POCS) frameworks, showing structural similarity (SSIM) higher by a factor of 1.12 compared to FBP and root-mean-square error (RMSE) lower by a factor of 1.06 compared to ASD-POCS. The horizontal line profile through the spherical target inserted in the moving phantom showed that the FBP and ASD-POCS images exhibited severe aliasing artifacts and smoothed pixel intensities, whereas the proposed AwTV scheme reduced the aliasing artifacts while maintaining the object’s sharpness. In conclusion, the proposed AwTV method has potential for low-dose and faster 4D-DTS imaging, offering an alternative to 4D-CBCT for small-region target localization.
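The adaptive weighting idea can be illustrated with the weight form common in the AwTV literature, w = exp(−(|∇f|/δ)²), which shrinks the penalty at strong gradients so edges are preserved; δ and the toy images are assumptions, and this is not the authors' exact penalty:

```python
import numpy as np

def awtv(img, delta=0.01):
    """Adaptively-weighted total variation of a 2D image: each finite
    difference is weighted by exp(-(g/delta)^2), so large gradients
    (edges) are penalized far less than small noise-like gradients."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    wx = np.exp(-((gx / delta) ** 2))
    wy = np.exp(-((gy / delta) ** 2))
    return float(np.sum(wx * np.abs(gx) + wy * np.abs(gy)))

flat = np.zeros((16, 16))
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0
print(awtv(flat), awtv(edge))  # the sharp edge contributes almost no penalty
```

Plain TV would assign the edge image a penalty of 16 (one unit step per row); the adaptive weights reduce this to nearly zero, which is why AwTV retains sharpness where plain TV smooths.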
Metal artifact correction based on combination of 2D and 3D region growing for x-ray tomosynthesis
We propose a new metal artifact correction method for X-ray digital tomosynthesis that accurately detects metal in the projection data. We combined 3D region growing, which propagates a few seed points placed in the metal to the other projection angles, with 2D region growing, which expands those points further, so that the user does not have to set starting points at each projection angle. We compared the proposed method with conventional FBP. In a phantom experiment with a mimicked artificial joint, the proposed method reduced the metal artifacts around the metal object. At distances within 5 mm of the metal object, the root mean square errors evaluated with the conventional and proposed methods were 2700 and 200, respectively, demonstrating an improvement of more than 90%. The shorter the distance from the metal object, the more significant the metal artifact became in the conventional method, and the higher the effectiveness of the proposed metal artifact correction.
Verification of the accuracy of a partial breast imaging simulation framework for virtual clinical trial applications
L. Vancoillie, N. W. Marshall, L. Cockmartin, et al.
Aim: The impact of x-ray system parameters on the detectability of specific (clinical) signals can be studied with simulation platforms if these tools are sufficiently accurate and realistic. This work describes the steps taken to verify and confirm the accuracy of a local platform developed for use in virtual clinical trials of breast tomosynthesis. The (gold standard) reference data will be made available to the community. Materials and methods: Our simulation platform inserts specific targets, including microcalcifications, into existing 2D FFDM and DBT background images, a method called partial simulation. There are three steps: (1) creation of a voxel model or 3D analytical object to be inserted into the ‘For Processing’ projections; (2) generation of a realistic object template for the geometry under study and the relevant resolution, scatter and noise properties; (3) insertion of the target into the projections and DBT reconstruction plus image processing. Three objects were simulated as part of the verification: a small high-contrast 0.5 mm aluminum (Al) sphere in a poly(methyl methacrylate) (PMMA) stack, a 0.2 mm thick Al sheet in a PMMA stack and a 0.8 mm steel edge. For the small Al sphere, the peak contrast, the signal difference to noise ratio (SDNR), the profile in the (in-plane) xy-direction and the artifact spread function (ASF) were compared to results from real acquisitions. Contrast and SDNR were compared to data from a real 0.2 mm Al sheet. Sharpness modelling was verified by comparing the modulation transfer function (MTF) calculated from real and simulated edges. The study was performed for a Siemens Inspiration DBT system. Results: Comparing peak contrast and SDNR for both sphere and sheet showed good agreement (<5% error) in 2D FFDM and DBT. The similarity of the pixel value profiles through the sphere and the sheet in the xy-direction and the ASF for real and simulated Al spheres confirmed accurate geometric modelling. The absolute and relative average deviations between MTFs measured from real and simulated edges in the front-back and left-right directions show good correlation for frequencies up to the Nyquist frequency in 2D FFDM and DBT modes. Real and simulated objects could not be differentiated visually. Conclusion: The close correspondence between simulated and real objects, both visually and quantitatively, indicates that this simulation framework is a strong candidate for use in virtual clinical studies employing 2D FFDM and DBT.
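The SDNR figure of merit used for the sphere and sheet comparisons is a simple region statistic; a sketch with illustrative contrast and noise values (not the measured Siemens Inspiration data):

```python
import numpy as np

def sdnr(image, target_mask, bg_mask):
    """Signal-difference-to-noise ratio: |mean(target) - mean(background)|
    divided by the background standard deviation."""
    t = image[target_mask]
    b = image[bg_mask]
    return float(abs(t.mean() - b.mean()) / b.std())

rng = np.random.default_rng(1)
img = rng.normal(100.0, 2.0, (64, 64))  # uniform PMMA-like background
img[28:36, 28:36] += 10.0               # simulated Al target contrast
tm = np.zeros((64, 64), bool); tm[28:36, 28:36] = True
bm = np.zeros((64, 64), bool); bm[:16, :16] = True
print(sdnr(img, tm, bm))  # ~= contrast / noise = 10 / 2
```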
Personalization of x-ray tube motion in digital breast tomosynthesis using virtual Defrise phantoms
Raymond J. Acciavatti, Bruno Barufaldi, Trevor L. Vent, et al.
In digital breast tomosynthesis (DBT), projection images are acquired as the x-ray tube rotates in the plane of the chest wall. We constructed a prototype next-generation tomosynthesis (NGT) system that has an additional component of tube motion in the perpendicular direction (i.e., posteroanterior motion). Our previous work demonstrated the advantages of the NGT system using the Defrise phantom. The reconstruction shows higher contrast and fewer blurring artifacts. To expand upon that work, this paper analyzes how image quality can be further improved by customizing the motion path of the x-ray tube based on the object being imaged. In simulations, phantoms are created with realistic 3D breast outlines based on an established model of the breast under compression. The phantoms are given an internal structure similar to a Defrise phantom. Two tissue types (fibroglandular and adipose) are arranged in a square-wave pattern. The reconstruction is analyzed as a binary classification task using thresholding to segment the two tissue types. At various thresholds, the classification of each voxel in the reconstruction is compared against the phantom, and receiver operating characteristic (ROC) curves are calculated. It is shown that the area under the ROC curve (AUC) is dependent on the x-ray tube trajectory. The trajectory that maximizes AUC differs between phantoms. In conclusion, this paper demonstrates that the acquisition geometry in DBT should be personalized to the object being imaged in order to optimize the image quality.
Noise measurements from reconstructed digital breast tomosynthesis
Rodrigo B. Vimieiro, Lucas R. Borges, Renato F. Caron, et al.
In this work, we investigated and measured the noise in Digital Breast Tomosynthesis (DBT) slices considering the back-projection (BP) algorithm for image reconstruction. First, we presented our open-source DBT reconstruction toolbox and validated it against a freely available virtual clinical trials (VCT) software package, comparing our results with the reconstruction toolbox available in the Food and Drug Administration's (FDA) repository. A virtual anthropomorphic breast phantom was generated in the VCT environment and noise-free DBT projections were simulated. Slices were reconstructed by both toolboxes and objective metrics were measured to evaluate the performance of our in-house reconstruction software. For the noise analysis, commercial DBT systems from two vendors were used to obtain x-ray projections of a uniform polymethyl methacrylate (PMMA) physical phantom. One system featured an indirect thallium-activated cesium iodide (CsI(Tl)) scintillator detector and the other a direct amorphous selenium (a-Se) detector. Our in-house software was used to reconstruct raw projections into tomographic slices, and the mean pixel value, noise variance, signal-to-noise ratio (SNR) and the normalized noise power spectrum (NNPS) were measured. In addition, we investigated the adequacy of a heteroskedastic Gaussian model, with an affine variance function, to describe the noise in the reconstruction domain. The measurements show that the variance and SNR of reconstructed slices exhibit spatial and signal dependencies similar to those previously reported in the projection domain. The NNPS showed that the reconstruction process correlates the noise of the DBT slices even when the projections are degraded by almost uncorrelated noise.
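An ensemble NPS measurement of the kind described above might be sketched as follows, assuming noise-only ROIs and taking an axial 1D cut through the 2D NPS (the NNPS additionally normalizes by the squared mean signal, omitted here):

```python
import numpy as np

def axial_nps(noise_rois, pitch=0.1):
    """Ensemble NPS from a stack of noise-only ROIs: mean-subtract each
    ROI, 2D FFT, average the squared magnitude over the ensemble, scale
    by pitch^2 / N^2, and return an axial 1D cut. pitch in mm."""
    n = noise_rois.shape[-1]
    zero_mean = noise_rois - noise_rois.mean(axis=(-2, -1), keepdims=True)
    nps2d = (np.abs(np.fft.fft2(zero_mean)) ** 2).mean(axis=0)
    nps2d *= pitch * pitch / (n * n)
    f = np.fft.fftfreq(n, d=pitch)[: n // 2]
    return f, nps2d[0, : n // 2]

rng = np.random.default_rng(2)
rois = rng.normal(0.0, 1.0, (50, 64, 64))  # white noise, variance 1
f, p = axial_nps(rois)
# For white noise the NPS should be flat at variance * pitch^2 = 0.01 mm^2.
print(float(p[1:].mean()))
```

Correlated (e.g. reconstructed) noise would instead produce a frequency-dependent profile, which is the effect the abstract reports.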
Detector Physics I
A simple Monte Carlo model for the statistics of photon counting detectors
Karl Stierstorfer, Martin Hupfer, Niko Köster
The statistics of photon counting detectors (PCDs) differ in several aspects from the statistics of energy integrating detectors (EIDs). In particular, the effect of crosstalk in a PCD involves a 0/1 decision: a photon may be counted in a neighboring pixel or not, whereas in an EID the neighboring pixel may just receive a fraction of the signal. Another interesting effect is that, especially for high counting thresholds, there exists a zone at the edge of the pixel where absorbed x-ray energy will not produce any signal. This may lead to a modulation transfer function (MTF) exceeding the theoretical limit given by the nominal pixel aperture, a fact that has also been observed in measurements. The goal of this work is to present a simple but comprehensive description of PCDs in the low-flux limit capable of including all relevant effects. The model presented is based on a Monte Carlo simulation of the x-ray energy deposition in the detector and a simple model of the charge cloud propagation. A reformulation of the probability generating function formalism allows calculating all relevant quantities, such as mean signal values, covariances between thresholds and/or neighboring pixels, or the MTF and DQE as a function of input photon energy, directly from the Monte Carlo simulation.
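The edge "dead zone" effect can be illustrated with a deliberately simplified 1D Monte Carlo of charge-cloud collection; this is a toy model with assumed parameter values, not the authors' simulation:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

def counted_fraction(n_photons, pixel_mm=0.5, cloud_sigma_mm=0.05, threshold=0.5):
    """Toy 1D low-flux PCD model: each photon deposits a Gaussian charge
    cloud at a uniformly random position across the pixel, and is counted
    iff the charge fraction collected inside the pixel clears the
    threshold (a 0/1 decision). A high threshold creates a dead zone at
    the pixel edges where shared charge falls below threshold."""
    x = rng.uniform(0.0, pixel_mm, n_photons)
    s = cloud_sigma_mm * sqrt(2.0)
    frac = np.array([0.5 * (erf((pixel_mm - xi) / s) + erf(xi / s)) for xi in x])
    return float((frac >= threshold).mean())

c_low = counted_fraction(20000, threshold=0.5)
c_high = counted_fraction(20000, threshold=0.9)
print(c_low, c_high)  # the high threshold loses events near the pixel edges
```

The shrunken effective aperture at high threshold is exactly what can push the measured MTF above the limit implied by the nominal pixel size.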
Quantitative comparison of Hi-Def and FPD zoom modes in an innovative detector using Relative Object Detectability (ROD) metrics
Neuro-endovascular image-guided interventions (EIGIs) are aided by the use of detectors with improved spatial resolution. A new detector is capable of switching between a standard-resolution flat-panel detector (FPD) zoom mode and a high-definition (Hi-Def) zoom mode, with 194 μm and 76 μm pixels, respectively. The relative performance of the two zoom modes in imaging specific high-resolution objects was quantitatively investigated. Detector DQEs were measured for both zoom modes and used to determine the Relative Object Detectability (ROD), which compares two imaging detectors’ relative abilities to image a simulated object by integrating the DQE of one detector with the square of the Fourier transform of the simulated object function and dividing the result by a similar integral for a second, reference detector. Initial evaluations used a pre-whitened matched filter (PWMF) ideal-observer model. Comparisons were also made using the generalized ROD (G-ROD), which uses the generalized DQE (GDQE) that includes the effects of clinically relevant parameters such as magnification, focal-spot size, and scatter, and the generalized measured ROD (GM-ROD), which uses the square of the Fourier transform of the actual images of the object acquired with the detectors. Each of the metrics demonstrated improved performance of the Hi-Def zoom mode over the standard FPD mode in imaging a wide array of objects such as stents, wires, and spheres. Of particular note is the greater performance of the Hi-Def mode when considering the high spatial frequencies necessary for visualizing fine image details of a Pipeline stent. These initial investigations demonstrate the great potential of the Hi-Def zoom mode during neuro-interventions.
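In discrete form, the ROD definition above reduces to a ratio of DQE-weighted object power sums. The DQE curves below are hypothetical stand-ins, chosen only to show that an object with power beyond the FPD Nyquist frequency favors the Hi-Def mode:

```python
import numpy as np

def rod(dqe_test, dqe_ref, obj_power):
    """Relative Object Detectability: ratio of DQE-weighted object power
    integrals (here discrete sums over frequency) for a test detector
    versus a reference detector, per the PWMF ideal-observer form."""
    return float(np.sum(dqe_test * obj_power) / np.sum(dqe_ref * obj_power))

f = np.linspace(0.0, 6.5, 66)           # cycles/mm, up to the 76 um Nyquist
dqe_hidef = np.full_like(f, 0.5)        # hypothetical flat DQE, Hi-Def mode
dqe_fpd = np.where(f <= 2.6, 0.6, 0.0)  # hypothetical DQE, zero past FPD Nyquist
point_obj = np.ones_like(f)             # point-like object: flat power spectrum
r = rod(dqe_hidef, dqe_fpd, point_obj)
print(r > 1.0)  # a fine-detail object favors the Hi-Def mode
```

With a large, low-frequency object the same comparison could favor the FPD mode, which is why the ROD is evaluated per object rather than as a single number.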
Comparison of CMOS and amorphous silicon detectors: determining the correct selection criteria, to optimize system performance for typical imaging tasks
Complementary metal-oxide-semiconductor (CMOS) flat-panel detectors (FPDs) have steadily gained acceptance in medical imaging applications. Selecting the proper detector technology for the imaging task requires optimization to balance cost and image quality. To facilitate this, the fundamental detector performance of CMOS and a-Si panels was evaluated using the following quantitative imaging metrics: x-ray sensitivity, Noise Equivalent Dose (NED), Noise Power Spectrum (NPS), Modulation Transfer Function (MTF), and Detective Quantum Efficiency (DQE). Imaging task measurements involved high-contrast and low-contrast resolution assessment. Varex FPDs evaluated for this study included: CMOS 3131 (150 μm pixel), a-Si 3030X (194 μm pixel), a-Si XRpad2 3025 (100 μm pixel) and CMOS 2020 (100 μm pixel). Performance comparisons were organized by pixel size: large pixels, 150 μm CMOS and 194 μm a-Si, and small pixels, 100 μm in a-Si and CMOS technology. The results showed the high-dose DQE of the a-Si 3030X was about 10% higher than that of the CMOS 3131 between 0 and 1.8 cycles/mm, while beyond 1.8 cycles/mm the CMOS performed better. The 3030X low-dose DQE was higher than the 3131 between 0 and 1.3 cycles/mm, while the CMOS performance was higher beyond 1.3 cycles/mm. The high-dose DQE of the 100 μm a-Si was higher than that of the 100 μm CMOS at all frequencies. However, the low-dose DQE of the 100 μm CMOS was higher beyond 0.6 cycles/mm, while the 100 μm a-Si pixel had higher DQE only between 0 and 0.6 cycles/mm. Large-pixel image quality (IQ) assessment favored the a-Si pixel, with 7% higher Contrast-to-Noise Ratio (CNR) results for both high- and low-contrast detail at 500 nGy. Small-pixel CNR favored CMOS, with ~38% better high-contrast detail and 12% greater low-contrast detail at ~500 nGy. Through these measurements combining imaging metrics and image quality, we demonstrated a practical method for selecting the appropriate detector technology based on the requirements of the imaging application.
First results developing time-of-flight proton radiography for proton therapy applications
William A. Worstell, Bernhard W. Adams, Melvin Aviles, et al.
In proton therapy treatment, proton residual energy after transmission through the treatment target may be determined by measuring the time-of-flight velocity of the sub-relativistic transmitted protons. We have begun developing this method by conducting proton beam tests using Large Area Picosecond Photon Detectors (LAPPDs), which we have been developing for high energy and nuclear physics applications. LAPPDs are 20 cm × 20 cm area Micro Channel Plate Photomultiplier Tubes (MCP-PMTs) with millimeter-scale spatial resolution, good quantum efficiency and outstanding timing resolution of ≤70 picoseconds rms for single photoelectrons. We have constructed a time-of-flight telescope using a pair of LAPPDs at 10 cm separation, and have carried out our first tests of this telescope at the Massachusetts General Hospital's Francis Burr Proton Therapy Center. Treatment protons are sub-relativistic, so precise timing resolution can be combined with paired imaging detectors in a compact configuration while still yielding high accuracy in proton residual energy measurements through velocity determination from nearly monoenergetic protons. This can be done either for proton bunches or for individual protons. Tests were performed both in "ionization mode", using only the microchannel plates to detect the proton bunch structure, and in "photodetection mode", using nanosecond-decay-time quenched plastic scintillators to excite the photocathode within each of the paired LAPPDs. Data acquisition was performed using a remotely operated oscilloscope in our first beam test, and using 5 GS/s DRS4 Evaluation Board waveform digitizers in our second test, in each case reading out both ends of single microstrips from among the 30 within an LAPPD. First results for this method and future plans are presented.
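The underlying kinematics can be sketched directly: the velocity inferred from path length and flight time gives the relativistic kinetic energy. The 200 MeV example below is illustrative, using the abstract's 10 cm telescope separation:

```python
import math

def proton_ke_from_tof(path_m, tof_s):
    """Kinetic energy (MeV) of a proton from its time of flight over a
    known path, via relativistic kinematics: KE = (gamma - 1) * m_p c^2."""
    c = 299_792_458.0    # speed of light, m/s
    m_p_mev = 938.272    # proton rest energy, MeV
    beta = path_m / (tof_s * c)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * m_p_mev

# A 200 MeV proton has beta ~ 0.57, so the 0.10 m flight takes ~0.59 ns;
# ~70 ps timing resolution therefore probes the energy at the few-percent level.
beta_200 = math.sqrt(1.0 - (938.272 / (938.272 + 200.0)) ** 2)
tof = 0.10 / (beta_200 * 299_792_458.0)
print(proton_ke_from_tof(0.10, tof))  # recovers 200 MeV
```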
An experimental method to correct drift-induced error in zero-frequency DQE measurement
Xu Ji, Mang Feng, Ran Zhang, et al.
In 1963, Shaw applied Fourier analysis to the zero-frequency DQE to develop the frequency-dependent DQE, or DQE(k), and made it clear that DQE(k) is applicable at every frequency within the system bandwidth, including zero frequency. Over time, especially after entering the modern era of digital x-ray imaging, the experimental measurement methods for DQE(k) (particularly the measurement of the NPS, an important element of DQE(k)) have evolved, and some measurement methods may generate nonphysical NPS and DQE results at k=0. As a result, an experimental DQE(k) curve is often cut off at a certain low frequency above zero. This work presents a new experimental method to deal with two challenges: severe NPS(k=0) underestimation due to polynomial-based background detrending, and severe NPS(k=0) overestimation due to the presence of faint but non-negligible system drift. Based on a theoretical analysis of the impact of drift on the measured autocovariance function, the error introduced by drift can be isolated and a corresponding correction applied to NPS(k=0). Both numerical simulations with known ground truth and experimental studies demonstrated that the proposed method enables accurate DQE(k=0) measurement.
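The two failure modes can be illustrated on a 1D toy signal: a faint linear drift inflates the variance (and hence the zero-frequency NPS estimate), while polynomial detrending removes it at the risk of also removing genuine low-frequency noise power. This is only an illustration of the problem, not the authors' correction method:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
noise = rng.normal(0.0, 1.0, n)    # stationary white noise, variance 1
drift = 0.001 * np.arange(n)       # faint linear system drift
signal = noise + drift

# Drift inflates the variance, i.e. the zero-frequency NPS estimate...
var_raw = signal.var()

# ...while linear detrending removes the drift (and, in general, a bit
# of real low-frequency power along with it).
coef = np.polyfit(np.arange(n), signal, 1)
detrended = signal - np.polyval(coef, np.arange(n))
print(var_raw, detrended.var())
```

Even a 0.1%-per-frame drift more than doubles the raw variance here, which is why the abstract treats "faint but non-negligible" drift as a first-order error source at k=0.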
Quantitative Image Quality Assessment
Patient-specific noise power spectrum measurement via generative adversarial networks (Conference Presentation)
Chengzhu Zhang, Daniel Gomez-Cardona, Yinsheng Li, et al.
Deep-learning generative adversarial networks (GANs) were developed and validated to provide an accurate way of estimating the NPS directly from a single patient CT scan. The GANs were utilized to map a white noise input to a CT noise realization with the correct CT noise correlations specific to a single local uniform ROI. To achieve this, a two-stage strategy was developed. In the pre-training stage, ensembles of 64x64 MBIR noise-only images of a quality assurance phantom were used as training samples to jointly train the generator and discriminator. These were fine-tuned using training samples from a single 101x101 ROI of an abdominal anthropomorphic phantom. Results from the GANs and physical scans were compared in terms of their mean frequency and radially averaged NPS. This workflow was extended to a patient case where reference-dose and 25%-of-reference-dose CT scans were provided for fine-tuning. The GANs generated noise-only image samples indistinguishable from physical measurement. The overall mean frequency discrepancy between NPS generated from the GANs and those from physically acquired data was 0.2% for the anthropomorphic phantom validation. The KL divergence for the 1D radially averaged NPS profiles of these two NPS acquisitions was 2.2×10^(-3). A statistical test indicates the trained GANs generated NPS equivalent to physical scans. In the clinical patient-specific NPS studies, a distinction was shown between the reference-dose case and the 25%-of-reference-dose case. It was demonstrated that the GANs characterized the properties of CT noise in terms of its mean frequency and 1D NPS profile shape.
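The KL-divergence comparison of 1D radially averaged NPS profiles might look like the following, with each profile normalized to unit area so the two can be treated as distributions over spatial frequency (the NPS shapes here are hypothetical):

```python
import numpy as np

def nps_kl(p_nps, q_nps):
    """KL divergence between two 1D radially averaged NPS profiles,
    each normalized to unit sum so they compare as distributions."""
    p = p_nps / p_nps.sum()
    q = q_nps / q_nps.sum()
    return float(np.sum(p * np.log(p / q)))

f = np.linspace(0.05, 1.0, 50)   # cycles/mm
ref = f * np.exp(-3.0 * f)       # hypothetical mid-frequency NPS shape
gan = f * np.exp(-3.1 * f)       # a nearly identical GAN estimate
shifted = f * np.exp(-5.0 * f)   # a clearly different noise texture
print(nps_kl(ref, gan), nps_kl(ref, shifted))  # near-zero vs. clearly larger
```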
A case study on the impact of a reduction in MTF on test object detectability score in mammography
K. T. Wigati, H. Bosmans, L. Cockmartin, et al.
This work examined the impact of the presampling Modulation Transfer Function (MTF) on the detectability of lesion-like targets in digital mammography. Two needle CR plates (CR1 and CR2) with different MTF curves but identical detector response (sensitivity) were selected. The plates were characterized by MTF, normalized noise power spectrum (NNPS) and detective quantum efficiency (DQE). Three image quality phantoms were applied to study the impact of the difference in MTF: first, the CDMAM contrast-detail phantom to give the gold thickness threshold (T); second, a 3D structured phantom with lesion models (calcifications and masses), evaluated via a 4-alternative forced-choice study to give the threshold diameter (dtr); and third, a detectability index (d') from a 50 mm PMMA flat-field image and a 0.2 mm Al contrast square. The coefficient of variation of the MTF, averaged up to 5 mm⁻¹, was ~1%. At 5 mm⁻¹, a significant 24% reduction in MTF was observed for CR2. The lower MTF caused a 12% reduction in NNPS for CR2 compared to CR1 (at a detector air kerma of 117 μGy). At 5 mm⁻¹, there was a drop in DQE of 34% for CR2 compared to CR1. For the test objects, there was a trend toward lower detectability for CR2 (lower MTF) for all but one parameter; however, none of the changes were significant. The MTF is a sensitive and easily applied means of tracking changes in sharpness before these changes are uncovered using lesion-simulating objects in test phantoms.
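A detectability index of the kind used above can be sketched in a 1D prewhitening-model form. The task function, the MTF curves (with CR2 modeled as 24% lower at 5 cycles/mm), and the flat NNPS are all illustrative assumptions, not the measured data; the point is that a large, low-frequency task weights the high-frequency MTF loss only weakly:

```python
import numpy as np

def dprime_pw(task_ft, mtf, nnps, df):
    """Prewhitening-model detectability index (1D sketch):
    d'^2 = sum_f [S(f) * MTF(f)]^2 / NNPS(f) * df."""
    return float(np.sqrt(np.sum((task_ft * mtf) ** 2 / nnps) * df))

f = np.linspace(0.01, 5.0, 500)   # cycles/mm
df = f[1] - f[0]
task = np.abs(np.sinc(5.0 * f))   # ~5 mm object: mostly low-frequency content
mtf1 = np.exp(-0.3 * f)           # hypothetical CR1 MTF
mtf2 = mtf1 * (1.0 - 0.048 * f)   # CR2: linearly reduced, 24% lower at 5 mm^-1
nnps = np.full_like(f, 1e-6)      # flat noise for the sketch
d1 = dprime_pw(task, mtf1, nnps, df)
d2 = dprime_pw(task, mtf2, nnps, df)
print(d2 < d1, d2 / d1)  # lower, but only slightly, for this large object
```

This is consistent with the abstract's finding that the sharpness loss shows up clearly in the MTF before it produces significant changes in lesion-based detectability scores.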
Evaluating the imaging performance of a next-generation digital breast tomosynthesis prototype
A next-generation tomosynthesis (NGT) prototype was designed to investigate alternative scanning geometries for digital breast tomosynthesis (DBT). The NGT system uses a 2D plane as an address space for the x-ray source, and one-dimensional linear detector motion, to determine an acquisition geometry. This design provides myriad acquisition geometries for investigation. The system is also capable of magnification DBT. We performed image quality measurements to evaluate the performance of the NGT system for both contact and magnification imaging in 2D and 3D. The modulation transfer function (MTF) was computed using the slanted-edge method to evaluate spatial resolution. The first zero of the MTF was observed to increase by a factor of the magnification. In-plane spatial resolution performance for 3D was measured using an in-house metric and was found to be commensurate with the MTF. This metric uses a star pattern as an input object to produce the contrast transfer function (CTF). The 2D noise power spectra (NPS) were calculated to evaluate the degradation of image quality due to noise. 3D NPS were also calculated for various 3D image reconstructions. 3D renditions of the NPS show how the NGT can sample a broader range of frequencies in the Fourier domain than conventional DBT. The system’s lag was measured and found not to affect 3D image reconstructions significantly. A wax calcification phantom was constructed and imaged using the NGT system. The performance of this system has been evaluated, and the results suggest that the image quality is sufficient for clinical investigation.
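As background to the slanted-edge MTF measurement mentioned above: once an oversampled edge profile has been differentiated into a line spread function (LSF), the MTF is the normalized Fourier magnitude of that LSF. The sketch below illustrates this final step only; it is a generic textbook computation, not the prototype's analysis code.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_size=1.0):
    """MTF as the normalized DFT magnitude of a line spread function
    (the derivative of an oversampled edge spread function)."""
    lsf = np.asarray(lsf, dtype=float)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                     # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_size)
    return freqs, mtf
```

A convenient sanity check: a Gaussian LSF of width sigma yields the analytic MTF exp(-2*pi^2*sigma^2*f^2), which the discrete estimate reproduces closely when the LSF is well sampled and not truncated.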
Generalized prediction framework for reconstructed image properties using neural networks
Model-based reconstruction (MBR) algorithms in CT have demonstrated superior dose-image quality tradeoffs compared to traditional analytical methods. However, the nonlinear and data-dependent nature of these algorithms poses significant challenges for performance evaluation and parameter optimization. To address these challenges, this work presents an analysis framework for quantitative and predictive modeling of image properties in general nonlinear MBR algorithms. We propose to characterize the reconstructed appearance of arbitrary stimuli by a generalized system response function that accounts for dependence on the imaging conditions, reconstruction parameters, object, and the stimulus itself (size, contrast, location). We estimate this nonlinear function using a multilayer perceptron neural network by providing input-output pairs that sample the range of imaging parameters of interest. The feasibility of this approach was demonstrated by predicting the appearance of a spiculated lesion reconstructed by a penalized-likelihood objective with a Huber penalty in a physical phantom as a function of its location and the reconstruction parameters β and δ. The generalized system response functions predicted by the trained neural network show good agreement with those computed from mean reconstructions, demonstrating the ability of the framework to map out the nonlinear function for combinations of imaging parameters not present in the training data. We demonstrated the utility of the framework in achieving desirable (e.g., non-blocky) lesion appearance at arbitrary locations in the phantom without the need to perform actual reconstructions. The proposed prediction framework permits efficient and quantifiable performance evaluation, providing robust control and understanding of image properties for general classes of nonlinear MBR algorithms.
Simulation and experimental validation of high-resolution test objects for evaluating a next-generation digital breast tomosynthesis prototype
Star pattern test objects are used to evaluate the high-contrast performance of imaging systems. These objects were used to investigate alternative scanning geometries for a prototype next-generation tomosynthesis (NGT) system. The NGT system has 2D planar source motion and linear detector motion, and is capable of myriad acquisition geometries. We designed a virtual star pattern with a voxel size of 5 μm and used it to evaluate the spatial resolution performance of the NGT system for three different acquisition geometries. The Open Virtual Clinical Trials (OpenVCT) framework was used to simulate virtual star patterns for acquisition geometries of the NGT system. Simulated x-ray projections of the virtual phantom were used to create super-sampled 3D image reconstructions. Using the same acquisition geometries on the NGT system, a physical star pattern was imaged to create experimental 3D image reconstructions. The simulated and physical data were compared qualitatively by visual inspection, and quantitatively using an in-house metric. This metric computes the Fourier transform radially for one quadrant of the star pattern to discern the limit of spatial resolution (LSR) and the existence of aliasing. Under visual inspection, the results exhibit the same characteristics in terms of super-resolution and Moiré patterns (arising from aliasing). The simulated LSRs for the 12 conditions analyzed are all within 3% of the physical data. Aliasing was determined to be present in the same simulated image reconstructions as in their experimental counterparts. Super-resolution is observed for two of the three NGT acquisition geometries in both the experimental and simulated images.
Multiple-reader, multiple-case ROC analysis for determining the limit of calcification detection in tomosynthesis
Bruno Barufaldi, Predrag Bakic, Andrew Maidment
We have conducted virtual clinical trials (VCTs) of digital breast tomosynthesis (DBT) to evaluate the parameters that affect the detectability of breast lesions. The OpenVCT framework was used to simulate the breast anatomy and imaging systems. We generated 36 anthropomorphic breast phantoms (700 ml volume, 6.33 cm compressed thickness), varying the number of simulated tissue compartments and their shape. We inserted 42 calcifications into each phantom, with sizes varying from 1 to 3 voxels. DBT projections of phantoms with and without lesions were synthesized assuming a clinical acquisition geometry. We varied the detector element size (140 μm and 70 μm), the source motion (continuous and step-and-shoot), and the reconstructed voxel size (100 μm and 70 μm). The reconstructed images were cropped in the plane where the calcifications are located, with regions of interest (ROIs) centered on the lesion position. We also simulated virtual readers to evaluate calcification detectability using a multiple-reader, multiple-case method, implemented in Barco’s Medical Virtual Image Chain (MeVIC) software. Human readers were simulated using channelized Hotelling observers with 15 Laguerre-Gauss channels. We used spreads of 22 and 31, and ROIs of 150×150 and 214×214 pixels, for images reconstructed with pixel sizes of 100 μm and 70 μm, respectively. Reconstructed voxels of 70 μm provided better overall calcification detection, especially for small calcifications. For one-voxel polycubes, the difference in AUC using five readers was 6.5% (0.713 vs. 0.667). The factors affecting calcification detection, from most to least significant, are: reconstruction voxel size, source motion, and detector element size, especially for small calcifications.
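A channelized Hotelling observer of the kind used in this study can be sketched as follows. The Laguerre-Gauss channel definition and the two-class (Wilcoxon) AUC estimate are standard; the channel count, spread, and ROI size below are illustrative values, not the study's exact settings or its MeVIC implementation.

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def lg_channels(n_channels, size, spread):
    """Laguerre-Gauss channel images
    u_j(r) = exp(-pi r^2 / a^2) * L_j(2 pi r^2 / a^2), spread a."""
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    g = 2.0 * np.pi * (x ** 2 + y ** 2) / spread ** 2
    cols = [np.exp(-g / 2.0) * Laguerre.basis(j)(g) for j in range(n_channels)]
    return np.stack([c.ravel() for c in cols], axis=1)   # (pixels, channels)

def cho_auc(signal_rois, noise_rois, U):
    """Channelized Hotelling observer: Hotelling template in channel
    space, AUC from pairwise comparison of decision variables."""
    vs = signal_rois.reshape(len(signal_rois), -1) @ U
    vn = noise_rois.reshape(len(noise_rois), -1) @ U
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))              # pooled covariance
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))      # CHO template
    ts, tn = vs @ w, vn @ w
    return (ts[:, None] > tn[None, :]).mean()            # Wilcoxon AUC
```

With simulated lesion-present and lesion-absent ROIs, `cho_auc` returns the AUC figure of merit reported in the abstract; averaging over independent channelized observers then emulates multiple readers.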
Machine Learning I
Virtual clinical trial for task-based evaluation of a deep learning synthetic mammography algorithm
Image processing algorithms based on deep learning techniques are being developed for a wide range of medical applications. Processed medical images are typically evaluated with the same kind of image similarity metrics used for natural scenes, disregarding the medical task for which the images are intended. We propose a computational framework to estimate the clinical performance of image processing algorithms using virtual clinical trials. The proposed framework may provide an alternative method for regulatory evaluation of non-linear image processing algorithms. To illustrate this application of virtual clinical trials, we evaluated three algorithms that compute synthetic mammograms from digital breast tomosynthesis (DBT) scans using convolutional neural networks previously used for denoising low-dose computed tomography scans. The inputs to the networks were one or more noisy DBT projections, and the networks were trained to minimize the difference between the output and the corresponding high-dose mammogram. DBT and mammography images simulated with the Monte Carlo code MC-GPU using realistic breast phantoms were used for network training and validation. The denoising algorithms were tested in a virtual clinical trial by generating 3000 synthetic mammograms from the public VICTRE dataset of simulated DBT scans. The detectability of a calcification cluster and a spiculated mass present in the images was calculated using an ensemble of 30 computational channelized Hotelling observers. The signal detectability results, which took into account anatomic and image reader variability, showed that the visibility of the mass was not affected by the post-processing algorithm, but that the resulting slight blurring of the images severely impacted the visibility of the calcification cluster.
The evaluation of the algorithms using the pixel-based metrics peak signal-to-noise ratio and structural similarity in image patches was not able to predict the reduction in calcification detectability. These two metrics are computed over the whole image, do not consider any particular task, and might not be adequate for estimating the diagnostic performance of the post-processed images.
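To make the contrast concrete: PSNR is a global pixel-wise score, so a small, localized blur that erases a few-pixel calcification barely moves it, even though it destroys the diagnostic task. A minimal definition (generic, not the paper's evaluation code):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB; averages the squared error
    over the whole image, with no notion of a diagnostic task."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because the error is averaged over every pixel, wiping out a calcification occupying a handful of pixels changes the MSE, and hence the PSNR, by a negligible amount, which is exactly the failure mode the virtual clinical trial exposed.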
Forward and cross-scatter estimation in dual source CT using the deep scatter estimation (DSE)
Tim Vöth, Joscha Maier, Julien Erath, et al.
Cross-scatter is often the dominant scatter mode in modern dual source CT (DSCT). Like forward scatter (intra-source-detector-pair scatter), which is present in all CT systems, cross-scatter (inter-source-detector-pair scatter) leads to streak and cupping artifacts. Having recently developed the deep scatter estimation (DSE) to estimate forward scatter in single source CT, we now tested the performance of DSE in such a cross-scatter-dominated DSCT. Given only the total intensity in a projection as input, we trained a deep convolutional neural network to estimate the scatter distribution, which was then subtracted from the total intensity to obtain scatter-corrected data. The projections used for training and testing were simulated using Monte Carlo methods. Our method estimates cross- and forward scatter simultaneously and in real time, with a mean error of only 1.7%. The error of the CT values is reduced from hundreds of HU to a few dozen HU. Our method can compete with a measurement-based approach, but does not require any additional hardware.
Focal spot deconvolution using convolutional neural networks
Jan Kuntz, Joscha Maier, Marc Kachelrieß, et al.
The focal spot size of x-ray tubes, as well as the pixel size and scintillator thickness, limits the spatial resolution of projection images, as these factors result in blurring and degradation of the system’s point spread function. Deblurring of such images has been a topic of research for several decades; however, it remains unsolved in general. In this manuscript, the application of a convolutional neural network to the deblurring of x-ray projection images is presented and compared to a standard deblurring technique. The advantages of the neural network in terms of image quality and applicability are demonstrated with simulations and measurements originating from tabletop and gantry-based micro-CT systems.
Low-dose CT count-domain denoising via convolutional neural network with filter loss
Nimu Yuan, Jian Zhou, Kuang Gong, et al.
Reducing the radiation dose of computed tomography (CT), and thereby the potential risk to patients, is desirable in CT imaging. However, lower dose often results in additional noise and artifacts in reconstructed images that may negatively affect clinical diagnoses. Recently, many image-domain denoising approaches based on deep learning have been proposed and have obtained promising results. However, since reconstructed CT image values are not directly related to the noise level, estimating the noise level from CT images is not an easy task. In this work, we propose a count-domain denoising approach using a convolutional neural network (CNN) and a filter loss function. Compared with image-domain denoising methods, the proposed count-domain method can easily estimate the noise level in projections based on the measurement in each detector bin. Moreover, because each projection is ramp-filtered before being backprojected to the image domain, we propose a filter loss function in which the training loss is computed on the ramp-filtered projection rather than the original projection. Since the filter loss is closely related to differences in the image domain, it further improves the quality of reconstructed CT images.
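The filter-loss idea can be sketched as follows; the frequency-domain ramp filter and the MSE are generic stand-ins for the paper's implementation. Note how the ramp filter zeroes the DC component, so the loss emphasizes exactly the frequency content that survives filtered backprojection into the image domain.

```python
import numpy as np

def ramp_filter(proj):
    """Apply a ramp (|f|) filter along the detector-bin axis."""
    f = np.abs(np.fft.fftfreq(proj.shape[-1]))
    return np.real(np.fft.ifft(np.fft.fft(proj, axis=-1) * f, axis=-1))

def filter_loss(pred, target):
    """MSE between ramp-filtered predicted and target projections,
    rather than between the raw projections themselves."""
    return np.mean((ramp_filter(pred) - ramp_filter(target)) ** 2)
```

A denoiser that merely shifts a projection by a constant offset is penalized by a plain projection-domain MSE but not by the filter loss, since a constant offset never reaches the reconstructed image.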
Image reconstruction from fully-truncated and sparsely-sampled line integrals using iCT-Net
Yinsheng Li, Ke Li, Chengzhu Zhang, et al.
Image reconstruction from line integrals is one of the foundations of computed tomography (CT) for medical diagnosis and non-destructive testing purposes. To accurately recover the density function from measurements taken over straight lines, analytic-formula-based and optimization-based inversions have been discovered over the past several decades. Accurate image reconstruction can be achieved if the acquired dataset satisfies data sufficiency and data consistency conditions. However, if these conditions are violated, accurate image reconstruction remains an intellectual challenge unless significant a priori information about the image object and/or the physical process of data acquisition is incorporated. In this work, we show that a deep learning method based upon a new network architecture, termed the intelligent CT neural network (iCT-Net), can be employed to discover accurate image reconstruction solutions from fully-truncated and sparsely-sampled line integrals without explicit incorporation of a priori information about either the image object or the data acquisition process. After a two-stage training, the trained iCT-Net was directly applied to real human subject data to demonstrate the generalizability of iCT-Net to experimental data.
Imaging Physics: Pushing the Boundary
World’s deepest-penetration and fastest optical cameras: photoacoustic tomography and compressed ultrafast photography (Conference Presentation)
We developed photoacoustic tomography (PAT) to peer deep into biological tissue. PAT provides in vivo omniscale functional, metabolic, molecular, and histologic imaging across the scales of organelles through organisms. We also developed compressed ultrafast photography (CUP) to record 10 trillion frames per second, 10 orders of magnitude faster than commercially available camera technologies. CUP can capture the fastest phenomenon in the universe, namely light propagation, and can be slowed down for slower phenomena such as combustion. PAT physically combines optical and ultrasonic waves. Conventional high-resolution optical imaging of scattering tissue is restricted to depths within the optical diffusion limit (~1 mm in the skin). Taking advantage of the fact that ultrasonic scattering is orders of magnitude weaker than optical scattering per unit path length, PAT beats this limit and provides deep penetration at high ultrasonic resolution and high optical contrast by sensing molecules. Broad applications include early-cancer detection and brain imaging. The annual conference on PAT has become the largest in SPIE’s 20,000-attendee Photonics West since 2010. CUP can image non-repeatable, time-evolving events in 2D. CUP has the prominent advantage of measuring an x, y, t (x, y, spatial coordinates; t, time) scene with a single exposure, thereby allowing observation of transient events occurring on a time scale down to 100 femtoseconds, such as the propagation of a light pulse. Further, akin to traditional photography, CUP is receive-only, avoiding the specialized active illumination required by other single-shot ultrafast imagers. CUP can be coupled with front optics ranging from microscopes to telescopes for widespread applications in both fundamental and applied sciences.
Multi-x-ray source array for stationary tomosynthesis or multi-cone angle cone beam CT
Field-emission x-ray source arrays have been studied for both tomosynthesis and CT applications; however, these arrays tend to have limited output. We propose the use of multi-source x-ray arrays using thermionic cathodes, contained within a single vacuum housing. A prototype 3-source x-ray array has been fabricated and tested, and the utility of multi-x-ray-source arrays has been demonstrated using physical simulations in both tomosynthesis and cone beam CT. The prototype x-ray tube made use of a cylindrical molybdenum anode, machined to have 3 specific focal tracks. Grid-controlled cathode assemblies were fabricated and aligned to each focal track, and the individual x-ray focal spots were evaluated with a star pattern at 35 kV and 40 mA. The 3-source assembly was used to physically simulate a tomosynthesis imaging geometry, and tomosynthesis images of a lemon were obtained. Physical simulations using a cone beam breast CT scanner were also performed by vertically moving the single x-ray source into 5 different locations, simulating 5 different source positions. A new geometry for cone beam CT imaging is proposed, in which each source of a multi-x-ray-source array is individually collimated to eliminate rays involving large cone angles. This geometry also allows three sources to be simultaneously pulsed onto a single flat panel detector, achieving a better duty cycle and view sampling in cone beam CT. A reconstruction algorithm was written to accommodate the different source positions, and phantoms designed to demonstrate cone beam artifacts were imaged. The tomosynthesis images illustrate appropriate depth resolution in the test object. Analysis of the CT data demonstrates marked improvement compared to one source. We conclude that multi-source x-ray arrays using thermionic cathodes will have important applications in medical imaging, especially breast tomosynthesis and cone beam computed tomography.
Dose-independent near-ideal DQE of a 75 μm pixel GaAs photon counting spectral detector for breast imaging
Spyridon Gkoumas, Thomas Thuering, Alfonso G. Taboada, et al.
State-of-the-art X-ray breast imaging (BI) modalities such as digital breast tomosynthesis (DBT), contrast enhanced spectral mammography (CESM) and breast CT (BCT) impose demanding requirements on digital X-ray detectors. This work studies the imaging performance of a GaAs two-threshold photon-counting detector (PCD) prototype for BI-relevant X-ray spectra. The prototype has a 75 μm pixel size, two calibrated energy thresholds from 8 to 60 keV, an 8 × 4 cm² area and a 0.5 mm thick GaAs sensor. The X-ray spectra used were 28 and 35 kVp with 2 mm Al filtration from a W-target tube emulating RQA-M2. The main imaging metrics probed include the modulation transfer function (MTF) and detective quantum efficiency (DQE). Air kerma spanned three orders of magnitude, from 370 nGy to 330 μGy. Furthermore, the detector’s linearity, lag and ghosting were also tested. For 28 kVp, the GaAs PCD exhibits 85% and 48% DQE for 0 and 5 lp/mm, respectively, independent of the applied dose. The MTF ranges from 98% to 53% for 1 and 6.667 lp/mm (Nyquist limit). Excellent linearity was observed, with zero lag and ghosting. GaAs PCD technology is an ideal candidate for BI detector panels. Its stable temporal behavior, inherent zero readout noise and excellent DQE independent of the applied X-ray dose improve on the combined advantages of current CsI/CMOS and a-Se/a-Si BI detectors. In addition, the multiple energy thresholds can enable spectral single-shot methods without motion blur.
Novel hybrid organic-inorganic perovskite detector designs based on multilayered device architectures: simulation and design
Cost-effective direct conversion detectors that provide high performance at different photon energies are of fundamental importance in medical diagnosis. Conventional direct conversion detectors typically provide either large-area devices with moderate performance at reduced cost or expensive high-performance architectures with limited size. In order to investigate the feasibility of highly efficient detectors based on low-cost, large-area processable hybrid organic-inorganic materials, multilayered device architectures consisting of stacked conversion layers are investigated. For this purpose, models that describe the sensitivity and the detective quantum efficiency are extended to the proposed detector design. This makes it possible to evaluate the performance of multilayered detectors based on scintillator-sensitized organic and polycrystalline perovskite materials. A sensitivity analysis of various multilayered designs at different photon energies shows significantly higher performance for polycrystalline perovskite conversion layers compared to scintillator-sensitized organic materials. The evaluation of detective quantum efficiencies leads to limits on the number of stacked layers and enables design rules to be deduced from optimal layout parameters. The comparison with conventional single-layer detectors shows competitive performance of multilayered detectors relative to high-quality single crystals at all investigated photon energies.
Human-compatible multi-contrast mammographic prototype system
Ran Zhang, Ke Li, John W. Garrett, et al.
In the past decade, grating-based x-ray multi-contrast imaging has demonstrated potential advantages for breast imaging, including reduced anatomical noise, sharper tumor boundaries and improved visibility of microcalcifications. However, most of the studies have been performed on benchtop systems, whose experimental conditions, including dose, scanning time and system geometry, may not meet clinical standards. Therefore, to evaluate the true clinical benefits of grating-based multi-contrast breast imaging, in-vivo imaging should be performed, which requires a human-compatible system. The purpose of this paper is to report the development of a human-compatible prototype multi-contrast imaging system. In particular, this work focuses on several key challenges in building the prototype system. Regarding the challenge of patient safety, the mean glandular dose (MGD) and the scatter radiation were evaluated for the prototype system. Regarding the challenge of the limited field-of-view (FOV), the origin of the problem and corresponding technical solutions are presented. Finally, imaging results for several test phantoms are presented and strategies to improve the image quality are discussed.
Image Reconstruction
Quantitative cone-beam CT of bone mineral density using model-based reconstruction
Purpose: We develop and validate a model-based framework for artifact correction and image reconstruction to enable the application of cone-beam CT (CBCT) to quantitative assessment of bone mineral density (BMD). Compared to conventional quantitative CT, this approach does not require a BMD calibration phantom in the field-of-view during an object scan. Methods: The quantitative CBCT (qCBCT) imaging framework combined fast Monte Carlo (MC) scatter estimation, accurate models of detector response, and a polyenergetic Poisson likelihood (PolyPL, Elbakri et al. 2003). The underlying object model assumed that the tissues were ideal mixtures of water and calcium carbonate (CaCO3). The accuracy and reproducibility of qCBCT were evaluated in benchtop test-retest studies emulating a compact extremity CBCT system (axis-detector distance = 56 cm, 90 kVp x-ray beam, ~16 mGy central dose). Various arrangements of Ca inserts (50–500 mg/mL) were placed in water cylinders of ~11 cm to ~15 cm diameter and scanned at multiple positions inside the field-of-view, for a total of 20 configurations. In addition, a cadaveric ankle was imaged in five configurations (with and without Ca inserts and water bath). The coefficient of variation (CV) of BMD values across different experimental configurations was used to assess reproducibility under varying imaging conditions. The performance of the model-based qCBCT framework (MC + PolyPL) was compared to FDK with water beam hardening correction and MC scatter correction. Results: The PolyPL framework achieved an accuracy of 20 mg/mL or better across all insert densities and experimental configurations. By comparison, the accuracy of the FDK-based BMD estimates deteriorated with higher mineralization, resulting in ~120 mg/mL error for a 500 mg/mL Ca insert. Additionally, the model-based approach mitigated residual streaks that were present in FDK reconstructions.
The CV of both methods was ~15% at 50 mg/mL Ca and less than ~8% for higher density inserts, where the PolyPL framework achieved 20-25% lower CV than the FDK-based approach. Conclusion: Accurate and reproducible BMD measurements can be achieved in extremity CBCT, supporting clinical applications in quantitative monitoring of fracture risk, osteoporosis treatment, and early osteoarthritis.
CT-guided PET parametric image reconstruction using deep neural network without prior training data
Jianan Cui, Kuang Gong, Ning Guo, et al.
Deep neural networks have attracted growing interest in medical imaging due to their success in computer vision tasks. One barrier to the application of deep neural networks is the need for large amounts of prior training pairs, which is not always feasible in clinical practice. Recently, the deep image prior framework showed that a convolutional neural network (CNN) can learn intrinsic structure information from the corrupted image itself. In this work, an iterative parametric reconstruction framework is proposed using a deep neural network as the constraint. The network does not need prior training pairs, only the patient’s own CT image. The training is based on the Logan plot derived from multi-bed-position dynamic positron emission tomography (PET) images using the 68Ga-PRGD2 tracer. We formulated the estimation of the slope of the Logan plot as a constrained optimization problem and solved it using the alternating direction method of multipliers (ADMM) algorithm. Quantification results based on a real patient dataset show that the proposed parametric reconstruction method outperforms the Gaussian denoising and non-local means denoising methods.
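The Logan graphical analysis underlying the parametric images can be sketched as an ordinary least-squares fit whose slope is the distribution volume. This is the textbook formulation, not the authors' ADMM solver; the trapezoidal integration and the t* cutoff are standard choices, and the variable names are illustrative.

```python
import numpy as np

def logan_slope(ct, cp, t, t_star=0.0):
    """Slope of the Logan plot (distribution volume) by least squares.
    ct: tissue time-activity curve, cp: plasma input curve,
    t: frame mid-times, t_star: start of the linear segment."""
    # Running integrals by the trapezoidal rule
    int_ct = np.concatenate([[0.0], np.cumsum(0.5 * (ct[1:] + ct[:-1]) * np.diff(t))])
    int_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    mask = (t >= t_star) & (ct > 0)
    x = int_cp[mask] / ct[mask]       # Logan abscissa
    y = int_ct[mask] / ct[mask]       # Logan ordinate
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```

In the paper's framework, this slope is not fit voxel-by-voxel in isolation; instead its estimation is folded into a constrained optimization in which the CNN (conditioned on the patient's CT) regularizes the parametric image.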
Radon inversion via deep learning
The Radon transform is widely used in the physical and life sciences, and one of its major applications is X-ray computed tomography (X-ray CT), which is significant in modern health examination. Radon inversion, or image reconstruction, is challenging due to potentially defective Radon projections. Conventionally, the reconstruction process contains several ad hoc stages that approximate the corresponding Radon inversion, each highly dependent on the results of the previous stage. In this paper, we propose a novel unified framework for Radon inversion via deep learning (DL). With the proposed framework, the Radon inversion can be approximated in an end-to-end fashion instead of being processed step-by-step in multiple stages. For brevity, the proposed framework is abbreviated as iRadonMap (inverse Radon transform approximation). Specifically, we implement iRadonMap as a dedicated neural network whose architecture can be divided into two segments. In the first segment, a learnable fully-connected filtering layer is used to filter the Radon projections along the view-angle direction, followed by a learnable sinusoidal back-projection layer that transfers the filtered Radon projections into an image. The second segment is a common neural network architecture that further improves the reconstruction performance in the image domain. The iRadonMap is optimized overall by training on a large number of generic images from the ImageNet database. To evaluate the performance of iRadonMap, clinical patient data are used. Qualitative results show promising reconstruction performance of iRadonMap.
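The first segment of iRadonMap mirrors the two fixed operations of classical filtered backprojection, with the filter and the backprojection weights made learnable. For reference, the fixed-operator version for a parallel-beam geometry can be sketched as follows (nearest-neighbor backprojection for brevity; a generic illustration, not the paper's code):

```python
import numpy as np

def ramp_filter(sino):
    """Filter each view of a (views x bins) sinogram with |f|."""
    f = np.abs(np.fft.fftfreq(sino.shape[-1]))
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=-1) * f, axis=-1))

def backproject(sino, angles, size):
    """Smear each view back across a size x size grid (nearest-neighbor)."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    y, x = np.indices((size, size)) - c
    n = sino.shape[-1]
    for view, theta in zip(sino, angles):
        s = x * np.cos(theta) + y * np.sin(theta) + (n - 1) / 2.0
        idx = np.clip(np.rint(s).astype(int), 0, n - 1)
        recon += view[idx]
    return recon * np.pi / len(angles)

def fbp(sino, angles, size):
    """Classical two-step inverse: ramp filtering, then backprojection.
    iRadonMap replaces both fixed operators with learnable layers."""
    return backproject(ramp_filter(sino), angles, size)
```

Replacing the fixed ramp filter with a learnable fully-connected layer along each view, and the geometric backprojection weights with learnable parameters, yields the structure of iRadonMap's first segment; the learned operators can then adapt to defective projections where the fixed pipeline degrades.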
Accelerating coordinate descent in iterative reconstruction
Iterative coordinate descent (ICD) is an optimization strategy for iterative reconstruction that is sometimes considered incompatible with parallel compute architectures such as graphics processing units (GPUs). We present a series of modifications that render ICD compatible with GPUs and demonstrate the code on a diagnostic, helical CT dataset. Our reference code is an open-source package, FreeCT ICD, which requires several hours for convergence. Three modifications are used. First, as with our reference code FreeCT ICD, the reconstruction is performed on a rotating coordinate grid, enabling the use of a stored system matrix. Second, every other voxel in the z-direction is updated simultaneously, and the sinogram data are shuffled to coalesce memory access; this increases the parallelism available to the GPU. Third, NS voxels in the xy-plane are updated simultaneously. This introduces possible crosstalk between updated voxels, but because the interaction between non-adjacent voxels is small, small values of NS still converge effectively. We find that NS = 16 enables faster reconstruction via greater parallelism, while NS = 256 remains stable but offers no additional computational benefit. When tested on a pediatric dataset of size 736×16×14000 reconstructed to a matrix size of 512×512×128 on a single GPU, our implementation of ICD converges to within 10 HU RMS in less than 5 minutes. This suggests that ICD could be competitive with simultaneous-update algorithms on modern, parallel compute architectures.
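The core ICD update, and the grouped variant in which several weakly coupled voxels are updated at once, can be sketched for a plain linear least-squares data term. This toy dense-matrix version is a hypothetical illustration, not FreeCT ICD itself; it omits the regularizer and the stored-system-matrix machinery.

```python
import numpy as np

def icd_pass(A, y, x, groups):
    """One ICD sweep minimizing ||y - A x||^2. The voxels inside each
    group are updated 'simultaneously': each gets its own 1D minimizer
    and crosstalk within the group is ignored, which stays stable when
    the grouped voxels interact weakly (cf. the NS voxels in the text)."""
    r = y - A @ x                      # current residual sinogram
    col_norm = (A * A).sum(axis=0)     # diagonal of A^T A
    for g in groups:
        dx = (A[:, g].T @ r) / col_norm[g]   # independent 1D minimizers
        x[g] += dx
        r -= A[:, g] @ dx              # keep the residual consistent
    return x
```

With singleton groups this reduces to exact ICD (Gauss-Seidel on the normal equations) and converges to the least-squares solution; larger groups trade a small amount of per-sweep accuracy for the GPU parallelism the abstract describes.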
Ultra-low dose PET reconstruction using generative adversarial network with feature matching (Conference Presentation)
Purpose: Our goal is to synthesize high-quality and accurate amyloid PET images from only ultra-low-dose PET images as input by using a generative adversarial network (GAN). Methods: PET data from 40 patients were acquired with the injection of 330±30 MBq of the amyloid radiotracer 18F-florbetaben. The raw list-mode PET data were reconstructed as the standard-dose ground truth and were randomly undersampled by a factor of 100 to reconstruct 1% low-dose PET scans. 32 volumes were used for training and the other 8 for testing. A 2D encoder-decoder network was implemented as the generator to synthesize a standard-dose image, and a CNN-based discriminator network was used to evaluate it. The two networks contested with each other to achieve accurate synthesis of standard-dose PET images with high visual quality from ultra-low-dose PET. Multi-slice input is used to reduce noise by providing the network with 2.5D information. Feature matching was applied to reduce hallucinated structures. Image quality was evaluated by peak signal-to-noise ratio (PSNR), structural similarity (SSIM), mean square error (MSE), frequency-domain blur measure (FBM) and edge blur measure (EBM) metrics. Results: The synthesized PET images showed remarkable improvement on all quality metrics compared with the low-dose images. Compared with state-of-the-art methods, adversarial learning is essential to ensure image quality and mitigate blurring in the generated image. Multi-slice input reduced random noise and feature matching suppressed hallucinated structures. Conclusion: Standard-dose amyloid PET images can be synthesized from ultra-low-dose images by a GAN. Adversarial learning, multi-slice input and feature matching are essential to ensure image quality.
Patient evaluation of breast shape-corrected tomosynthesis reconstruction
Iterative reconstruction is a good match with the sparsely sampled limited angle data generated by breast tomosynthesis systems. However, it suffers from a specific artifact near the breast edge where it overestimates the x-ray path length, resulting in a considerable underestimation of the reconstructed linear attenuation coefficients. In this work, we present the application of a method that uses the measured 3D breast shape to reduce these artifacts in patient data, by including this information as an additional constraint in the image reconstruction process. A series of 50 patients undergoing breast tomosynthesis were additionally imaged with a pair of structured light cameras placed left and right of the mammography unit. These 3D surfaces were then aligned with the help of the backprojected breast outline from the x-ray data to form a single contour following the true breast shape. This was then further processed to generate a binary 3D mask set to 1 inside and to 0 outside the breast, and used as constraint in the reconstruction. Due to incomplete coverage and image artifacts, this mask was created successfully for only 19 out of 50 cases. Reconstructions were created with and without this constraint, and a comparison of attenuation profiles showed that the artifact was almost completely corrected, bringing the reconstructed attenuation near the breast edge to the same level as in the central region. Visual inspection further shows that higher quality optical 3D measurements and more precise alignment between optical and x-ray data are needed to avoid introducing new artifacts in the reconstruction.
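The role of the shape constraint can be sketched with a hypothetical 1D analogue (this is not the authors' reconstruction code): with under-sampled data, re-applying a binary support mask after each gradient step restricts attenuation to the measured breast volume and restores a unique solution.

```python
import numpy as np

# Masked gradient iterations for an underdetermined 1D "limited-angle"
# problem: the binary mask (measured breast shape) is enforced after each
# step, so attenuation can only be assigned inside the breast support.
rng = np.random.default_rng(2)
n, m = 32, 28
mask = np.zeros(n)
mask[8:24] = 1.0                           # "inside the breast"
x_true = mask * rng.uniform(0.5, 1.0, n)
A = rng.standard_normal((m, n))            # fewer measurements than voxels
b = A @ x_true

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(4000):
    x += step * (A.T @ (b - A @ x))        # gradient step on the data term
    x *= mask                              # shape constraint

assert np.allclose(x, x_true, atol=1e-5)
```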
Cone-beam CT statistical reconstruction with a model for fluence modulation and electronic readout noise
Purpose: Cone-beam CT (CBCT) systems with a flat-panel detector (FPD) have advanced in a variety of specialty diagnostic imaging scenarios, with fluence modulation and multiple-gain detectors playing important roles in extending dynamic range and improving image quality. We present a penalized weighted least-squares (PWLS) reconstruction approach with a noise model that includes the effects of fluence modulation and electronic readout noise, and we show preliminary results that test the concept with a CBCT head scanner prototype. Methods: Statistical weights in PWLS were modified using a realistic noise model for the FPD that considers factors such as system blur and spatially varying electronic noise in multiple-gain readout detectors (PWLSe). A spatially varying gain term was then introduced in the calculation of statistical weights to account for the change in quantum noise due to fluence modulation (e.g. bowtie filter) (PWLS∗). The methods were tested in phantom experiments involving an elliptical phantom specially designed to stress dual-gain readout, and a water phantom and an anthropomorphic head phantom to quantify improvements in noise-resolution characteristics for the new PWLS methods (PWLSe and PWLS∗, and the combined PWLS∗e). The proposed methods were further tested using a high-quality, low-dose CBCT head scanner prototype in a clinical study involving patients with head injury. Results: Preliminary results show that the PWLSe method demonstrated superior noise-resolution tradeoffs compared to conventional PWLS, with variance reduced by ~15-25% at matched resolution of 0.65 mm edge-spread-function (ESF) width. Clinical studies confirmed these findings, with variance reduced by ~15% in peripheral regions of the head without loss in spatial resolution, improving visual image quality in detection of peridural hemorrhage. 
A bowtie filter and polyenergetic gain correction improved image uniformity, and early results demonstrated that the proposed PWLS∗ method showed a ~40% reduction in variance compared to conventional PWLS when used with a bowtie filter. Conclusion: A more accurate noise model incorporated in PWLS statistical weights to account for fluence modulation and electronic readout noise reduces image noise and improves soft-tissue imaging performance in CBCT for clinical applications requiring a high degree of contrast resolution.
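A generic form of such statistical weights (an illustrative model, not necessarily the authors' exact one): for a mean detected signal ybar with Poisson noise plus Gaussian readout noise of variance sigma_e^2, the variance of the log-transformed measurement is approximately (ybar + sigma_e^2)/ybar^2, and the PWLS weight is its reciprocal. Fluence modulation and dual-gain readout make both ybar and sigma_e spatially varying.

```python
import numpy as np

# Generic PWLS statistical weight: reciprocal of the approximate variance
# of the log-transformed measurement under Poisson + Gaussian readout noise.
def pwls_weight(ybar, sigma_e):
    return ybar**2 / (ybar + sigma_e**2)

ybar = np.array([10.0, 100.0, 1000.0])            # fluence varies behind a bowtie
w_high_noise = pwls_weight(ybar, sigma_e=5.0)     # e.g. low-gain readout channel
w_low_noise = pwls_weight(ybar, sigma_e=1.0)      # e.g. high-gain readout channel

assert np.all(np.diff(w_low_noise) > 0)           # brighter rays weighted more
# Electronic noise changes the weights most where fluence is lowest:
assert w_low_noise[0] / w_high_noise[0] > w_low_noise[2] / w_high_noise[2]
```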
Detector Physics II
Dual energy imaging with a dual-layer flat panel detector
Dual Energy (DE) imaging has been widely used in digital radiography and fluoroscopy, as has dual energy CT for various medical applications. In this study, the imaging performance of a dynamic dual-layer a-Si flat panel detector (FPD) prototype was characterized for dual energy imaging tasks. Dual energy cone beam CT (DE CBCT) scans were acquired and used to perform material decomposition in the projection domain, followed by reconstruction to generate material specific and virtual monoenergetic (VM) images. The dual-layer FPD prototype was built on a Varex XRD 4343RF detector by adding a 200 μm thick CsI scintillator and a-Si panel of 150 μm pixel size on top as a low energy detector. A 1 mm copper filter was added as a middle layer to increase energy separation, with the bottom layer serving as the high energy detector. The imaging performance, such as the Modulation Transfer Function (MTF), Conversion Factor (CF), and Detective Quantum Efficiency (DQE), of both the top and bottom detector layers was characterized and compared with that of the standard single-layer XRD 4343RF detector. Several tissue equivalent cylinders (solid water, liquid water, bone, acrylic, polyethylene, etc.) were placed on a rotating stand, and two separate 450-projection CBCT scans were performed under continuous 120 kV and 80 kV X-ray beams. After an empirical material decomposition calibration, water and bone images were generated for each projection, and their respective volumes were reconstructed using Varex’s CBCT Software Tools (CST 2.0). A VM image, which maximized the contrast-to-noise ratio of water to polyethylene, was generated based on the water and bone images. The MTF at 1.0 lp/mm from the low energy detector was 32% and 22% higher than that of the high energy detector and the standard detector, respectively; the DQE of both the high and low energy detectors is much lower than that of the standard XRD 4343RF detector. 
The CNR of water to polyethylene from the VM image improved by 50% over that from the low energy image alone at 120 kV, and by 80% at 80 kV. This study demonstrates the feasibility of using a dual-layer FPD in applications such as DE CBCT for contrast enhancement and material decomposition. Further evaluations are underway.
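For reference, a virtual monoenergetic image is a per-pixel linear combination of the basis-material images weighted by each material's attenuation at the chosen energy. The coefficients below are placeholder values for illustration, not measured attenuation data:

```python
import numpy as np

# VM image from water/bone basis density maps: each pixel is a linear
# combination weighted by the (placeholder) attenuation of each material
# at the chosen virtual energy.
def vm_image(water_img, bone_img, mu_water_E, mu_bone_E):
    return mu_water_E * water_img + mu_bone_E * bone_img

water = np.array([[1.0, 0.9], [1.0, 0.0]])   # basis density maps (a.u.)
bone = np.array([[0.0, 0.1], [0.5, 0.0]])
vm = vm_image(water, bone, mu_water_E=0.2, mu_bone_E=0.5)

assert vm.shape == water.shape
assert np.isclose(vm[1, 0], 0.2 * 1.0 + 0.5 * 0.5)  # pure linear combination
```

Sweeping the virtual energy and picking the one that maximizes a target CNR (here, water vs. polyethylene) is what the abstract describes.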
Performance evaluation of a Se/CMOS prototype x-ray detector with the Apodized Aperture Pixel (AAP) design
Tomi F. Nano, Christopher C. Scott, Yunzhe Li, et al.
An x-ray detector’s ability to produce high signal-to-noise ratio (SNR) images for a given exposure is described by the detective quantum efficiency (DQE) as a function of spatial frequency. Current mammography and radiography detectors have poor DQE performance at high frequencies due to noise aliasing when using a high-resolution converter layer. The Apodized-Aperture Pixel (AAP) design is a novel detector design that increases high-frequency DQE by removing noise aliasing, using smaller sensor elements (e.g. 5 - 50 μm) than the image pixel size (e.g. 50 - 200 μm). The purpose of this work is to implement the AAP design on a selenium (Se) CMOS micro-sensor prototype with 7.8 × 7.8 μm size elements. Conventional (binned) and AAP images with 47 μm pixel size were synthesized and used to measure the modulation transfer function (MTF), normalized Wiener noise power spectrum (NNPS) and DQE. A micro-focus x-ray source (with a tungsten target) and a 60 kV beam filtered with 2 mm of aluminum were used to measure performance with DQEPro (DQEInstruments Inc., London, Canada) in a dynamic image acquisition mode at a high exposure level (9.7 mR). The AAP design has 1.5x greater MTF near the image cut-off frequency (uc = 10.6 cyc/mm) than the conventional design. The DQE near uc was 2.5x greater with the AAP design than with the conventional design, and specimen imaging of a kidney stone shows greater SNR of fine detail in the AAP image.
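The aliasing argument behind the AAP design can be shown in a toy 1D example (an editorial illustration, not the prototype's processing chain): a frequency above the image-pixel Nyquist folds back into a binned image, but is removed when the fine sensor samples are low-pass apodized before resampling.

```python
import numpy as np

fine = 8                                  # fine sensor elements per image pixel
n = 512
t = np.arange(n)
f = 52 / n                                # above the coarse-pixel Nyquist (1/16)
signal = np.sin(2 * np.pi * f * t)

# Conventional pixel: box average (binning), then resample -> aliasing
binned = signal.reshape(-1, fine).mean(axis=1)

# AAP-style pixel: low-pass at the coarse Nyquist, then resample
spec = np.fft.rfft(signal)
spec[n // (2 * fine) + 1:] = 0.0          # remove everything above 1/16 cycles/sample
aap = np.fft.irfft(spec, n)[::fine]

assert np.std(aap) < 1e-9                 # out-of-band tone fully removed
assert np.std(binned) > 0.1               # ...but it aliases into the binned image
```

The same folding applies to noise: frequencies above the image Nyquist add to the NNPS of a binned image, while AAP apodization removes them, which is why the high-frequency DQE improves.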
Towards large-area photon-counting detectors for spectral x-ray imaging
Thomas Thuering, Spyridon Gkoumas, Pietro Zambon, et al.
Photon-counting detector technology using directly converting high-Z sensor materials has recently gained popularity in medical imaging due to its capability to reduce patient dose, increase spatial resolution and provide single-shot multi-energy information. However, in medical imaging applications, the novel technology is not yet widely used due to technical challenges in manufacturing gap-less detectors with large areas. Here, a nearly gap-less, large-area, multi-energy photon-counting detector prototype is presented which was built with existing ASIC and sensor technology. It features an active area of 8x8 cm2, a 1 mm thick cadmium telluride (CdTe) sensor, four independent energy thresholds and a pixel size of 150 μm. Single- and multi-threshold imaging performance of the detector is evaluated by assessing various metrics relevant for conventional (polychromatic) and spectral imaging applications. A high detective quantum efficiency (DQE(0)=0.98), a low dark noise threshold (6.5 keV) and a high count rate capability (up to 3x10^8 counts/s/mm2) indicate that the detector is optimally suited for conventional medical X-ray imaging, especially for low dose applications. Spectral performance was assessed by acquiring spectra from fluorescence samples, and the results show a high accuracy of energy peak positions (< 1 keV), precise energy resolution (within a few keV) and decent peak-to-background ratios. Spectral absorption measurements of water and iodinated contrast agent, as well as spectral X-ray radiographs of a human hand phantom, decomposed into bone and soft tissue basis images, demonstrate the multi-energy performance of the detector.
Novel direct conversion imaging detector without selenium or semiconductor conversion layer
Denny L. Lee, Andrew Maidment, Ali Kassaee, et al.
It has been reported and discussed that electrical current can be produced when an insulating material interacts with ionizing radiation. We have found that high-resolution images can be obtained from insulating materials if this current is guided by an electric field to the pixels of a TFT array. The charge production efficiency of insulators is much smaller than that of photoconductor materials such as selenium, silicon, or other conventional semiconductors. Nevertheless, when the intensity of the ionizing radiation is sufficiently high, a charge-sensitive TFT imaging array with only dielectric material can produce high MTF images with contrast resolution proportional to the intensity of the radiation. The function of the dielectric in this new detector may be similar to that of an ionization chamber. Without the semiconductor charge generating material, the dielectric imaging detector does not exhibit charge generation fatigue or charge generation saturation. Prototype detectors have been tested using diagnostic x-ray beams with energy ranging from 25 kVp to 150 kVp, and therapeutic 2.5MV, 6MV, 10MV, and 15MV photon beams (with and without an electron build-up layer), electron beams, broad area proton beams, and proton pencil beams in the energy range of 150 MeV. High spatial resolution images up to the Nyquist frequency have been demonstrated. The physics, structure, and the imaging properties as well as the potential application of this detector will be presented and discussed.
Theoretical count rate capabilities of polycrystalline silicon photon counting imagers for CBCT applications
Albert K. Liang, Youcef El-Mohri, Qihua Zhao, et al.
Photon counting detectors are of increasing interest for clinical imaging. By measuring and digitally recording the deposited energy of each detected x-ray photon, these detectors can decrease the effect of electronic readout noise and Swank noise, potentially leading to improved image quality at decreased dose. The energy information of each photon can also be used to perform advanced imaging techniques such as material separation and k-edge imaging. Photon counting detectors employing crystalline silicon for the detector backplane are currently used for mammographic imaging and are under active investigation for fan-beam breast computed tomography (BCT). Our group has been exploring the possibility of creating monolithic, large-area photon counting detectors in order to perform cone-beam CT (CBCT) for BCT and radiation therapy (kV CBCT) applications. The detectors employ polycrystalline silicon (poly-Si) – a thin-film material that can be used to economically manufacture monolithic, large-area backplanes. In addition, poly-Si transistors have demonstrated good radiation damage resistance. The introduction of poly-Si-based photon counting detectors to BCT and kV CBCT would combine the benefits of photon counting with those of CBCT. However, one major challenge is designing circuits that are capable of handling the x-ray fluxes associated with these applications. A previously developed simulation methodology has been employed to model various levels of input flux in order to evaluate the count rate capabilities of poly-Si circuit designs. In this paper, the count rate capabilities of a promising poly-Si amplifier design under BCT and kV CBCT conditions are reported.
Spectral Imaging
Physical modeling and performance of spatial-spectral filters for CT material decomposition
Material decomposition for imaging multiple contrast agents in a single acquisition has been made possible by spectral CT: a modality which incorporates multiple photon energy spectral sensitivities into a single data collection. This work presents an investigation of a new approach to spectral CT which does not rely on energy-discriminating detectors or multiple x-ray sources. Instead, a tiled pattern of K-edge filters is placed in front of the x-ray source to create spatially encoded spectral data. For improved sampling, the spatial-spectral filter is moved continuously with respect to the source. A model-based material decomposition algorithm is adopted to directly reconstruct multiple material densities from projection data that are sparse in each spectral channel. Physical effects associated with the x-ray focal spot size and motion blur for the moving filter are expected to impact overall performance. In this work, those physical effects are modeled and a performance analysis is conducted. Specifically, experiments are presented with simulated focal spot widths between 0.2 mm and 4.0 mm. Additionally, filter motion blur is simulated for linear translation speeds between 50 mm/s and 450 mm/s. The performance differential between a 0.2 mm and a 1.0 mm focal spot is less than 15%, suggesting feasibility of the approach with realistic x-ray tubes. Moreover, for reasonable filter actuation speeds, higher speeds are shown to decrease error (due to improved sampling) despite motion-based spectral blur.
Multi-energy CT with triple x-ray beams and photon-counting-detector CT for simultaneous imaging of two contrast agents: an experimental comparison
Multi-energy CT (MECT) enabled by energy-resolved photon-counting-detector CT (PCD-CT) is promising for material-specific imaging with multiple contrast agents. However, non-idealities of the PCD such as pulse pileup, K-edge escape, and charge sharing may degrade the spectral performance. To perform MECT, an alternative approach was proposed by extending a “Twin Beam” design to a dual-source CT scanner with energy-integrating detectors (EID) by operating one or both sources in the “Twin Beam” mode to acquire three (triple-beam configuration) or four (quadruple-beam configuration) distinct X-ray beam measurements. Previous computer simulation studies demonstrated that the image quality and dose efficiency of the triple-beam configuration were comparable to that in PCD-CT for a three-material decomposition task involving iodine, bismuth, and water. The purpose of this work is to experimentally validate the proposed triple-beam MECT technique in comparison with PCD-CT. To mimic the dual-source triple-beam acquisition, two separate scans, one at 80 kV and the other at 120 kV operated in the “Twin Beam” mode, were performed on a single-source CT scanner. Two potential clinical applications of MECT for multiple contrast agents were investigated: iodine/gadolinium for biphasic liver imaging and iodine/bismuth for small bowel imaging. The results indicate that the imaging performance of the EID-based MECT may be comparable to that on the current PCD-CT platform for both the iodine/gadolinium and the iodine/bismuth material decomposition tasks.
Spectrum optimization in photon counting detector based iodine K-edge CT imaging
Iodine K-edge CT imaging utilizes the sudden increase in the attenuation coefficient of iodine when the x-ray energy exceeds the K-shell binding energy of iodine. Early works on K-edge CT used multiple K-edge filters to generate different quasi-monoenergetic spectra with mean energies that straddled the iodine K-edge, and then multiple projections acquired with these spectra were processed to enhance the sensitivity of imaging iodine. Recent developments in energy-resolving photon counting detector (PCD) technology offer the potential for single-shot K-edge CT imaging. However, the performance of PCD-based iodine K-edge CT is often limited by the relatively low energy of the iodine K-edge (33.2 keV) compared with the mean energy of a polychromatic spectrum used in CT. This work explored the potential of introducing an iodine beam filter to PCD-based iodine K-edge CT to improve its imaging performance. To optimize the beam filtration condition, a realistic energy response function of an experimental PCD system was used when calculating the Cramér-Rao Lower Bounds (CRLBs) of three-material (iodine, bone, and water) decomposition estimators for each filtration condition. Experimental studies with a benchtop PCD CT system were performed to confirm the CRLB results. Both theoretical and experimental results demonstrated that by using an optimized iodine filter, quantitative accuracy of material basis images was improved. Compared with a commercial dual-energy-CT system, the optimized experimental K-edge CT system effectively reduced residual iodine signal in the bone basis image and reduced residual bone signal in the water-basis image.
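A sketch of the CRLB machinery referred to above, with illustrative numbers rather than the paper's measured PCD energy response: for Poisson counts lambda_b = N_b exp(-sum_j mu_bj a_j) in energy bin b, the Fisher information is I_jk = sum_b (1/lambda_b)(dlambda_b/da_j)(dlambda_b/da_k), and the CRLB on each basis-material estimate is the corresponding diagonal entry of I^{-1}.

```python
import numpy as np

# CRLB of basis-material estimates from Poisson energy-bin counts.
# mu holds per-bin effective attenuation for each basis material
# (illustrative values, not a measured detector response).
def crlb(N, mu, a):
    lam = N * np.exp(-mu @ a)            # expected counts per energy bin
    grad = -mu * lam[:, None]            # dlambda_b / da_j
    fisher = (grad.T / lam) @ grad       # sum_b grad_j grad_k / lambda_b
    return np.diag(np.linalg.inv(fisher))

N = np.array([1e5, 1e5, 1e5])            # incident counts in three bins
mu = np.array([[0.30, 0.80],             # rows: energy bins; cols: materials
               [0.25, 0.45],
               [0.20, 0.30]])
a = np.array([2.0, 0.5])                 # basis-material thicknesses

var = crlb(N, mu, a)
assert np.all(var > 0)
# Ten times the photons lowers the variance bound tenfold:
assert np.allclose(crlb(10 * N, mu, a), var / 10)
```

Optimizing the filtration amounts to choosing the dose-matched spectrum (the N and mu above) that minimizes these bounds for the iodine basis.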
Theoretical feasibility of dual-energy functional x-ray imaging of respiratory disease
We propose a two-dimensional (2D) contrast-enhanced dual-energy (DE) approach for functional x-ray imaging of respiratory disease. With this approach, non-radioactive xenon is used to provide contrast between ventilated regions of the lung and unventilated regions of the lung (i.e. ventilation defects); DE subtraction is used to suppress rib structures from 2D thoracic images. We modeled theoretically the signal-to-noise ratio (SNR) and area under the receiver operating characteristic curve (AUC) of a human observer for a defect present vs. defect absent binary classification task under signal-known-exactly/background-known-exactly conditions. Our model accounted for the size of ventilation defects, contrast of ventilation defects, quantum noise, finite spatial resolution, x-ray attenuation and observer efficiency. We modeled spherical defects with diameters up to 2.5 cm, and contrast and noise levels relevant for imaging of children, adolescents, adult males and adult females. Quantum noise and spatial resolution properties were calculated assuming an ideal energy-integrating x-ray detector. All calculations were performed assuming low-energy and high-energy applied tube voltages of 70 kV and 140 kV, respectively, with 2 mm of added copper filtration on the high-energy spectrum, and a total entrance exposure of 18 mR, which is typical for anterior-posterior thoracic imaging procedures. Our analysis shows that an AUC of 0.85 can be achieved for defect diameters as small as 1.1 cm, 1.2 cm, 1.3 cm and 1.4 cm for children ages 2 to 8, adolescents ages 9 to 14, adult males and adult females, respectively. Our results suggest that the DE approach proposed here warrants further investigation as a low-dose, low-cost alternative to existing approaches for functional imaging of respiratory disease.
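For context, the standard relation linking the detectability index d' to AUC for an equal-variance SKE/BKE binary task is AUC = Phi(d'/sqrt(2)); the AUC of 0.85 quoted above then corresponds to d' of roughly 1.5.

```python
import math

# AUC = Phi(d'/sqrt(2)) with Phi the standard normal CDF; using
# Phi(x) = (1 + erf(x/sqrt(2)))/2 gives AUC = (1 + erf(d'/2))/2.
def auc_from_dprime(d):
    return 0.5 * (1.0 + math.erf(d / 2.0))

assert auc_from_dprime(0.0) == 0.5                 # chance performance
assert abs(auc_from_dprime(1.47) - 0.85) < 0.01    # d' ~ 1.5 -> AUC ~ 0.85
```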
Comparison study of dual-energy techniques in chest radiography
Dual-energy (DE) technology is useful in chest radiography because it can separate anatomical structures such as bone and soft tissue. The standard log subtraction (SLS), simple smoothing of the high-energy image (SSH), anti-correlated noise reduction (ACNR), and a general linear noise reduction algorithm (GLNR) are used as conventional DE techniques to separate bone and soft tissue. However, conventional DE techniques cannot accurately decompose the anatomical structures because they assume a linear relationship in X-ray imaging. This assumption can increase quantum noise, cause loss of normal anatomy, and make lesions more difficult to detect. In this study, we propose a non-linear DE technique which requires a step to calculate the coefficients in advance using a calibration phantom. The calibration phantom was composed of aluminum and PMMA, and the non-linear coefficients for soft tissue and bone were calculated using a quadratic fitting model. The results demonstrated that the non-linear DE technique showed higher contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR) and figure of merit (FOM) at 60/70 kVp and 130 kVp. In addition, it showed better performance and image quality than conventional DE techniques in terms of material decomposition capability. In conclusion, the non-linear DE technique is expected to increase diagnostic accuracy in chest radiography.
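The linearity assumption criticized above is easiest to see in the standard log subtraction (SLS) baseline, sketched here in a monoenergetic toy setting with illustrative attenuation values (not measured data): with monoenergetic low/high beams, a single weight w = mu_bone_high / mu_bone_low cancels bone exactly.

```python
import math

# SLS in a monoenergetic toy model: I = I0 * exp(-mu_s*t_s - mu_b*t_b),
# and the weighted log subtraction cancels the bone thickness t_b.
mu_s_lo, mu_b_lo = 0.25, 0.60      # soft tissue / bone at low energy (assumed)
mu_s_hi, mu_b_hi = 0.18, 0.30      # ... at high energy (assumed)

def measure(t_s, t_b, mu_s, mu_b, I0=1.0):
    return I0 * math.exp(-mu_s * t_s - mu_b * t_b)

def soft_tissue_image(I_lo, I_hi, w=mu_b_hi / mu_b_lo):
    return -(math.log(I_hi) - w * math.log(I_lo))

t_s, t_b = 10.0, 2.0
s = soft_tissue_image(measure(t_s, t_b, mu_s_lo, mu_b_lo),
                      measure(t_s, t_b, mu_s_hi, mu_b_hi))
s_no_bone = soft_tissue_image(measure(t_s, 0.0, mu_s_lo, mu_b_lo),
                              measure(t_s, 0.0, mu_s_hi, mu_b_hi))
assert abs(s - s_no_bone) < 1e-12   # bone thickness cancelled from the signal
```

With real polychromatic spectra the effective attenuation coefficients depend on the object itself, so no single weight w cancels bone exactly; the quadratic calibration against an aluminum/PMMA phantom absorbs that non-linearity.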
Spectral data completion for dual-source x-ray CT
D. P. Clark, C. T. Badea
In the context of x-ray CT, data completion is the process of augmenting truncated projection data to avoid artifacts during reconstruction. Data completion is commonly employed in dual-source CT where physical or hardware constraints limit the field of view covered by one of the two imaging chains. Practically, data completion is accomplished by extrapolating missing data based on the imaging chain with the full field of view, including some reweighting to approximate any spectral differences. While this approach works well in clinical applications, there are applications which would benefit from improved spectral estimation over the full field of view, including model-based iterative reconstruction, contrast-enhanced abdominal imaging of large patients, and combined temporal and spectral imaging. Additionally, robust spectral data completion methods could provide an alternative to interior tomography for dose management in cardiac and spectral CT applications. To illustrate challenges with and potential machine-learning (ML) solutions for the spectral data completion problem, we present two realistic simulation experiments. A circular, cone-beam experiment disambiguates three contrast materials with dual-energy data and uses a generative network to inject 3D geometric information into a 2D, image-domain completion problem. A second clinical MDCT experiment uses a sophisticated variational network based on the split Bregman method and is structured to integrate directly into existing analytical reconstruction pipelines. While further work is required to establish performance limits and expectations, the results of both experiments strongly recommend the use of ML in spectral data completion problems.
Breast Imaging
The feasibility study for classification of breast microcalcifications based on photon counting spectral mammography
The purpose of this study was to evaluate the feasibility of spectral mammography using the dual-energy method to noninvasively distinguish between type I (calcium oxalate, CO) and type II (calcium hydroxyapatite, HA) microcalcifications. The two types of microcalcifications are difficult to distinguish due to their similar linear attenuation coefficients. In order to improve the detection efficiency of microcalcifications, we used a photon counting detector with energy discrimination capability, and microcalcifications were classified using optimal energy bins. Two energy bins were used to obtain dual-energy images. In this study, a photon counting spectral mammography system was simulated using the Geant4 Application for Tomographic Emission (GATE) simulation tools. The thickness of the breast phantom was 3 cm, and microcalcifications of various sizes ranging from 130-550 μm were embedded into the breast phantom. Microcalcifications were classified as being calcium hydroxyapatite or calcium oxalate based on a score calculated from the dual-energy images. According to the results, the measured CNR of calcium hydroxyapatite (HA) was higher than that of calcium oxalate (CO) in the conventional single-energy image. In addition, the two types of microcalcifications were distinguished using the dual-energy analysis method. Classification performed best at a high energy of 50 kVp and an energy threshold of 30 keV. These results indicate that classification performance was improved when the difference between the low energy image and the high energy image was used. This study demonstrated the feasibility of photon counting spectral mammography for classification of breast microcalcifications. We expect that the dual-energy method can reduce the frequency of biopsy and discriminate microcalcifications in mammography. These results are expected to potentially improve the efficiency of early breast cancer diagnosis.
Cascaded-systems analysis of signal and noise in contrast-enhanced spectral mammography using amorphous selenium photon-counting field-shaping multi-well avalanche detectors (SWADs)
Contrast-enhanced digital mammography using spectroscopic x-ray detectors may improve image quality relative to existing contrast-enhanced breast imaging approaches. We present a framework for theoretical modelling of signal and noise in contrast-enhanced spectral mammography (CESM) and apply our framework to systems that use a spectroscopic amorphous selenium (a-Se) field-shaping multi-well avalanche detector (SWAD), which uses avalanche gain to overcome the low conversion gain of a-Se. We modelled an approach that uses an a-Se SWAD with 100x100 μm2 detector elements, a converter thickness of 300 μm, an avalanche gain of ten, a 10-keV electronic noise floor and two energy bins. We modelled the influence of quantum efficiency, conversion gain, avalanche gain, characteristic emission, electronic noise, energy thresholding and image subtraction on the modulation transfer function (MTF), noise power spectrum (NPS) and iodine contrast. We investigated the choice of energy thresholds for the task of visualizing iodine signals. Our analysis demonstrates that reabsorption of characteristic photons yields energy-bin-dependent MTFs. As a result, spectral subtraction of low-energy and high-energy images enhances high spatial frequencies relative to low spatial frequencies. This effect, combined with better noise performance when using the lowest possible threshold to separate low-energy photons from electronic noise, results in better imaging performance than when reabsorption is suppressed through thresholding. Our theoretical framework enables quantifying trade-offs between contrast, spatial resolution and noise for analysis of novel approaches for CESM, and provides a theoretical platform for comparison of CESM with existing approaches.
Ultra-short, high-dose rate digital x-ray tube based on carbon nanotube emitters for advanced cone-beam breast computed tomography
Jun-Tae Kang, Jin-Woo Jeong, Sora Park, et al.
Cone-beam breast computed tomography (CBCT) is a promising modality for breast screening and diagnosis, providing complete 3-dimensional images with far less painful breast compression during imaging than conventional mammography and tomosynthesis. To date, all CBCT systems, including a commercial one by Koning, have utilized a typical filament-based x-ray tube. However, the filament-based x-ray tube, even in a grid type, has a strict limit on time resolution (longer than a few milliseconds) and a limited dose rate, causing large motion blur in CBCT projection images. Micro-calcifications of less than 1 mm in early breast cancer can hardly be distinguished using conventional CBCT systems. We address this problem by adopting a fast digital x-ray tube based on carbon nanotube (CNT) field emitters. We, for the first time, developed a rotating-anode x-ray tube with CNT emitters for advanced CBCT. The x-ray tube consisted of CNT paste emitters and a rotating anode made of a W/Re target, and was fully vacuum-sealed with a glass envelope. Ultra-short x-ray pulses of less than a millisecond with a moderately high current of more than 200 mA and a focal spot of ~0.3 in nominal value were successfully obtained. We performed preliminary studies on CBCT imaging using the digital x-ray tube and acquired 300 projection images in 10 s, greatly reducing motion blur in the images. It is expected that the developed CNT digital x-ray tube will substantially improve CBCT imaging and thereby promote the CBCT modality in breast screening and diagnosis.
Evaluation of silver sulfide nanoparticles as a contrast agent for spectral photon-counting digital mammography: a phantom study
The objective of our study was to evaluate the feasibility of using silver sulfide nanoparticles (Ag2S-NP) as a contrast agent for photon-counting mammography. The efficacies of Ag2S-NP and iodine contrast were evaluated using a contrast-embedded gradient phantom. The phantom was constructed using tissue-equivalent materials and varied continuously in composition from 100% glandular tissue to 100% adipose tissue. Each contrast agent was prepared at eight different concentrations: 1, 2, 5, 10, 15, 20, 25, and 30 mg/mL. Tubes of contrast agent were inserted into holes bored through the phantom in the direction of varying glandularity. Various images of the phantom were acquired by altering the acquisition parameters (kV, mAs, and high bin fraction). A range of beam energies from 26 kV to 40 kV was tested in this study. Our results demonstrate that for a given contrast agent, the contrast-to-noise ratio (CNR) is linearly proportional to concentration, and its magnitude is dependent on the kV of the spectrum. At mammographic energies, the Ag2S-NP contrast increases with increasing kV and increasing solution concentration. Comparatively, the iodine signal becomes detectable only when the kV of the image is above iodine’s K-edge (33.2 keV). This indicates that the optimal energy for imaging iodine may exceed the clinical mammographic energy range. In summary, we have demonstrated the feasibility of using Ag2S-NP as a contrast agent for breast imaging. Preliminary results from spectral photon-counting mammography indicate that Ag2S-NP contrast has a significantly higher signal than iodine, especially when imaging at lower energies.
Validation of a method to simulate the acquisition of mammographic images with different techniques
A previously developed image modification method used to simulate mammographic images as if acquired with different detectors and at different dose levels was extended to account for changes in the acquisition technique factors. The method was validated using two 3D printed realistic breast phantoms denoted ‘Anna’ and ‘Barbara’ of thickness 34 mm and 50 mm, respectively. A complete imaging system characterization of a commercial digital mammography system was performed. Using this system, three images from each phantom were acquired with Mo/Mo 26 kV and Rh/Rh 29 kV spectra, respectively, at two dose levels each, denoted the original images. To validate this method, additional images at a lower dose and at lower and higher tube voltages, denoted the target images, were also acquired. The original images were modified to simulate the target images, resulting in the simulated images. The signal in the original images was changed taking into account the target image conditions. Additional noise was also added to match that in the target images. The power spectra of the target and simulated images for each acquisition factor were found to match to within an average difference of 4% and 5% for the ‘Anna’ and ‘Barbara’ phantoms, respectively. Also, average structural similarity indices of 0.999 and 0.975, respectively, were obtained, indicating that the images are very similar. The method was found to accurately reflect acquisition with different techniques, as tested with 3D printed breast phantoms of different thicknesses, making it possible to use this method for future image quality evaluation studies.
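The noise-matching step has a simple core, sketched here under a pure Poisson-like quantum noise assumption (the published method additionally accounts for the detector characterization, noise power spectra, and technique factors):

```python
import numpy as np

# Simulate a lower-dose acquisition from a higher-dose image: scale the
# signal to the target dose, then add zero-mean noise so that the total
# variance matches the target acquisition (target var minus the variance
# already present in the scaled original).
rng = np.random.default_rng(3)

def simulate_lower_dose(img, dose_ratio):
    scaled = img * dose_ratio                    # mean signal at target dose
    extra_var = scaled - (dose_ratio**2) * img   # target var - scaled-original var
    return scaled + rng.normal(0.0, np.sqrt(extra_var))

mean = 10000.0
img = rng.poisson(mean, size=100000).astype(float)  # original: Poisson, var = mean
half = simulate_lower_dose(img, 0.5)

assert abs(half.mean() / (0.5 * mean) - 1) < 0.01   # correct mean at half dose
assert abs(half.var() / (0.5 * mean) - 1) < 0.05    # Poisson-like variance restored
```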
Cone Beam CT
A robotic x-ray cone-beam CT system: trajectory optimization for 3D imaging of the weight-bearing spine
Purpose: We optimize scan orbits and acquisition protocols for 3D imaging of the weight-bearing spine on a twin-robotic x-ray system (Multitom Rax). An advanced Cone-Beam CT (CBCT) simulation framework is used for systematic optimization and evaluation of protocols in terms of scatter, noise, imaging dose, and task-based performance in 3D image reconstructions. Methods: The x-ray system uses two robotic arms to move an x-ray source and a 43×43 cm2 flat-panel detector around an upright patient. We investigate two classes of candidate scan orbits, both with the same source-axis distance of 690 mm: circular scans with variable axis-detector distance (ADD, air gap) ranging from 400 to 800 mm, and elliptical scans, where the ADD smoothly changes between the anterior-posterior view (ADDAP) and the lateral view (ADDLAT). The study involved elliptical orbits with a fixed ADDAP of 400 mm and variable ADDLAT ranging from 400 to 800 mm. Scans of the human lumbar spine were simulated using a framework that included accelerated Monte Carlo scatter estimation and realistic models of the x-ray source and detector. In the current work, x-ray fluence was held constant across all imaging configurations, corresponding to 0.5 mAs/frame. Performance of circular and elliptical orbits was compared in terms of scatter and scatter-to-primary ratio (SPR) in projections, and contrast, noise, contrast-to-noise ratio (CNR), and truncation (field of view, FOV) in 3D image reconstructions. Results: The highest mean SPR was found in lateral views, ranging from ~5 at ADD of 300 mm to ~1.2 at ADD of 800 mm. Elliptical scans enabled image acquisition with reduced lateral SPR and almost constant SPR across projection angles. The improvement in contrast across the investigated range of air gaps (due to reduction in scatter) was ~2.3x for circular orbits and ~1.9x for elliptical orbits.
The increase in noise associated with increased ADD was more pronounced for circular scans (~2x) compared to elliptical scans (~1.5x). The circular orbit with the best CNR performance (ADD=600 mm) yielded ~10% better CNR than the best elliptical orbit (ADDLAT=600 mm); however, the elliptical orbit increased FOV by ~16%. Conclusion: The flexible imaging geometry of the robotic x-ray system enables development of highly optimized scan orbits. Imaging of the weight-bearing spine could benefit from elliptical detector trajectories to achieve improved tradeoffs in scatter reduction, noise, and truncation.
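The abstract does not give the exact parameterization of the elliptical detector path; a minimal sketch, assuming the ADD traces an ellipse whose semi-axes are ADDAP and ADDLAT as the gantry angle sweeps from the AP view (θ = 0) to the lateral view (θ = π/2):

```python
import numpy as np

def elliptical_add(theta, add_ap=400.0, add_lat=800.0):
    """Axis-detector distance (mm) at gantry angle theta (rad) for an
    elliptical orbit: add_ap at the AP view, add_lat at the lateral view."""
    a, b = add_ap, add_lat
    # polar radius of an ellipse with semi-axes a (theta=0) and b (theta=pi/2)
    return a * b / np.sqrt((b * np.cos(theta)) ** 2 + (a * np.sin(theta)) ** 2)
```

At intermediate angles the air gap varies smoothly between the two extremes, which is what yields the nearly constant SPR across projection angles reported above.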
Misalignment compensation for ultra-high-resolution and fast CBCT acquisitions
Magdalena Herbst, Christoph Luckner, Julia Wicklein, et al.
The acquisition time of cone-beam CT (CBCT) systems is limited by several technical constraints. One important factor is the mechanical stability of the system components, especially when using C-arm or robotic systems. As a result, today’s acquisition protocols are performed at a system speed for which geometrical reproducibility can be guaranteed. From an application point of view, however, faster acquisition times are desirable, since the time spent breath-holding or restrained in a static position has a direct impact on patient comfort and image quality. Moreover, for certain applications, such as imaging of extremities, a higher resolution might offer additional diagnostic value. In this work, we show that it is possible to intentionally exceed the conventional acquisition limits by accepting geometrical inaccuracies. To compensate for deviations from the assumed scanning trajectory, a marker-free auto-focus method based on the gray-level histogram entropy was developed and evaluated. First experiments on a modified twin-robotic X-ray system (Multitom Rax, Siemens Healthcare GmbH, Erlangen, Germany) show that the acquisition time could be reduced from 14 s to 9 s while maintaining the same high level of image quality. In addition, optimized acquisition protocols make ultra-high-resolution imaging techniques accessible.
CBCT auto-calibration by contour registration
Susanne Maur, Dzmitry Stsepankou, Jürgen Hesser
This paper discusses a novel strategy for estimating projection matrices using the contours of anatomical structures tracked on x-ray projection images. It establishes an auto-calibration routine that calculates the geometrical projection parameters from unknown patient geometry based on iterative reconstruction and registration. By introducing the uncertainty of the calibration parameters into the registration, we achieve a robust correction for broad types of patient motion. Because our method does not rely on consistency between projection data and tomographic reconstruction, it is robust to truncation, noise, and other typical artifacts of CBCT reconstruction. We evaluated the proposed auto-calibration method on digitally reconstructed radiographs (DRRs) of a CT head scan. In a standard dental CBCT setup, our approach shows an average recovery of volume sharpness of 83.67% for different motion types.
Image-based deformable motion compensation for interventional cone-beam CT
A. Sisniega, S. Capostagno, W. Zbijewski, et al.
Purpose: Interventional cone-beam CT (CBCT) is used for 3D guidance in interventional radiology (IR) procedures in the abdomen, with extended presence in trans-arterial chemoembolization (TACE) interventions for liver cancer. Image quality in this scenario is challenged by complex motion of soft-tissue abdominal structures, and by long acquisition times. We propose an image-based approach to estimate complex deformable motion through a combination of locally rigid motion trajectories. Methods: Deformable motion is estimated by minimizing a multi-region autofocus cost function. Motion is considered locally rigid for each region of interest (ROI) and the deformable motion field is obtained through spatial spline-based interpolation of the local trajectories. The multi-component cost function includes two regularization terms; one to penalize abrupt temporal transitions, and another to penalize abrupt spatial changes in the trajectory. Performance of deformable motion compensation was assessed in simulation studies with a digital abdomen phantom featuring a motion-induced deformable liver in static surrounding anatomy. Spherical inserts (4–12 mm diameter, −100 to 100 HU contrast) were placed in the liver. Image quality was evaluated by structural similarity (SSIM) with the static image as reference. Results: Motion compensated liver images showed better delineation of structure boundaries and recovery of distorted spherical shapes compared to their motion-corrupted counterparts. Consistent increase in SSIM was observed after motion compensation for the range of motion amplitudes studied (4 mm to 10 mm), showing 11% and 26% greater SSIM for 4 mm and 10 mm motion, respectively. Conclusion: The results indicate feasibility of image-based deformable motion compensation in soft-tissue abdominal CBCT imaging.
X-ray Phase Contrast Imaging
Joint-reconstruction-enabled partial-dithering strategy for edge-illumination x-ray phase-contrast tomography
Edge-illumination X-ray phase-contrast tomography (EIXPCT) is an emerging imaging technology in which partially opaque gratings are used with laboratory-based X-ray sources to estimate the distribution of the complex-valued refractive index. Spatial resolution in EIXPCT is mainly determined by the grating period of the sample mask, but can be significantly improved by a dithering technique in which multiple projection images are acquired per tomographic view angle while the object is moved over sub-pixel distances. Drawbacks of dithering include increased data acquisition times and radiation doses. Motivated by the flexibility in data acquisition designs enabled by a recently developed joint reconstruction (JR) method, a novel partial-dithering strategy for data acquisition is proposed. In this strategy, dithering is implemented at only a subset of the tomographic view angles. This yields spatial resolution comparable to that of the conventional full-dithering strategy, in which dithering is performed at every view angle, while substantially decreasing the acquisition time. The effect of the dithering parameters on image resolution is explored.
Impact of the sensitivity factor on the signal-to-noise ratio in grating-based phase contrast imaging
Xu Ji, Ran Zhang, Ke Li, et al.
The sensitivity factor of a grating-based x-ray differential phase contrast (DPC) imaging system determines how much fringe shift is observed for a given refraction angle. It is commonly believed that increasing the sensitivity factor will improve the signal-to-noise ratio (SNR) of the phase signal. However, this may not always be the case once the intrinsic phase-wrapping effect is taken into consideration. In this work, a theoretical derivation is provided to quantify the relationship between sensitivity and SNR for a given refraction angle, exposure level, and grating-based x-ray DPC system. The derivation shows that the expected phase signal is not always proportional to the sensitivity factor and may even decrease when the sensitivity factor becomes too large. The noise variance of the signal is not always solely dependent on the exposure level and fringe visibility, but may become signal-dependent under certain circumstances. As a result, the SNR of the phase signal does not always increase with higher sensitivity. Numerical simulation studies were performed to validate the theoretical models. Results show that when the fringe visibility and exposure level are fixed, there exists an optimal sensitivity factor that maximizes the SNR for a given refraction angle; further increasing the sensitivity factor may decrease the SNR.
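A toy Monte Carlo (not the authors' derivation; the noise level and angles are arbitrary illustrative values) shows why the SNR can collapse once the product of sensitivity and refraction angle approaches the ±π wrapping limit:

```python
import numpy as np

rng = np.random.default_rng(0)

def wrapped_phase_snr(sensitivity, refraction_angle, sigma=0.5, n=200_000):
    """Empirical SNR of the measured fringe phase.

    The measured phase is sensitivity * refraction_angle plus Gaussian
    phase noise, wrapped into (-pi, pi] as in a real fringe measurement."""
    raw = sensitivity * refraction_angle + rng.normal(0.0, sigma, n)
    phi = np.angle(np.exp(1j * raw))  # wrap to (-pi, pi]
    return phi.mean() / phi.std()
```

While the unwrapped signal stays well inside (−π, π), the SNR grows with sensitivity; near the wrapping limit, part of the noise distribution wraps to ≈ −π, shrinking the mean and inflating the variance.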
Laboratory-based x-ray phase contrast CT technology for clinical intra-operative specimen imaging
Lorenzo Massimi, Charlotte K. Hagen, Marco Endrizzi, et al.
The design of an X-ray phase-contrast tomography system for intra-operative specimen imaging based on edge illumination is presented. Edge illumination makes it possible to work with large-focal-spot, polychromatic X-ray sources, reducing tomographic scan acquisition times to values compatible with clinical use while maintaining phase sensitivity in a compact device. The results collected so far show that applying this technology to breast conservation surgery has great potential to reduce re-operations, saving additional costs for healthcare services and stress for patients.
3D histopathology speckle phase contrast imaging: from synchrotron to conventional sources
Standard histopathological examination is the gold standard for many disease diagnoses, although the technique is limited in that no full 3D volume analysis is possible. Three-dimensional X-ray phase-contrast imaging (PCI) methods have developed rapidly in recent decades owing to their superior performance for imaging low-density objects and their ability to provide information complementary to attenuation-based imaging. Despite this progress, X-ray phase-contrast tomography still faces challenges on its way to becoming a routine non-invasive technique for the 3D assessment of tissue architecture in laboratory set-ups. Speckle-based imaging (SBI) forms a new class of X-ray PCI techniques, sensitive to the first derivative of the phase. The simplicity of the set-up and of its implementation gives SBI many advantages, such as having no field-of-view or resolution limitation, in addition to low requirements on beam coherence. These advantages make SBI a good candidate for transfer to conventional sources. In this work, we present preliminary results obtained on a conventional μCT system and compare them with data acquired at the European Synchrotron. We used a new phase retrieval algorithm based on optical energy conservation and applied the method to both phantoms and biological samples in order to evaluate its quantitativeness. A comparison with previously available speckle-tracking algorithms is also performed. We demonstrate that combining the phase retrieval method with a standard μCT can achieve high resolution and high contrast within a few minutes, with image quality comparable to results obtained using synchrotron light.
Low dose imaging with simultaneous scatter, attenuation and mesh-based phase contrast
Rohaan Khan, Brenda Adhiambo, Sean Starr-Baier, et al.
X-ray phase differences are a thousand times greater than attenuation differences, but phase imaging has found limited clinical use due to requirements on x-ray coherence which may not be easily translated to clinical practice. Instead, this work employs a conventional source to create structured illumination with a simple wire mesh. The system simultaneously collects phase, attenuation, and scatter information. X-ray coherent scatter allows differentiation between tissue types with potentially much higher contrast than conventional radiography. Coherent-scatter images are collected with simple 1D slot-scanning and an angular shield to select signatures of interest from a relatively large region.
Photon Counting Imaging
Indirect photon-counting x-ray imaging using CMOS Photon Detector (CPD)
Toshiyuki Nishihara, Hiroyasu Baba, Masao Matsumura, et al.
CMOS photon detectors (CPDs) are recently proposed photon-sensing devices utilizing the latest CMOS image sensor (CIS) technologies [1]. CPDs are non-electron-multiplying devices whose pixels have a fully depleted photodiode and a readout noise of sub-electron RMS even at room temperature. Using a 15 μm pixel CPD test device coupled to a CsI(Tl) scintillator, we successfully obtained photon-count X-ray images. A Hamamatsu Photonics scintillator with a 150 μm CsI(Tl) layer coupled to a fiber optic plate (FOP) of 3 mm thickness was diced in dry conditions and glued directly to the sensor surface. X-ray photons were injected from an X-ray tube with a W target at accelerating voltages of 30 kV and 45 kV. Each X-ray photon creates a scintillation light spot in the captured images, and the injected position and photon energy are determined by integrating multiple pixel outputs at that spot. X-ray energy distributions were obtained at both 30 kV and 45 kV with reasonable differences. A modulation transfer function (MTF) of over 0.7 at 10 LP/mm was achieved by mapping injected positions at 30 kV. Photon-count images for slanted-edge MTF measurements, as well as of a 10 LP/mm X-ray test chart, were obtained and compared with conventional energy-integrating images from the same sensing device, confirming the superior resolution of photon counting. This indirect X-ray photon-counting technique using a CPD has the potential to capture critical X-ray information for medical applications by simultaneously determining accurate injected positions of X-ray photons and their energies.
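A hypothetical sketch of the spot-integration idea (sum a small neighborhood around a spot peak for the energy, take the intensity centroid for a sub-pixel position); it assumes the spot lies away from the frame border and is not the paper's actual pipeline:

```python
import numpy as np

def integrate_spot(frame, peak, halfwidth=1):
    """Integrate a (2*halfwidth+1)^2 neighborhood around a scintillation-spot peak.

    Returns (row centroid, column centroid, integrated energy in pixel units).
    Assumes the neighborhood does not cross the frame border."""
    r, c = peak
    patch = frame[r - halfwidth:r + halfwidth + 1, c - halfwidth:c + halfwidth + 1]
    energy = patch.sum()
    rows = np.arange(r - halfwidth, r + halfwidth + 1)
    cols = np.arange(c - halfwidth, c + halfwidth + 1)
    # intensity-weighted centroid gives the sub-pixel injection position
    row_c = (patch.sum(axis=1) * rows).sum() / energy
    col_c = (patch.sum(axis=0) * cols).sum() / energy
    return row_c, col_c, energy
```

For example, a spot split evenly across two neighboring pixels yields a centroid halfway between them, which is how sub-pixel position mapping can improve the measured MTF.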
Simulation model for evaluating energy-resolving photon-counting CT detectors based on generalized linear-systems framework
Mats Persson, Norbert J. Pelc
Photon-counting detectors are interesting candidates for next-generation clinical computed tomography scanners, promising improved contrast-to-noise ratio, spatial resolution, and energy information compared to conventional energy-integrating detectors. Most attention is focused on cadmium telluride (CdTe) or cadmium zinc telluride (CZT) detectors, but silicon (Si) has been proposed as an alternative. We present detector simulation models fitted to published spectral response data for CdTe and Si, and use linear-systems theory to evaluate the spatial-frequency-dependent DQE for lesion quantification and detection. Our fitted spectral response is consistent with Gaussian charge clouds with σ = 20.5 µm independent of energy for CdTe, and σ = 17 µm at 60 keV with an energy dependence of E^0.54 for Si. For a silicon strip detector with 0.5 × 0.5 mm2 pixels separated by a 1D grid of 20 µm tungsten foils, the zero-frequency DQE for iodine detection is 0.43 for 30 mm detector absorption length and 0.46 for 60 mm. For iodine quantification in a water-iodine decomposition, the DQE is 0.26 for 30 mm and 0.27 for 60 mm Si. Compared to this detector, the DQE of a 1.6 mm thick CdTe detector with 0.225 mm pixels and two energy bins is 11–36% higher for water and iodine detection but 28–51% lower for material quantification. The predicted performance of Si is competitive with CdTe, suggesting that further consideration is warranted.
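The fitted charge-cloud widths quoted above can be written directly as a small helper (numerical values taken from the abstract; the E^0.54 scaling applies to Si only):

```python
def charge_cloud_sigma_um(energy_kev, material="Si"):
    """Fitted Gaussian charge-cloud standard deviation in micrometers."""
    if material == "CdTe":
        return 20.5  # independent of photon energy
    # Si: 17 um at 60 keV, scaling as E^0.54
    return 17.0 * (energy_kev / 60.0) ** 0.54
```

So at lower photon energies the Si charge cloud is narrower than the (energy-independent) CdTe one, consistent with the fitted models.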
Increased count-rate performance and dose efficiency for silicon photon-counting detectors for full-field CT using an ASIC with adjustable shaping time
Christel Sundberg, Mats Persson, Andreas Ehliar, et al.
Photon-counting silicon strip detectors are attracting interest for use in next-generation CT scanners. For silicon detectors, a low noise floor is necessary for good dose efficiency. A low noise floor can be achieved with a long shaping time in the readout-electronics filter; this, however, also lengthens the pulse, resulting in a long deadtime and thereby degraded count-rate performance. Since the flux typically varies greatly during a CT scan, a high count-rate capability is not required for all projection lines, and it would therefore be desirable to use more than one shaping time within a single scan. To evaluate the potential benefit of doing so, it is of interest to characterize the relation between the shaping time, the noise, and the resulting pulse shape. In this work we present noise and pulse-shape measurements on a photon-counting detector with two different shaping times, along with a complementary simulation model of the readout electronics. We show that increasing the shaping time from 28.1 ns to 39.4 ns decreases the noise and increases the signal-to-noise ratio (SNR) by 6.5% at low count rates, and we present pulse shapes for each shaping time as measured at a synchrotron source. Our results demonstrate that the shaping time plays an important role in optimizing the dose efficiency of a photon-counting x-ray detector.
Frequency-dependent MTF and DQE of photon-counting x-ray imaging detectors (Conference Presentation)
Theoretical modeling of the performance of x-ray imaging detectors enables understanding of the relationships between the physics of x-ray detection and x-ray image quality, and enables theoretical optimization of novel x-ray imaging techniques and technologies. We present an overview of a framework for theoretical modeling of the frequency-dependent signal and noise properties of single-photon-counting (SPC) and energy-resolving x-ray imaging detectors. We show that the energy-response function, large-area gain, modulation transfer function (MTF), noise power spectrum (NPS) (including spatio-energetic noise correlations), and detective quantum efficiency (DQE) of SPC and energy-resolving x-ray imaging detectors are related through the probability density function (PDF) describing the number of electron-hole (e-h) pairs collected in detector elements following individual x-ray interactions. We demonstrate how a PDF-transfer approach can be used to model analytically the MTF and NPS, including spatio-energetic noise correlations, of SPC and energy-resolving x-ray imaging detectors. Our approach enables modeling of the combined effects of stochastic conversion gain, electronic noise, characteristic emission, characteristic reabsorption, Coulomb repulsion and diffusion of e-h pairs, and energy thresholding on the MTF and NPS. We present applications of this framework to (1) analysis of the frequency-dependent DQE of SPC systems that use cadmium telluride (CdTe) x-ray converters, and (2) analysis of spatio-energetic noise correlations in CdTe energy-resolving x-ray detectors. The developed framework provides a platform for theoretical optimization of next-generation SPC and energy-resolving x-ray imaging detectors.
Experimental study of neural network material decomposition to account for pulse-pileup effects in photon-counting spectral CT
Parker Jenkins, Taly Gilat Schmidt
Spectral CT with photon-counting detectors has demonstrated potential for improved material decomposition but is challenged by nonideal effects such as pulse pileup. The purpose of this study was to investigate the performance of neural network (NN) material decomposition under varying pulse-pileup conditions. We hypothesize that the NN can compensate for pileup effects as it learns the relationship between the spectral measurements and basis-material thicknesses through calibration. Photon-counting experiments were performed to: (1) investigate the optimal NN architecture across varying pileup conditions, (2) quantify the performance of NN material decomposition across varying pileup conditions, and (3) demonstrate material decomposition of photon-counting CT data across varying pileup conditions. The NN was trained with log-normalized spectral transmission measurements through known thicknesses of basis materials (PMMA and aluminum) at five flux levels. The trained NN was then applied to photon-counting transmission measurements through Teflon, Delrin, and Neoprene to estimate the basis-material thicknesses, and also to photon-counting CT data of a rod phantom. The optimal NN configuration remained generally consistent across the studied flux levels, so an NN configuration with five hidden-layer nodes was selected for the subsequent analysis. For the test material slabs, overall decomposition error decreased with flux for Teflon and Delrin, while increasing with flux for Neoprene. The CT experiments showed the lowest material decomposition error for the lower flux condition, but similar error for the two higher flux conditions. Pulse pileup decreased the variation in material decomposition estimates, and its effects on material decomposition error varied across materials. These preliminary results suggest that NN material decomposition may account for pulse-pileup effects in photon-counting CT.
Impacts of photon counting detector to cerebral CT angiography maximum intensity projection (MIP) images
Cerebral CT angiography (CTA) is widely used for the diagnosis of various cerebrovascular diseases, including strokes, vasculitis, and aneurysms.2–4 For the diagnosis of ischemic strokes, the availability of high-quality CTA images not only helps identify the presence and location of large-vessel occlusion but also facilitates the assessment of collateral blood supply. As another example, accurate rendering of the superficial temporal arteries is valuable in identifying vessel inflammation induced by giant cell arteritis.5 While CTA is an established clinical gold standard for imaging large cerebral arteries and veins,1 an important remaining challenge for MDCT-based CTA is its limited performance in imaging small perforating arteries with diameters below 0.5 mm.4 As a consequence, the relatively invasive artery biopsy procedure remains the current clinical gold standard for the diagnosis of giant cell arteritis.6 The use of indirect-conversion energy-integrating detectors puts an intrinsic limit on the spatial resolution of MDCT, both in-plane and along the z direction. Severe partial volume averaging effects (PVE) and the preferential weighting of high-energy photons7 are among the major reasons for the relatively poor performance of MDCT-based CTA in imaging iodinated small vessels. Photon counting detector-based CT (PCD-CT) offers potential technological solutions to these challenges. Compared with MDCT, the direct-conversion design of the PCD reduces limitations on both in-plane and through-plane spatial resolution, and the inherent equal weighting of high- and low-energy photons in PCD-CT systems improves the CNR of iodinated vessels. The purpose of this work was to theoretically and experimentally study the potential impact of PCD-CT technology on an important component of the CTA image package: the maximum intensity projection (MIP) image.
MIP is a simple 3D visualization method for displaying CTA data sets. Based on source images alone, it can be very challenging to evaluate occlusion conditions, since most vessels extend across different z positions. In comparison, a MIP image that extracts information from a much longer z range can provide clearer evidence of an occlusion; in addition, it can effectively enhance the visibility of small collateral vessels. This work first derived the statistical properties of the MIP image, then analyzed how each benefit of the PCD (improved z resolution; reduced noise autocovariance along z) propagates from the source CT images to the final MIP image. Finally, experiments were performed using a benchtop PCD-CT system and an anthropomorphic CTA phantom to showcase the significantly improved visibility of small perforating arteries.
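The MIP itself is just a maximum over the slab direction; a minimal numpy sketch (with synthetic noise rather than the paper's data) illustrates one of the statistical properties the work analyzes: the expected background value of a noise-only MIP grows with slab thickness:

```python
import numpy as np

def mip(volume, axis=0):
    # maximum intensity projection along the chosen (z) axis
    return volume.max(axis=axis)

rng = np.random.default_rng(0)
noise_vol = rng.normal(0.0, 10.0, size=(40, 64, 64))  # noise-only slab, ~HU scale

thin = mip(noise_vol[:10])   # 10-slice slab
thick = mip(noise_vol)       # 40-slice slab
# the maximum over more (nearly independent) samples has a larger expected
# value, so thicker slabs raise the noise floor of the MIP background
```

Reduced noise autocovariance along z (as offered by PCD-CT) changes the effective number of independent samples entering each maximum, which is why it propagates into the final MIP statistics.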
Algorithm
Exploring the space between smoothed and non-smooth total variation for 3D iterative CT reconstruction
V. Haase, K. Stierstorfer, K. Hahn, et al.
Because total variation (TV) is non-differentiable, iterative reconstruction using a TV penalty comes with technical difficulties. To avoid these, it is popular to use a smooth approximation of TV instead, defined using a single parameter, herein called δ, with the convention that the approximation is better when δ is smaller. To our knowledge, it is not known how well image reconstruction with this approximation can approach a converged non-smooth TV-regularized result. In this work, we study this question in the context of X-ray computed tomography (CT). Experimental results are reported with real CT data of a head phantom and supplemented with a theoretical analysis. To address our question, we make proficient use of a generalized iterative soft-thresholding algorithm that allows us to handle TV and its smooth approximation in the same framework. Our results support the following conclusions. First, images reconstructed using the smooth approximation of TV appear to converge smoothly towards the TV result as δ tends to zero. Second, the value of δ does not need to be overly small to obtain a result that is essentially equivalent to TV, implying that numerical instabilities can be avoided. Last, although it is smooth, the convergence with δ is not particularly fast, as the mean absolute pixel difference decreases only as √δ in our experiments. Altogether, we conclude that the approximation is a theoretically valid way to approximate the non-smooth TV penalty for CT, opening the door to safe utilization of a wide variety of optimization algorithms.
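The abstract does not spell out the single-parameter surrogate; a common choice, shown here purely as an illustration, replaces the gradient magnitude |∇u| with sqrt(|∇u|² + δ²), which is differentiable everywhere and approaches TV as δ → 0:

```python
import numpy as np

def _grad(u):
    """Forward-difference gradient of a 2D image, zero at the boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def tv(u):
    # non-smooth isotropic total variation
    gx, gy = _grad(u)
    return np.sqrt(gx ** 2 + gy ** 2).sum()

def tv_smooth(u, delta):
    # one common smooth surrogate, parameterized by delta (other choices exist)
    gx, gy = _grad(u)
    return np.sqrt(gx ** 2 + gy ** 2 + delta ** 2).sum()
```

The gap tv_smooth(u, δ) − tv(u) shrinks monotonically as δ decreases, mirroring the convergence behavior studied in the paper.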
Image-domain insertion of spatially correlated, locally varying noise in CT images
Noise simulation methods for computed tomography (CT) scans are powerful tools for assessing image quality at a range of doses without compromising patient care. Current state of the art methods to simulate lower-dose images from standard-dose images insert Poisson or Gaussian noise in the raw projection data; however, these methods are not always feasible. The objective of this work was to develop an efficient tool to insert realistic, spatially correlated, locally varying noise to CT images in the image-domain utilizing information from the image to estimate the local noise power spectrum (NPS) and variance map. In this approach, normally distributed noise is filtered using the inverse Fourier transform of the square root of the estimated NPS to generate noise with the appropriate spatial correlation. The noise is element-wise multiplied by the standard deviation map to produce locally varying noise and is added to the noiseless or high-dose image. Results comparing the insertion of noise in the projection-domain versus the proposed insertion of noise in the image-domain demonstrate excellent agreement. While this image-domain method will never replace projection-domain methods, it shows promise as an alternative for tasks where projection-domain methods are not practical, such as the case for conducting large-scale studies utilizing hundreds of noise realizations or when the raw data is not available.
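A minimal sketch of the insertion step described above, assuming the NPS and local standard-deviation map have already been estimated from the image (all names here are illustrative, not the authors' implementation):

```python
import numpy as np

def insert_correlated_noise(image, nps, std_map, rng=None):
    """Add spatially correlated, locally varying noise to a (high-dose) image.

    White Gaussian noise is filtered with sqrt(NPS) in the Fourier domain to
    impose the spatial correlation, then scaled element-wise by the local
    standard-deviation map before being added to the image."""
    rng = rng or np.random.default_rng()
    white = rng.standard_normal(image.shape)
    colored = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps)))
    colored /= colored.std()  # unit variance before local scaling
    return image + colored * std_map
```

With a flat (white) NPS this reduces to ordinary uncorrelated noise with the prescribed local variance; a low-pass NPS yields the blotchy, correlated texture characteristic of reconstructed CT noise.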
Utilizing deformable image registration to create new living human heart models for imaging simulation
The Living Heart Model (LHM) was developed as part of the Living Heart Project by Dassault Systemes to provide a numerical finite element (FE) model of the human heart that accurately reproduces the normal cardiac physiology. We previously incorporated the LHM into the 4D extended cardiac-torso (XCAT) phantom for imaging research, rigidly transforming the model to fit it within different anatomies. This captured the variation in the overall size, position, and orientation of the heart but did not capture more subtle geometrical changes. Anatomic measurements of normal heart structures can show standard deviation variations of upwards of 25-30%. In this work, we investigate the use of Hyperelastic Warping to non-rigidly fit the LHM to new anatomies based on 4D CT data from anatomically diverse, normal patients. For each patient target, the mid-diastolic frame from the CT (heart is most relaxed) was segmented to define the cardiac chambers. The geometry of the LHM was then altered to match the targets using Hyperelastic Warping to register the LHM mesh, in its relaxed state, to each segmented dataset. The altered meshes were imported back into the FE software to simulate cardiac motion from the new geometries to incorporate into the XCAT phantom. By preserving the underlying LHM architecture, our work shows that Hyperelastic Warping allows for efficient revision of the LHM geometry. This method can produce a diverse collection of heart models, with added interior variability, to incorporate into the XCAT phantom to investigate 4D imaging methods used to diagnose and treat cardiac disease.
Volume-of-interest imaging using multiple aperture devices
Volume-of-interest (VOI) imaging is a strategy in computed tomography (CT) that restricts x-ray fluence to particular anatomical targets via dynamic beam modulation. This permits dose reduction while retaining image quality within the VOI. VOI-CT implementation has historically been challenged by a lack of hardware solutions for tailoring the incident fluence to the patient and anatomical site, as well as by the difficulty of interior tomography reconstruction from truncated projection data. In this work, we propose a general VOI-CT imaging framework using multiple aperture devices (MADs), an emerging beam filtration scheme based on two binary x-ray filters. The location of the VOI is prescribed using two scout views at anterior-posterior (AP) and lateral perspectives. Based on a calibration of achievable fluence field patterns, MAD motion trajectories are designed using an optimization objective that seeks to maximize the relative fluence in the VOI subject to minimum fluence constraints. A modified penalized-likelihood method is developed for reconstruction of heavily truncated data, using the full-field scout views to help solve the interior tomography problem. Physical experiments were conducted to show the feasibility of non-centered and elliptical VOIs in two applications: spine and lung imaging. Improved dose utilization and retained image quality are validated with respect to standard full-field protocols. Compared with full-field scans at the reference dose, the MAD-VOI scans reduced total dose by 80% while retaining image quality, and the contrast-to-noise ratio is 30% higher than in low-dose full-field scans at the same dose usage.
Optimized intensity modulation for a dynamic beam attenuator in x-ray computed tomography
Sascha Manuel Huck, George S. K. Fung, Katia Parodi, et al.
In our previous study we successfully built a novel sheet-based dynamic beam attenuator (sbDBA) for fluence field modulation in X-ray computed tomography (CT) and performed a first experimental validation. In this work, we focus on the optimization of the DBA transmission properties for a given object. In clinical routine, CT scanners must cope with attenuation properties that differ from patient to patient. Typically, the attenuation of patients is high in the center of the fan beam, decreasing towards the periphery. The attenuation profile of an object can also change for different X-ray tube positions. These variations cause unfavorable imbalances of image quality in the reconstructed image. Typically, the peripheral region, which is generally of minor diagnostic relevance, has relatively lower noise than the central region because the rays contributing to the peripheral region are less attenuated. This imbalance can be reduced by using beam-shaping prefilters, e.g. bowtie filters, which attenuate the propagated intensity towards the periphery in a predefined, static profile. Bowtie filters, however, are not capable of dynamically adapting their attenuation to the attenuation profile of the patient. This can be accomplished by using dynamic beam attenuators (DBA), where the fan beam intensity can be dynamically modulated on a view-by-view basis, reducing noise inhomogeneities and enabling region-of-interest (ROI) imaging. Different scenarios (no attenuator, tube current modulation, conventional bowtie filter, the sbDBA and an ideal DBA) are compared in terms of image quality. The optimized sbDBA with tube current modulation (TCM) not only reduces the total radiation dose but also allows for spatial selection of intensity as required for ROI imaging.
Machine Learning II
Volumetric scout CT images reconstructed from conventional two-view radiograph localizers using deep learning (Conference Presentation)
Juan Camilo Montoya, Chengzhu Zhang, Ke Li, et al.
In this work, a deep neural network architecture was developed and trained to reconstruct volumetric CT images from two-view radiograph scout localizers. In clinical CT exams, each patient receives a two-view scout scan that generates lateral (LAT) and anterior-posterior (AP) radiographs to help CT technologists prescribe scanning parameters. The patient then goes through the CT scan, which generates CT images for clinical diagnosis. Therefore, for each patient, the two-view radiographs serve as the input and the corresponding CT images as the output, forming the training data set. In this work, more than 1.1 million diagnostic CT images and their corresponding projection data from 4214 clinically indicated CT studies were retrospectively collected. The dataset was used to train a deep neural network which inputs the AP and LAT projections and outputs a volumetric CT localizer. Once the model was trained, 3D localizers were reconstructed for a validation cohort and the results were analyzed and compared with the standard MDCT images. In particular, we were interested in the use of 3D localizers for the purpose of optimizing tube current modulation schemes; therefore we compared water equivalent diameters (Dw), radiologic paths and radiation dose distributions. The quantitative evaluation yields the following results: the mean±SD percent difference in Dw was 0.6±4.7% in 3D localizers compared to the Dw measured from the conventional CT reconstructions. 3D localizers showed excellent agreement in radiologic path measurements. Gamma analysis of radiation dose distributions indicated 97.3%, 97.3% and 98.2% of voxels with passing gamma index for anatomical regions in the chest, abdomen and pelvis, respectively. These results demonstrate the success of the developed deep learning reconstruction method in generating volumetric scout CT image volumes.
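The water-equivalent diameter comparison above can be reproduced from any axial CT image. A minimal sketch of the standard Dw calculation, following the AAPM Report 220 definition (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np

def water_equivalent_diameter(hu_slice, pixel_area_mm2):
    """Water-equivalent diameter (Dw, in mm) of an axial CT slice.

    Each pixel's CT number is converted to a water-equivalent area
    contribution via (HU/1000 + 1), summed over the slice, and the
    diameter of the equivalent water circle is returned.
    """
    a_w = np.sum(hu_slice / 1000.0 + 1.0) * pixel_area_mm2
    return 2.0 * np.sqrt(a_w / np.pi)
```

Air pixels (HU = -1000) contribute nothing, so the sum can safely run over the whole slice rather than a segmented patient region.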
Harnessing the power of deep learning for volumetric CT imaging with single or limited number of projections
Tomographic imaging using a penetrating wave, such as X-ray, light or microwave, is a fundamental approach to generate cross-sectional views of internal anatomy in a living subject or interrogate the material composition of an object, and it plays an important role in modern science. To obtain an image free of aliasing artifacts, a sufficiently dense angular sampling that satisfies the Shannon-Nyquist criterion is required. In the past two decades, image reconstruction with sparse sampling has been investigated extensively using approaches such as compressed sensing. This type of approach is, however, ad hoc in nature as it encourages certain forms of images. Recent advancement in deep learning provides an enabling tool to transform the way that an image is constructed. Along this line, Zhu et al.1 presented a data-driven supervised learning framework to relate the sensor and image domain data and applied the method to magnetic resonance imaging (MRI). Here we investigate a deep learning strategy for tomographic X-ray imaging in the limit of a single-view projection data input. For the first time, we introduce the concept of dimension transformation in the image feature domain to facilitate volumetric imaging using a single or multiple 2D projections. The mechanism here is fundamentally different from traditional approaches in that the image formation is driven by prior knowledge cast in the deep learning model. This work pushes the boundary of tomographic imaging to the single-view limit and opens new opportunities for numerous practical applications, such as image-guided interventions and security inspections. It may also revolutionize the hardware design of future tomographic imaging systems.
Image quality improvement in cone-beam CT using deep learning
We propose a learning method to generate corrected CBCT (CCBCT) images with the goal of improving the image quality and clinical utility of on-board CBCT. The proposed method integrates a residual block concept into a cycle-generative adversarial network (cycle-GAN) framework, named Res-cycle GAN in this study. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which further constrains the learning model. A fully convolutional network (FCN) with residual blocks is used in the generator to enable end-to-end transformation. An FCN is used in the discriminator to discriminate between the planning CT (ground truth) and the CCBCT generated by the generator. The proposed algorithm was evaluated using 12 sets of patient data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross correlation (NCC) indexes and spatial non-uniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy of the proposed algorithm. Overall, the MAE, PSNR, NCC and SNU were 20.8±3.4 HU, 32.8±1.5 dB, 0.986±0.004 and 1.7±3.6%, respectively. We have developed a novel deep learning-based method to generate CCBCT with high image quality. The proposed method increases on-board CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could lead to quantitative adaptive radiotherapy.
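The evaluation metrics quoted above are standard image comparison measures; for reference, a minimal numpy sketch of MAE, PSNR, and NCC between a reference CT and a corrected CBCT (the function names and the choice of data range are illustrative):

```python
import numpy as np

def mae(x, y):
    """Mean absolute error between two images (e.g., in HU)."""
    return np.mean(np.abs(x - y))

def psnr(x, y, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(x, y):
    """Normalized cross correlation (zero-mean, unit-variance)."""
    xz = (x - x.mean()) / x.std()
    yz = (y - y.mean()) / y.std()
    return np.mean(xz * yz)
```

Note that NCC is invariant to a constant intensity offset while MAE and PSNR are not, which is why the three metrics are reported together.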
Artifacts reduction method for phase-resolved Cone-Beam CT (CBCT) images via a prior-guided CNN
Conventional Cone-Beam Computed Tomography (CBCT) acquisition suffers from motion blurring artifacts in the region of the thorax, and consequently may result in inaccuracy when localizing the treatment target and verifying the delivered dose in radiation therapy. Although 4D-CBCT reconstruction technology is available to alleviate the motion blurring artifacts through projection sorting followed by independent reconstruction, under-sampling streaking artifacts and noise are observed in the resulting 4D-CBCT images due to the relatively few projections and large angular spacing in each phase. Aiming at improving the overall quality of 4D-CBCT images, we explored the performance of a deep learning model on 4D-CBCT images, which has received little attention to date. Inspired by the high correlation among the 4D-CBCT images at different phases, we incorporated a prior image, reconstructed beforehand from the fully-sampled projections, into a lightweight structured convolutional neural network (CNN) as one input channel. The prior image used in the CNN model can guide the final output image to restore detailed features in the testing process, so the model is referred to as a Prior-guided CNN. Both simulation and real data experiments have been carried out to verify the effectiveness of our CNN model. Experimental results demonstrate the effectiveness of the proposed CNN in artifact suppression and preservation of anatomical structures. Quantitative evaluations also indicate that our model achieves 33.3% and 21.2% increases in Structural Similarity Index (SSIM) compared with gated reconstruction and with a CNN tested without prior knowledge, respectively.
Multi-organ segmentation in clinical-computed tomography for patient-specific image quality and dose metrology
Wanyi Fu, Shobhit Sharma, Taylor Smith, et al.
The purpose of this study was to develop a robust, automated multi-organ segmentation model for clinical adult and pediatric CT and to implement the model as part of a patient-specific safety and quality monitoring system. 3D convolutional neural network (U-net) models were set up to segment 30 different organs and structures at the diagnostic image resolution. For each organ, 200 manually-labeled cases were used to train the network, fitting it to different clinical imaging resolutions and contrast enhancement stages. The dataset was randomly shuffled and divided with a 6/2/2 train/validation/test split. The model was deployed to automatically segment 1200 clinical CT images as a demonstration of the utility of the method. Each case was made into a patient-specific phantom based on the segmentation masks, with unsegmented organs and structures filled in by deforming a template XCAT phantom of similar anatomy. The organ doses were then estimated using a validated scanner-specific MC-GPU package with the actual scan information. The segmented organ information was likewise used to assess contrast, noise, and detectability index within each organ. The neural network segmentation model showed Dice similarity coefficients (DSC) above 0.85 for the majority of organs. Notably, the lungs and liver showed a DSC of 0.95 and 0.94, respectively. The segmentation results produced patient-specific dose and quality values across the tested 1200 patients with representative histogram distributions. The measurements were compared in a global-to-organ (e.g. CTDIvol vs. liver dose) and organ-to-organ (e.g. liver dose vs. spleen dose) manner. The global-to-organ measurements (liver dose vs. CTDIvol: R = 0.62; liver vs. global d': R = 0.78; liver vs. global noise: R = 0.55) were less correlated than the organ-to-organ measurements (liver vs. spleen dose: R = 0.93; liver vs. spleen d': R = 0.82; liver vs. spleen noise: R = 0.78). This variation is more prominent for the iterative reconstruction kernel than for the filtered back projection kernel (liver vs. global noise: R_IR = 0.47 vs. R_FBP = 0.75; liver vs. global d': R_IR = 0.74 vs. R_FBP = 0.83). The results can help derive meaningful relationships between image quality, organ doses, and patient attributes.
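The DSC values reported above follow the usual overlap definition; a minimal sketch of the computation on binary masks (the function name is illustrative):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```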
Convolutional regularization methods for 4D, x-ray CT reconstruction
D. P. Clark, C. T. Badea
Deep learning methods have shown great promise in tackling challenging medical imaging tasks. Within the field of x-ray CT, deep learning for image denoising is of interest because of the fundamental link between ionizing radiation dose and diagnostic image quality, the limited availability of clinical projection data, and the computational expense of iterative reconstruction methods. Here, we work with 3D, temporal CT data (4D, cardiac CT), where redundancies in spatial sampling necessitate careful control of imaging dose. Specifically, using custom extensions to the Tensorflow and Keras machine learning packages, we construct and train a 4D, convolutional neural network (CNN) to denoise helical, cardiac CT data acquired in a mouse model of atherosclerosis. With the objective of accelerating iterative reconstruction, we train the CNN to map undersampled algebraic reconstructions of the 4D data to fully-sampled and regularized iterative reconstructions under mean-squared-error, perceptual loss, and low rank cost terms. Using phantom data for quantitative validation, we verify that the CNN robustly denoises static portions of the image without compromising temporal fidelity and that the CNN performs similarly to regularized, iterative reconstruction with the split Bregman method (CNN temporal RMSE: 142 HU; iterative temporal RMSE: 136 HU). Using in vivo validation and testing data excluded from CNN training, we verify that the CNN generalizes well, approximately reproducing the noise power spectrum of the iteratively reconstructed data (noise std. in water vial near heart, CNN: 62-73 HU, depending on cardiac phase; iterative: 94-100 HU), without degradation of spatial resolution (axial MTF, 10% cutoff, CNN: 2.69 lp/mm; iterative: 2.63 lp/mm). Overall, the results presented in this work represent a positive step toward realizing the promises of deep learning methods in medical imaging.
Poster Session
Improved wedge scatter correction for multi-slice CT system
Yang Wang, Karl Stierstorfer, Martin Petersilka, et al.
In modern multi-slice CT systems, a bowtie-shaped wedge filter is widely used to optimize the radiation dose distribution received by the patient. Since the wedge filter is permanently fixed in the x-ray beam path, the scatter radiation it induces can degrade image quality in certain cases. To compensate for this extra scatter radiation and improve image quality, we introduce a wedge scatter correction algorithm integrated into the raw-data pre-processing workflow. With the algorithm implemented on our latest CT systems, the improvement can be seen in both water phantom images and clinical patient images.
A directional TV based ring artifact reduction method
Morteza Salehjahromi, Qian Wang, Lars A. Gjesteby, et al.
In computed tomography (CT), miscalibrated or imperfect detector elements produce stripe artifacts in the sinogram. These stripe artifacts in Radon space are responsible for concentric ring artifacts in the reconstructed images. Several methods exist to remove the ring artifacts from the images. While most of the methods operate in the image domain, a few of them work in the projection (sinogram) domain, where the sinogram is corrected before image reconstruction. In this work, a novel optimization model is proposed to remove the ring artifacts within an iterative reconstruction procedure. To correct the sinogram, a new correcting vector is proposed to compensate for the malfunctioning detectors in the projection domain. Moreover, a novel directional total variation (DTV) based regularization is developed to penalize the ring artifacts in the image domain. The optimization problem is solved by using an alternating minimization scheme (AMS). In each iteration, the fidelity term along with the DTV-based regularization is first solved to find the image, and then the correcting coefficient vector is updated according to the obtained image. Because the sinogram and the image are simultaneously updated, the proposed method effectively operates in both the image and sinogram domains. The proposed method is evaluated using real preclinical datasets containing strong ring artifacts, and the experimental results show that the proposed algorithm can efficiently suppress the ring artifacts.
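The idea of a sinogram-domain correcting vector can be illustrated with a much simpler, non-iterative baseline: estimate a constant additive offset for each detector column and subtract it. This is a sketch assuming an additive defect model, not the paper's joint optimization; scipy's median filter supplies the robust smoothing that separates narrow detector spikes from genuine low-frequency object signal:

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_detector_offsets(sinogram, window=9):
    """Remove per-detector additive offsets from a sinogram.

    sinogram: (n_views, n_detectors). A miscalibrated detector adds a
    roughly constant value to its column; averaging over views isolates
    that constant, and subtracting a median-smoothed version of the
    column means preserves the smooth object-driven profile.
    """
    col_mean = sinogram.mean(axis=0)
    smooth = median_filter(col_mean, size=window, mode="nearest")
    offsets = col_mean - smooth  # spikes relative to the smooth profile
    return sinogram - offsets[None, :]
```

The paper's method goes further by coupling this kind of correction to a DTV-regularized image update, so both domains constrain each other.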
Simulation pipeline for virtual clinical trials of dermatology images
Varun Vasudev, Bastian Piepers, Andrew D. A. Maidment, et al.
Skin cancer is the most common cancer in the US; one in five Americans will develop it by the age of 70. Early diagnosis offers more favorable treatment options; currently available diagnostics, however, show large reader dependence. The simulation approach (virtual trials) to development and validation offers advantages in terms of quantitative and objective assessment of system performance in the design of novel imaging methods, and in the validation of clinical trial designs prior to the execution of real clinical trials. We have designed a pipeline for performing Virtual Clinical Trials of optical imaging of skin lesions. This pipeline includes modules to simulate healthy skin and subcutaneous tissue, to create skin lesions and insert them into the skin models, and to simulate the acquisition of optical images, resulting in simulated/virtual images of skin. The last module of the pipeline performs virtual reading of the simulated images, based on clinical task-based performance analysis. The physical properties of the skin and lesions used in our simulations were selected to represent clinically plausible conditions. Skin lesion images were simulated assuming ambient white light and a linear camera model. We have utilized standards for optimal data representation based upon the XML schema, adopted from the VCT pipeline developed for breast imaging research. This paper describes the principles used for designing the proposed VCT pipeline and presents preliminary simulation results, including visual and quantitative evaluation. Integration of the pipeline modules and its validation is ongoing.
Validation of modelling tools for simulating wide-angle DBT systems
Premkumar Elangovan, Alistair Mackenzie, David R. Dance, et al.
Full-field 2D digital mammography is used for breast cancer screening throughout the world. Digital breast tomosynthesis (DBT) is now widely available and has shown promise as a breast cancer screening modality. Rigorous evaluation and comparison studies must be conducted before considering the new modality for routine breast cancer screening. Conventional clinical trials involving human subjects are time consuming and expensive, and are limited to commercially-available system designs. Alternatively, Virtual Clinical Trials (VCTs) can be used to conduct such studies with modelling tools, by simulating clinically-realistic images and experimental system designs. The OPTIMAM image simulation toolbox contains a suite of tools that can be used to simulate visually and clinically realistic images for VCTs. Recently, tools for simulating a wide-angle DBT system were added to the toolbox. In this paper, we present the simulation methodology and validation results for a wide-angle DBT system. The validation was performed by simulating images of standard test objects and comparing these with real images acquired using identical settings on the real system being simulated. The comparison of the contrast-to-noise ratios, geometrical distortion (z-resolution) and image blurring for real and simulated images of test objects showed good agreement. This suggests that the images of a wide-angle DBT system produced using our simulation approach are comparable to real images.
A method to modify mammography images to appear as if acquired using different radiographic factors
This work presents and tests a new method of adapting mammography images to appear as if acquired using different radiographic factors by changing the signal and noise within the images. A Hologic Selenia Dimensions system was used to validate the conversion method. Two sets of images were acquired for the validation: one set of 30, 45 and 60 mm thick polymethyl methacrylate (PMMA) acquired over a range of beam qualities, and a second set identical except for the addition of a contrast object of 2 and 4 mm thick pieces of PMMA. One set of images was then adapted to appear the same as a target image acquired with a higher or lower tube voltage. The normalized noise power spectra (NNPS) and standard deviation of the flat-field images, and the contrast-to-noise ratio (CNR) of the contrast images, were calculated for the simulated and target images. The NNPS and standard deviation for the target and simulated images were found to be within 2%. The CNRs of the target and simulated images were within 4% of each other. The methodology successfully converted the images and can be used in observer studies to examine the impact of radiographic factors on lesion detection.
Consideration of cerebrospinal fluid intensity variation in diffusion weighted MRI
Colin B. Hansen, Vishwesh Nath, Allison E. Hainline, et al.
Diffusion weighted MRI (DW-MRI) depends on accurate quantification of signal intensities that reflect directional apparent diffusion coefficients (ADC). Signal drift and fluctuations during imaging can cause systematic non-linearities that manifest as ADC changes if not corrected. Here, we present a case study on a large longitudinal dataset of typical diffusion tensor imaging. We investigate observed variation in the cerebrospinal fluid (CSF) regions of the brain, which should represent compartments with isotropic diffusivity. The study contains 3949 DW-MRI acquisitions of the human brain with 918 subjects, 542 of whom had repeated scan sessions. We provide an analysis of the inter-scan, inter-session, and intra-session variation, and an analysis of the associations with the applied diffusion gradient directions. We investigate the hypothesis that CSF models could be used in lieu of an interspersed minimally diffusion-weighted image (b0) correction. Variation in CSF signal is not largely attributable to within-scan dynamic anatomical changes (3.6%), but rather has substantial variation across scan sessions (10.6%) and increased variation across individuals (26.6%). Unfortunately, CSF intensity is not solely explained by a main drift model or a gradient model, but rather has statistically significant associations with both possible explanations. Further exploration is necessary before CSF drift can be used as an effective harmonization technique.
Development of a real-time scattered radiation display for staff dose reduction during fluoroscopic interventional procedures
J. Troville, J. Kilian-Meneghin, C. Guo, et al.
We have been working on the development of a Scatter Display System (SDS) for monitoring and displaying scatter-related dose to staff members during fluoroscopic interventional procedures. We have considered various methods for such a display using augmented reality (AR) and computer-generated virtual reality (VR). The current work focuses on development of the VR SDS display, which shows the color-coded scattered dose distribution in a horizontal plane at a selected height above the floor in a top-down view of the interventional suite. Reported here is the first development of the methodology for real-time functionality of this software via integration of controller area network (CAN) bus digital signals from the Canon C-Arm Biplane System. Importing the CAN bus information allows immediate selection of the appropriate pre-calculated scatter dose distribution consistent with the x-ray beam orientation and characteristics, as well as selection of the proper gantry and table graphic for the display. The Python CAN interface module was used to streamline the integration of the CAN bus interface. Development of real-time functionality for the SDS allows it to provide feedback to staff during clinical procedures for informed dose management; the SDS can work alongside the patient skin dose tracking system (DTS) for complete clinical monitoring of staff and patient dose.
Incorporating biomechanical modeling and deep learning into a deformation-driven liver CBCT reconstruction technique
Deformation-driven CBCT reconstruction techniques can generate accurate and high-quality CBCTs from deforming prior CTs using sparse-view cone-beam projections. The solved deformation-vector-fields (DVFs) also propagate tumor contours from the prior CTs, which allows automatic localization of low-contrast liver tumors on CBCTs. To solve the DVFs, the deformation-driven techniques generate digitally-reconstructed-radiographs (DRRs) from the deformed image to compare with acquired cone-beam projections, and use their intensity mismatch as a metric to evaluate and optimize the DVFs. To boost the deformation accuracy at low-contrast liver tumor regions where limited intensity information exists, we incorporated biomechanical modeling into the deformation-driven CBCT reconstruction process. Biomechanical modeling solves the deformation on the basis of material geometric and elastic properties, enabling accurate deformation in a low-contrast context. Moreover, real clinical cone-beam projections contain more scatter and noise than DRRs. These degrading signals are complex and non-linear in nature and can reduce the accuracy of deformation-driven CBCT reconstruction. Conventional corrections for these signals, such as linear fitting, over-simplify the problem and yield sub-optimal results. To address this issue, this study applied deep learning to derive an intensity mapping scheme between cone-beam projections and DRRs for cone-beam projection intensity correction prior to CBCT reconstruction. Evaluated on 10 liver imaging sets, the proposed technique achieved accurate liver CBCT reconstruction and localized the tumors to an accuracy of ~1 mm, with an average DICE coefficient over 0.8. Incorporating biomechanical modeling and deep learning, the deformation-driven technique allows accurate liver CBCT reconstruction from sparse-view projections, and accurate deformation of low-contrast areas for automatic tumor localization.
Self-geometric calibration of circular cone beam CT based on epipolar geometry consistency
Liang Zheng, Shouhua Luo, Lujie Chen, et al.
For most offline calibration methods, a dedicated phantom scan is required before the subsequent imaging tasks. This is time-consuming and cannot be applied to unstable systems. Some online approaches do not have such drawbacks, but their accuracy falls short of the offline ones. This paper proposes an epipolar geometry consistency based online geometric calibration for cone beam CT (CBCT). Four parameters - the detector skew, rotation axis, mid-plane, and source-to-detector distance - are employed for the geometrical modeling of CBCT. A cost function is built by exploiting the epipolar geometry consistency among the projective views. By using the simplex-simulated annealing (SIMPSA) algorithm to minimize the cost function, we can obtain the geometric parameters of our practical CBCT systems for image reconstruction. In simulation, different noise levels were added to the projection images, and the experimental results show that the proposed method is insensitive to noise. Moreover, the accuracies of the detector skew and rotation axis parameters are slightly higher than those of the other two. In a practical experiment, we scanned a thin bamboo stick and a Chinese parasol tree branch. Compared with images reconstructed using geometric parameters calculated by a conventional offline calibration approach, the ones reconstructed with the proposed method show no apparent difference or geometric artifacts, indicating that the accuracy of our method is comparable to that of offline calibration.
Using one test bolus to monitor bolus arriving at two locations in CT angiography runoff scans: a feasibility simulation study
This study proposes a method of using one test bolus to monitor peak bolus arrival time at two locations, the aorta and the knee, in CT angiography lower-extremity runoff scans. The resulting aortopopliteal transition time will facilitate determining appropriate CT scan parameters to match the bolus speed. The proposed method first monitors the test bolus peak at the aorta. When the contrast enhancement peak is measured, the table is moved to monitor the test bolus at the knee. Instead of cross-sectional images, the proposed method uses projection (single-view) scans to monitor the bolus, reducing x-ray exposure and enabling real-time peak identification. The scan-timing feasibility of the proposed method was verified by simulations. The medium and high mean blood velocities used in this study were simulated by Monte Carlo methods. The blood velocity at each location inside the arteries was obtained by a three-segment blood velocity simulation. The table motion specifications of a clinical CT scanner were also simulated. Results showed that for a medium aortopopliteal distance (690 mm) and medium blood velocity (65.8 mm/sec), the table arrived at the knee position 9.99 seconds ahead of the test bolus peak, which is enough time to monitor the bolus peak at the second location. For the most challenging case, i.e. the shortest aortopopliteal distance (624 mm) and high blood velocity (179.5 mm/sec), the time difference between table and bolus peak arrival at the second location was 1.87 seconds, which still allows a small window of monitoring scans to detect the bolus peak.
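The reported lead times are consistent with simple transit arithmetic. As a check (the implied table-travel times below are our inference from the reported numbers, not values stated in the abstract):

```python
# Bolus transit time from aorta to knee = distance / mean blood velocity.
def transit_time(distance_mm, velocity_mm_s):
    return distance_mm / velocity_mm_s

medium = transit_time(690.0, 65.8)    # medium case: ~10.49 s
fast = transit_time(624.0, 179.5)     # challenging case: ~3.48 s

# Reported lead time = bolus transit time - table travel time, so the
# implied table travel times for the two cases are:
table_medium = medium - 9.99          # ~0.50 s
table_fast = fast - 1.87              # ~1.61 s
```

The challenging case leaves only ~1.9 s of margin because the bolus traverses the shorter distance at nearly three times the velocity while the table must still reposition.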
Refined locally linear transform based spectral-domain gradient sparsity and its applications in spectral CT reconstruction
Qian Wang, Morteza Salehjahromi, Hengyong Yu
By extending conventional single-energy computed tomography (SECT) along the energy dimension, spectral CT achieves superior energy resolution and material distinguishability. However, for state-of-the-art photon counting detector (PCD) based spectral CT, because the emitted photons, with a fixed total number per X-ray beam, are divided into several different energy bins, the noise level is increased in each reconstructed channel image, which further leads to inaccurate material decomposition. To improve the reconstruction quality and decomposition accuracy, in this work we first employ a refined locally linear transform to convert the structural similarity among two-dimensional (2D) spectral CT images into a spectral-dimension gradient sparsity. By combining the gradient sparsity in the spatial domain, a global three-dimensional (3D) gradient sparse representation is constructed and measured by the L1-, L0- and trace-norms, respectively. For each sparsity measure, we propose the corresponding optimization model, develop the corresponding iterative algorithm, and verify its effectiveness and superiority with both simulated and real datasets.
An automatic dynamic optimal phase reconstruction for coronary CT
The beating of the heart is the type of motion that is most difficult to control during a cardiac CT scan, and it causes significant artifacts. The heart moves least at the systolic and diastolic phases, which for an average heart occur at the 45% and 75% phases, respectively. In practice, however, this is not guaranteed, so doctors sometimes reconstruct several phases, review all of those images, and then make a diagnosis from the phase with the fewest artifacts. The new method for automatic dynamic optimal phase reconstruction has image quality comparable to manual phase selection, and it also significantly reduces the exam time by omitting the review of unnecessary phases.
Sinogram interpolation for sparse-view micro-CT with deep learning neural network
In sparse-view computed tomography (CT), only a small number of projection images are taken around the object, and the sinogram interpolation method has a significant impact on final image quality. When the amount of sparsity - the fraction of missing views in the sinogram data - is not high, conventional interpolation methods have yielded good results. When the amount of sparsity is high, more advanced sinogram interpolation methods are needed. Recently, several deep learning (DL) based sinogram interpolation methods have been proposed. However, those DL-based methods have so far mostly been tested on computer-simulated sinogram data rather than experimentally acquired sinogram data. In this study, we developed a sinogram interpolation method for sparse-view micro-CT based on the combination of a U-Net and residual learning. We applied the method to sinogram data obtained from sparse-view micro-CT experiments, where the sparsity reached 90%. The sinogram interpolated by the DL neural network was fed to the FBP algorithm for reconstruction. The results show that both the RMSE and SSIM of the CT image are greatly improved. The experimental results demonstrate that this sinogram interpolation method produces significantly better results than standard linear interpolation methods when the sinogram data are extremely sparse.
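The linear-interpolation baseline that the DL method is compared against can be sketched as per-detector interpolation over the view angle (array shapes and names are illustrative):

```python
import numpy as np

def interpolate_views(sparse_sinogram, measured_angles, all_angles):
    """Baseline view interpolation for sparse-view CT.

    sparse_sinogram: (n_measured, n_detectors), rows taken at
    measured_angles. Returns an (n_all, n_detectors) sinogram with the
    missing views filled in by per-detector linear interpolation
    over the view angle.
    """
    n_det = sparse_sinogram.shape[1]
    full = np.empty((len(all_angles), n_det))
    for d in range(n_det):
        full[:, d] = np.interp(all_angles, measured_angles,
                               sparse_sinogram[:, d])
    return full
```

At 90% sparsity this baseline must bridge large angular gaps, which is where the learned U-Net interpolation gains its advantage.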
Revisit FBP: analyze the tensor data after view-by-view backprojection
For a long time, low-dose computed tomography (CT) imaging techniques have relied on either preprocessing the projection data or regularizing the iterative reconstruction; the conventional filtered backprojection (FBP) algorithm itself is rarely studied. In this work, we show that the intermediate data during FBP possess some fascinating properties and can be readily processed to reduce noise and artifacts. The FBP algorithm can be decomposed into three steps: filtering, view-by-view backprojection, and summing. The data after view-by-view backprojection naturally form a tensor, which contains information useful for processing in higher dimensionality. We introduce a sorting operation applied to the tensor along the angular direction based on pixel intensity; the sorting for each point in the image plane is independent. Through the sorting operation, the structures of the object are explicitly encoded into the tensor data and the artifacts are automatically driven into the top and bottom slices of the tensor. The sorted tensor also provides high-dimensional information and good low-rank properties, so advanced processing methods can be applied. In the experiments, we demonstrate that under the proposed scheme even Gaussian smoothing can remove the streaking artifacts in the ultra-low-dose case, with nearly no compromise of image resolution. The scheme presented in this paper is a heuristic idea for developing new algorithms for low-dose CT imaging.
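The sort-then-smooth idea can be illustrated with a short NumPy sketch. This is our own minimal rendering of the described scheme, not the authors' code; the backprojection tensor is assumed to be stacked as views × rows × columns:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def sort_backprojection_tensor(bp_tensor):
    """Sort the (n_views, H, W) backprojection tensor along the angular axis.

    Sorting each pixel's values across views independently drives outlier
    contributions (streaks) toward the top and bottom slices of the tensor.
    """
    return np.sort(bp_tensor, axis=0)

def denoise_and_sum(bp_tensor, sigma=2.0):
    """Smooth along the sorted angular direction, then sum to finish FBP."""
    sorted_t = sort_backprojection_tensor(bp_tensor)
    smoothed = gaussian_filter1d(sorted_t, sigma=sigma, axis=0)
    return smoothed.sum(axis=0)
```

Because summing is the last FBP step, any per-pixel permutation of the angular axis leaves the unfiltered sum unchanged; the sorting only matters for what the intermediate smoothing can remove.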
Optimization-based reconstruction for correcting non-linear partial volume artifacts in CT
In this work, we investigate the non-linear partial volume (NLPV) effect caused by sub-detector sampling in CT. A non-linear log-sum-of-exponentials data model is employed to describe the NLPV effect. Leveraging our previous work on multispectral CT reconstruction, which dealt with a similar non-linear data model, we propose an optimization-based reconstruction method for correcting the NLPV artifacts by numerically inverting the non-linear model through solving a non-convex optimization program. A non-convex Chambolle-Pock (ncCP) algorithm is developed and tailored to the non-linear data model. Simulation studies are carried out with both discrete and continuous FORBILD head phantoms, each with one high-contrast ear section on the right side, based on a circular 2D fan-beam geometry. The results suggest that, under the data conditions in this work, the proposed method can effectively reduce or eliminate the NLPV artifacts caused by sub-detector ray integration.
Trade-off between spatial details and motion artifact in multi-detector CT: A virtual clinical trial with 4D textured human models
Ehsan Abadi, W. Paul Segars, Brian Harrawood, et al.
In computed tomography (CT) imaging, high pitch and wide beam collimation accelerate acquisitions and thus reduce motion artifacts. However, increasing pitch and collimation affect the acquisition geometry and thus the spatial quality of the images. The purpose of this study was to quantify the effects of pitch and beam collimation on image quality using a realistic virtual clinical trial (VCT) construct. The study used extended cardiac-torso (XCAT) phantoms enhanced by synthesizing intra-organ heterogeneities within the lungs and bones. Different amounts of cardiac and respiratory motion were simulated, including heart rates of 0, 60, 90, and 120 beats per minute and respiratory rates of 0, 8, 12, and 16 breaths per minute. Each case was imaged using a realistic, scanner-specific, and rapid CT simulator based on the geometry and physics of a commercial CT scanner (Siemens Definition Flash) at 120 kV, under multiple pitch values and beam collimations, all at the same dose level. With knowledge of the ground truth, the quality of the acquired images was quantified by measuring the root mean squared error (RMSE) in the lungs. In general, RMSE was higher for the phantoms with more respiratory or cardiac motion. For the pitch experiment, images with higher pitch values had fewer in-plane motion artifacts, yet RMSE increased with increasing pitch; the slope of this trend was found to be a function of the motion profile, showing the trade-off between motion artifacts and spatial detail loss. For the beam collimation experiment, no major change in in-plane motion artifacts was observed with changes in beam collimation, and RMSE was almost constant with increasing beam collimation. This study demonstrates the utility of a realistic VCT construct in the quantitative evaluation and optimization of CT imaging protocols when designing such a trial physically would be ethically prohibitive, costly, and time-consuming.
Material decomposition in photon-counting-detector CT: threshold or bin images?
Liqiang Ren, Shengzhen Tao, Cynthia H. McCollough, et al.
Energy-resolved photon-counting-detector CT (PCD-CT) is promising for material-specific imaging of multiple contrast agents. In each PCD-CT scan, two groups of images can be reconstructed, namely threshold images and bin images, and both can be directly used for material decomposition. The performance may differ for different energy thresholds and imaging tasks, and it remains unclear which group of images should be used. The purpose of this work is to evaluate the imaging performance of threshold images and bin images when they are used for a three-material decomposition task (iodine, gadolinium, and water) in PCD-CT. Material decomposition was performed in image space using both an ordinary least squares (OLS) method and a generalized least squares (GLS) method. Both numerical analysis and phantom experiments were conducted, which demonstrated that: 1) compared with OLS, GLS provided improved noise properties using either threshold or bin images; 2) for the GLS method, when the covariances among images are taken into account, threshold and bin images showed almost identical material-specific imaging performance. This work suggests that, when correlations among images are incorporated into material decomposition, threshold and bin images perform equivalently well.
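For a single pixel, the OLS and GLS estimators named above take a standard (weighted) least-squares form. The sketch below uses illustrative names and an assumed (n_energies × n_materials) attenuation matrix; it is not the authors' implementation:

```python
import numpy as np

def decompose_ols(A, y):
    """Ordinary least squares: material amounts from multi-energy pixel values.

    A : (n_energies, n_materials) matrix of effective attenuation coefficients.
    y : (n_energies,) values of one pixel in the threshold or bin images.
    """
    return np.linalg.lstsq(A, y, rcond=None)[0]

def decompose_gls(A, y, cov):
    """Generalized least squares: weight by the inverse noise covariance of
    the energy images, accounting for correlations among them."""
    W = np.linalg.inv(cov)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

With noiseless data both estimators recover the ground truth; the abstract's point is that with correlated noise the GLS weighting lowers the variance of the material images, and makes threshold and bin images perform equivalently.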
Experimental validation of a multi-step material decomposition method for spectral CT
Recently, we proposed a multi-step material decomposition method for spectral CT in which the decomposition is solved in a series of steps, each separating one new material from the original attenuation data. Until now, this method had only been tested using numerical simulations of multi-material digital phantoms. Here we present the initial results of the multi-step method applied to experimental data acquired in our laboratory using a Medipix3RX detector with a silicon sensor. The decomposition of CT images of a 3-material phantom is demonstrated. The materials studied here are gadolinium (Gd), iodine (I), and acrylonitrile butadiene styrene (ABS) plastic. The results show qualitative and quantitative improvement in separation accuracy: the worst-case percent error in one selected slice is reduced by 51.7% when using our new method in comparison to a conventional single-step material decomposition.
Image-based noise reduction for material decomposition in dual or multi energy computed tomography
Clinical dual energy computed tomography (DECT) scanners have a material decomposition application to display the contrast-enhanced computed tomography (CT) scan as if it were scanned without contrast agent: virtual-non-contrast (VNC) imaging. The clinical benefit of VNC imaging can potentially be increased using photon counting detector-based multi energy CT (MECT) scanners. Furthermore, dose efficiency and contrast-to-noise ratio (CNR) may be improved in MECT. Effectively, the material decomposition can be performed in the image domain. However, material decomposition increases the noise of the material images. Therefore, we generalized an image filter to achieve less noisy decomposed material images. The image-based noise reduction for the material images is achieved by adding the highpass of the CNR-optimized energy image to the lowpass-filtered material image. In this way, the image-based noise reduction has the potential to recover some subtle structures that are less visible in the unfiltered images. In this study, we generalize the measurement-dependent filter of Macovski et al. to the case of MECT. The method is evaluated using phantom measurements from the Siemens SOMATOM Definition Flash scanner in single energy scan mode at tube voltages of 80 kV, 100 kV, 120 kV, and 140 kV to mimic 4 energy bins of a photon counting CT. Using the image-based noise reduction, a factor-of-4 noise reduction in the material images can be achieved.
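The frequency-split filtering described above (low-pass of the noisy material image plus high-pass of the CNR-optimized energy image) can be sketched as follows. Gaussian kernels stand in for whatever band split the authors actually use; names and the sigma parameter are our own:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split_denoise(material_img, cnr_img, sigma=2.0):
    """Add the high-pass of a CNR-optimized energy image (low noise, same
    structures) to the low-pass of the noisy decomposed material image."""
    lowpass_material = gaussian_filter(material_img, sigma)
    highpass_cnr = cnr_img - gaussian_filter(cnr_img, sigma)
    return lowpass_material + highpass_cnr
```

The split is complementary by construction: if the two inputs were identical, low-pass plus high-pass would reproduce the input exactly, so the filter only changes where the material image and the energy image disagree in the high-frequency band.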
A comprehensive GPU-based framework for scatter estimation in single source, dual source, and photon-counting CT
Scattered radiation is one of the leading causes of image quality degradation in computed tomography (CT), leading to decreased contrast sensitivity and inaccuracy of CT numbers. The established gold-standard technique for scatter estimation in CT is Monte Carlo (MC) simulation, which is computationally expensive, thus limiting its utility for clinical applications. In addition, the existing MC tools are generalized and often do not model a realistic patient and/or a scanner-specific scenario, including lack of models for alternative CT configurations. This study aims to fill these gaps by introducing a comprehensive GPU-based MC framework for estimating patient- and scanner-specific scatter for single-source, dual-source, and photon-counting CT using vendor-specific geometry/components and anatomically realistic XCAT phantoms. The tool accurately models the physics of photon transport and includes realistic vendor-specific models for x-ray spectra, bowtie filter, anti-scatter grid, and detector response. To demonstrate the functionality of the framework, we characterized the scatter profiles for a Mercury and an XCAT phantom using multiple scanner configurations. The timing information from the simulations was tallied to estimate the speedup over a comparable CPU-based MC tool; a speedup as high as 900x was observed for our framework. We also utilized the scatter profiles from the simulations to enhance the realism of primary-only ray-traced images generated for the purpose of virtual clinical trials (VCT). The results indicate the capability of this framework to quantify scatter for different proposed CT configurations and the significance of the scatter contribution for simulating realistic CT images.
Demonstration of phase-assisted material decomposition with a Talbot-Lau interferometer using a single x-ray tube potential
E. R. Shanblatt, B. J. Nelson, S. Tao, et al.
We present a proof-of-principle demonstration of material decomposition using a single X-ray tube potential (38 kVp + 0.2 mm Sn, for an effective energy around 27 keV) with a Talbot-Lau grating-based phase-contrast computed tomography (CT) system. We show good material separation of water and fat and an accurate quantitative measurement of isopropyl alcohol. This method utilizes the distinctiveness of both components of the refractive index, δ and β, and is promising for separating soft tissue materials that have similar attenuation values such as water and fat.
Sparse-sampling computed tomography for pulmonary imaging
Felix K. Kopp, Kai Mei, Ernst J. Rummeny, et al.
Computed tomography (CT) is a valuable imaging modality for pulmonary imaging. Fast acquisition times and sharp cross-sectional images provide high diagnostic confidence. With the introduction of low-dose CT, it has been established as the standard for lung screening of heavy smokers in many countries around the world. However, at some point the limits of dose reduction with conventional CT are reached, and further reduction would result in poor image quality. Sparse-sampling CT is one technology that would allow further radiation dose reduction by reducing the number of acquired projection images. Recently, the feasibility of a fast-pulsing X-ray tube for CT has been demonstrated, indicating that sparse sampling could become available in future generations of CT scanners. We therefore investigated the effect of sparse sampling through a stepwise reduction of the projection images. A lung phantom with synthetic pulmonary nodules was scanned with a clinical CT system. Sparse sampling was simulated by removing projection images prior to reconstruction. The phantom was scanned at the iso-center and at the highest possible table position (off-center). The modulation transfer function (MTF) was determined for different degrees of sparse sampling. Image quality was evaluated by comparing the reduced-dose simulations against the full-dose image using the structural similarity index (SSIM). MTF was stable down to 1/4th of the projection images (4-times sparse sampling, SpS-4), with high degradation at the off-center position (full sampling (FS) 10% MTF, iso-center: 0.64; off-center: 0.47). SSIM indicates a small image-quality degradation of FS images relative to sparse-sampling images at low radiation doses at the iso-center (35 mAs; FS: 0.91; SpS-4: 0.93) and a stronger degradation at the off-center position (35 mAs; FS: 0.65; SpS-4: 0.84). In conclusion, sparse sampling provides stable MTF results down to 1/4th of the projection images. At low dose levels (iso-center: ≤43 mAs; off-center: ≤86 mAs), sparse sampling performs better in terms of SSIM than FS.
Simulation of CT images reconstructed with different kernels using a convolutional neural network and its implications for efficient CT workflow
Andrew D. Missert, Shuai Leng, Cynthia H. McCollough, et al.
In this study we simulated the effect of reconstructing computed tomography (CT) images with different reconstruction kernels by employing a convolutional neural network (CNN) to map images produced by a fixed input kernel to images produced by different kernels. The CNN input images consisted of thin slices (0.6 mm) reconstructed with the sharpest kernel available on the CT scanner. The network was trained using supervised learning to produce output images that simulate medium, medium-sharp, and sharp kernels. Performance was evaluated by comparing the simulated images to actual reconstructions performed on a reserved set of patient data. We found that the CNN simulated the effect of switching reconstruction kernels with a high level of accuracy, and in only a small fraction of the time it takes to perform a full reconstruction. This application can potentially be used to streamline and simplify the clinical workflow for storing, viewing, and reconstructing CT images.
Quantitative low-dose CT: potential value of low signal correction methods
Juan Pablo Cruz-Bastida, Ran Zhang, Daniel Gomez-Cardona, et al.
In previous work, it has been demonstrated that CT number estimates are biased at low dose levels for FBP-based CT. The purpose of this work was to explore the potential of noise-reduction methods to address this limitation. One may speculate that these CT number bias issues have already been addressed by noise-reduction schemes, such as those implemented in commercial products; however, to the best of the authors' knowledge, this is not the case. In this work, we performed an in-house noise-reduction implementation on benchtop-CT data to investigate under what conditions noise-reduction methods could favor quantitative low-dose CT. A quality assurance phantom was scanned at different dose levels, and the CT number bias was estimated for different material inserts of known composition, before and after noise reduction. Anisotropic diffusion (AD) filtration was used as the noise-reduction method and independently applied in three different signal domains: the raw counts, the log-processed sinogram, and the reconstructed CT image. AD filtration in the raw-counts domain was the only noise-reduction scheme that considerably reduced CT number bias. Our results suggest that noise-reduction methods can help preserve CT number accuracy to some degree when reducing radiation dose; however, their application cannot be arbitrarily extended to any signal domain. It was also observed that the CT number bias reduction is greater for stronger filtration, which suggests that the overall CT image quality may limit the quantification accuracy when noise reduction is performed.
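For reference, classic Perona-Malik anisotropic diffusion, the kind of filtration applied here in the different signal domains, can be sketched as below. This is a generic textbook variant (with periodic boundary handling via np.roll for brevity), not the authors' exact implementation; the parameter values are illustrative:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, step=0.2):
    """Perona-Malik anisotropic diffusion (2D): iterative edge-preserving
    smoothing, where diffusion is suppressed across strong gradients."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours (periodic boundaries)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # exponential conduction coefficient: small where gradients are large
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

The explicit update is stable for step sizes up to 0.25 with this 4-neighbour stencil; applying the same function to raw counts, log sinograms, or reconstructed images is what distinguishes the three domains compared in the study.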
A practical analysis of scatter-to-primary ratio after beam hardening correction for x-ray CT
Hewei Gao, Shumei Jia, Geng Fu
Scatter and beam hardening degrade image quality in X-ray computed tomography (CT). In some scenarios, beam hardening correction must be applied before scatter has been fully removed. In this paper, we theoretically analyze the outcome of beam hardening correction when scatter is left uncorrected. Using the effective spectrum of the CT system, the beam hardening correction function and its derivatives can be explicitly derived and used, along with a Taylor expansion, to approximate the change in scatter-to-primary ratio (SPR) after beam hardening correction. Two types of SPR compensation are compared using cone- and fan-beam scans of a CatPhan phantom on a tabletop cone-beam CT system, validating our analysis as accurate in practical applications.
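One plausible way to set up such an expansion, in our own notation rather than the authors' (p the scatter-free log signal, r = S/P the SPR, f the beam hardening correction applied to log-normalized data), is:

```latex
% p = -\ln(P/I_0): scatter-free log signal;  r = S/P: scatter-to-primary ratio;
% f: beam hardening correction applied to log-normalized data.
\tilde{p} \;=\; -\ln\frac{P+S}{I_0} \;=\; p - \ln(1+r),
\qquad
f(\tilde{p}) \;\approx\; f(p) - f'(p)\,\ln(1+r).
```

Under this reading, the scatter-induced bias surviving the correction is roughly f'(p) ln(1+r) ≈ f'(p) r for small r: the local slope of the correction function rescales the effective SPR.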
Quantifying truth-based change in radiomics features between CT imaging conditions
The purpose of this study was to develop a method to ascertain the likelihood of actual change in radiomics features measured from pairs of sequentially acquired CT images under variable CT scan settings. Fifteen realistic computational lung nodule models were simulated with varying volumes and morphologies (231-1,245 mm3 volume range with no, medium, and high spiculation). The virtual nodule models were degraded by noise magnitude, noise texture, and resolution properties representing those of five commercial CT systems operating under a wide range of scan and reconstruction settings (297 in total, spanning 33 reconstruction kernels, three dose levels (CTDIvol = 1.90, 3.75, and 7.5 mGy), and three slice thicknesses (0.625, 1.25, and 2.5 mm)). Images of each nodule were synthesized five repeated times under each imaging condition, for a total of 22,275 imaged nodules. The simulated nodule images were segmented using an automatic active-contour segmentation algorithm, from which morphology features were calculated and compared to the ground-truth (i.e., truth-based) morphology features to estimate the minimum difference, Dmin, between two feature measurements that could be reliably measured between any pair of imaging conditions (scanner, kernel, dose, etc.). Dmin was defined as the minimum difference in a radiomics feature for which a measured difference corresponded to a true difference 95% of the time for a single segmentation algorithm. The mean value of Dmin ranged from 1.8% to 70.0% depending on the specific radiomics feature. An analysis of the volume feature revealed that the lowest Dmin occurred for a slice thickness of 0.625 mm and a CTDIvol of 7.5 mGy. This study presents a method to translate measured radiomics feature differences into true differences in a way that accounts for the statistics of noise between conditions.
Iterative closest-point based 3D stitching in dental computed tomography for a larger view of facial anatomy
Chulkyu Park, Hyosung Cho, Dongyeon Lee, et al.
Recently, dental cone-beam computed tomography (CBCT) scanners using a small-sized detector have been used for both dental diagnosis and sinus examination in otolaryngology and plastic surgery. In this study, we investigated a three-dimensional (3D) registration method using two datasets of reconstructed CBCT images from a small-sized detector to enlarge the field-of-view (FOV) of the original CBCT images. We employed an iterative closest-point (ICP) algorithm for registration, using bone information as fiducial landmarks. We applied the proposed registration method to a commercially available dental CBCT system (Green 16TM, Vatech Co.) and performed a systematic experiment to demonstrate the algorithm's effectiveness for 3D registration in CBCT. In the experiment, the upper part of the head phantom was tilted while the lower part was fixed, so as to cover the entire face during projection data acquisition. After registration, intensity-mismatch artifacts in the overlap region were blended by increasing the proportion of artifact-free parts. Consequently, we successfully stitched the two datasets of reconstructed CBCT images, obtaining a larger-FOV CBCT image. The proposed method reduced intensity-mismatch artifacts and thus effectively eliminated the seams.
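A minimal point-to-point ICP with a Kabsch rigid fit, the general kind of alignment used for such stitching, can be sketched as follows. This is a generic textbook version operating on 3D landmark points (e.g. bone-surface voxels), not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if one appears
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, n_iter=30):
    """Point-to-point ICP aligning a source landmark cloud to a destination
    cloud by alternating nearest-neighbour matching and rigid fitting."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)           # match each point to its nearest neighbour
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t                # apply the incremental rigid update
    R, t = best_rigid_transform(src, cur)  # compose the overall transform
    return R, t, cur
```

For small initial misalignments (as when only the upper part of the phantom is tilted), the nearest-neighbour matches are largely correct from the first iteration and the method converges quickly; larger tilts require a coarse pre-alignment.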
Wasserstein generative adversarial networks for motion artifact removal in dental CT imaging
In dental computed tomography (CT) scanning, high-quality images are crucial for oral disease diagnosis and treatment. However, many artifacts, such as metal artifacts, downsampling artifacts, and motion artifacts, can degrade image quality in practice. The main purpose of this article is to reduce motion artifacts, which are caused by the movement of patients during data acquisition in dental CT scanning. The goal of this study was therefore to develop a dental CT motion artifact-correction algorithm based on a deep learning approach. We used dental CT data with motion artifacts reconstructed by conventional filtered back-projection (FBP) as inputs to a deep neural network, with the corresponding high-quality CT data as labels during training. We proposed training a generative adversarial network (GAN) with the Wasserstein distance and a mean squared error (MSE) loss (m-WGAN) to remove motion artifacts and obtain high-quality dental CT images. To improve the generator structure, the generator used a cascaded CNN-style network with residual blocks. To the best of our knowledge, this work describes the first deep learning architecture applied to a commercial cone-beam dental CT scanner. We compared the performance of a general GAN and the m-WGAN. The experimental results confirmed that the proposed algorithm effectively removes motion artifacts from dental CT scans: the m-WGAN yielded a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) and a lower root-mean-squared error (RMSE) than the general GAN.
Multidimensional noise reduction in C-arm cone-beam CT via 2D-based Landweber iteration and 3D-based deep neural networks
Recently, the need for low-dose CT imaging with reduced noise has come to the forefront due to the risks involved in radiation exposure. To recover a high-quality image from a low-dose acquisition, various algorithms, including deep learning-based methods, have been proposed. However, current techniques have shown limited performance, especially with regard to loss of fine details and blurring of high-frequency edges. To enhance the previously suggested 2D patch-based denoising model, we propose a 3D block-based REDCNN model, employing convolution layers paired with deconvolution layers, shortcuts, and residual mappings. This design suppresses noise while preserving the image structure and diagnostic features. Finally, we applied a bilateral filter in 3D and utilized a 2D-based Landweber iteration method to reduce the remaining noise below a certain amplitude and prevent the edges from blurring. As a result, our proposed method effectively reduced the Poisson noise level without losing diagnostic features and showed high performance in both qualitative and quantitative evaluations compared to ResNet2D, ResNet3D, REDCNN2D, and REDCNN3D.
Evaluation of the reliability of a new low dose CBCT acquisition protocol in diagnosing impacted canines: an ex-vivo imaging study
Ayesha Ejaz, Callan Donovan, Vaibhav Gandhi, et al.
Permanent canines are the second most commonly impacted teeth after third molars, with females affected twice as often as males. Impacted canines can be located buccally, palatally, or mid-alveolar, and can further be oriented mesially, distally, horizontally, or inverted. Traditionally, permanent canines are radiographically localized using Clark's method, in which a straight periapical radiograph of the area of interest is taken and the tube is then shifted mesially or distally for a second radiograph. Another approach to localizing an impacted canine is a panoramic radiograph. Neither of these 2D methods adequately depicts the location of the tooth, yet correct localization of the canine is important for surgical exposure and further orthodontic treatment. More adequate is 3D imaging, for which a 360-degree cone-beam CT (CBCT) is generally used; however, a different protocol using a 180-degree technique can reduce the radiation dose by 40%. This is important, as it would limit the exposure of radiologically sensitive organs in the head and neck region.
Assessment of measurement deviations: length-extended x-ray imaging for orthopedic applications
Measurements of skeletal geometries are a crucial tool for the assessment of pathologies in orthopedics. Usually, these measurements are performed on conventional 2-D X-ray images. Due to the cone-beam geometry of most commercially available X-ray systems, effects like magnification and distortion are inevitable and may impede the precision of orthopedic measurements. In particular, measurements of angles, axes, and lengths in spine or limb acquisitions would benefit from a true 1-to-1 mapping without any distortion or magnification. In this work, we developed a model to quantify these effects for realistic patient sizes and clinically relevant acquisition procedures. Moreover, we compared the current state-of-the-art technique for imaging length-extended radiographs, e.g. for spine or leg acquisitions (i.e. the source-tilt technique), with a slot-scanning method. To validate our model, we conducted several experiments with physical as well as anthropomorphic phantoms, which turned out to be in good agreement with the model. We found that images acquired with the reconstruction-based slot-scanning technique exhibit no magnification or distortion. This would allow precise measurements directly on images without calibration objects, which might benefit the quality and workflow efficiency of orthopedic applications.
Investigation of calibration-based projection domain dual energy decomposition CBCT technique for brain radiotherapy applications
Shailaja Sajja, Masoud Hashemi, Christopher Huynh, et al.
The purpose of the present study was to develop and evaluate a practical dual-energy imaging approach for enhancing on-board cone-beam CT (CBCT) image quality for brain radiotherapy applications. The proposed technique involves a projection-domain calibration procedure. In-house fabricated aluminum and acrylic step wedges were stacked and oriented orthogonally to each other to produce 72 unique combinations of two-material path lengths, i.e. 8 acrylic steps × 9 aluminum steps. High-energy (120 kV) and low-energy (70 kV) projections of the step wedges were acquired, and a 3rd-order polynomial fit was used to map the log-normalized projection intensities to the known acrylic and aluminum thicknesses. The resulting model was tested on two phantoms: 1) an in-house DE phantom with a PMMA background and calcium inserts of different concentrations (5 mg/mL, 200 mg/mL, and 400 mg/mL), and 2) a RANDO head phantom. The decomposed projections were reconstructed separately as aluminum-only and acrylic-only reconstructions. In addition, virtual monochromatic projections (VMPs), obtained by combining the aluminum-only and acrylic-only projections, were reconstructed at different keVs. A quantitative improvement was observed in the signal-difference-to-noise ratios (SDNR) of the calcium inserts using the aluminum-only reconstructions and synthesized VMPs (40 to 100 keV) compared to the single-energy reconstructions, and a reduction in beam hardening was observed as well. In addition, a qualitative improvement in soft-tissue visualization was observed in the RANDO phantom reconstructions. The findings indicate the potential of dual-energy CBCT images, both material-specific images and VMPs, for improved CBCT-based image guidance. The present approach can readily be applied on existing commercial systems, and a feasibility study on patients is a worthwhile investigation.
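The calibration step, mapping log-normalized dual-energy intensities to known step-wedge thicknesses with a 3rd-order polynomial, can be sketched per material as below. The names and the linear toy forward model in the usage are our own; the real fit uses the 72 measured wedge combinations:

```python
import numpy as np

def design_matrix(p_low, p_high, order=3):
    """All monomials p_low**i * p_high**j with i + j <= order (10 terms for order 3)."""
    cols = [p_low**i * p_high**j
            for i in range(order + 1) for j in range(order + 1 - i)]
    return np.stack(cols, axis=-1)

def calibrate(p_low, p_high, thickness):
    """Least-squares polynomial fit from log-normalized dual-energy projection
    values to one material's known step-wedge thickness."""
    A = design_matrix(p_low.ravel(), p_high.ravel())
    coeffs, *_ = np.linalg.lstsq(A, thickness.ravel(), rcond=None)
    return coeffs

def decompose(p_low, p_high, coeffs):
    """Apply a fitted calibration to new projection data."""
    return design_matrix(p_low.ravel(), p_high.ravel()) @ coeffs
```

One such fit is performed for each basis material (acrylic and aluminum); the fitted polynomials are then evaluated pixel-by-pixel on the clinical high- and low-energy projections to produce the decomposed projections.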
SWAD: The effect of pixel geometry on the spatial uniformity of avalanche gain
Photon counting detectors (PCDs) have the potential to improve x-ray imaging; however, current crystalline materials are still hindered by high production cost and performance limitations. We are developing a novel direct-conversion amorphous selenium (a-Se) based field-Shaping multi-Well Avalanche Detector (SWAD) for photon counting breast imaging applications. SWAD's multi-well Frisch grid design creates separate absorption and sensing regions capable of depth-independent avalanche gain. The improved temporal response from unipolar time-differential (UTD) charge sensing, combined with tunable avalanche gain within the well region, attains the fast timing and energy resolution necessary for photon counting under clinical settings. The avalanche gain in a-Se sensors varies rapidly as a function of electric field, which may affect the overall energy resolution of the detector. The goal of this work is to investigate the uniformity of avalanche gain as a function of carrier position within the a-Se bulk region for different SWAD design parameters. Our simulation results show that, for the geometries modeled, the variation in avalanche gain along different field lines across the multi-well region can be kept below 4.2% by using multi-well pillars with a high aspect ratio. Additionally, the variation in avalanche gain was evaluated for charge clouds generated by incident x-rays with energies of 20, 40, and 60 keV. In all cases, the variation in avalanche gain was found to decrease with increasing charge cloud size. For an optimized SWAD geometry, the variation in gain was negligible at each incident x-ray energy simulated.
Investigation of spatial-frequency-dependent noise in columnar CsI:Tl using depth-localized single x-ray imaging
The x-ray imaging performance of an indirect flat panel detector (I-FPD) is intrinsically limited by its scintillator. Random fluctuations in the conversion gain and spatial blur of scintillators (per detected x-ray) degrade the detective quantum efficiency (DQE) of I-FPDs. These variations are often attributed to depth-dependence in light escape efficiency and spatial spread before detection. Past investigations have used theoretical models to explore how scintillator depth effects degrade DQE(f), however such models have not been validated by direct measurements. Recently, experimental methods have been developed to localize the depth of x-ray interactions in a scintillator, and image the light burst from each interaction using an ultra-high-sensitivity optical camera. This approach, referred to as depth-localized single x-ray imaging (SXI), has enabled direct measurements of both depth-dependent and fixed-depth variations in scintillator gain and spatial resolution. SXI has been used recently to measure depth-dependence in the average gain and modulation transfer function (MTF) of columnar CsI:Tl, which is the scintillator-of-choice for medical I-FPDs. When used in a depth-dependent cascaded linear system model, these SXI measurements accurately predict the presampling MTF(f) of CsI:Tl-based I-FPDs as measured using the slanted-edge method. However, such calculations underestimate the CsI:Tl noise power spectrum (NPS), and thereby overestimate its DQE when compared to conventional measurements. We hypothesize that some of this discrepancy is caused by fixed-depth variations in CsI:Tl spatial resolution, which are not considered in current models. This work characterizes these variations directly using depth-localized SXI and examines their impact on scintillator DQE(f).
Flat panel design and performance of a new a-Si mammography imaging system
Steve Wettstein, Carlo Tognina, Isaias D. Job
Mammography systems demand high quality imaging at reduced acquisition times. The Varex 3024MX imager was designed specifically with the demands of mammographic imaging in mind: high spatial resolution, excellent low contrast resolution, excellent low dose performance, and acquisition speeds capable of tomography. This paper will describe the details of the next generation a-Si mammography sensor array and contrast it with the predicate product, the PS3024M. The Varex 3024MX imager delivers a 3584x2816 matrix with a pixel pitch of 83um, resulting in an active area of 297.5mm x 233.7mm, optimized for mammography applications. A 250um thick deposited columnar CsI(Tl) layer is used as the scintillator. The development of a new pixel architecture and charge amplifier ASIC allows for faster readout of the sensor array at 16 bit pixel depth, enabling readout speeds up to 10fps. In addition to the faster frame rates, the combination of the new pixel architecture and ASIC results in a very low electronic noise floor and improved ghosting behavior. The results, as outlined below, will show that the 3024MX design targeted improvements to detective quantum efficiency (DQE), maximum linear dose (MLD), quantum-limited dose (QLD), ghosting, and image readout time.
Dynamic chest radiography for pulmonary function diagnosis: A validation study using 4D extended cardiac-torso (XCAT) phantom
This study was performed to investigate the detection performance of trapped air in dynamic chest radiography using a 4D extended cardiac-torso (XCAT) phantom with a user-defined ground truth. An XCAT phantom of an adult male (50th percentile in height and weight) with a normal heart rate, slow-forced breathing, and diaphragm motion was generated. An air sphere was inserted into the right lung to simulate emphysema. An X-ray simulator was used to create sequential chest radiographs of the XCAT phantom over a whole respiratory cycle covering a period of 10 seconds. Respiratory changes in pixel value were measured in each grid-like region translating during respiration, and differences from a fully exhaled image were then depicted as color-mapping images, representing higher X-ray translucency (increased air) as higher color intensities. The detection performance was investigated using various sizes of air spheres, for each lung field and behind the diaphragm. In the results, respiratory changes in pixel value decreased as the size of the air sphere increased, depending on the lung field. In color-mapping images, air spheres were depicted as color defects; however, those behind the diaphragm were not detectable. Smaller size sampling depicted the air spheres as island color defects, while larger ones yielded a limited signal. We confirmed that dynamic chest radiography was able to detect trapped air as regionally-reduced changes in pixel value during respiration. The reduction rate could be defined as a function of residual normal tissue in front of and behind the air spheres.
Dental and maxillofacial cone beam computed tomography absorbed dose distribution calculation by GEANT4
Shumei Jia, Hewei Gao, Li Zhang, et al.
Absorbed dose distributions in dental and maxillofacial cone beam computed tomography (dental CBCT) are essential to dental CBCT dose indices, but direct measurements with thermoluminescence detectors are laborious. We establish a validated GEANT4-based absorbed dose simulation program whose mean deviation from experimental measurements is 7.25%. Dental CBCT absorbed dose distributions simulated by this program indicate that the distributions are not always symmetric; asymmetric cases occur when the phantom center is displaced from the isocenter, for half-fan beams with a 360° scan angle range, and for some full-fan beams with less than a 360° scan angle range. Dose index weights for circularly symmetric absorbed dose distributions also differ considerably depending on whether the field-of-view diameter is larger or smaller than the phantom diameter.
Comparisons of 6 fps volume-rendered x-ray digital tomosynthesis TumoTrak-guided to 2D-MRI-guided radiotherapy of lung cancer
Larry Partain, Stanley Benedict, Namho Kim, et al.
Retrospective kV x-ray 4DCT treatment planning for lung cancer MV linac treatment is becoming a standard-of-care for this widely used procedure for the largest cancer cause-of-death in the US. It currently provides the best estimate of a fixed-in-time but undulating and closed 3D "shell" to which a minimum curative-intent radiation dose should be delivered to provide the best estimated patient survival and the least morbidity, usually characterized by quantitative dose-volume histograms (DVHs). Unfortunately, this closed shell volume or internal target volume (ITV) currently has to be enlarged enough to enclose the full range of respiratory lesion motion (plus set-up and other uncertainties), which cannot yet be accurately determined in real time during treatment delivery. With accurate motion tracking, the planning target volume (PTV) or outer "shell" may be reduced by up to 40%. However, there is no single 2D plane that precisely follows the reduced PTV volume's 3D respiratory motion, currently best estimated by retrospective hand contouring by a trained and experienced radiation oncologist using the full 3D-plus-time information of 4DCT. Once available, 3D motion tracking in real time has the potential to substantially decrease DVH doses to surrounding organs-at-risk (OARs), while maintaining or raising the curative-intent dose to the lesion itself. The assertion argued here is that 3D volume-rendered imaging of lung cancer lesion trajectories in real time from TumoTrak digital x-ray tomosynthesis has the potential to provide more accurate 3D motion tracking and improved dose delivery at lower cost than the real-time 2D single-slice imaging of MRI-guided radiotherapy.
Integrating quantitative imaging and computational modeling to predict the spatiotemporal distribution of 186Re nanoliposomes for recurrent glioblastoma treatment
Glioblastoma multiforme is the most common and deadly form of primary brain cancer. Even with aggressive treatment consisting of surgical resection, chemotherapy, and external beam radiation therapy, response rates remain poor. In an attempt to improve outcomes, investigators have developed nanoliposomes loaded with 186Re, which are capable of delivering a large dose (< 1000 Gy) of highly localized β- radiation to the tumor, with minimal exposure to healthy brain tissue. Additionally, 186Re also emits gamma radiation (137 keV), so that its spatio-temporal distribution can be tracked through single photon emission computed tomography. Planning the delivery of these particles is challenging, especially in cases where the tumor borders the ventricles or previous resection cavities. To address this issue, we are developing a finite element model of convection enhanced delivery for nanoliposome carriers of radiotherapeutics. The model is patient specific, informed by each individual’s own diffusion-weighted and contrast-enhanced magnetic resonance imaging data. The model is then calibrated to single photon emission computed tomography data, acquired at multiple time points mid- and post-infusion, and validation is performed by comparing model predictions to imaging measurements obtained at future time points. After initial calibration to one SPECT image, the model is capable of recapitulating the distribution volume of RNL with a Dice coefficient of 0.88 and a PCC of 0.80. We also demonstrate evidence of restricted flow due to large nanoparticle size in comparison to interstitial pore size.
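The validation metrics quoted above, the Dice coefficient and the Pearson correlation coefficient (PCC), can be computed directly from the predicted and measured distributions. A minimal stand-alone sketch, using toy 1-D lists in place of image volumes and hypothetical function names:

```python
import math

def pearson_cc(x, y):
    """Pearson correlation coefficient between two flattened images."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (lists of 0/1):
    twice the overlap divided by the sum of the mask sizes."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))
```

In practice the binary masks would be obtained by thresholding the predicted and SPECT-measured distribution volumes before comparison.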
Evaluation of skin-dose contribution from a new high-definition image receptor mode during neuro-interventional procedures using the Dose Tracking System
A new image receptor has recently been introduced that has a standard flat-panel detector (FPD) mode as well as a high-definition (Hi-Def) zoom mode. The Dose Tracking System (DTS), which our group has developed, has been expanded in functionality to allow for the analysis of the skin-dose contribution of the Hi-Def mode during fluoroscopic interventional procedures. A clinical version of the DTS records all geometric and exposure technique parameters from a digital interface on the Canon Biplane Angiography System during interventional procedures in log files. Previous work on the enhancement of our group’s DTS led to the development of a replay function which facilitates playback of the log files. Within the replay feature, modifications have been made to allow for separate evaluation of exposures from each detector mode as identified by signals for the magnification (MAG) mode being used. The current work utilizes this separation method for neuro-interventional cases performed with the new image receptor to retrospectively analyze dose-related contributions from the Hi-Def mode as compared to FPD usage. Peak skin dose (PSD) and dose area product (DAP) were evaluated for six clinical cases under IRB approval. Three de-identified log files were also included in order to demonstrate the method for separation of PSD as well as the variation with procedure types. Ratios of FPD PSD and DAP to Hi-Def values were determined for a subset of three cases during which the new image receptor was implemented.
Transrectal ultrasound imaging using plane-wave, fan-beam and wide-beam ultrasound: Phantom results
Plane-wave, fan-beam and wide-beam ultrasound can transmit higher ultrasound energy compared to synthetic-aperture ultrasound, leading to improved signal-to-noise ratios in ultrasound reflection/scattering signals. This is particularly useful for transrectal ultrasound imaging using end-firing transrectal ultrasound probes. We conduct a phantom study to evaluate the capabilities of plane-wave, fan-beam and wide-beam ultrasound for prostate imaging. The penetration depth decreases from plane-wave to fan-beam to wide-beam ultrasound, while the imaging area increases. We use a transrectal ultrasound prototype consisting of a 256-channel Verasonics Vantage system and a GE intracavitary curved linear array to form plane-wave, fan-beam and wide-beam ultrasound. Our imaging results of a tissue-mimicking prostate phantom show that wide-beam ultrasound produces the best images among the three beam types when using the same number of ultrasound incident angles.
Automated lung cancer detection based on multimodal features extracting strategy using machine learning techniques
Lung cancer is one of the leading causes of cancer-related deaths worldwide, with a low survival rate because diagnosis often occurs only at an advanced stage. In the past, researchers developed various image processing tools to detect lung cancers of the types non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC), based on a few feature extraction methods. In this research, we extracted multimodal features such as texture, morphological, entropy-based, scale-invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features, considering multiple aspects and shape morphologies. We then applied robust machine learning classification methods such as Naïve Bayes, decision tree and support vector machine (SVM) with Gaussian, radial basis function (RBF) and polynomial kernels. Jack-knife 10-fold cross validation was applied for training/validation of the data. The performance was evaluated in terms of sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), total accuracy (TA), false positive rate (FPR) and area under the receiver operating characteristic curve (AUC). The highest detection accuracy (TA = 100%) was obtained with entropy, SIFT and texture features using Naïve Bayes, and with texture features using the SVM polynomial kernel. Moreover, the highest separation (AUC = 1.00) was obtained using entropy, morphological, SIFT and texture features with the Naïve Bayes classifier, and using texture features with the decision tree and SVM polynomial kernel.
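The reported sensitivity, specificity, PPV, NPV, total accuracy and FPR all derive from the same four confusion-matrix counts. A minimal illustration with a hypothetical `binary_metrics` helper on toy labels (1 = cancer, 0 = normal):

```python
def binary_metrics(y_true, y_pred):
    """Classification metrics from paired binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
        "fpr": fp / (fp + tn),           # false positive rate
    }
```

In a 10-fold cross-validation these counts would be accumulated across folds before computing the ratios.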
Multiband terahertz imaging simulation of skin using freezing to enhance penetration depth
The Terahertz (THz) frequency region of the electromagnetic spectrum is defined as radiation of 0.1 to 10.0 × 10¹² Hz. A unique feature of the 0.1 to 2.0 THz frequency band is that there is a high disparity between liquid water and ice absorption, with ice being 100 times more permeable to THz radiation. The high absorption by liquid water limits the deployment of the 0.1 to 2.0 THz band for imaging and therapeutics to 0.2-0.3 mm in soft tissues. By freezing tissue, however, an imaging depth of 5.0 mm is achievable. Computational finite difference time domain (FDTD) modelling was undertaken using realistic tissue phantoms to explore this enhanced depth for imaging of frozen skin lesions such as melanomas. The computational modeling confirms that there is adequate contrast between normal frozen skin and pathological lesions. The imaging is enhanced by sampling the frozen tissue at both 0.45 and 1.00 THz. A method of analysing the data in a simplified, systematic way is introduced by dividing the returning signal into time regions and comparing their relative intensity. The concept will be developed into a “THz eye”, where the differences in THz absorption and refraction of tissues between individual THz frequencies are exploited for superior imaging.
Digital optical microscope (housing inside biosafety cabinet): a promising imaging technology for micro- and cell-biology, and histopathology
Conventional laboratory activities in research, particularly in biology and histopathology, involve back-and-forth movement of biological or tissue specimens around the laboratory workspace, from the biosafety cabinet (BSC) to the workbench of the microscopy imaging unit. This mobility poses a serious risk of contaminating the biological specimen. We design and develop an optical microscopic imaging system that can be adapted inside a BSC, which remains a technological challenge. This automated system, with motion control in all three directions, facilitates recording as well as real-time observation of biological procedures performed inside the BSC. Experimental validation studies were carried out on diverse biological specimens and demonstrate that this imaging technology not only makes the study of minuscule biological specimens (cells and bacteria) easy, but also enables sophisticated biological/clinical procedures.
Feasibility study of silicon photomultiplier based frequency domain diffuse optical tomography
Frequency domain diffuse optical tomography (FD-DOT) has been considered a reliable method to quantify the absolute optical properties of tissues. In conventional FD-DOT, photomultiplier tubes (PMTs) coupled with optical fiber bundles are used as detectors, making the imaging system expensive and structurally complex. In this study, we propose to utilize the silicon photomultiplier (SiPM) to replace the PMT as the detector in the FD-DOT system. A SiPM can provide a similar level of gain as a PMT, while its price is much lower, and the use of optical fiber bundles can be avoided, which makes it possible to build a system with a simple structure. The feasibility of the SiPM-based FD-DOT was studied experimentally. A 660 nm laser diode was utilized as the source to irradiate the phantom, modulated from 10 MHz to 40 MHz in 10 MHz steps. SiPM detectors with a 1 mm2 detection area were used to collect the photons emitted from the phantom. We measured at several different source-detector distances for each modulation frequency, during which the bias voltage of the SiPM remained constant. The results showed that we could recover the linear relationship between the phase lag and the transmission distance. We also obtained the expected linear curve of the logarithm of the product of the amplitude and distance versus transmission distance. In addition, the absorption and scattering coefficients of the phantom were calculated from the slopes of the fitted curves, which showed good consistency across the different modulation frequencies. The experimental results demonstrate that it is feasible to build an FD-DOT system based on SiPMs.
Artifact reduction in simultaneous tomosynthesis and mechanical imaging of the breast
Predrag R. Bakic, Magnus Dustler, Daniel Förnvik, et al.
Mechanical imaging (MI) uses a pressure sensor array to estimate the stiffness of lesions. Recent clinical studies have suggested that MI combined with digital mammography may reduce false positive findings and negative biopsies by over 30%. Digital breast tomosynthesis (DBT) has been adopted progressively in cancer screening. The tomographic nature of DBT improves lesion visibility by reducing tissue overlap in reconstructed images. For maximum benefit, DBT and MI data should be acquired simultaneously; however, that arrangement produces visible artifacts in DBT images due to the presence of the MI sensor array. We propose a method for reducing artifacts during the DBT image reconstruction. We modified the parameters of a commercial DBT reconstruction engine and investigated the conspicuity of artifacts in the resultant images produced with different sensor orientations. The method was evaluated using a physical anthropomorphic phantom imaged on top of the sensor. Visual assessment showed a reduction of artifacts. In a quantitative test, we calculated the artifact spread function (ASF), and compared the ratio of the mean ASF values between the proposed and conventional reconstruction (termed ASF ratio, RASF). We obtained a mean RASF of 2.74, averaged between two analyzed sensor orientations (45° and 90°). The performance varied with the orientation and the type of sensor structures causing the artifacts. RASF for wide connection lines was larger at 45° than at 90° (5.15 vs. 1.00, respectively), while for metallic contacts RASF was larger at 90° than at 45° (3.31 vs. 2.21, respectively). Future work will include a detailed quantitative assessment, and further method optimization in virtual clinical trials.
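The RASF figure of merit used above is simply the ratio of the mean artifact spread function (ASF) values between the conventional and proposed reconstructions. A one-function sketch with a hypothetical name and toy values:

```python
from statistics import fmean

def asf_ratio(asf_conventional, asf_proposed):
    """RASF: ratio of mean ASF values between conventional and proposed
    reconstructions. Values > 1 mean the proposed method reduces artifacts."""
    return fmean(asf_conventional) / fmean(asf_proposed)
```

For example, halving the mean ASF yields RASF = 2.0; the abstract's mean RASF of 2.74 would be obtained by averaging the ratios measured at the two sensor orientations.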
A new test method to assess the representation of spiculated mass-like targets in digital mammography and breast tomosynthesis
Elisabeth Salomon, Friedrich Semturs, Lesley Cockmartin, et al.
In this work we tested different materials for 3D printing of spiculated mass models for their incorporation into an existing 3D structured phantom for performance testing of FFDM and DBT. Counting the number of spicules as a function of dose was then evaluated as a possible extra test feature expressing conspicuity next to detectability. Seven printable materials were exposed together with a PMMA step wedge and material samples with known linear attenuation coefficient to determine PMMA equivalent thickness and linear attenuation coefficient, respectively. Next, two models of spiculated masses were created each with a different complexity in terms of number of spicules. The visibility of the number of spicules of a 3D printed spiculated mass model loosely placed in the phantom or embedded into two different printing materials was assessed for FFDM and DBT. Vero White pure was chosen as the most appropriate material for the printing of masses whereas Vero Clear and Tango+ were chosen as background materials. The visibility of spicules was best in the loose mass models and better in the background material Tango+ compared to Vero Clear. While the discrimination of the different spicules could be assessed in FFDM and DBT, as expected only a limited dose sensitivity was found for the visibility of spicules evaluated for the different background materials and at different beam energies.
A framework for flexible comparison and optimization of x-ray digital tomosynthesis
Frank Smith, Ying Chen
In this paper, we develop a framework for comparison and optimization of x-ray imaging configurations for digital tomosynthesis. Digital tomosynthesis is a technology that reconstructs three-dimensional information from a limited number of low-dose two-dimensional projection images. Breast cancer is the most common cancer among American women, and early detection is the best hope for decreasing breast cancer mortality. Although mammography has long been the leading technology for breast cancer detection, digital tomosynthesis has become increasingly popular. Most current breast tomosynthesis systems use a design in which a single x-ray tube moves along an arc above the object over a certain angular range, while parallel imaging configurations are used in areas such as digital chest tomosynthesis and multi-beam stationary breast tomosynthesis. In this paper we present a preliminary investigation of computational analysis of impulse response and wire-simulation characterization for optimizing digital tomosynthesis imaging configurations using our framework.
Effects of angular range on lesion margin assessment in contrast-enhanced digital breast tomosynthesis
Contrast-Enhanced Digital Breast Tomosynthesis (CEDBT) provides quasi three-dimensional contrast enhancement of breast lesions and has been investigated for breast cancer detection and lesion assessment. The acquisition geometry of CEDBT may affect its ability to detect and assess contrast-enhanced lesions. In this study, we investigate the effects of the angular range of CEDBT on lesion margin assessment. The CIRS BR3D phantom with iodine inserts was imaged for four angular ranges between 15 and 45 degrees with the same total glandular dose using a prototype CEDBT system. The artifact spread functions of iodine objects with various sizes were measured. The detectability of iodine objects overlaid in the depth direction with different separation distances was evaluated using the signal-difference-to-noise ratio. Clinical images of malignant lesions were acquired with 25 projections over approximately 50 degrees, and CEDBT images for various angular ranges were generated using a subset or all of the projection images and were assessed for lesion margins. Our results show that increasing the angular range of CEDBT improves the separation of overlapping iodine signals in phantom images, and the margins of malignant mass lesions are better identified. In conclusion, CEDBT with a wide angular range may improve lesion characterization, e.g. lesion size, morphology and location, and provide better performance than contrast enhanced digital mammography (CEDM) for applications such as guidance of biopsy and evaluation of treatment response.
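The signal-difference-to-noise ratio used above is conventionally the mean signal difference between an object ROI and the local background, divided by the background noise; a toy sketch under that assumption, with 1-D lists standing in for pixel ROIs:

```python
from statistics import fmean, pstdev

def sdnr(signal_roi, background_roi):
    """Signal-difference-to-noise ratio: (mean iodine ROI value minus
    mean background value) divided by the background standard deviation."""
    return (fmean(signal_roi) - fmean(background_roi)) / pstdev(background_roi)
```

A higher SDNR for a given pair of depth-overlapping iodine objects indicates that the wider angular range has better separated their signals.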
Influence of background trends on noise power spectrum at zero frequency in radiography imaging
Eunae Lee, Dong Sik Kim
Noise power spectrum (NPS) at zero frequency can efficiently represent noise properties of radiography detectors. However, accurately and precisely measuring the zero-frequency NPS is difficult due to several factors. In this paper, such factors are first summarized. Among the factors, we observe unpredictable background trends, which arise from unstable detector hardware, and propose a stochastic trend model. This trend inflates the measured NPS at zero frequency depending on the segment size used for calculating periodograms. Influences of stochastic trends were first observed with synthetic data. A flat-panel mammography detector, based on an a-Se photoconductor, was then used to experimentally observe the influences of background trends. We observed that the inflated value at zero frequency increased as the segment size increased due to the stochastic trend, while the true zero-frequency NPS was constant for different segment sizes.
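The described inflation of the zero-frequency NPS estimate with segment size can be illustrated with a toy 1-D calculation: under a deterministic linear background trend (standing in for the stochastic trend), the per-segment DC power grows with segment length. This is an illustrative sketch with hypothetical names, not the authors' estimator:

```python
def nps_zero_estimate(signal, seg_len):
    """Toy 1-D zero-frequency NPS estimate: average over non-overlapping
    segments of |DC coefficient of (segment - global mean)|^2 / seg_len."""
    gmean = sum(signal) / len(signal)
    n_seg = len(signal) // seg_len
    total = 0.0
    for k in range(n_seg):
        seg = signal[k * seg_len:(k + 1) * seg_len]
        dc = sum(v - gmean for v in seg)  # zero-frequency DFT coefficient
        total += (dc * dc) / seg_len
    return total / n_seg

# A pure linear background trend with no quantum noise: the zero-frequency
# estimate grows with segment size, mimicking the inflation described above.
trend = [0.001 * i for i in range(1024)]
```

For genuinely white noise the same estimator is roughly constant in segment size, which is how the trend-induced inflation is distinguished from the true zero-frequency NPS.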
System detective quantum efficiency (DQESYS) as an index of performance for chest radiography system (bucky and bedside) at four patient equivalent thicknesses
Sunay Rodríguez Pérez, Philippe Moussalli, Hilde Bosmans, et al.
Imaging performance of a flat panel-based chest radiography system was evaluated using a recently introduced parameter: system detective quantum efficiency (DQE), i.e. DQESYS. The DQESYS calculation includes the signal-to-noise ratio (SNR) transfer efficiency of the x-ray detector (detector DQE) and of the antiscatter device (DQEASD). Posterior-anterior (PA) and bedside imaging techniques were evaluated using poly(methyl methacrylate) (PMMA) thicknesses of 90, 130, 160 and 190 mm, equivalent to the lung and mediastinum regions covering a range of three patient sizes. Detector DQE was measured for beams without scatter using aluminum filters with similar half-value-layer (HVL) as the PMMA blocks. The grid efficiency (DQEASD) was calculated from the primary and scatter grid transmissions for the four PMMA thicknesses. Acquisition settings were 120 kV (grid in) for the bucky PA technique and 105 kV (grid out) for bedside imaging. Results showed an increase in DQESYS for PA examinations with increasing PMMA thickness, opposite to the detector DQE. This can be attributed to the increasing efficiency of the antiscatter grid (i.e. DQEASD) as PMMA thickness is increased, consistent with the expected result that grid use is important for thicker patients. DQESYS for bedside imaging was lower than for PA because no grid is used for bed examinations, so DQESYS reverts to the detector DQE. DQESYS was successfully used to evaluate the performance of the system in the presence of scatter radiation with the antiscatter device in place; the results showed the importance of such devices for chest radiography.
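A common convention defines the antiscatter device's SNR² transfer efficiency as DQEASD = Tp²/Tt, where Tp and Tt are the grid's primary and total transmissions; assuming that convention (the abstract does not give its exact formula), the system DQE factorization described above can be sketched as:

```python
def dqe_asd(t_primary, t_total):
    """SNR^2 transfer efficiency of the antiscatter device, assumed here
    to be (primary transmission)^2 / (total transmission)."""
    return t_primary ** 2 / t_total

def dqe_sys(dqe_detector, t_primary, t_total):
    """System DQE as the detector DQE scaled by the grid efficiency."""
    return dqe_detector * dqe_asd(t_primary, t_total)
```

For illustrative values Tp = 0.7 and Tt = 0.35 (heavy scatter removed by the grid), DQEASD = 1.4 > 1, i.e. the grid raises DQESYS above the detector DQE, consistent with the finding that grids matter most for thicker patients.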
MRI-based pseudo CT generation using classification and regression random forest
We propose a method to generate patient-specific pseudo CT (pCT) from routinely-acquired MRI based on semantic information-based random forests and auto-context refinement. An auto-context model with patch-based anatomical features is integrated into a classification forest to generate and improve semantic information. The concatenation of semantic information with anatomical features is then used to train a series of regression forests based on the auto-context model. The pCT of a newly acquired MRI is generated by extracting anatomical features and feeding them into the well-trained classification and regression forests for pCT prediction. The proposed algorithm was evaluated using 11 patients’ data with brain MR and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) are 57.45±8.45 HU, 28.33±1.68 dB, and 0.97±0.01. The Dice similarity coefficients (DSC) for air, soft tissue and bone are 97.79±0.76%, 93.32±2.35% and 84.49±5.50%, respectively. We have developed a novel machine-learning-based method to generate patient-specific pCT from routine anatomical MRI for MRI-only radiotherapy treatment planning. This pseudo CT generation technique could be a useful tool for MRI-based radiation treatment planning and MRI-based PET attenuation correction in PET/MRI scanners.
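The evaluation metrics above (MAE, PSNR, NCC) have standard definitions that can be sketched compactly; toy 1-D lists stand in for the CT/pCT volumes, and `peak` is an assumed maximum intensity parameter:

```python
import math

def mae(ct, pct):
    """Mean absolute error between reference CT and pseudo CT."""
    return sum(abs(a - b) for a, b in zip(ct, pct)) / len(ct)

def psnr(ct, pct, peak):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = sum((a - b) ** 2 for a, b in zip(ct, pct)) / len(ct)
    return 10.0 * math.log10(peak ** 2 / mse)

def ncc(ct, pct):
    """Normalized cross correlation of the mean-subtracted volumes."""
    n = len(ct)
    ma, mb = sum(ct) / n, sum(pct) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(ct, pct))
    den = math.sqrt(sum((a - ma) ** 2 for a in ct) *
                    sum((b - mb) ** 2 for b in pct))
    return num / den
```

For CT data the `peak` value would typically be chosen from the HU dynamic range of the reference image.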
Deep learning based guidewire segmentation in x-ray images
X-ray fluoroscopy is commonly used during liver embolization procedures to guide intravascular devices (e.g. guidewire and catheter) to the branches of the hepatic arteries feeding tumors. A vascular roadmap can be created to provide a reference for the device position. Recently, techniques have been developed to create dynamic vessel masks to compensate for respiratory motion. In order to superimpose the intravascular guidewire onto the vessel mask, robust segmentation is required. Commonly used techniques often use mask subtraction to isolate the device in x-ray images. However, this is not suitable due to the motion in liver applications. The proposed method uses a deep convolutional neural network to segment the guidewire in native (unsubtracted) x-ray images. The neural network uses an encoder/decoder structure, which is based on the VGG-16 network. To create a large dataset of annotated images, simulated images were created based on 3D digital subtraction angiography acquisitions of hepatic arteries in porcine studies. Random guidewire shapes were generated within the vascular volume and superimposed on the original non-contrast projection images. The network was trained using a set of 56,768 images created from 10 acquisitions. The segmentation results of the trained network were compared to a mask-subtraction-based algorithm for an independent validation data set. The deep learning algorithm (Dice = 58.1%, false negative rate (FNR) = 9.6%) outperformed the subtraction technique (Dice = 23.7%, FNR = 40.8%). This study shows that the deep learning approach is suitable for robust segmentation of curvilinear structures such as guidewires and could be used to superimpose the segmented device on dynamic roadmaps.
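The Dice score and false-negative rate used to compare the two segmentation approaches can both be computed from per-pixel counts; a minimal sketch on toy binary masks, with a hypothetical helper name:

```python
def dice_and_fnr(pred, truth):
    """Dice score and false-negative rate for binary pixel masks
    (lists of 0/1, 1 = guidewire pixel)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    dice = 2.0 * tp / (2 * tp + fp + fn)
    fnr = fn / (tp + fn)
    return dice, fnr
```

A high Dice with a low FNR, as reported for the deep learning method, means the predicted guidewire overlaps the annotation well and misses few device pixels.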
Estimation of attenuator mask from region of interest (ROI) dose-reduced images for brightness equalization using convolutional neural networks
Region-of-interest (ROI) fluoroscopy uses a differential x-ray attenuator to reduce the dose to the patient in the periphery while maintaining regular dose within the ROI treatment area, resulting in an image with regions of differential brightness. The brightness difference can be corrected by subtracting a mask of the ROI attenuator from the dose-reduced image. The purpose of this work is to implement a convolutional neural network (CNN) capable of deriving a mask of the ROI attenuator from the dose-reduced images, which can be used to equalize the brightness in the dose-reduced images without a pre-acquired mask. A data set of 10,000 ROI dose-reduced images of various objects including anthropomorphic head and chest phantoms was generated with different ROI positions and sizes. A 22-layer CNN was developed to derive a mask of the ROI attenuator from the dose-reduced image. The network was trained on 30% and tested on 15% of the images from the ROI image data set. The trained CNN was used on the remaining 55% of the data set to generate the ROI mask, and the average computation time for each image was calculated to be 70 ms. The mean square error (MSE) for the testing data set was calculated to be 2.03e-05. For the remaining data set of 5500 images, the average MSE between the network output and the corresponding expected ROI mask was calculated to be 2.21e-05. The masks generated by the CNN were successfully used to restore and equalize the brightness of the dose-reduced images.
Combined low-dose simulation and deep learning for CT denoising: application of ultra-low-dose cardiac CTA
This study presents a novel deep learning approach for denoising of ultra-low-dose cardiac CT angiography (CCTA) by combining a low-dose simulation technique and a convolutional neural network (CNN). Twenty-five CT angiography (CTA) scans acquired with ECG gating (70 – 100 kVp, 100 – 200 mAs) were fed into the low-dose simulation tool to generate a paired set of simulated low-dose CTA and synthetic low-dose noise. A modified U-net model with 4x4 kernel size and five layers was trained with this paired dataset to predict the low-dose noise from a given low-dose CCTA image. To generate the simulated low-dose CTA, differing levels of low-dose conditions from 10% to 2.5% were applied. Five independent ultra-low-dose CTA scans (70 – 100 kVp, 4% of full dose) with ECG gating were used for testing the denoising performance of the trained U-net. A denoised CCTA image was obtained by subtracting the noise image predicted by the U-net from the ultra-low-dose CCTA image. The performance was evaluated quantitatively in terms of noise measurements in the ascending aorta and left/right ventricles, and qualitatively by comparing the noise pattern and image quality. The average image noise in the ascending aorta and left/right ventricles was 149±41HU, 200±15HU, and 164±21HU in ultra-low-dose images, and 46±14HU, 66±9HU, and 55±12HU in deep learning-denoised images. The overall noise was significantly reduced by 70%. The noise pattern was indistinguishable from that of a real CCTA image, and the image quality of denoised CCTA images was much higher than that of ultra-low-dose CCTA images.
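The denoising scheme above is residual: the network predicts a noise image, which is subtracted from the ultra-low-dose input, and noise is then quantified as the HU standard deviation within an ROI. A toy sketch of those three steps, with hypothetical helper names and 1-D lists standing in for images:

```python
from statistics import pstdev

def denoise(low_dose, predicted_noise):
    """Residual denoising: low-dose image minus network-predicted noise."""
    return [v - n for v, n in zip(low_dose, predicted_noise)]

def roi_noise(roi):
    """Image noise as the standard deviation of HU values in an ROI."""
    return pstdev(roi)

def noise_reduction_pct(noise_before, noise_after):
    """Percent noise reduction between the input and denoised ROIs."""
    return 100.0 * (noise_before - noise_after) / noise_before
```

For example, noise falling from 150 HU to 45 HU corresponds to a 70% reduction, the order of magnitude reported above.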
Learning from our neighbours: a novel approach on sinogram completion using bin-sharing and deep learning to reconstruct high quality 4DCBCT
Joel Beaudry, Pedro L. Esquinas, Chun-Chien Shieh
Inspired by the success of deep learning applications in the restoration of low-dose and sparse CT images, we propose a novel method to reconstruct high-quality 4D cone-beam CT (4DCBCT) images from sparse datasets. Our approach combines the idea of bin-sharing with a deep convolutional neural network (CNN) model. More specifically, for each respiratory bin, an initial estimate of the patient sinogram is obtained by taking projections from adjacent bins and performing linear interpolation. Subsequently, the estimated sinogram is propagated through a CNN that predicts a full, high-quality sinogram. Lastly, the predicted sinogram is reconstructed with an iterative CBCT algorithm such as the conjugate gradient (CG) method. The CNN model, which we refer to as Sino-Net, was trained under different loss functions. We assessed the performance of the proposed method in terms of image quality metrics (mean squared error, mean absolute error, peak signal-to-noise ratio, and structural similarity) and tumor motion accuracy (tumor centroid deviation with respect to the ground truth). Finally, we compared our approach against other state-of-the-art methods that compensate for motion and reconstruct 4DCBCT images. Overall, the presented prototype model was able to substantially improve the quality of 4DCBCT images, removing most of the streak artifacts and decreasing the noise with respect to the standard CG reconstructions.
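The bin-sharing initialization can be sketched in a few lines: angles missing from the current respiratory bin are filled from the neighboring bins. This toy version averages the two adjacent bins (a simple stand-in for the paper's linear interpolation along angle); array shapes and values are hypothetical:

```python
import numpy as np

def bin_sharing_estimate(sinograms, bin_idx, measured):
    """Initial sinogram estimate for one respiratory bin.

    sinograms : (n_bins, n_angles, n_det) array of per-bin sinograms
    measured  : boolean mask over angles, True where bin_idx has data

    Angles not measured in this bin are filled with the average of the
    two adjacent bins (bins are treated as cyclic over the breathing
    cycle).
    """
    n_bins = sinograms.shape[0]
    prev_b = (bin_idx - 1) % n_bins
    next_b = (bin_idx + 1) % n_bins
    est = sinograms[bin_idx].copy()
    est[~measured] = 0.5 * (sinograms[prev_b][~measured]
                            + sinograms[next_b][~measured])
    return est

# 3 bins, 4 projection angles, 2 detector pixels
sino = np.arange(24, dtype=float).reshape(3, 4, 2)
meas = np.array([True, False, True, False])
est = bin_sharing_estimate(sino, 1, meas)
```

In the full method this estimate is then refined by the Sino-Net CNN before iterative reconstruction.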
Investigation on slice direction dependent denoising performance of convolutional neural network in cone-beam CT images
In FDK reconstruction, the distribution of noise power differs between axial slices (i.e., high-pass noise) and coronal slices (i.e., low-pass or white noise), which may result in different detectability of the same objects. In this work, we examined the denoising performance of convolutional neural networks trained on axial and coronal slice images separately, and how the direction of the image slice affects the detectability of small objects in denoised images. We used a modified version of U-Net. For network training, we used the Adam optimizer with a learning rate of 0.001 and a batch size of 4, and VGG loss was used. The XCAT simulator was used to generate the training, validation, and test datasets. Projection data were acquired by Siddon's method for the XCAT phantoms, and different levels of Poisson noise were added to the projection data to generate quarter-dose and normal-dose CT images, which were then reconstructed by the FDK algorithm. The reconstructed quarter-dose and normal-dose CT images were used as the training, validation, and test datasets for our network. The performance of the denoised output images from U-Net-Axial (i.e., the network trained on axial images) and U-Net-Coronal (i.e., the network trained on coronal images) was evaluated using structural similarity (SSIM) and mean squared error (MSE). The results showed that output images from both U-Net-Axial and U-Net-Coronal have improved image quality compared to quarter-dose images. However, the detectability of small objects was higher for U-Net-Coronal.
Performance comparison of convolutional neural network based denoising in low dose CT images for various loss functions
Convolutional neural networks (CNNs) are now the most promising denoising method for low-dose computed tomography (CT) images. The goal of denoising is to restore original details as well as to reduce noise, and the performance is largely determined by the loss function of the CNN. In this work, we investigate the denoising performance of CNNs in low-dose CT images for three different loss functions: mean squared error (MSE), perceptual loss using the pretrained VGG network (VGG loss), and a weighted sum of the MSE and VGG losses (VGGMSE loss). CNNs are trained to map quarter-dose CT images to normal-dose CT images in a supervised fashion. The image quality of the denoised images is evaluated by normalized root mean squared error (NRMSE), structural similarity index (SSIM), mean and standard deviation (SD) of HU values, and the task SNR (tSNR) of the non-prewhitening eye filter observer model (NPWE). Our results show that the CNN trained with MSE loss achieves the best performance in NRMSE and SSIM despite significant image blur. On the other hand, the CNN trained with VGG loss scores best in SD with well-preserved details, but has the worst accuracy in the mean HU value. The CNN trained with VGGMSE loss shows the best performance in terms of tSNR and the mean HU value, and consistently high performance in the other metrics. In conclusion, VGGMSE loss mitigates the drawbacks of the MSE and VGG losses, making it much more effective for CT denoising tasks.
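The composite VGGMSE loss is simply a weighted sum of a pixel-wise term and a feature-space term. A minimal numpy sketch, where the `features` callable stands in for a fixed, pretrained VGG feature extractor and the weight value is an illustrative assumption (the paper does not state its weighting):

```python
import numpy as np

def mse_loss(pred, target):
    """Pixel-wise mean squared error."""
    return float(np.mean((pred - target) ** 2))

def perceptual_loss(pred, target, features):
    """MSE in feature space; `features` stands in for a pretrained
    VGG feature extractor."""
    return float(np.mean((features(pred) - features(target)) ** 2))

def vggmse_loss(pred, target, features, weight=0.1):
    """Weighted sum of pixel-wise MSE and perceptual (VGG) loss."""
    return mse_loss(pred, target) + weight * perceptual_loss(pred, target, features)

# Toy check with a horizontal-gradient filter as the "feature extractor".
feat = lambda x: np.diff(x, axis=-1)
a = np.zeros((2, 4))
b = np.ones((2, 4))
```

Tuning the weight trades the MSE term's HU accuracy against the perceptual term's detail preservation, which is the balance the abstract reports.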
Comparison of deep learning approaches to low dose CT using low intensity and sparse view data
Recently there has been considerable interest in using deep learning to improve the quality of low dose CT (LDCT) images. LDCT may be achieved by reducing the beam intensity, or by acquiring sparse-view data at full beam intensity. Additionally, if reducing beam intensity, one can consider denoising either the raw (sinogram) data, or the reconstructed image. We compare the performance of a convolutional neural network (CNN) in improving image quality using three approaches: denoising low-intensity images, denoising low-intensity sinograms prior to reconstruction, and denoising sparse-view images. Our results indicate that images produced from low-intensity data are superior to images produced from sparse-view data, after correction by the CNN. Additionally, in the low-intensity case, denoising in the sinogram or image domain provides comparable image quality.
Deep learning-based artifact detection for diagnostic CT images
Prakhar Prakash, Sandeep Dutta
Calibrated detector response is crucial to good image quality in diagnostic CT and imaging systems in general. Defects during manufacturing, component failures, and system aging can introduce shifts in detector response which, if left uncorrected, can lead to image artifacts. Such artifacts reduce image quality and can cause misdiagnosis in clinical practice. In this work, a deep learning (DL)-based artifact detection method is developed to automatically screen for common detector-induced artifacts such as rings, streaks, and bands in images. To circumvent the difficulty of obtaining and annotating artifact images, a diagnostic CT physics simulator is used to generate CT images across a range of acquisition and reconstruction settings. Artifacts are introduced in the projection view data by perturbing the detector gain relative to the gain-normalization scan during the simulation. The artifact images and the corresponding ground-truth segmentations of artifact type and location serve as the training dataset. A linear support vector machine with squared hinge loss (L2-SVM) was used as the loss function during training, as early experiments showed small but consistent improvements over the more commonly used cross-entropy loss for segmentation. The trained network achieved ~97%, ~86%, and ~93% independent test accuracy for ring, streak, and band artifacts, respectively. Since deep learning methods learn by example, the detection method is not limited to the imaging scenarios presented in this work and can be extended to other applications.
Multi-modal MRI segmentation of sarcoma tumors using convolutional neural networks
Small animal imaging is essential in building a bridge from basic science to the clinic by providing the confidence necessary to move new cancer therapies to patients. However, there is considerable variability in preclinical imaging, including tumor volume estimations based on tumor segmentation procedures, which can be clearly user-biased. Our group is engaged in developing quantitative imaging methods which will be applied in the preclinical arm of a co-clinical trial studying synergy between anti-PD-1 treatment and radiotherapy using a genetically engineered mouse model of soft tissue sarcoma. This study focuses on a convolutional neural network (CNN)-based method for automatic tumor segmentation based on multimodal MRI images, i.e., T1-weighted, T2-weighted, and T1-weighted with contrast agent. Our images were acquired on a 7.0 T Bruker Biospec small animal MRI scanner. Preliminary results show that our U-net structure and 3D patch-wise approach, using both Dice and cross-entropy loss functions, delivers strong segmentation results. We also compared segmentation performance using only T2-weighted versus multimodal MR images for CNN segmentation. Our results show that Dice similarity coefficients were higher when using multimodal versus single T2-weighted data (0.84 ± 0.05 vs. 0.81 ± 0.03). In conclusion, we successfully established a segmentation method for preclinical MR sarcoma data based on deep learning. This approach has the advantage of reducing user bias in tumor segmentation and improving the accuracy and precision of tumor volume estimations for co-clinical cancer trials.
A machine learning algorithm for detecting abnormal respiratory cycles in thoracic dynamic MR image acquisitions
Changjian Sun, Jayaram K. Udupa, Yubing Tong, et al.
4D image construction of thoracic dynamic MRI data provides clinicians with the capability of examining the dynamic function of the left and right lungs, left and right diaphragms, and left and right chest walls separately. For methods based on free-breathing rapid 2D slice acquisitions, part of the acquired data often cannot be used by the 4D image reconstruction algorithm because some patients hold their breath or breathe in patterns that differ from regular tidal breathing. Manually eliminating abnormal image slices representing such abnormal breathing is very labor intensive, considering that typical acquisitions contain ~3000 slices. This paper presents a novel respiratory signal classification algorithm based on optical flow techniques and an SVM classifier. The optical flow technique is used to track the speed of the diaphragm, and motion features are extracted to train the SVM classification model. Because of the limited number of abnormal samples usually observed, 118 abnormal signals were generated by simulation, by appropriately transforming normal signals, so that the numbers of normal and abnormal signals each reached 160. In model training, our goal is to reduce the false negative (FN) rate for abnormal signal detection as much as possible, even at the cost of an increased false positive (FP) misclassification rate for normal signals. Over the 10 experiments we conducted, the average FN rate and FP rate were 5% and 26%, respectively. The accuracy over all (real and simulated) samples was 85%. Among the real samples, 82% of the abnormal data were correctly detected.
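To make the pipeline concrete, a toy sketch of the feature-extraction idea: summary statistics of a diaphragm-speed trace that separate tidal breathing from a breath-hold. The feature names and threshold are illustrative assumptions only; the paper derives its features from optical flow and classifies them with an SVM, neither of which is modeled here:

```python
import numpy as np

def motion_features(speed):
    """Summary features of a diaphragm-speed trace, of the kind that
    could feed a breathing-pattern classifier (illustrative only)."""
    s = np.abs(np.asarray(speed, dtype=float))
    return {
        "mean_speed": float(s.mean()),
        "speed_std": float(s.std()),
        "stationary_frac": float(np.mean(s < 0.05)),  # breath-hold indicator
    }

# Simulated traces: regular tidal breathing vs. a mid-scan breath hold.
t = np.linspace(0.0, 10.0, 500)
tidal = np.cos(2 * np.pi * 0.25 * t)
breath_hold = tidal.copy()
breath_hold[200:400] = 0.0             # diaphragm stationary for ~4 s
f_tidal = motion_features(tidal)
f_hold = motion_features(breath_hold)
```

A breath-hold trace yields a much larger stationary fraction and lower mean speed, which is the kind of separation a classifier can exploit.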
A framework for realistic virtual clinical trials in photon counting computed tomography
Ehsan Abadi, Brian Harrawood, Jayasai Rajagopal, et al.
Although photon counting systems have shown strong clinical potential, this technology has not yet been fully evaluated or optimized for specific clinical applications. The purpose of this study was to develop a framework for realistic virtual clinical trials (VCTs) in photon counting CT (PCCT) imaging. We developed a photon counting CT simulator based on the geometry and physics of an existing research prototype scanner. The developed simulator models primary, scatter, and noise signals, detector responses, vendor-specific bowtie filters and x-ray spectra, axial/helical trajectories, vendor-specific acquisition modes, and multiple energy thresholds per detector pixel. The simulation procedure is accelerated by parallel processing using multiple GPUs. The generated projection images can be reconstructed using generic reconstruction algorithms as well as a commercial reconstruction software (ReconCT, Siemens). A computational model of a physical Mercury phantom was imaged at multiple energy thresholds (25 and 75 keV) and dose levels (36, 72, 144, and 216 mAs). Noise magnitude was measured in the simulated images and compared against noise measurements in a real scan acquired with a research prototype photon counting scanner (Siemens Healthcare). The results showed that our simulator was capable of synthesizing realistic photon counting CT data. The simulator can be combined with realistic 4D high-resolution XCAT phantoms with intra-organ heterogeneities to conduct VCTs for specific clinical applications. This framework can greatly facilitate the evaluation, optimization, and eventual clinical use of PCCT.
CZT modeling for photon counting computer tomography
Xiaochun Lai, Liang Cai, Kevin Zimmerman, et al.
Accurate physics modeling of a photon counting detector is essential for detector design and performance evaluation, computed tomography (CT) system-level performance investigation, material decomposition, and image reconstruction. The detector response is complicated because various effects are involved, including fluorescence x-rays, the primary electron path, charge diffusion, charge repulsion, and charge trapping. In this paper, we present a comprehensive detector modeling approach that takes all of these effects into account.
Photon-counting detector simulation: Monte-Carlo energy deposition, physics-based charge transport and current induction, and SPICE electronics
Kevin C. Zimmerman, Liang Cai, Xiaochun Lai, et al.
Photon counting detectors are an appealing approach to spectral computed tomography because of their theoretical benefits over conventional detectors. Detailed modeling and simulation are important for capturing the critical aspects of the counting and spectral performance of the detector. An approach to photon counting detector simulation is presented using a custom-developed software program. The software consists of Monte Carlo energy deposition, physics-based charge transport and current induction, and SPICE electronic simulation. It utilizes GATE behind the scenes for the photon interactions and energy deposition, and ngspice for the SPICE electronic simulations. Various sensor geometries and definitions can be specified to simulate individual detector pixels or entire anode arrays for large-scale simulations. The simulation requires the specification of planar x-ray sources, which can be defined on a per-channel basis with an energy distribution and flux. Given a sensor definition and a series of x-ray sources, the program calculates the energy-bin counts read out from each anode in the sensor array. The program can be used to study the detector response of various sensor and system geometries, including in the presence of anti-scatter grids, the performance of anti-charge-sharing implementations, material decomposition algorithms, etc.
Optimal acquisition setting for dual-contrast imaging with gadolinium and iodine on a research whole-body photon-counting-detector (PCD) CT
Shengzhen Tao, Yizhong Wu, Kishore Rajendran, et al.
Photon-counting detectors (PCDs) can resolve the energy of incident x-ray photons, which allows simultaneous imaging of two contrast materials, such as iodine (I) and gadolinium (Gd), with a single scan. This capability may allow reduction of patient radiation dose for clinical applications that typically require multi-phase acquisitions by injecting different contrast media at different times and scanning only once to differentiate, for example, venous and arterial phases. The material decomposition performance on PCD-CT is dependent on acquisition setup including tube potential and energy thresholds. In this work, we performed a phantom study to evaluate the optimal acquisition settings for dual-contrast imaging using I and Gd on a research PCD-CT system. We further compared our results with a clinical dual-source dual-energy (DSDE) CT. An abdomen-shaped water phantom with I and Gd inserts of different concentrations was scanned using different energy thresholds and tube potentials to identify the optimal setup for I and Gd quantification. Results demonstrated that accurate quantification of I and Gd concentration was possible using the PCD-CT system. A tube potential of 80 kV and an energy threshold close to the K-edge of Gd (50 keV) was found to yield the best performance in terms of measurement root-mean-square-error (RMSE = 4.4 mg/mL for I and RMSE = 3.3 mg/mL for Gd). Further, the performance of PCD-CT with optimized setup was found to outperform DSDE-CT (RMSE = 8.1 mg/mL for I and 5.7 mg/mL for Gd).
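At the core of the quantification task above is a linear material decomposition: each energy bin measures a weighted sum of the iodine and gadolinium concentrations, and the concentrations are recovered by inverting that system. A minimal sketch with hypothetical per-bin sensitivity coefficients (real values depend on the spectrum, the thresholds, and the Gd K-edge near 50 keV):

```python
import numpy as np

# Hypothetical effective attenuation per unit concentration
# (arbitrary units per mg/mL) for iodine and gadolinium in two bins.
A = np.array([[0.020, 0.012],   # bin 1 sensitivities: [I, Gd]
              [0.008, 0.015]])  # bin 2 sensitivities: [I, Gd]

def decompose(measurements):
    """Estimate (iodine, gadolinium) concentrations in mg/mL from
    per-bin attenuation measurements by solving A @ c = m in the
    least-squares sense."""
    c, *_ = np.linalg.lstsq(A, np.asarray(measurements, float), rcond=None)
    return c

true_c = np.array([8.0, 4.0])   # mg/mL iodine, gadolinium
m = A @ true_c                  # ideal, noiseless bin measurements
est = decompose(m)
```

With noise, the conditioning of `A` (i.e., how spectrally separated the bins are) governs the RMSE the study optimizes over tube potential and threshold placement.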
Accuracy and variability of radiomics in photon-counting CT: texture features and lung lesion morphology
The purpose of this study was to evaluate the potential of a prototype photon-counting CT scanner to characterize liver texture and lung lesion morphology features. We utilized a multi-tiered phantom (Mercury Phantom 4.0) to characterize the noise power spectrum and task-transfer functions of both the conventional and photon-counting modes on the scanner. Using these metrics, we blurred three texture models and fifteen model lesions for four doses (CTDIvol: 4, 8, 16, 24 mGy) and three slice thicknesses (1.6, 2.5, 4 mm), for a total of 12 imaging conditions. Twenty texture features and twenty-one morphology features were evaluated. Performance was characterized in terms of accuracy (percent bias of features across different conditions) and variability (coefficient of variation of features due to repeats, averaged across conditions). Compared to conventional CT, photon-counting CT had comparable accuracy and variability for texture features. For morphology features, photon-counting CT had comparable accuracy and less variability than conventional CT. For both imaging modes, a change in dose produced slight variation in the features, and increasing slice thickness caused a monotonic change with feature-dependent directionality. Photon-counting CT can improve the characterization of morphology features without compromising texture features.
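The two performance metrics used above are standard and easy to state in code. A short sketch with made-up repeat measurements (the feature values and reference are illustrative, not from the study):

```python
import numpy as np

def percent_bias(measurements, reference):
    """Accuracy: percent bias of a feature relative to a reference value."""
    return 100.0 * (np.mean(measurements) - reference) / reference

def coefficient_of_variation(measurements):
    """Variability: standard deviation as a percentage of the mean."""
    m = np.asarray(measurements, float)
    return 100.0 * m.std() / m.mean()

# Hypothetical repeat measurements of one radiomic feature.
feature_repeats = np.array([10.5, 9.5, 10.0, 10.0])
bias = percent_bias(feature_repeats, 8.0)        # vs. reference value 8.0
cov = coefficient_of_variation(feature_repeats)
```

Low percent bias indicates an accurate feature; low coefficient of variation indicates a repeatable one, which is the axis on which photon-counting CT showed its advantage for morphology features.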
Performance comparison of water phantom based flat field correction methods for photon-counting spectral CT images: experimental results
Donghyeok Kim, Jongduk Baek
The photon-counting detector (PCD) is one of the leading candidates for the next generation of x-ray detectors. The PCD enables effective multi-material decomposition and low-dose CT imaging, but several correction techniques to compensate for the non-ideal effects of the PCD must first be applied to maximize imaging performance. In this study, we present experimental results for water phantom based flat field correction methods that reduce energy distortion in photon-counting spectral CT images. We compared the performance of flat field correction using single and multiple water phantoms, and examined the robustness of the correction methods for various imaging tasks. The flat field correction using a single water phantom was conducted by normalizing the measured projection data of an object with those of a centered large water phantom. Then, the ideal water phantom image was added to the reconstructed image of the water-normalized projection data, which is equivalent to conducting a first-order fit of the detector pixel gain. The dynamic range of the detector pixel gain was increased by using multiple water phantoms. Each detector pixel gain was modeled using a 2nd-order polynomial, and the coefficients were estimated by comparing measured projection data of the water phantoms with those of ideal water phantoms. The estimated gain of each detector pixel was tested for various imaging tasks. The results show that the single water phantom based method effectively reduces the ring artifacts for a centered object, but residual ring artifacts are introduced for off-centered multiple water phantoms. In contrast, the multiple water phantom based method reduces the ring artifacts effectively regardless of the size and location of the object. In addition, the multiple water phantom based method improves SNR by 38.3 to 41.9% and CNR by 52.8 to 56.5% compared to the single water phantom based method.
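The multiple-water-phantom correction reduces to a per-pixel polynomial fit between measured and ideal projection values. A minimal sketch under simplified assumptions (three phantom sizes, a hypothetical quadratic per-pixel distortion, no noise):

```python
import numpy as np

def fit_pixel_gain(measured, ideal, order=2):
    """Fit, per detector pixel, a polynomial mapping measured water
    phantom projection values to their ideal values.

    measured, ideal : (n_phantoms, n_pixels) arrays
    returns         : (order + 1, n_pixels) coefficient array
    """
    n_pixels = measured.shape[1]
    coeffs = np.empty((order + 1, n_pixels))
    for p in range(n_pixels):
        coeffs[:, p] = np.polyfit(measured[:, p], ideal[:, p], order)
    return coeffs

def correct(projection, coeffs):
    """Apply each pixel's fitted polynomial to a new projection row."""
    return np.array([np.polyval(coeffs[:, p], v)
                     for p, v in enumerate(projection)])

# Three water phantoms give three (measured, ideal) pairs per pixel,
# enough to pin down a 2nd-order polynomial exactly in this toy case.
measured = np.array([[1.0, 2.0],
                     [2.0, 3.0],
                     [3.0, 4.0]])
ideal = measured ** 2            # hypothetical per-pixel distortion
coeffs = fit_pixel_gain(measured, ideal)
restored = correct(measured[1], coeffs)
```

Because every pixel gets its own mapping, pixel-to-pixel gain differences (the source of ring artifacts) are flattened over the full range the phantoms cover, which is why the multi-phantom version generalizes to off-center objects.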
Multi-contrast imaging on dual-source photon-counting-detector (PCD) CT
Shengzhen Tao, Kishore Rajendran, Cynthia H. McCollough, et al.
Photon-counting-detector (PCD) CT can provide multiple energy-bin datasets and allows single-acquisition, multiple-contrast-injection imaging using materials such as iodine, gadolinium, and bismuth. However, due to technical limitations, PCDs can suffer from compromised energy-resolving capability, which degrades multi-contrast imaging performance. In this work, we investigate the use of a dual-source (DS) PCD system architecture with additional beam filtration to improve spectral separation among energy-bin datasets, and quantify its performance for multi-contrast imaging. Experiments were performed using a CT phantom containing various concentrations of iodine (I), gadolinium (Gd), and bismuth (Bi). The DS-PCD architecture was emulated by scanning the same phantom twice on a single-source (SS) PCD-CT with two different tube potentials: 80 kV (energy thresholds = 25/50 keV) and 140 kV (energy thresholds = 25/90 keV) with a 0.4-mm tin filter. We further compared the material decomposition performance of the proposed DS-PCD approach with that of the current SS-PCD approach. For the SS-PCD, chess mode with 4 energy bins was used, with energy thresholds of 25/50/75/90 keV to resolve the K-edges of Gd and Bi. The mean energies of the four energy bins in SS-PCD were 72/76/93/109 keV, while those of the four energy bins using DS-PCD were 57/64/88/111 keV, indicating better spectral separation using DS-PCD. The material quantification root mean squared error (RMSE) was reduced from 4.5/3.3/1.2 mg/mL for I/Gd/Bi using SS-PCD to 1.4/1.2/1.1 mg/mL using DS-PCD. These results demonstrate that DS-PCD can improve multi-contrast imaging performance compared to a SS-PCD acquisition.
Image quality in photon-counting CT images as a function of energy threshold
In this study, we examined image quality in photon-counting CT images as a function of energy threshold. Images of an ACR quality control phantom were acquired using a prototype photon-counting CT scanner with two variable energy thresholds. The lower threshold, which varied between 20 and 50 keV, and the higher threshold, which varied between 50 and 90 keV, were used to separate the data into two energy bins. This produced a total of four images: threshold 1, containing signal between the lower threshold and the maximal value; threshold 2, containing signal between the higher threshold and the maximal value; bin 1, containing signal between the lower and higher thresholds; and bin 2, containing signal between the higher threshold and the maximal value. Thirteen pairs of energy thresholds were evaluated, spanning the entire energy-threshold space. An automated program was used to analyze the images for standard quality control metrics, including noise, resolution, low contrast detectability, and contrast-to-noise ratio (CNR). Metrics were compared between image types and across energy thresholds. Threshold 1 images showed the least variation with changing thresholds. Increasing the higher threshold degraded image quality in threshold 2 and bin 2 images, but improved performance in bin 1 images. Increasing the lower threshold decreased performance for bin 1 images. Resolution was largely unaffected by changes in energy threshold.
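Of the quality control metrics listed, CNR has the simplest definition and is worth pinning down, since it recurs throughout these threshold studies. A minimal sketch with deterministic toy ROI values (one common convention; some authors normalize by a pooled noise instead):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute difference of ROI and
    background mean values, divided by the background standard
    deviation (noise)."""
    roi = np.asarray(roi, float)
    bg = np.asarray(background, float)
    return abs(roi.mean() - bg.mean()) / bg.std()

insert_roi = np.array([5.0, 5.0, 5.0, 5.0])
background = np.array([0.0, 2.0, 0.0, 2.0])   # mean 1.0, std 1.0
```

Because binning discards counts, narrowing a bin raises its noise and typically lowers CNR, which is the trend the threshold sweep above measures.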
Impact of energy threshold on material quantification of contrast agents in photon-counting CT
The purpose of this study was to examine the effect of energy threshold selection on the quantification of contrast agents in photon-counting CT (PCCT). A phantom was devised consisting of vials of iodine (4, 8, 16 mg/mL), gadolinium (4, 8, 16 mg/mL), and bismuth (5, 10, 15 mg/mL) within a cylindrical water container. The phantom was scanned on a prototype photon-counting CT scanner. The detected photons were binned into two energy bins using a fixed lower threshold of 20 keV and an upper threshold that varied between 50 and 90 keV. An image containing all the spectral information (threshold 1) was examined along with both binned images. Images were evaluated for the mean and standard deviation of the CT number in each vial and the contrast-to-noise ratio (CNR) for each concentration. CT numbers in the threshold 1 image remained mostly unchanged as the energy threshold was increased. Vials of iodine and gadolinium had slightly higher CT numbers in the lower energy bin images than in the threshold 1 images, and the percentage difference varied slightly (6-37% for iodine and 5-22% for gadolinium) with energy threshold. In the higher energy bin images, CT numbers were lower (by 20-68% for iodine and 10-59% for gadolinium) than in threshold 1, and the difference decreased with increasing energy threshold. For bismuth, the percentage difference in the lower bin decreased (by 11-19%) with energy threshold, while it increased (by 18-23%) in the upper bin. CNR varied only slightly in the lower energy bins and decreased with increasing energy threshold for all materials.
An improved physics model for multi-material identification in photon counting CT
Xu Dong, Olga V. Pen, Zhicheng Zhang, et al.
Photon-counting computed tomography (PCCT) with energy discrimination capabilities holds great potential to overcome the limitations of conventional CT, offering better signal-to-noise ratio (SNR), improved contrast-to-noise ratio (CNR), lower radiation dose, and, most importantly, simultaneous multi-material identification. One potential route to material identification is the calculation of the effective atomic number (Zeff) and effective electron density (ρeff) from PCCT image data. However, current methods for calculating the effective atomic number and effective electron density from PCCT image data are mostly based on semi-empirical models and accordingly are not sufficiently accurate. Here, we present a physics-based model to calculate the effective atomic number and effective electron density of various materials, including single-element substances, molecular compounds, and multi-material mixtures. The model was validated on several materials under various combinations of energy bins. A PCCT system was simulated to generate PCCT image data, and the proposed model was applied to these data. Our model yielded relative standard deviations for effective atomic numbers and effective electron densities of less than 1%. Our results further showed that five different materials can be simultaneously identified and well separated in a Zeff-ρeff map. The model could serve as a basis for simultaneous material identification from PCCT.
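For context, the semi-empirical baseline the abstract contrasts against is the classic power-law effective atomic number (Mayneord-style). A short sketch of that textbook formula (not the paper's physics-based model, which is not specified in the abstract); the exponent m ≈ 2.94 applies in the photoelectric-dominated energy range:

```python
import numpy as np

def z_eff(electron_fractions, atomic_numbers, m=2.94):
    """Classic semi-empirical power-law effective atomic number:
    Zeff = (sum_i a_i * Z_i**m) ** (1/m), where a_i is the fraction
    of electrons contributed by element i."""
    a = np.asarray(electron_fractions, float)
    a = a / a.sum()                      # normalize electron fractions
    Z = np.asarray(atomic_numbers, float)
    return float(np.sum(a * Z ** m) ** (1.0 / m))

# Water: H contributes 2 of 10 electrons, O contributes 8 of 10.
z_water = z_eff([0.2, 0.8], [1, 8])
```

The well-known value Zeff ≈ 7.4 for water falls out of this formula; the paper's point is that such power-law fits lose accuracy for arbitrary mixtures and energy bins.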
Simulation of scattered radiation with various anti-scatter grid designs in a photon counting CT
Xiaohui Zhan, Kevin Zimmerman, Liang Cai, et al.
For large cone angle multi-detector CT (MDCT), scattered radiation remains a challenging problem, as it is part of the physics of x-ray interaction. For a photon counting CT system, scattered radiation has a more profound impact on system performance, as the scattered photons dominate the low-energy regime of the measurement. Without proper corrections, scattered radiation can introduce significant errors in the material decomposition and degrade material characterization and quantification accuracy. To mitigate the scatter problem, hardware rejection and software correction algorithms are typically both employed. Anti-scatter grids (ASGs) are commonly used to absorb scattered photons and help generate cleaner measurements. For semiconductor-based photon counting detectors (CdTe/CdZnTe), due to charge sharing and cross-talk effects (k-escape, scatter), different ASG designs also change the detector spectral response by masking different detector areas. In this study, we evaluate the performance of a CdZnTe-based photon counting CT with various ASG designs at low flux through simulations. The detector spectral responses for two different detector pixel sizes (250 μm and 500 μm anode size) are generated by our internal simulation tool, using no ASG, a 1-D ASG, and a 2-D ASG, respectively. The scattered radiation is generated by GATE, a Geant4-based Monte Carlo simulation tool, using a large (33 cm diameter) cylindrical water phantom with concentric iodine/calcium inserts, and is then added to the simulated phantom energy-bin count measurements. The impact of the residual scatter with 1-D and 2-D ASGs on the basis and mono-energetic images is evaluated and compared.
Quantitative phase retrieval of heterogeneous samples from spectral x-ray measurements
We recently proposed a method for retrieving the absorption and phase properties of samples using a set of spectral x-ray measurements obtained in phase-enhanced geometries. The spectral measurements can be obtained using state-of-the-art photon counting detectors (PCDs). These detectors permit the use of polychromatic sources and record accurate spectroscopic information in each pixel from a single x-ray exposure. In previous simulations and benchtop experiments, we demonstrated that our method can be used to obtain quantitatively accurate absorption and phase properties of samples with effective atomic numbers (Zeff) close to that of soft tissue. This report expands on those findings to include heterogeneous samples that emulate the complex composition of biological materials, as well as samples with relatively high Zeff, such as bones and microcalcifications. Here we also demonstrate that excellent quantitative estimates of multiple object properties can be obtained simultaneously for these heterogeneous samples when spectral data are available. These multi-contrast estimates would allow differentiation of materials that would otherwise be indistinguishable using conventional, absorption-contrast imaging. These preliminary results, including phase retrieval of an aluminum rod, also confirm that the slowly varying phase approximation used in PB-PCI transport-of-intensity models will not hinder their applicability to complex tissue imaging and small animal imaging.
Novel learning-based Moiré artifacts reduction method in x-ray Talbot-Lau differential phase contrast imaging
In this work, we present a novel convolutional neural network (CNN)-enabled Moiré artifact reduction framework for the three contrast mechanism images, i.e., the absorption image, the differential phase contrast (DPC) image, and the dark-field (DF) image, obtained from an x-ray Talbot-Lau phase contrast imaging system. By mathematically modeling the various potential non-ideal factors that may cause Moiré artifacts as a random fluctuation of the phase stepping position, rigorous theoretical analyses show that the Moiré artifacts in absorption images may have a similar distribution frequency to that of the detected phase stepping Moiré diffraction fringes, whereas their periods in DPC and DF images may be doubled. Based on these theoretical findings, training datasets for the three different contrast mechanisms are synthesized properly using natural images. Afterwards, the three datasets are trained independently with the same modified auto-encoder type CNN. Both numerical simulations and experimental studies were performed to validate the performance of this newly developed Moiré artifact reduction method. Results show that the CNN is able to reduce residual Moiré artifacts efficiently. With the improved signal accuracy, the radiation dose efficiency of the Talbot-Lau interferometry imaging system can be greatly enhanced.
Mesh-based and polycapillary optics-based x-ray phase imaging
Weiyuan Sun, Congxiao He, Carolyn A. MacDonald, et al.
The contrast in conventional x-ray imaging is generated by differential attenuation of x rays, which is generally very small in soft tissue. Phase imaging has been shown to improve contrast and signal to noise ratio (SNR) by factors of 100 or more. However, acquiring phase images typically requires a highly spatially coherent source (e.g. a 50 μm or smaller microfocus source or a synchrotron facility), or multiple images acquired with precisely aligned gratings. Here we demonstrate two phase imaging techniques compatible with clinical sources: polycapillary focusing optics to enhance source coherence and mesh-based structured illumination.
Visibility guided phase contrast denoising
Brandon Nelson, Thomas Koenig, Elisabeth Shanblatt, et al.
Talbot-Lau grating interferometry enables the use of clinical x-ray tubes for phase contrast imaging, greatly broadening its utility for both laboratory and preclinical applications. However, phase contrast measurements made in porous or highly heterogeneous media are negatively impacted by low visibility, the interferometer signal amplitude used to calculate relative phase shifts. While this loss in visibility is the source of dark-field contrast, it presents an additional source of noise in phase images. In this work, we develop a method that uses normalized visibility images as the weighting matrix for denoising the corresponding phase contrast images. By using the visibility to guide filtering, the resulting denoised images are locally smoothed in regions of low visibility while maintaining spatial detail in regions of high visibility. This work demonstrates how the complementary properties of the dark-field signal in grating interferometry can be leveraged to improve image quality in phase contrast images and presents an application in preclinical lung micro-CT.
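A minimal numpy sketch of visibility-weighted filtering (a simplified stand-in for the paper's weighting scheme; the box-filter kernel and linear blend are assumptions):

```python
import numpy as np

def box_filter(img, size=5):
    """Local mean via a padded sliding-window sum (numpy only)."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def visibility_guided_denoise(phase, visibility, size=5):
    """Visibility-weighted smoothing: pixels with low normalized
    visibility (noisy phase estimates) are replaced mostly by a
    visibility-weighted local mean; high-visibility pixels are kept."""
    w = visibility / visibility.max()               # normalize to [0, 1]
    local_mean = box_filter(w * phase, size) / np.maximum(
        box_filter(w, size), 1e-12)                 # weighted local mean
    return w * phase + (1.0 - w) * local_mean
```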
Development of a compact inkjet-printed patient-specific phantom for optimization of fluoroscopic image quality in neonates
Flat panel detectors remain a new and emerging technology in under-table fluoroscopy systems. This technology is more susceptible than image intensifiers to electronic noise, which degrades image contrast resolution. Compensation for increased electronic noise is provided through proprietary vendor image processing algorithms. Because these algorithms are not optimized for pediatric imaging, they can obscure patient anatomy, particularly in neonates, who have low native anatomic contrast from the bony structures that serve as landmarks during fluoroscopic procedures. Existing phantoms do not adequately mimic neonatal anatomy, making assessment and optimization of image quality for these patients difficult if not impossible. This work presents a method to inexpensively print iodine-based anthropomorphic phantoms derived from patient radiographs with sufficient anatomic detail to assess system image quality. First, the attenuation of iodine ink densities (μt) was correlated to a standard pixel-value grayscale map. Next, for proof of principle, radiographs of an anthropomorphic chest phantom were converted into a series of iodine-ink-printed sheets. Sheets were stacked to build a compact 2D phantom matching the x-ray attenuation of the original radiographs. The iodine-ink-printed phantom was imaged, and attenuation values for anatomical regions of interest were compared. This study provides the fundamentals and techniques of phantom construction, enabling generation of anatomically realistic phantoms for a variety of patient age and size groups from clinical radiographs. Future studies will apply these techniques to generate neonatal phantoms from radiographs. These phantoms provide realistic imaging challenges to enable optimization of image quality in fluoroscopy and other projection-based x-ray modalities.
Assessment of tomosynthesis images with the new ACR phantom
Lynda Ikejimba, Andrei Makeev, Stephen Glick
Many phantoms are available for routine quality control (QC) of clinical full-field digital mammography (FFDM) systems. In particular, the ACR (American College of Radiology) mammography phantom has been used for accreditation and routine QC for many years. Recently the FDA approved the use of the new (2017) ACR phantom to accredit digital breast tomosynthesis (DBT) systems. While the old ACR phantom has been used to ensure adequate image quality in 2D systems, the extent to which the new phantom can capture deficiencies and artifacts in reconstructed 3D images is not well known. The purpose of this work is to investigate how sensitive the new ACR phantom is to degradations and failures of DBT imaging systems. The phantom was imaged on a custom-built benchtop system in which physical degradations of the DBT system can be modeled. The physical factors studied were inaccuracies in the reported focal spot (FS) position and the presence of dead pixels due to detector failure. Readers scored the degraded images according to the new ACR guidelines. In general, while dead pixels were visible in the reconstructed images, the reader scores were only mildly sensitive to the errors in FS positioning.
Controlling the position-dependent contrast of 3D printed physical phantoms with a single material
Custom 3D printed physical phantoms are desired for testing the limits of medical imaging, and for providing patient-specific information. This work focuses on the development of low-cost, open source fused filament fabrication for printing of physical phantoms with the structure and contrast of human anatomy in computed tomography (CT) images. Specifically, this paper introduces the concept of using a porous 3D printed layer as a background into which additional material can be printed to control the position-dependent contrast. By using this method, eight levels of contrast were printed with a single material.
Using inkjet 3D printing to create contrast-enhanced textured physical phantoms for CT
Anthropomorphic phantoms can serve as anatomically structured tools for assessing clinical computed tomography (CT) imaging systems. The aim of this project is to create highly customized 3D inkjet-printed, contrast-enhanced physical liver phantoms for use in improving CT imaging system analysis. The capability of using voxelized printing to create physical phantoms with texture was previously presented by our lab. Building on that technology, we show the feasibility of producing iodine-enhanced liver phantoms with varying textures, at resolutions higher than clinical CT, using inkjet printing. We use a desktop inkjet printer with custom inks to print these paper phantoms. Sodium bromide (NaBr) ink is used to represent unenhanced tissue, and potassium iodide (KI) represents contrast-enhanced tissue. We have shown the feasibility of using 3D inkjet printing to create unique, contrast-enhanced liver phantoms for use in CT. In the future, we plan to expand our methods and tools to create tissue-equivalent physical phantoms for other anatomical structures in the abdominal region.
Modeling dynamic, nutrient-access-based lesion progression using stochastic processes
Thomas J. Sauer, Ehsan Samei
Simulation methods can be used to generate realistic, computational lesions for insertion into anatomical backgrounds for use in a virtual clinical trial framework. Typically, these simulation methods rely on clinical lesion images, with resolution many times the size of a cell, to produce a lesion that is time- and anatomical-location-invariant, although in reality a lesion's morphology and growth rate depend on both. The goal of this work was to produce a lesion model starting from simple assumptions about the behavior of proliferating cells, simulate their states over time, and produce a lesion model whose morphological features are determined by known cellular properties. Each cell of each simulated lesion can exist in one of several states depending on its access to nutrients and potential for proliferation at a given time. Running these simulations a sufficiently large number of times under the same conditions yields the most probable lesion for a given set of constraints, or, specifically for this work, a given anatomical environment. These lesions can be used in studies in which detection of subtle pathological features at small scales is essential to obtain meaningful results.
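A minimal sketch of the cell-state simulation idea, with illustrative rules (grid geometry, a crowding-based nutrient proxy, and the division probability are all assumptions; the paper's actual states and rates are not given in the abstract):

```python
import numpy as np

def grow_lesion(shape=(41, 41), steps=30, p_div=0.5, rng=None):
    """Toy stochastic lesion growth on a grid: each occupied cell may
    divide into an empty 4-neighbor, with division probability scaled
    by the fraction of empty neighbors (a crude nutrient-access proxy)."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.zeros(shape, dtype=bool)
    grid[shape[0] // 2, shape[1] // 2] = True        # seed cell
    for _ in range(steps):
        for y, x in np.argwhere(grid):
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            nbrs = [(a, b) for a, b in nbrs
                    if 0 <= a < shape[0] and 0 <= b < shape[1]]
            empty = [(a, b) for a, b in nbrs if not grid[a, b]]
            if empty and rng.random() < p_div * len(empty) / len(nbrs):
                a, b = empty[rng.integers(len(empty))]
                grid[a, b] = True
    return grid

def probable_lesion(n_runs=20, **kw):
    """Average many independent runs: the per-pixel occupancy frequency
    approximates the 'most probable lesion' for these constraints."""
    rng = np.random.default_rng(0)
    acc = np.zeros(kw.get("shape", (41, 41)), dtype=float)
    for _ in range(n_runs):
        acc += grow_lesion(rng=rng, **kw)
    return acc / n_runs
```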
Filtered back-projection for digital breast tomosynthesis with 2D filtering
Sean D. Rose, Emil Y. Sidky, Ingrid S. Reiser, et al.
Designing image reconstruction algorithms for Digital Breast Tomosynthesis (DBT) has attracted much attention in recent years as the modality is increasingly employed for mammographic screening. While much recent research on this has focused on iterative image reconstruction, there may still be fundamental aspects of DBT image quality that can be addressed and improved with analytic filtered back-projection (FBP). In particular, we have been investigating conspicuity of fiber-like signals that can model blood vessels, ligaments, or spiculations. The latter structures can indicate a malignant tumor. It is known that the visual appearance of fiber-like signals varies with fiber orientation in the DBT slice images, and recently we have sought to quantify this orientation dependence with simulations involving phantoms with fibers placed at various angles with respect to the direction of the X-ray source travel (DXST). Employing FBP with a standard Hanning filter results in a marked decrease in conspicuity of fibers aligned nearly parallel with the DXST. Employing DBT-specific FBP filters proposed in the literature recovers conspicuity of such fibers. In this work, we propose a different modification to the FBP filter where the standard Hanning filter is combined with the same filter – but rotated 90° – forming a 2D filter. We illustrate the potential advantages of this new FBP filter design for DBT image reconstruction.
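The proposed filter can be sketched in the frequency domain; the combination rule for the rotated copy is not specified in the abstract, so the elementwise maximum below is an assumption:

```python
import numpy as np

def hanning_ramp_1d(n):
    """Ramp filter apodized by a Hanning window, sampled on the
    FFT frequency grid (cycles/sample, Nyquist at 0.5)."""
    f = np.abs(np.fft.fftfreq(n))
    return 2.0 * f * 0.5 * (1.0 + np.cos(2.0 * np.pi * f))

def hanning_ramp_2d(n):
    """Combine the 1D filter with its 90-degree-rotated copy into a
    2D frequency response (elementwise maximum as one plausible rule)."""
    h = hanning_ramp_1d(n)
    return np.maximum(h[None, :], h[:, None])

def filter_slice(img, H):
    """Apply the 2D frequency-domain filter to a DBT slice."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```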
Prototyping optimization problems for digital breast tomosynthesis image reconstruction with a primal-dual algorithm
Emil Y. Sidky, Ingrid S. Reiser, Sean D. Rose, et al.
Digital Breast Tomosynthesis (DBT) is an emerging semi-tomographic modality that is gaining widespread popularity for mammographic screening. Because the modality has entered clinical use only in the last 10 years, there is much variation amongst vendors in DBT scan configuration and, accordingly, image reconstruction algorithm. In recent research there has been interest in developing iterative image reconstruction (IIR) based on gradient-sparsity regularization and on including physical modeling of detector response and noise properties. Due to the various motivations in designing IIR algorithms, there can be a great variety of optimization problems of interest. In this work, we employ a general optimization problem form where the objective function is a convex data discrepancy term and all other aspects of the imaging model are formulated as convex constraints. This general form of optimization can be efficiently solved using the primal-dual algorithm developed by Chambolle and Pock. We use the general optimization formulation together with this solver to prototype alternate imaging models for DBT; a least-squares data discrepancy with a modified total variation (TV) constraint is shown to be of particular interest in preliminary results.
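The general constrained form can be prototyped with the Chambolle-Pock primal-dual algorithm. The sketch below solves a toy instance, a least-squares data discrepancy with a nonnegativity constraint standing in for the modified TV constraint; the proximal maps are the standard ones for these choices:

```python
import numpy as np

def chambolle_pock(K, b, n_iter=5000):
    """Primal-dual iteration for min_x 0.5*||Kx - b||^2 s.t. x >= 0.
    F(y) = 0.5*||y - b||^2 gives prox_{sF*}(z) = (z - s*b)/(1 + s);
    G is the nonnegativity indicator, whose prox is projection onto x >= 0."""
    L = np.linalg.norm(K, 2)            # operator norm
    tau = sigma = 0.95 / L              # step sizes with sigma*tau*L^2 < 1
    x = np.zeros(K.shape[1]); xbar = x.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        y = (y + sigma * (K @ xbar) - sigma * b) / (1.0 + sigma)  # dual prox
        x_new = np.maximum(x - tau * (K.T @ y), 0.0)              # primal prox
        xbar = 2.0 * x_new - x                                    # extrapolation
        x = x_new
    return x
```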
EMnet: an unrolled deep neural network for PET image reconstruction
Kuang Gong, Dufan Wu, Kyungsang Kim, et al.
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely applied to medical image denoising. In this work, based on the expectation maximization (EM) algorithm, we propose an unrolled neural network framework for PET image reconstruction, named EMnet. An innovative feature of the proposed framework is that the deep neural network is combined with the EM update steps in a single computational graph. Thus data consistency can act as a constraint during network training. Both simulation data and real data are used to evaluate the proposed method. Quantification results show that our proposed EMnet method can outperform the neural network denoising and Gaussian denoising methods.
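The unrolled structure can be sketched by alternating the classical MLEM update with a denoising module; in EMnet the module is a trained CNN sharing one computational graph with the EM steps, while here any callable stands in (a simplification of the paper's architecture):

```python
import numpy as np

def mlem_step(x, A, y, eps=1e-12):
    """One EM (MLEM) data-consistency update for Poisson data y ~ Ax."""
    ratio = y / np.maximum(A @ x, eps)
    return x * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), eps)

def unrolled_em(y, A, denoise=lambda x: x, n_blocks=200):
    """Unrolled EM: alternate the EM update with a denoising module."""
    x = np.ones(A.shape[1])
    for _ in range(n_blocks):
        x = denoise(mlem_step(x, A, y))
    return x
```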
Backproject-filter (BPF) CT image reconstruction using convolutional neural network
In this work, we realize image-domain backproject-filter (BPF) CT image reconstruction using a convolutional neural network (CNN). Within this new CT image reconstruction framework, the acquired sinogram data is first backprojected to generate a highly blurred laminogram. Afterwards, the laminogram is fed into the CNN to retrieve the desired sharp CT image. Both numerical and experimental results demonstrate that this new CNN-based image reconstruction method can reconstruct CT images from the laminogram with high spatial resolution and pixel-value accuracy comparable to the conventional FBP method. The experimental results also show that the performance of this reconstruction network does not depend on the radiation dose level used. Due to these advantages, the proposed CNN-based image-domain BPF reconstruction strategy offers promising prospects for generating high quality CT images in future clinical applications.
Bayesian reconstruction of ultralow-dose CT images with texture prior from existing diagnostic full-dose CT database
Markov random field (MRF) models have been widely used to incorporate a priori knowledge as a penalty for regional smoothing in ultralow-dose computed tomography (ULdCT) image reconstruction, but regional smoothing does not explicitly consider tissue-specific textures. Our previous work showed that tissue-specific textures can be enhanced by extracting the tissue-specific MRF from the to-be-reconstructed subject's previous full-dose CT (FdCT) scans. However, the same subject's FdCT scans might not be available in some applications. To address this limitation, we have also investigated the feasibility of extracting the tissue-specific textures from an existing FdCT database instead of from the to-be-reconstructed subject. This study aims to implement a machine learning strategy to realize that approach. Specifically, we trained a Random Forest (RF) model to learn the intrinsic relationship between tissue textures and subjects' physiological features. By learning this intrinsic correlation, the model can identify an MRF candidate from the database to serve as the prior knowledge for any subject's current ULdCT image reconstruction. Besides the conventional physiological factors (body mass index (BMI), gender, age), we further introduced two features, LungMark and BodyAngle, to account for the scanning position and angle. The experimental results showed that BMI and LungMark are the two most important features for the classification. Our trained model can achieve a precision of 0.99 at a recall rate of 2%, which means that for each subject there will be 3,390 × 0.02 = 67.8 valid MRF candidates in the database, where 3,390 is the total number of candidates. Moreover, introducing the ULdCT texture prior into the RF model can increase the recall rate by 3% while the precision remains 0.99.
Towards deep iterative-reconstruction algorithms for computed tomography (CT) applications
We introduce a new approach for designing deep learning algorithms for computed tomography applications. Rather than training generically structured neural network architectures to perform imaging tasks, we show how to leverage classical iterative-reconstruction algorithms such as Newton-Raphson and expectation-maximization (EM) to bootstrap network performance to a good initialization point, with a well-understood baseline of performance. Specifically, we demonstrate a natural and systematic way to design these networks for both transmission-mode x-ray computed tomography (XRCT) and emission-mode single-photon computed tomography (SPECT), highlighting that our method is capable of preserving many of the desirable properties, such as convergence and understandability, that are featured in classical approaches. The key contribution of this work is a formulation of the reconstruction task that enables data-driven improvements in image clarity and artifact reduction without sacrificing understandability. In this early work, we evaluate our method on a number of synthetic phantoms, highlighting some of the benefits and difficulties of this machine-learning approach.
Automatic regularization parameter tuning based on CT image statistics
Regularization parameter selection, which controls the balance between the fidelity and penalty terms, is pivotal in optimizing reconstructed images. Images reconstructed with the optimal regularization parameter preserve detail while restraining noise. In previous work, we used CT image statistics to select the optimal regularization parameter by calculating the second-order derivatives of the image variance (Soda-curve). However, like the L-curve method, this requires multiple reconstructions with different regularization parameters, which is time-consuming. In this paper, we investigate the relationship between changes in image statistics and the regularization parameter during the iterations. We then propose a method, based on an empirical regularity found in the iterations, to tune the regularization parameter automatically in order to maintain image quality. Experiments show that images reconstructed with regularization parameters tuned by the proposed method have higher image quality, and require less time, than L-curve-based results.
A direct filtered back-projection reconstruction method for inverse geometry CT without gridding: a simulation study
Xiao Jiang, Lei Zhu
Inverse geometry computed tomography (IGCT) uses a small detector and a set of widely distributed x-ray sources. Standard filtered-backprojection (FBP) reconstruction for a conventional CT geometry cannot be directly used in IGCT, due to data truncation and redundancy of the sinograms acquired by different sources. Current IGCT algorithms use gridding or iterations during reconstruction, leading to degraded spatial resolution or increased computational cost. In this work, we propose a direct FBP reconstruction method for IGCT without gridding. A reconstruction algorithm is first derived for a full-size sinogram acquired by a single source with a known offset distance from the central line passing through the rotational axis and perpendicular to the detector. A weighting scheme is then developed on the projections to remove the data redundancy of the sinograms acquired by different sources. The final reconstruction is obtained as the summation of CT images reconstructed from the different sources, in the form of FBP on weighted projections. The performance of the proposed algorithm is evaluated via simulation studies on the Shepp-Logan phantom. Results show that our algorithm substantially improves image spatial resolution over the gridding method: the spatial resolution increases by 32.08% and 23.26% at 50% and 10% of the modulation transfer function, respectively. Finally, we demonstrate the advantages of volumetric IGCT compared with circular cone-beam CT in a pilot study.
Unbiased statistical image reconstruction in low-dose CT
John Hayes, Ran Zhang, Chengzhu Zhang, et al.
When the x-ray exposure level is lowered to reduce radiation dose in x-ray computed tomography (CT), the noise level is elevated. While this is well known in the community, less attention has been paid to another critically important fact in low-dose CT: CT number accuracy is also compromised. Namely, CT numbers are increased in some organs and decreased in others. Applying denoising methods can reduce the noise level, but denoising, generally speaking, does not reduce the CT number biases. This has been shown in systematic experimental studies using clinically available reconstruction methods such as conventional filtered backprojection or statistical model-based image reconstruction. Although it is known that the bias can be eliminated in statistical reconstruction if the Poisson log-likelihood function is not approximated by its quadratic form, the computational cost is quite expensive, and thus these methods are not used in currently available commercial CT products. In this paper, we present an innovative way to design the statistical weighting function to enable unbiased statistical reconstruction with a quadratic data fidelity term and a regularizer.
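The quadratic statistical reconstruction in question can be written as penalized weighted least squares (PWLS). A minimal gradient-descent sketch, with the statistical weights simply supplied by the caller (their redesign is the paper's contribution) and a Tikhonov term standing in for the regularizer:

```python
import numpy as np

def pwls(A, y, w, beta=0.01, n_iter=2000, step=None):
    """Penalized weighted least squares:
        minimize 0.5*(Ax - y)^T W (Ax - y) + 0.5*beta*||x||^2
    by gradient descent, with W = diag(w)."""
    H = A.T @ (w[:, None] * A) + beta * np.eye(A.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)   # stable step from max curvature
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (w * (A @ x - y)) + beta * x
        x = x - step * grad
    return x
```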
Bayesian reconstruction for digital breast tomosynthesis using a non-local Gaussian Markov random field a priori model
Noise is an intrinsic property of every imaging system. For imaging systems using ionizing radiation, such as digital breast tomosynthesis (DBT) or digital mammography (DM), we strive to ensure that x-ray quantum noise is the limiting noise source in images, while using the lowest radiation dose possible to achieve clinically satisfactory images. Therefore, new computational methods are being sought to help reduce the dose of these systems. In the case of DBT, this can be achieved when solving the inverse problem of tomographic reconstruction. In this work, we propose a Non-Local Gaussian Markov Random Field (NLGMRF) model to represent a priori knowledge in a Bayesian (maximum a posteriori, MAP) reconstruction approach for DBT. The main advantage of non-local Markov random field models is that they explicitly consider two important constraints to regularize the solution of this inverse problem: smoothing and redundancy. To evaluate this new method in DBT, a number of experiments were performed to compare it to existing reconstruction techniques. Comparable or superior results were achieved when compared with methods in the DBT reconstruction literature in terms of structural similarity index (SSIM), artifact spread function (ASF), and visual analysis, demonstrating that the NLGMRF model is suitable to regularize the MAP solution in DBT reconstruction.
Preliminary study on optimization of the stationary inverse-geometry digital tomosynthesis: x-ray source array
Digital tomosynthesis (DT) improves diagnostic accuracy compared with 2D radiography due to its good depth resolution. In addition, DT can reduce radiation dose by more than 80% compared to computed tomography (CT) owing to its limited-angle scans. However, conventional DT systems have disadvantages such as geometric complexity and low efficiency. Moreover, the movements of the source and detector cause motion artifacts in reconstructed images. Therefore, with a stationary x-ray source and detector, it is possible to reduce these artifacts by simplifying the geometry while preserving the advantages of DT imaging. Also, the inverted geometry with a small detector allows more efficient diagnosis because the fields-of-view (FOVs) can be smaller than in conventional DT systems. The purpose of this study was to develop a stationary inverse-geometry digital tomosynthesis (s-IGDT) imaging technique and compare image quality for linear and curved x-ray source arrays. The signal-to-noise ratio (SNR) of s-IGDT images obtained using the linear x-ray source array was on average 1.84 times higher than with the curved x-ray source array, due to lower noise, but the root-mean-square error (RMSE) was on average 3.25 times higher. The modulation transfer function (MTF) and radiation dose of the s-IGDT systems with linear and curved x-ray source arrays were measured at similar levels. As a result, the s-IGDT system with the linear x-ray source array is superior in terms of SNR and noise, while the curved-array system is superior in terms of quantitative accuracy.
Investigation on lesion detectability in step-and-shoot mode and continuous mode digital tomosynthesis systems with anatomical background
Digital tomosynthesis systems promise better image quality than radiography and thus have been widely used in chest, dental, and breast imaging. Currently, two acquisition modes are used in digital tomosynthesis systems: step-and-shoot mode and continuous mode. The main difference between the two acquisition modes is the x-ray tube motion during data acquisition, which affects spatial resolution and contrast. In this work, we investigate the effects of x-ray tube motion on lesion detectability with anatomical background. We considered six spherical objects with diameters of 0.5, 0.8, 1, 2, 5, and 10 mm as lesions, and the anatomical background was modeled using the power-law spectrum of breast anatomy. Projection data were acquired using the two acquisition modes, and in-plane images were reconstructed using the Feldkamp-Davis-Kress (FDK) algorithm. To show the effect of x-ray tube motion on lesion detectability, we computed the task signal-to-noise ratio (SNR) of a channelized Hotelling observer with Laguerre-Gauss channels for the six spherical objects. Our results show that the task SNR of step-and-shoot mode is higher than that of continuous mode for small lesions (i.e., less than 1 mm in diameter). This indicates that a tomosynthesis system with step-and-shoot mode is more beneficial for improving the detectability of small lesions.
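A power-law anatomical background of the kind used here is commonly generated by shaping random-phase noise in the frequency domain; a minimal numpy sketch (β = 3 is the typical value for breast anatomy; grid size and normalization are illustrative):

```python
import numpy as np

def power_law_background(n=128, beta=3.0, seed=0):
    """2D random field with power spectrum P(f) proportional to 1/f^beta,
    a standard model for breast anatomical background (beta ~ 3)."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)
    f = np.hypot(fx[None, :], fx[:, None])   # radial frequency grid
    f[0, 0] = fx[1]                          # avoid division by zero at DC
    amplitude = f ** (-beta / 2.0)           # amplitude = sqrt(power)
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    field = np.real(np.fft.ifft2(amplitude * phase))
    return (field - field.mean()) / field.std()   # zero mean, unit variance
```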
Tomosynthesis imaging of the wrist using a CNT x-ray source array
Christina R. Inscoe, A. Billingsley, C. Puett, et al.
Tomosynthesis imaging has been demonstrated as an alternative to MRI and CT for orthopedic imaging. Current commercial tomosynthesis scanners are large in-room devices. The goal of this study was to evaluate the feasibility of designing a compact tomosynthesis device for extremity imaging at the point of care utilizing a carbon nanotube (CNT) x-ray source array. The feasibility study was carried out using a short linear CNT source array with a limited number of x-ray emitting focal spots. The short array was mounted on a translation stage and moved linearly to mimic imaging configurations with up to 40 degrees of angular coverage at a source-to-detector distance of 40 cm. The receptor was a 12 x 12 cm flat panel digital detector. An anthropomorphic phantom and cadaveric wrist specimens were imaged at 55 kVp under various exposure conditions. The projection images were reconstructed with an iterative reconstruction algorithm. Image quality was assessed by musculoskeletal radiologists. Reconstructed tomosynthesis slice images were found to display a higher level of detail than projection images due to the reduction of superposition. Joint spaces and abnormalities such as cysts and bone erosion were easily visualized. Radiologists considered the overall utility of the tomosynthesis images superior to conventional radiographs. This preliminary study demonstrated that the CNT x-ray source array has the potential to enable tomosynthesis imaging of extremities at the point of care. Further studies are necessary to optimize the system and x-ray source array configurations in order to construct a dedicated device for diagnostic and interventional applications.
Contribution of scatter and beam hardening to phase contrast imaging
X-ray phase contrast imaging is being investigated with the goal of improving the contrast of soft tissue. Enhanced edges at material boundaries are characteristic of phase contrast images. These allow better retrieval of phase maps and attenuation maps when material properties are very close to each other. Previous observations have shown that the edge contrast of a target material is reduced with increasing thickness of the surrounding bulk material. In order to accurately retrieve material properties, it is important to understand the contributions from the various factors that may lead to this phase degradation. We investigate this edge degradation due to beam hardening and object scatter from the surrounding bulk material. Our results suggest that the large propagation distances used in propagation-based phase contrast imaging (PB-PCI) are effective at reducing the scatter influence; rather, the phase contrast degradation due to beam hardening is the most critical. The ability to account for these variations may be necessary for more accurate phase retrieval using polychromatic sources and large objects.
A single-shot method for X-ray grating interferometry
S. Lian, H. Kudo
X-ray phase contrast imaging (PCI) provides higher contrast than conventional absorption imaging in biological soft tissues. In PCI, grating-based interferometry has attracted much attention because conventional laboratory x-ray tubes can be used. However, unlike conventional x-ray imaging, the measured fringe pattern is formed by three unknown variables: phase, absorption, and visibility contrast. Therefore, more than three fringe patterns must be measured by moving the grating, which is called phase stepping. Maintaining constant sub-micron accuracy of the phase stepping requires a highly stable measurement system. In this paper we propose a single-shot method that needs only one fringe pattern. The proposed method consists of two steps. First, we compute a "rough" image from the measured fringe pattern under the assumption that a pixel and its surroundings on the phase (and likewise absorption and visibility) map have almost the same value; this assumption compensates for the lack of measured data. Second, to improve the quality of the "rough" map, we minimize the weighted least-squares error between the measured fringe pattern and the estimated one. The weight coefficients are computed from the "rough" map using a non-local approach from modern image processing, in which pixels with values similar to the pixel of interest (where the phase is computed) receive larger weights. With these procedures, discontinuous pixels are excluded from the phase calculation, improving the accuracy of the obtained images. Experimental results are presented to demonstrate the effectiveness of the proposed method.
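The conventional phase-stepping retrieval that the single-shot method seeks to avoid can be sketched as follows: with M ≥ 3 steps, the three unknowns of the fringe model are recovered from a DFT along the step axis (a textbook recipe, not the paper's single-shot method):

```python
import numpy as np

def phase_stepping_retrieval(stack):
    """Recover the three unknowns of the fringe model
        I_k = a0 * (1 + v * cos(2*pi*k/M + phi)),  k = 0..M-1,
    from M phase steps via a DFT along the step axis:
    F[0] = M*a0 and F[1] = (M/2)*a0*v*exp(i*phi).
    `stack` has shape (M, H, W)."""
    M = stack.shape[0]
    F = np.fft.fft(stack, axis=0)
    a0 = np.real(F[0]) / M                                      # absorption term
    v = 2.0 * np.abs(F[1]) / np.maximum(np.real(F[0]), 1e-12)   # visibility
    phi = np.angle(F[1])                                        # phase
    return a0, v, phi
```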
Determination of 3D scattered radiation distributions from the Zubal Phantom as a function of LAO/RAO and CRA/CAU gantry angulation
The purpose of this study is to investigate how the scattered radiation distribution in the interventional procedure room varies with changes in cranial/caudal (CRA/CAU) and right anterior oblique/left anterior oblique (RAO/LAO) gantry angulation of a C-arm fluoroscopic system, to aid in staff dose management. The primary x-ray beam of a Toshiba Infinix fluoroscopy machine was modeled using the EGSnrc (DOSXYZnrc) Monte Carlo code, and the scattered radiation distributions were calculated using 5 x 10^9 photons incident on the Zubal computational phantom. The Zubal phantom is derived from a CT scan of an average adult male and is anthropomorphic with internal organs. The results show that substantial changes in the scatter dose are possible for the interventionalist next to the table with CRA/CAU and RAO/LAO angle variations. For frontal projections the largest change with CRA/CAU angle occurs below the table height, increasing by 50% at the position of the interventionalist next to the table for a 30 degree cranial angulation compared to a caudal angulation for a beam directed toward the abdomen. The scattered radiation distribution is also shown to change with different body regions such as the chest and abdomen. A library of 3D scatter dose-rate distributions is being developed to be implemented in a scatter display system for increased staff awareness of dose levels during procedures.
Can breast models be simplified to estimate scattered radiation in breast tomosynthesis?
Oliver Diaz, Prekumar Elangovan, David R. Dance, et al.
Scattered radiation can represent a large portion of the total signal recorded at the image receptor in certain x-ray breast imaging systems, such as digital breast tomosynthesis (DBT). For many years, Monte Carlo (MC) simulation has been the gold-standard approach for estimating the scatter field, initially with simple models and more recently with anthropomorphic phantoms. However, it is unclear how the scattered radiation varies between such models. Further knowledge of scatter behaviour can help to develop faster and simpler scatter field estimation approaches, which are in high demand in virtual clinical trial (VCT) strategies. In this work, the scattered radiation estimated for several homogeneous breast models is compared against that from textured breast phantoms. By means of MC simulations, scatter fields are investigated under the same DBT scenario. Results for a quasi-realistic breast model suggest that homogeneous models with the same shape and glandularity can approximate the scattered radiation produced by a heterogeneous phantom with a median error of 2%. Simpler models with semi-circular shapes, which reduce the complexity of the scatter field estimation and decrease the computational time, show good approximation in the central region of the breast, although larger discrepancies are observed in the peripheral region of the breast image.
Scatter correction with a deterministic integral spherical harmonics method in computed tomography
Yujie Lu, Zhou Yu, Xiaohui Zhan, et al.
Unlike Monte Carlo (MC) methods, the radiative transfer equation (RTE) can precisely simulate single- and multiple-scattered photon distributions without statistical noise in X-ray computed tomography. The simulated scattered photon distribution on the detectors can be used for scatter correction to reduce artifacts and improve CT HU number accuracy. We have developed an integral spherical harmonics algorithm to solve the RTE and achieved good accuracy compared to MC methods. Here, we propose a physical model-based scatter correction method built on the developed RTE solution. The method includes the following steps: (1) CT images are reconstructed with a fast analytical method from scatter-contaminated projections and segmented with an HU-threshold method; (2) a sparse-view scattered photon distribution is simulated with the developed RTE solver on the segmented CT images; (3) a multiplicative scatter correction uses the interpolated full-view scattered photon distribution to remove scattered flux from the measured projections; (4) the final CT images are reconstructed from the corrected projections. Compared to a hardware-based scattered photon rejection method using an anti-scatter grid, the results show that scatter-induced artifacts are significantly reduced and HU uniformity is improved, demonstrating the efficacy of the proposed method.
Improvement of material decomposition accuracy using denoising and deblurring techniques in spectral mammography
With the increasing number of breast cancer patients, dual-energy mammographic techniques have been advanced to improve diagnostic accuracy. In general, conventional dual-energy techniques increase radiation dose because they are based on double exposures. Dual-energy techniques with photon-counting detectors (PCDs) can be implemented using a single exposure. However, the images obtained from dual-energy techniques with PCDs suffer from statistical noise because the dual-energy measurements are performed with a single exposure, limiting the number of effective photons. Thus, the material decomposition accuracy is decreased and the image quality is degraded. In this study, denoising and deblurring techniques were iteratively applied to a dual-energy mammographic technique based on a PCD, and we evaluated RMSE, noise, and CNR for the quantitative analysis of material decomposition. The results showed that the RMSE value was about 0.23 times lower for the decomposed images with the denoising and deblurring techniques than without them. The noise and CNR of the decomposed images were, on average, decreased and increased by factors of 0.23 and 4.17, respectively, through the denoising and deblurring techniques. However, iterative application of the deblurring technique slightly increased the RMSE and noise. Therefore, the material decomposition accuracy and image quality can be improved by applying the denoising and deblurring techniques with an appropriate number of iterations.
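The PCD workflow described above, per-pixel two-material decomposition from two energy bins followed by denoising of the decomposed maps, can be sketched as follows. The attenuation coefficients, phantom, and simple mean-filter denoiser here are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

# Hypothetical mass-attenuation-like coefficients for two basis materials
# at the low/high energy bins -- illustrative values only.
M = np.array([[0.80, 0.35],   # low-energy bin:  [material A, material B]
              [0.30, 0.25]])  # high-energy bin

def decompose(log_low, log_high):
    """Per-pixel two-material decomposition: solve M @ t = p for each pixel."""
    p = np.stack([log_low.ravel(), log_high.ravel()])
    t = np.linalg.solve(M, p)
    return t[0].reshape(log_low.shape), t[1].reshape(log_low.shape)

def denoise(img, n_iter=3):
    """Iterated 3x3 mean filtering as a stand-in for the paper's denoiser."""
    out = img.copy()
    for _ in range(n_iter):
        padded = np.pad(out, 1, mode="edge")
        out = sum(padded[i:i + out.shape[0], j:j + out.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return out
```

Because the 2×2 decomposition matrix amplifies measurement noise, denoising the decomposed maps typically lowers their RMSE against the true thickness maps, which mirrors the improvement reported above.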
Novel geometry for X-Ray diffraction mammary imaging: experimental validation on a breast phantom
Vera Feldman, Joachim Tabary, Caroline Paulus, et al.
Mammography is the first tool in breast cancer diagnosis. Its contrast relies on the difference in X-ray attenuation between healthy and diseased tissues, which is quite limited. This leads to frequent false-positive or inconclusive results and requires further testing. X-ray diffraction provides information about molecular structure and can differentiate between healthy and cancerous breast tissues. It can thus be used in synergy with existing imaging methods to provide complementary diagnosis-relevant insight. We present a novel geometry for such an imaging system and its validation on a breast phantom composed of olive oil and beef muscle, imitating, respectively, the molecular structure of healthy and cancerous breast tissue. Our system combines energy-dispersive and angle-dispersive X-ray diffraction by means of an energy-resolved CdZnTe detector and multi-slit collimation in order to achieve depth-resolved imaging. The position of the tube of beef muscle inside the oil was varied in this experiment. The obtained results are satisfactory regarding the estimated position of the tube, which is very promising for future ex-vivo experiments on human breast tissue samples. Further investigations are being carried out on dose reduction and reliable classification algorithms in order to prepare this method for clinical applications.
Use of machine learning in CARNA proton imager
Gabriel Varney, Catherine Dema, Burak E. Gul, et al.
Proton therapy has potential for high-precision dose delivery, provided that high accuracy is achieved in imaging. Currently, X-ray based techniques are preferred for imaging prior to proton therapy, and the stopping power conversion tables cause irreducible uncertainty. Proposed proton imaging methods aim to reduce this source of error, as well as lessen the radiation exposure of the patient. CARNA is a homogeneous compact calorimeter that utilizes a novel high-density scintillating glass as an active medium. The compact design and unique geometry of the calorimeter eliminate the need for a tracker system and allow it to be directly attached to a gantry. This gives CARNA the potential to be used for in-situ imaging during hadron therapy, possibly to detect the prompt gammas. The novel glass development and the traditional image reconstruction studies performed with CARNA have been reported previously. Here, to improve the image reconstruction, a machine learning implementation with CARNA is reported. A proof-of-concept artificial neural network is shown to efficiently predict the density and shape of tumors.
Spot decomposition in a novel pencil beam scanning proton computed tomography
Range uncertainty is one of the most critical obstacles in proton therapy. Proton computed tomography (pCT) can potentially reconstruct relative stopping power (RSP) directly to within 1%. We report the first proton imaging technique based on decomposing each spot into sub-spots to increase the efficiency of pCT using the pencil beam scanning (PBS) technique. A 14 cm-diameter cylindrical water phantom was used in our simulation, embedded with three groups of cylinders (bone, muscle, and adipose, respectively). Each group of cylinders contains three different sizes (2 cm, 1 cm, and 3 mm in diameter). A TOPAS Monte-Carlo model was developed simulating on-board pCT image acquisition on a PBS gantry. A phase scorer was used to simulate a multi-layer pixelated proton residual energy detector. Each proton spot was divided into 19 overlapping sub-spots, and residual energy statistics were calculated for each sub-spot and assigned to the nearest detector pixel. One hundred eighty projections were generated with 882 spots on each 10×10 cm projection, using 200 MeV protons. Spot spacing and size (1-sigma) were 5.4 mm and 6-8 mm, respectively, on the detector. A straight-line proton path was assumed in an FDK-based reconstruction. The pCT imaging dose was calculated, and the accuracy of the RSP reconstruction was analyzed. Total dose to the phantom was 0.4 mGy. The reconstructed mean RSPs were 0.999 (-0.1%), 0.969 (-0.03%), 1.028 (-0.2%), and 1.703 (-0.9%) for the water, adipose, muscle, and bone, respectively. The standard deviations of the reconstructed RSP were <1.0%. This study has provided a new framework and offered an efficient approach for pCT acquisition and reconstruction.
Analysis of feature relevance using an image quality index applied to digital mammography
Arthur C. Costa, Bruno Barufaldi, Lucas R. Borges, et al.
In previous work, we investigated the application of the normalized anisotropic quality index (NAQI) as an image quality metric for digital mammography. The initial assessment showed that NAQI depends not only on radiation dose, but also varies based on image features such as breast anatomy. In this work, these dependencies are analyzed by assessing the contribution of a range of features on NAQI values. The generalized matrix learning vector quantization (GMLVQ) was used to evaluate feature relevance and to rank the imaging parameters and breast features that affect NAQI. The GMLVQ uses prototype vectors to segregate and to analyze the NAQI in three classes: (1) low, (2) medium, and (3) high NAQI values. We used Spearman’s correlation coefficient (ρ) to compare the results obtained by the GMLVQ method. The GMLVQ was trained using 6,076 clinical mammograms. The statistical analysis showed that NAQI is dependent on several imaging parameters and breast features; in particular, breast area (ρ = -0.65), breast density (ρ = 0.62) and tube current-exposure time product (mAs) (ρ = 0.56). The GMLVQ results show that the most relevant parameters that affect the NAQI values were breast area (approx. 31%), mAs (approx. 24%) and breast density (approx. 15%). The GMLVQ method allowed us to better understand the NAQI results and provide support for the use of this metric for image quality assessment in digital mammography.
Performance evaluation of the preclinical PET/MRI nanoScan with NEMA IQ-Phantom and self-designed 3D printed phantoms
PET/MRI combines two imaging modalities to produce hybrid images of advantageous quality. Combined PET/MRI images present the exquisite anatomical and structural detail provided by MRI together with the functional and molecular imaging from PET. To assess the quality of a state-of-the-art preclinical PET/MRI nanoScan, a performance evaluation was done according to National Electrical Manufacturers Association (NEMA) NU4-2008 standards [1]. Further evaluations were done with self-designed phantoms created through 3D printing.
Feasibility of locating infarct core with 2D angiographic parametric imaging (API) using computed tomography perfusion data
Four-dimensional computed tomography perfusion (CTP) provides the capability to validate angiographic parametric imaging (API) for locating infarct core. Similar results between these two methods could indicate that API can be used to determine whether infarct core has changed following reperfusion procedures. CTP data from 20 patients treated for ischemic strokes was retrospectively collected and loaded into Vitrea software to locate cerebral infarct tissue. The CTP data was then used to simulate anteroposterior (AP), lateral, and planar digital subtraction angiograms (DSA) for each time period through the perfusion scan. These simulated DSA sequences were used to generate API maps related to mean transit time (MTT), bolus arrival time (BAT), time to peak (TTP), area under the curve (AUC), and peak height (PH) parameters throughout the brain. Contralateral hemisphere comparisons of these values were conducted to determine infarct regions. The infarct regions from the Vitrea and API software were compared using a region-of-interest overlay method. For all patients, contralateral hemisphere percent differences of 40% for MTT, 20% for BAT, 35% for TTP, 55% for AUC, and 50% for PH were consistent with infarct regions. Using these percentages, the accuracy of API in labeling infarct tissue for the AP, lateral, and planar views was 84%, 70%, and 78%, respectively. API conducted on CTP data from stroke patients successfully identified infarct tissue using AP and planar DSAs. Lateral DSA studies indicate future work is necessary for improved results. This validates API as a feasible method for locating infarct core after reperfusion procedures.
Quantitative evaluation of the effect of attenuation correction in SPECT images with CT-derived attenuation
Meysam Tavakoli, Marian Naij
In this study, we assessed the importance of attenuation correction by quantitative evaluation of errors associated with attenuation in myocardial SPECT in a phantom study. For attenuation correction we used an attenuation map derived from X-ray CT data. The success of attenuation correction depends strongly on the quality of the attenuation maps. The CT-derived attenuation map, used for non-uniform attenuation correction, was also used to perform transmission-dependent scatter correction. An OSEM algorithm with an attenuation model was developed and used for attenuation correction during image reconstruction. Finally, images reconstructed with our OSEM code were compared with those from the analytical FBP method. The measurements show that our programs are capable of reconstructing SPECT images and correcting attenuation effects. Moreover, to evaluate reconstructed image quality before and after attenuation correction, we applied the well-known Image Quality Index approach. Attenuation correction increased the quality and quantity factors in both methods; this increase is independent of activity for the quantity factor and decreases with activity for the quality factor. Both the quantitative accuracy and qualitative appearance of the SPECT images were improved by attenuation correction. In both OSEM and FBP, the activity ratio of the heart phantom relative to the markers increased substantially. Attenuation correction is therefore recommended for obese patients and low-activity studies. Attenuation correction with CT images and OSEM reconstruction, under the condition of complete registration, yields superior results.
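As a rough illustration of attenuation-corrected statistical reconstruction, a minimal ML-EM update (the building block of OSEM, which applies the same update to subsets of projections) with CT-derived attenuation survival factors folded into the system matrix might look like the sketch below. The toy system matrix and survival factors are invented for demonstration and are not the authors' implementation:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM iteration: x <- x * [A^T (y / (A x))] / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity (normalization) image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)    # forward projection, guarded
        x *= (A.T @ (y / proj)) / sens     # multiplicative update
    return x

def attenuated_system(A_geom, survival):
    """Fold per-(detector-bin, voxel) photon survival probabilities,
    derived from a CT attenuation map, into the geometric system matrix."""
    return A_geom * survival
```

Because attenuation enters the system matrix, the same multiplicative update simultaneously reconstructs and attenuation-corrects the activity estimate.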
Investigation of scintillation light spread in a hemi-ellipsoid monolithic crystal for Cardiac SPECT using Geant4
Purpose: The Dey group proposed a high-sensitivity cardiac SPECT system using hemi-ellipsoid detectors with pinhole collimation. To investigate detector resolution in more detail, we simulate the scintillator light spread on a monolithic hemi-ellipsoidal CsI crystal. We assume small optical light detectors will be placed on the outer surface of the crystal. Methods: We used Geant4 Monte Carlo simulation to produce the expected distribution of scintillation light on the outer surface of the crystal. This lookup table (LUT) was generated for 12 points, 4 at each of the apex, central region, and base of one slice of the crystal. Each set of points was situated at the corners of a “square” of side length 2 mm. The light distributions were visualized using a flattened, cut hemi-ellipsoid. To test the performance of the LUT, 5 points inside each of these “squares” were chosen as test points, and a light distribution was obtained for each. A light distribution match algorithm was developed to localize the test points. Results: Our results showed that, visually, we were able to distinguish the light distributions of points in the central region and base. Furthermore, our algorithm was able to localize the test points to within 1 mm in these regions. The localization was slightly worse at the apex, with a maximum error of about 1.5 mm. However, the high magnification in the apex region will minimize the error in the system resolution for this region. In the future, we will simulate the full LUT and improve the search algorithm.
Cardiac CT estimability index: an ideal estimator in the presence of noise and motion
Taylor Richards, Alex Ivanov, Paul Segars, et al.
Estimating parameters of clinical significance, like coronary stenosis, accurately and precisely from cardiac CT images remains a difficult task, as image noise and cardiac motion can degrade image quality and distort underlying anatomic information. The purpose of this study was to develop a computational framework to objectively quantify stenosis estimation task performance of an ideal estimator in cardiac CT. The resulting scalar figure-of-merit, the estimability index (e’), serves as a cardiac CT specific task-based measure of image quality. The developed computational framework consisted of idealized coronary vessel and plaque models, asymmetric motion point spread functions (mPSF), CT image blur (MTF) and noise operators (NPS), and an automated maximum-likelihood estimator (MLE) implemented as a matched template squared-difference operator. Using this framework, e’ values were calculated for 131 clinical case scenarios from the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE) trial. The calculated e’ results were then utilized to classify patient cases into two exclusive cohorts, high-quality and low-quality, characterized by clinically meaningful differences in image quality. An e’ based linear classifier categorized the 131 patient datasets with an AUC of 0.96 (6 false-positives and 10 false-negatives), compared to an AUC of 0.89 (4 false-positives and 20 false-negatives) for a linear classifier based on contrast-to-noise ratio (CNR). In summary, a computational framework to objectively quantify stenosis estimation task performance was successfully implemented and was reflective of clinical results in the context of a subset of a large clinical trial (PROMISE) with diverse sites, readers, scanners, acquisition protocols, and patient types.
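The matched-template squared-difference estimator at the core of such a framework can be illustrated with a 1-D toy problem: a binary lumen profile, a blur kernel standing in for the mPSF/MTF, and a grid search over candidate stenosis fractions. All shapes and parameters below are illustrative, not those of the PROMISE analysis:

```python
import numpy as np

def vessel_profile(stenosis, width=40):
    """Idealized 1-D vessel cross-section; `stenosis` is the fractional
    lumen narrowing (0 = healthy, approaching 1 = fully occluded)."""
    x = np.linspace(-1, 1, width)
    return (np.abs(x) < (1.0 - stenosis)).astype(float)

def estimate_stenosis(measured, candidates, psf):
    """Matched-template squared-difference estimator: under additive white
    Gaussian noise, minimizing the squared difference to blurred candidate
    templates is the maximum-likelihood estimate of the stenosis fraction."""
    errs = []
    for s in candidates:
        template = np.convolve(vessel_profile(s, len(measured)), psf, mode="same")
        errs.append(np.sum((measured - template) ** 2))
    return candidates[int(np.argmin(errs))]
```

In the paper's framework the templates would additionally incorporate asymmetric motion blur and correlated noise; the grid-search structure of the estimator stays the same.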
Comparison of digital mammograms obtained with the cassette-type retrofit mammography flat panel detector installed on an analog system and the conventional full-field digital mammography
Purpose: To compare the image quality of digital mammograms obtained with a cassette-type retrofit digital mammography (CRM) flat panel detector (FPD) installed on an analog system and conventional full-field digital mammography (FFDM). Materials and Methods: Digital mammograms were prospectively obtained with a CRM FPD (RoseM 2430C, DRTECH Corp) installed on an analog system (Lorad M-IV, Hologic) in 90 women (median age, 49 years) who had previous mammograms obtained with conventional FFDM units. Ninety pairs of mammograms were evaluated by two breast radiologists. The overall image quality (including contrast and sharpness) and visibility of normal structure were evaluated using a 5-point scale (1, poor; 5, excellent). If a lesion was present, the visibility of the lesion was evaluated using a 4-point scale (1, not visible; 4, high conspicuity). Results: The contrast and sharpness of CRM FPD (mean, 4.1±0.8 and 4.0±0.9, respectively) were not significantly different from those of FFDM (mean, 4.2±0.5 and 4.2±0.5; P<0.05). Of the 90 women, the overall image quality was similar between the two images in 39 (43%); FFDM showed better image quality in 33 (37%); CRM FPD showed better image quality in 18 (20%) (P=0.055). There were 54 lesions (44 calcifications, 6 masses, 3 asymmetries, and one mass with calcifications) in 33 women. The difference in lesion visibility between FFDM (mean 3.3±0.8) and CRM FPD (mean 3.4±0.8) was not statistically significant (P=0.083). Conclusion: The image quality of the mammograms obtained with the CRM FPD was comparable with that of FFDM.
Patient-informed and physiology-based modelling of contrast dynamics in cross-sectional imaging
Previous studies have shown that many factors, including body habitus, sex, and age of the patient, as well as contrast injection protocol, contribute to the variability in contrast-enhanced cross-sectional imaging (i.e., CT). We have previously developed a compartmentalized differential-equation physiology-based pharmacokinetics (PBPK) model incorporated into computational human models (XCAT) to estimate contrast concentration and CT number (HU) enhancement of organs over time. While input to the PBPK model requires certain attributes (height, weight, age, and sex), this still results in a generic prediction, as it stratifies patients into only four cohorts. In addition, it does not account for scanning parameters which influence the quality of the image. The PBPK model also requires an estimate of the patient’s major organ volumes, not readily available before a scan, which limits its potential application in prospective personalization of contrast-enhanced protocols. To address these limitations, this study used a machine learning approach to prospectively model contrast dynamics for an organ of interest (liver), given the patient attributes, contrast administration, and imaging parameters. To evaluate its accuracy, we compared the proposed model against the PBPK model. A library of 170 clinical images, with their corresponding patient attributes and contrast and imaging protocols, was used to build the network. The developed network used 70% of the cases for training and validation and the rest for testing. The results indicated a more accurate predictive performance (higher R2), as compared to the PBPK model, in estimating hepatic HU values using patient attributes, scanning parameters, and contrast administration.
Computational-efficient cascaded neural network for CT image reconstruction
In computed tomographic (CT) image reconstruction, image prior design and parameter tuning are important to improving the image reconstruction quality from noisy or undersampled projections. In recent years, the development of deep learning in medical image reconstruction has made it possible to automatically find both suitable image priors and hyperparameters. By unrolling the reconstruction algorithm into a finite number of iterations and parameterizing prior functions and hyperparameters with deep artificial neural networks, all the parameters can be learned end-to-end to reduce the difference between reconstructed images and the training ground truth. Despite its superior performance, the unrolling scheme suffers from huge memory consumption and computational cost in the training phase, making it hard to apply to 3-dimensional applications in CT, such as cone-beam CT, helical CT, tomosynthesis, etc. In this paper, we propose a training-time computationally efficient cascaded neural network for CT image reconstruction, which has several sequentially trained cascades of networks for image quality improvement, connected by data fidelity correction steps. Each cascade was trained purely in the image domain, so that image patches could be utilized for training, which significantly accelerates the training process and reduces memory consumption. The proposed method is fully scalable to 3D data with current hardware. Simulations of sparse-view sampling demonstrated that the proposed method could achieve image quality similar to state-of-the-art unrolled networks.
Optimizing voxel-wise dose calculations in cone-beam computed tomography
An analytical algorithm for the estimation of patient-specific dose distributions in cone-beam computed tomography is introduced. The developed dose estimation method requires the reconstructed voxel data in values of linear attenuation coefficients and the scanning protocol. The algorithm first calculates the dose distribution due to the primary beam attenuation along the beam path between the source and each reconstructed voxel, in conjunction with the solid angle subtended by the given voxel. Then, this primary dose voxel map becomes the source for the dose distribution due to the scattered photons. For the pre-calculated primary dose value in a given voxel, the scatter dose values to all the other voxels are calculated in the same manner as the primary. The developed algorithm shows good agreement with Monte Carlo (MC) simulation for an anthropomorphic head-and-neck phantom. The accuracy of the analytical method is investigated by comparing estimates with the MC estimates, and the strategy for computational acceleration is discussed in terms of the number of projections used for reconstruction, the number of spectral bins of the incident x-ray spectrum, the number of voxels, and the extent of scattering ranges.
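The first step of such an algorithm, primary dose from beam attenuation along the source-to-voxel path weighted by the subtended solid angle, can be sketched for a single ray as below. The inverse-square factor stands in for the solid-angle term, the interaction fraction uses a single effective attenuation coefficient per voxel, and all parameters are illustrative assumptions:

```python
import numpy as np

def primary_dose_along_ray(mu, dx, source_dist, phi0=1.0):
    """Primary dose (arbitrary units) deposited in voxels along one ray.

    mu          : linear attenuation coefficient per voxel (1/cm)
    dx          : voxel size along the ray (cm)
    source_dist : distance from the source to the first voxel face (cm)
    phi0        : fluence emitted toward the ray (arbitrary units)
    """
    # Cumulative attenuation from the source up to each voxel's entry face.
    path = np.concatenate([[0.0], np.cumsum(mu * dx)])[:-1]
    fluence = phi0 * np.exp(-path)
    # Inverse-square weighting approximates the solid angle each voxel subtends.
    r = source_dist + dx * (np.arange(len(mu)) + 0.5)
    solid_angle = 1.0 / r**2
    # Fraction of incident photons interacting within the voxel.
    absorbed = fluence * (1.0 - np.exp(-mu * dx))
    return absorbed * solid_angle
```

A polyenergetic version would repeat this per spectral bin and sum, which is one of the acceleration trade-offs the abstract discusses.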
Toward large-area flat-panel sandwich detectors for single-shot dual-energy imaging
The authors have developed a sandwich-like multilayer detector capable of imaging a large field of view. Two detector layers use the same photodiode arrays based on complementary metal-oxide-semiconductor active pixel technology, while the scintillators of the front and rear detectors are composed of Gd2O2S:Tb and CsI:Tl, respectively. The front detector is implemented on a flexible printed-circuit board (FPCB) that is electrically connected to a corresponding control/readout PCB located under the rear detector’s control/readout PCB. The energy separation between the two detector layers can be improved by using additional interlayer filters. Imaging performance of each detector layer is investigated for various filter designs. The imaging performance is evaluated in terms of large-area signal and noise, modulation-transfer function, noise-power spectrum, and detective quantum efficiency.
Iterative CT image reconstruction using neural network optimization algorithms
Stochastic, or model-based, iterative reconstruction accounts for the stochastic nature of the CT imaging process and for some artifacts, and can provide better reconstruction quality. It is, however, computationally expensive. In this work, we investigated the use of neural network training algorithms such as momentum and Adam for iterative CT image reconstruction. Our experimental results indicate that these algorithms provide better results and faster convergence than basic gradient descent. They also provide results competitive with coordinate descent (a leading technique for iterative reconstruction) but, unlike coordinate descent, they can be implemented as parallel computations, and hence can potentially accelerate iterative reconstruction in practice.
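A minimal sketch of Adam applied to a linear least-squares reconstruction objective (standing in for the full statistical CT model) is shown below; the problem size, learning rate, and other hyperparameters are illustrative, and the update is the standard Adam rule rather than the authors' exact configuration:

```python
import numpy as np

def adam_recon(A, y, n_iter=3000, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """Solve min_x 0.5*||Ax - y||^2 with the Adam optimizer."""
    x = np.zeros(A.shape[1])
    m = np.zeros_like(x)                 # first-moment (momentum) estimate
    v = np.zeros_like(x)                 # second-moment estimate
    for t in range(1, n_iter + 1):
        g = A.T @ (A @ x - y)            # gradient of the data-fidelity term
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1**t)          # bias correction
        v_hat = v / (1 - b2**t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```

Unlike coordinate descent, every element of the gradient and update here is a dense vector operation, which is what makes this family of methods amenable to parallel (e.g., GPU) implementation.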
A novel mammographic fusion imaging technique: the first results of tumor tissues detection from resected breast tissues using energy-resolved photon counting detector
Mariko Sasaki, Shuji Koyama, Yoshie Kodera, et al.
We developed a prototype photon-counting mammography unit with a cadmium zinc telluride detector, which provides a new type of image with physical analysis parameters. Using the X-ray attenuation information obtained from this device, we examined the ability of this technique to discriminate substances and estimate their compositions. To estimate substance compositions, we used breast tissues resected immediately after a surgical operation for invasive carcinoma of no special type, and used phantoms to reproduce mammary gland and adipose tissue. In our system, the spectrum transmitted through the substance was measured in three energy bins at each pixel. The product of the linear attenuation coefficient and thickness was calculated for each bin. Using these three values, scatterplots displaying all the values calculated from each pixel inside the region of interest (ROI) on the image were created. A scatterplot displaying only the center-of-gravity values calculated for each ROI was created to evaluate the separation of plot points and discriminate between different substance compositions. The center-of-gravity points for the malignant tumor tissue were plotted separately from those for the normal tissue. Furthermore, a fusion image was created by overlaying an X-ray image with the values of the scatterplot points represented on a 10-step color scale. The fusion image highlighted the differences in substance compositions, such as malignant tumor versus mammary gland tissue, through color tone, by adjusting the color-scale level.