Proceedings Volume 9783

Medical Imaging 2016: Physics of Medical Imaging

Despina Kontos, Thomas G. Flohr

Volume Details

Date Published: 22 June 2016
Contents: 32 Sessions, 217 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2016
Volume Number: 9783

Table of Contents

  • Front Matter: Volume 9783
  • Tomosynthesis and Digital Subtraction Angiography
  • Breast Imaging
  • Keynote and Dual and Multi Energy CT
  • Cone Beam CT I: New Technologies, Corrections
  • Phase Contrast Imaging
  • Cone Beam CT II: System Optimization, Image Reconstruction
  • CT I: Technology, System Characterization, Applications
  • Detectors
  • CT II: Image Reconstruction, Artifact Reduction
  • Photon Counting CT I: Instrumentation
  • PET and MR
  • Photon Counting CT II: Spectral Imaging
  • New Systems and Technologies
  • Scatter and Diffraction Imaging
  • Task Driven Imaging, Observers, Detectability, Phantom Studies
  • Poster Session: Breast Imaging
  • Poster Session: Cone Beam CT
  • Poster Session: CT: Artifact Corrections
  • Poster Session: CT: Technology, System Characterization, Dose, Applications
  • Poster Session: Detectors
  • Poster Session: Dual and Multi Energy CT
  • Poster Session: Image Reconstruction
  • Poster Session: Measurements
  • Poster Session: New Systems and Technologies
  • Poster Session: PET, SPECT, MR, Ultrasound
  • Poster Session: Phase Contrast Imaging
  • Poster Session: Photon Counting CT
  • Poster Session: Radiation Therapy
  • Poster Session: Scatter and Diffraction Imaging
  • Poster Session: Task Driven Imaging, Observers, Detectability, Phantom Studies
  • Poster Session: Tomosynthesis and Digital Radiography
Front Matter: Volume 9783
This PDF file contains the front matter associated with SPIE Proceedings Volume 9783, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Tomosynthesis and Digital Subtraction Angiography
Feasibility of reduced-dose 3D/4D-DSA using a weighted edge preserving filter
A conventional 3D/4D digital subtraction angiogram (DSA) requires two rotational acquisitions (mask and fill) to compute the log-subtracted projections that are used to reconstruct a 3D/4D volume. Since all of the vascular information is contained in the fill acquisition, it is hypothesized that it is possible to reduce the x-ray dose of the mask acquisition substantially and still obtain subtracted projections adequate to reconstruct a 3D/4D volume with noise level comparable to a full dose acquisition. A full dose mask and fill acquisition were acquired from a clinical study to provide a known full dose reference reconstruction. Gaussian noise was added to the mask acquisition to simulate a mask acquisition acquired at 10% relative dose. Noise in the low-dose mask projections was reduced with a weighted edge preserving (WEP) filter designed to preserve bony edges while suppressing noise. 2D log-subtracted projections were computed from the filtered low-dose mask and full-dose fill projections, and then 3D/4D-DSA reconstruction algorithms were applied. Additional bilateral filtering was applied to the 3D volumes. The signal-to-noise ratio measured in the filtered 3D/4D-DSA volumes was compared to the full dose case. The average ratio of filtered low-dose SNR to full-dose SNR was 1.07 for the 3D-DSA and 1.05 for the 4D-DSA, indicating the method is a feasible approach to restoring SNR in DSA scans acquired with a low-dose mask. The method was also tested in a phantom study with full dose fill and 22% dose mask.
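The log-subtraction and SNR comparison described above can be sketched numerically as follows. This is an illustrative toy model only: the noise levels, vessel attenuation, and ROI positions are assumptions, and the WEP filter itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Toy vascular attenuation map (line integrals) present only in the fill run.
vessel = np.zeros(shape)
vessel[20:30, 20:30] = 0.3

# Detector readings (arbitrary units) for the mask (no contrast) and fill runs.
mask_full = 1000.0 + 20.0 * rng.standard_normal(shape)
fill = (1000.0 + 20.0 * rng.standard_normal(shape)) * np.exp(-vessel)

# Log-subtracted projection: ln(mask) - ln(fill) isolates the vascular signal.
sub_full = np.log(mask_full) - np.log(fill)

# Emulate a reduced-dose mask by adding extra Gaussian noise, as in the study.
mask_low = mask_full + 60.0 * rng.standard_normal(shape)
sub_low = np.log(mask_low) - np.log(fill)

def snr(sub):
    # SNR = vessel ROI mean over background ROI noise (hypothetical ROIs).
    return sub[20:30, 20:30].mean() / sub[40:60, 40:60].std()

print(snr(sub_full), snr(sub_low))  # the noisier mask lowers subtraction SNR
```

Restoring the low-dose SNR to the full-dose level would then be the job of the edge-preserving filter applied to the mask projections.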
Quantification of resolution in multiplanar reconstructions for digital breast tomosynthesis
Trevor L. Vent, Raymond J. Acciavatti, Young Joon Kwon, et al.
Multiplanar reconstruction (MPR) in digital breast tomosynthesis (DBT) allows tomographic images to be portrayed in various orientations. We have conducted research to determine the resolution of tomosynthesis MPR. We built a phantom that houses a star test pattern to measure resolution. This phantom provides three rotational degrees of freedom. The design consists of two hemispheres with longitudinal and latitudinal grooves that reference angular increments. When joined together, the hemispheres form a dome that sits inside a cylindrical encasement. The cylindrical encasement contains reference notches to match the longitudinal and latitudinal grooves that guide the phantom’s rotations. With this design, any orientation of the star-pattern can be analyzed. Images of the star-pattern were acquired using a DBT mammography system at the Hospital of the University of Pennsylvania. Images taken were reconstructed and analyzed by two different methods. First, the maximum visible frequency (in line pairs per millimeter) of the star test pattern was measured. Then, the contrast was calculated at a fixed spatial frequency. These analyses confirm that resolution decreases with tilt relative to the breast support. They also confirm that resolution in tomosynthesis MPR is dependent on object orientation. Current results verify that the existence of super-resolution depends on the orientation of the frequency; the direction parallel to x-ray tube motion shows super-resolution. In conclusion, this study demonstrates that the direction of the spatial frequency relative to the motion of the x-ray tube is a determinant of resolution in MPR for DBT.
A new generation of stationary digital breast tomosynthesis system with wider angular span and faster scanning time
Jabari Calliste, Gongting Wu, Philip E. Laganis, et al.
We have developed a clinically ready first generation stationary breast tomosynthesis system (s-DBT). In the s-DBT system, focal spot blur associated with x-ray source motion is completely eliminated, allowing for rapid acquisition of projection images over a larger angular span without changing the acquisition time.

In phantom studies, the 1st generation s-DBT system demonstrated 30% higher spatial resolution than corresponding continuous-motion DBT systems. The system is currently being evaluated for its diagnostic performance against FFDM in a 100-patient clinical evaluation. Initial results indicate that the s-DBT system can produce increased lesion conspicuity and comparable MC visibility. However, due to x-ray flux limitations, certain large patients have had to be excluded. Recent studies have shown that increasing the angular span beyond 30° can be beneficial for enhanced depth resolution. We report the preliminary characterization of the 2nd generation s-DBT system with a new CNT x-ray source array, increased tube flux, and a larger angular span. Increasing the x-ray tube flux allows for a larger patient population and for dual-energy imaging. Results indicate that the system delivers more than twice the flux, allowing imaging of patients of all sizes with an acquisition time of 2-4 seconds. A 7° increase in angular span over the 1st generation decreased the ASF by 37%. Additionally, the 2nd generation s-DBT system, using a dedicated AFVR reconstruction method, achieved a 92% increase in in-plane resolution over a continuous-motion (CM) DBT system and a 37% increase in spatial resolution over the 1st generation s-DBT system.
Stationary digital chest tomosynthesis for coronary artery calcium scoring
Gongting Wu, Jiong Wang, Marci Potuzko, et al.
The coronary artery calcium score (CACS) measures the buildup of calcium on the coronary artery wall and has been shown to be an important predictor of the risk of coronary artery disease (CAD). Currently CACS is measured using CT, though the relatively high cost and high radiation dose have limited its adoption as a routine screening procedure. Digital chest tomosynthesis (DCT) is a low-dose and low-cost alternative to CT and has been shown to achieve 90% of the sensitivity of CT in lung disease screening. However, commercial DCT requires a long scanning time and cannot be adapted for the high-resolution gated cardiac imaging necessary for CACS. The stationary DCT system (s-DCT), developed in our lab, has the potential to significantly shorten the scanning time and to enable high-resolution cardiac-gated imaging. Here we report preliminary results of using s-DCT to estimate the CACS. A heart phantom with realistic coronary calcifications was developed and scanned with both the s-DCT system and a clinical CT scanner. The adapted fan-beam volume reconstruction (AFVR) method, developed specifically for stationary tomosynthesis systems, was used to obtain high-resolution tomosynthesis images. A trained cardiologist segmented the calcifications and the CACS was obtained. We observed a strong correlation between the tomosynthesis-derived CACS and the CT CACS (r² = 0.88). Our results show that s-DCT imaging has the potential to estimate CACS, thus providing a possible low-cost and low-dose imaging protocol for screening and monitoring CAD.
Detection of microcalcification clusters by 2D-mammography and narrow and wide angle digital breast tomosynthesis
Andria Hadjipanteli, Premkumar Elangovan, Padraig T. Looney, et al.
The aim of this study was to compare the detection of microcalcification clusters by human observers in breast images using 2D-mammography and narrow (15°/15 projections) and wide (50°/25 projections) angle digital breast tomosynthesis (DBT). Simulated microcalcification clusters with a range of microcalcification diameters (125 μm to 275 μm) were inserted into 6 cm thick simulated compressed breasts. Breast images were produced with and without inserted microcalcification clusters using a set of image modelling tools, which were developed to represent clinical imaging by mammography and tomosynthesis. Commercially available software was used for image processing and image reconstruction. The images were then used in a series of 4-alternative forced choice (4AFC) human observer experiments conducted for signal detection with the microcalcification clusters as targets. The minimum detectable calcification diameter was found for each imaging modality: (i) 2D-mammography: 164±5 μm, (ii) narrow angle DBT: 210±5 μm, (iii) wide angle DBT: 255±4 μm. A statistically significant difference was found between the minimum detectable calcification diameters of the three imaging modalities. Furthermore, there was no statistically significant difference between the results of the five observers who participated in this study. In conclusion, this study presents a method that quantifies the threshold diameter required for microcalcification detection, using high resolution, realistic images with observers, for the comparison of DBT geometries with 2D-mammography. 2D-mammography can visualise a smaller detail diameter than both DBT imaging modalities, and narrow-angle DBT can visualise a smaller detail diameter than wide-angle DBT.
Breast Imaging
Estimating breast thickness for dual-energy subtraction in contrast-enhanced digital mammography using calibration phantoms
Kristen C. Lau, Young Joon Kwon, Moez Karim Aziz, et al.
Dual-energy contrast-enhanced digital mammography (DE CE-DM) uses an iodinated contrast agent to image the perfusion and vasculature of the breast. DE images are obtained by a weighted logarithmic subtraction of the high-energy (HE) and low-energy (LE) image pairs. We hypothesized that the optimal DE subtraction weighting factor is thickness-dependent, and developed a method for determining breast tissue composition and thickness in DE CE-DM. Phantoms were constructed using uniform blocks of 100% glandular-equivalent and 100% adipose-equivalent material. The thickness of the phantoms ranged from 3 to 8 cm, in 1 cm increments. For a given thickness, the glandular-adipose composition of the phantom was varied using different combinations of blocks. The logarithmic LE and logarithmic HE signal intensities were measured; they decrease linearly with increasing glandularity for a given thickness. The signals decrease with increasing phantom thickness and the x-ray signal decreases linearly with thickness for a given glandularity. As the thickness increases, the attenuation difference per additional glandular block decreases, indicating beam hardening. From the calibration mapping, we have demonstrated that we can predict percent glandular tissue and thickness when given two distinct signal intensities. Our results facilitate the subtraction of tissue at the boundaries of the breast, and aid in discriminating between contrast agent uptake in glandular tissue and subtraction artifacts.
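The weighted logarithmic subtraction at the core of DE CE-DM can be sketched as below. All attenuation coefficients, thicknesses, and the weight-selection rule here are illustrative assumptions in a simple two-material model, not the calibration values from the study; the point is that the weight w that cancels the glandular/adipose contrast depends on the attenuation properties, which is why a thickness-dependent calibration is needed.

```python
import numpy as np

def de_subtraction(le, he, w):
    """Weighted logarithmic DE subtraction: ln(HE) - w * ln(LE)."""
    return np.log(he) - w * np.log(le)

# Hypothetical linear attenuation coefficients (cm^-1) at the two spectra.
mu_le = {"gland": 0.80, "adip": 0.45}
mu_he = {"gland": 0.40, "adip": 0.25}

t_gland = np.array([0.0, 2.0, 4.0])   # cm of glandular tissue
t_total = 6.0                          # cm compressed breast thickness
t_adip = t_total - t_gland

# Ideal monoenergetic signals, I/I0 = exp(-mu * t), summed over materials.
le = np.exp(-(mu_le["gland"] * t_gland + mu_le["adip"] * t_adip))
he = np.exp(-(mu_he["gland"] * t_gland + mu_he["adip"] * t_adip))

# Weight that cancels the glandular/adipose contrast in this toy model:
w = (mu_he["gland"] - mu_he["adip"]) / (mu_le["gland"] - mu_le["adip"])
de = de_subtraction(le, he, w)
print(de)  # ~constant across glandularity: the tissue signal is cancelled
```

With beam hardening, as the abstract notes, the effective coefficients (and hence the optimal w) shift with thickness, motivating the phantom calibration mapping.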
Generation of 3D synthetic breast tissue
Premkumar Elangovan, David R. Dance, Kenneth C. Young, et al.
Virtual clinical trials are an emergent approach for the rapid evaluation and comparison of various breast imaging technologies and techniques using computer-based modeling tools. A fundamental requirement of this approach for mammography is the use of realistic looking breast anatomy in the studies to produce clinically relevant results. In this work, a biologically inspired approach has been used to simulate realistic synthetic breast phantom blocks for use in virtual clinical trials. A variety of high and low frequency features (including Cooper’s ligaments, blood vessels and glandular tissue) have been extracted from clinical digital breast tomosynthesis images and used to simulate synthetic breast blocks. The appearance of the phantom blocks was validated by presenting a selection of simulated 2D and DBT images interleaved with real images to a team of experienced readers for rating using an ROC paradigm. The average areas under the curve for 2D and DBT images were 0.53±0.04 and 0.55±0.07 respectively; errors are the standard errors of the mean. The values indicate that the observers had difficulty in differentiating the real images from simulated images. The statistical properties of simulated images of the phantom blocks were evaluated by means of power spectrum analysis. The power spectrum curves for real and simulated images closely match and overlap, indicating good agreement.
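A radially averaged power spectrum of the kind used to compare real and simulated textures can be sketched as follows. The 1/f-style random field standing in for mammographic texture is an assumption for demonstration, not the authors' phantom.

```python
import numpy as np

def radial_power_spectrum(img, nbins=32):
    """Radially averaged 2D power spectrum of a mean-subtracted image."""
    img = img - img.mean()
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), nbins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
    sums = np.bincount(which, weights=ps.ravel(), minlength=nbins)
    counts = np.bincount(which, minlength=nbins)
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(1)
# 1/f-filtered noise as a crude stand-in for breast-like texture.
white = rng.standard_normal((128, 128))
f = np.fft.fftfreq(128)
fx, fy = np.meshgrid(f, f)
filt = 1.0 / np.maximum(np.hypot(fx, fy), 1.0 / 128)
textured = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))

spec = radial_power_spectrum(textured)
print(spec[:4])  # power falls off with spatial frequency
```

Comparing such curves for real and simulated image sets is one way to quantify the "closely match and overlap" claim above.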
A new, open-source, multi-modality digital breast phantom
An anthropomorphic digital breast phantom has been developed with the goal of generating random voxelized breast models that capture the anatomic variability observed in vivo. This is a new phantom and is not based on existing digital breast phantoms or segmentation of patient images. It has been designed at the outset to be modality agnostic (i.e., suitable for use in modeling x-ray based imaging systems, magnetic resonance imaging, and potentially other imaging systems) and open source so that users may freely modify the phantom to suit a particular study. In this work we describe the modeling techniques that have been developed, the capabilities and novel features of this phantom, and study simulated images produced from it. Starting from a base quadric, a series of deformations are performed to create a breast with a particular volume and shape. Initial glandular compartments are generated using a Voronoi technique and a ductal tree structure with terminal duct lobular units is grown from the nipple into each compartment. An additional step involving the creation of fat and glandular lobules using a Perlin noise function is performed to create more realistic glandular/fat tissue interfaces and generate a Cooper’s ligament network. A vascular tree is grown from the chest muscle into the breast tissue. Breast compression is performed using a neo-Hookean elasticity model. We show simulated mammographic and T1-weighted MRI images and study properties of these images.
Rayleigh imaging in spectral mammography
Spectral imaging is the acquisition of multiple images of an object at different energy spectra. In mammography, dual-energy imaging (spectral imaging with two energy levels) has been investigated for several applications, in particular material decomposition, which allows for quantitative analysis of breast composition and quantitative contrast-enhanced imaging. Material decomposition with dual-energy imaging is based on the assumption that there are two dominant photon interaction effects that determine linear attenuation: the photoelectric effect and Compton scattering. This assumption limits the number of basis materials, i.e. the number of materials that can be differentiated, to two. However, Rayleigh scattering may account for more than 10% of the linear attenuation in the mammography energy range. In this work, we show that a modified version of a scanning multi-slit spectral photon-counting mammography system is able to acquire three images at different spectra and can be used for triple-energy imaging. We further show that triple-energy imaging in combination with the efficient scatter rejection of the system enables measurement of Rayleigh scattering, which adds an additional energy dependency to the linear attenuation and enables material decomposition with three basis materials. Three available basis materials have the potential to improve virtually all applications of spectral imaging.
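The step from two to three basis materials can be illustrated with a small linear system: with three spectra, the log-projections give three equations in three unknown basis-material path lengths. The coefficient values below are illustrative assumptions, not measured attenuation data, and the model is noiseless and monoenergetic per bin.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (cm^-1) of three basis
# materials at three acquisition spectra (illustrative values only).
#              mat 1  mat 2  contrast-like
A = np.array([[0.25, 0.30, 2.0],    # low-energy spectrum
              [0.20, 0.22, 1.2],    # middle spectrum
              [0.16, 0.18, 0.7]])   # high-energy spectrum

t_true = np.array([4.0, 1.5, 0.05])   # basis-material path lengths (cm)
p = A @ t_true                         # ideal log-projections, -ln(I/I0)

# Three-material decomposition: invert the 3x3 system per pixel.
t_est = np.linalg.solve(A, p)
print(t_est)
```

With only two spectra the corresponding 2x3 system is underdetermined, which is the limitation the triple-energy acquisition removes.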
Reproducing 2D breast mammography images with 3D printed phantoms
Matthew Clark, Bahaa Ghammraoui, Andreu Badal
Mammography is currently the standard imaging modality used to screen women for breast abnormalities and, as a result, it is a tool of great importance for the early detection of breast cancer. Physical phantoms are commonly used as surrogates of breast tissue to evaluate some aspects of the performance of mammography systems. However, most phantoms do not reproduce the anatomic heterogeneity of real breasts. New fabrication technologies, such as 3D printing, have created the opportunity to build more complex, anatomically realistic breast phantoms that could potentially assist in the evaluation of mammography systems. The primary objective of this work is to present a simple, easily reproducible methodology to design and print 3D objects that replicate the attenuation profile observed in real 2D mammograms. The secondary objective is to evaluate the capabilities and limitations of the competing 3D printing technologies, and characterize the x-ray properties of the different materials they use. Printable phantoms can be created using the open-source code introduced in this work, which processes a raw mammography image to estimate the amount of x-ray attenuation at each pixel, and outputs a triangle mesh object that encodes the observed attenuation map. The conversion from the observed pixel gray value to a column of printed material with equivalent attenuation requires certain assumptions and knowledge of multiple imaging system parameters, such as x-ray energy spectrum, source-to-object distance, compressed breast thickness, and average breast material attenuation. A detailed description of the new software, a characterization of the printed materials using x-ray spectroscopy, and an evaluation of the realism of the sample printed phantoms are presented.
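The gray-value-to-material conversion described above amounts to matching line integrals of attenuation. A minimal sketch, assuming a known per-pixel attenuation line integral and a known attenuation coefficient for the printing material (both values below are hypothetical):

```python
import numpy as np

def printed_height(pixel_attenuation, mu_print):
    """Height of printed material whose attenuation matches a pixel's
    line integral -ln(I/I0). mu_print is the measured linear attenuation
    of the printing plastic (cm^-1); both inputs are assumptions here."""
    return pixel_attenuation / mu_print

# Example: a pixel whose line integral corresponds to 5 cm of tissue with
# an average mu of 0.5 cm^-1, printed in a plastic with mu = 0.4 cm^-1.
line_integral = 0.5 * 5.0
h = printed_height(line_integral, mu_print=0.4)
print(h)  # 6.25 cm of printed plastic
```

In the actual workflow, recovering the line integral from a processed mammogram additionally requires the spectrum, geometry, and breast-thickness assumptions listed in the abstract.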
Breast ultrasound tomography with two parallel transducer arrays
Breast ultrasound tomography is an emerging imaging modality to reconstruct the sound speed, density, and ultrasound attenuation of the breast in addition to ultrasound reflection/beamforming images for breast cancer detection and characterization. We recently designed and manufactured a new synthetic-aperture breast ultrasound tomography prototype with two parallel transducer arrays consisting of a total of 768 transducer elements. The transducer arrays are translated vertically to scan the breast in a warm water tank from the chest wall/axillary region to the nipple region to acquire ultrasound transmission and reflection data for whole-breast ultrasound tomography imaging. The distance of these two ultrasound transducer arrays is adjustable for scanning breasts with different sizes. We use our breast ultrasound tomography prototype to acquire phantom and in vivo patient ultrasound data to study its feasibility for breast imaging. We apply our recently developed ultrasound imaging and tomography algorithms to ultrasound data acquired using our breast ultrasound tomography system. Our in vivo patient imaging results demonstrate that our breast ultrasound tomography can detect breast lesions shown on clinical ultrasound and mammographic images.
Keynote and Dual and Multi Energy CT
Limited-angle multi-energy CT using joint clustering prior and sparsity regularization
In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving data efficiency by a factor equal to the number of energy channels, without introducing the visible limited-angle artifacts caused by reducing projection views. Leveraging the structural coherence across energies, we first pre-reconstruct a prior structure image using projection data from all energy channels. We then perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem, and we solve it with the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissue is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy causes severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of limited-angle artifacts. All edge details are well preserved in our experimental study.
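The clustering step that builds the structure prior can be sketched with a plain 1D k-means on pixel intensities. This is a stand-in illustration with synthetic tissue classes, not the CPSR implementation; the initialization and class parameters are assumptions.

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Plain Lloyd k-means on scalar intensities, quantile-initialized."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign each pixel to its nearest center, then update the centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

rng = np.random.default_rng(2)
# Prior-image intensities drawn from three tissue-like classes (toy data).
prior = np.concatenate([rng.normal(0.0, 0.01, 500),   # air
                        rng.normal(0.2, 0.01, 500),   # soft tissue
                        rng.normal(1.0, 0.02, 500)])  # tooth-like
labels, centers = kmeans_1d(prior, k=3)
print(np.sort(centers))  # close to the class means 0.0, 0.2, 1.0
```

In CPSR, the resulting cluster labels define the sparse dictionary representation that constrains the per-energy reconstructions.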
Dictionary-based image denoising for dual energy computed tomography
Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while the noise adds incoherently. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, our proposed algorithm achieves superior similarity to the ground truth.
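The patch-combination idea can be sketched as follows: z-score normalization makes the (linearly related) signals in corresponding patches align, so a weighted average adds the signal coherently while independent noise partially cancels. The noise levels, weights, and toy patch content are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def combine_patches(p_low, p_high, sigma_low, sigma_high):
    """Combine corresponding low/high-energy patches (sketch): normalize
    each patch, then average with inverse-variance-style weights."""
    n_low = (p_low - p_low.mean()) / (p_low.std() + 1e-12)
    n_high = (p_high - p_high.mean()) / (p_high.std() + 1e-12)
    w_low, w_high = 1.0 / sigma_low**2, 1.0 / sigma_high**2
    return (w_low * n_low + w_high * n_high) / (w_low + w_high)

rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0, np.pi, 64)).reshape(8, 8)
p_low = 2.0 * signal + 0.3 * rng.standard_normal((8, 8))   # linear transform
p_high = signal + 0.3 * rng.standard_normal((8, 8))
combined = combine_patches(p_low, p_high, 0.3, 0.3)

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# The combined patch tracks the clean signal better than the noisy patch.
print(corr(p_high, signal), corr(combined, signal))
```

Dictionary denoising would then operate on such combined patches instead of each energy image separately.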
Cone Beam CT I: New Technologies, Corrections
Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging
Andreas Fieselmann, Jan Steinbrener, Anna K. Jerebko, et al.
In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.
Five-dimensional motion compensation for respiratory and cardiac motion with cone-beam CT of the thorax region
Sebastian Sauppe, Andreas Hahn, Marcus Brehm, et al.
We propose an adapted version of our previously published five-dimensional (5D) motion compensation (MoCo) algorithm, developed for micro-CT imaging of small animals, to provide for the first time motion artifact-free 5D cone-beam CT (CBCT) images from a conventional flat detector-based CBCT scan of clinical patients. The image quality of retrospectively respiratory- and cardiac-gated volumes from flat detector CBCT scans is deteriorated by severe sparse-projection artifacts. These artifacts further complicate motion estimation, which is required for MoCo image reconstruction. To obtain high-quality 5D CBCT images at the same x-ray dose and the same number of projections as today's 3D CBCT, we developed a double MoCo approach based on motion vector fields (MVFs) for respiratory and cardiac motion. In a first step, our previously published four-dimensional (4D) artifact-specific cyclic motion-compensation (acMoCo) approach is applied to compensate for respiratory patient motion. With this information, a cyclic phase-gated deformable heart registration algorithm is applied to the respiratory motion-compensated 4D CBCT data, resulting in cardiac MVFs. We apply these MVFs to double-gated images, thereby obtaining respiratory and cardiac motion-compensated 5D CBCT images. Our 5D MoCo approach was applied to patient data acquired with the TrueBeam 4D CBCT system (Varian Medical Systems). The double MoCo approach turned out to be very efficient and removed nearly all streak artifacts, since it makes use of 100% of the projection data for each reconstructed frame. The 5D MoCo patient data show fine details and no motion blurring, even in regions close to the heart where motion is fastest.
Shifted detector super short scan reconstruction for the rotate-plus-shift trajectories and its application to C-arm CT systems
Jan Kuntz, Michael Knaup, Christof Fleischmann, et al.
Mobile and compact C-arm systems are routinely used in interventional procedures for fluoroscopic CT imaging. The mechanical requirements guarantee a maximum of flexibility and mobility, but restrict the mechanical rotation range (e.g. 165°) and the lateral size of the field of measurement (FOM), typically about 160 mm. Recently, the rotate-plus-shift trajectory for the acquisition of complete datasets from a rotation of 180° minus the fan angle has been published. Here, we combine the rotate-plus-shift trajectory with a shifted-detector approach for a fully motorized C-arm system. As the isocenter of non-centric C-arms can be freely chosen, the detector shift can equally well be absorbed by an offset of the C parallel to the transaxial detector direction. The 360° rotation range typically used in shifted-detector trajectories is replaced by a double rotate-plus-shift scan requiring a rotation range of at least 180° minus the fan angle. The trajectory, which increases the diameter of the FOM by up to a factor of two, is presented, and the practical application of variations with an asymmetric FOM is shown. For image reconstruction we use our modified FDK algorithm, equipped with a generalized redundancy weight. The presented trajectory can increase the applicability and flexibility of C-arm systems and has the potential to enable intra-operative large-volume control or overview scans, thus reducing the patient's risk.
Striped ratio grids for scatter estimation
Striped ratio grids are a new concept for scatter management in cone-beam CT. These grids are a modification of conventional anti-scatter grids and consist of stripes which alternate between high grid ratio and low grid ratio. Such a grid is related to existing hardware concepts for scatter estimation such as blocker-based methods or primary modulation, but rather than modulating the primary, the striped ratio grid modulates the scatter. The transitions between adjacent stripes can be used to estimate and subtract the remaining scatter. However, these transitions could be contaminated by variation in the primary radiation. We describe a simple nonlinear image processing algorithm to estimate scatter, and proceed to validate the striped ratio grid on experimental data of a pelvic phantom. The striped ratio grid is emulated by combining data from two scans with different grids. Preliminary results are encouraging and show a significant reduction of scatter artifact.
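One plausible toy model of the stripe-transition idea (not the authors' nonlinear algorithm): if neighboring stripes transmit different known fractions of scatter while passing the same primary after gain calibration, each pair of measurements gives a small linear system per pixel. The transmission fractions and signals below are assumptions.

```python
import numpy as np

# Scatter transmission fractions of the high- and low-ratio stripes
# (hypothetical calibration values).
f1, f2 = 0.10, 0.40

primary = np.linspace(100.0, 200.0, 16)   # true primary along one column
scatter = 50.0 * np.ones(16)              # true (smooth) scatter

I1 = primary + f1 * scatter   # measured under a high-ratio stripe
I2 = primary + f2 * scatter   # measured under the adjacent low-ratio stripe

# Solve the per-pixel 2x2 system for scatter and primary.
S_est = (I2 - I1) / (f2 - f1)
P_est = I1 - f1 * S_est
print(S_est[0], P_est[0])
```

In practice the two stripes see slightly different primary paths, which is why the abstract's method must handle primary variation across transitions nonlinearly.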
Image-based motion compensation for high-resolution extremities cone-beam CT
Purpose: Cone-beam CT (CBCT) of the extremities provides high spatial resolution, but its quantitative accuracy may be challenged by involuntary sub-mm patient motion that cannot be eliminated with simple means of external immobilization. We investigate a two-step iterative motion compensation based on a multi-component metric of image sharpness. Methods: Motion is considered with respect to locally rigid motion within a particular region of interest, and the method supports application to multiple locally rigid regions. Motion is estimated by maximizing a cost function with three components: a gradient metric encouraging image sharpness, an entropy term that favors high contrast and penalizes streaks, and a penalty term encouraging smooth motion. Motion compensation involved initial coarse estimation of gross motion followed by estimation of fine-scale displacements using high resolution reconstructions. The method was evaluated in simulations with synthetic motion (1–4 mm) applied to a wrist volume obtained on a CMOS-based CBCT testbench. Structural similarity index (SSIM) quantified the agreement between motion-compensated and static data. The algorithm was also tested on a motion contaminated patient scan from dedicated extremities CBCT. Results: Excellent correction was achieved for the investigated range of displacements, indicated by good visual agreement with the static data. 10-15% improvement in SSIM was attained for 2-4 mm motions. The compensation was robust against increasing motion (4% decrease in SSIM across the investigated range, compared to 14% with no compensation). Consistent performance was achieved across a range of noise levels. Significant mitigation of artifacts was shown in patient data. Conclusion: The results indicate feasibility of image-based motion correction in extremities CBCT without the need for a priori motion models, external trackers, or fiducials.
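The three-component autofocus cost described in the Methods can be sketched as below. The specific gradient metric, entropy estimator, weights, and sign conventions are illustrative assumptions; the sketch only shows that a sharp image scores higher than a blurred one and that non-smooth motion is penalized.

```python
import numpy as np

def sharpness_cost(img, motion, lam_entropy=0.1, lam_smooth=0.01):
    """Gradient term (sharpness) minus entropy term (streak penalty)
    minus a smoothness penalty on the motion trajectory (sketch)."""
    gy, gx = np.gradient(img)
    grad_term = np.mean(gx**2 + gy**2)          # rewards sharp edges
    hist, _ = np.histogram(img, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))            # high for streaky images
    smooth = np.sum(np.diff(motion, axis=0) ** 2)
    return grad_term - lam_entropy * entropy - lam_smooth * smooth

sharp = np.zeros((32, 32)); sharp[8:24, 8:24] = 1.0
blurred = 0.25 * (np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
                  + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1))

still = np.zeros((10, 2))                      # smooth (static) trajectory
jittery = np.zeros((10, 2)); jittery[::2] = 1  # oscillating trajectory
print(sharpness_cost(sharp, still), sharpness_cost(blurred, still))
```

Maximizing such a cost over candidate motion trajectories is the optimization the two-step coarse-to-fine scheme performs.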
Over-exposure correction in knee cone-beam CT imaging with automatic exposure control using a partial low dose scan
Jang-Hwan Choi, Kerstin Muller, Scott Hsieh, et al.
C-arm-based cone-beam CT (CBCT) systems with flat-panel detectors are suitable for diagnostic knee imaging due to their potentially flexible selection of CT trajectories and wide volumetric beam coverage. In knee CT imaging, over-exposure artifacts can occur because of limitations in the dynamic range of the flat-panel detectors present on most CBCT systems. We developed a straightforward but effective method for detection and correction of over-exposure in an Automatic Exposure Control (AEC)-enabled standard knee scan by incorporating a prior low dose scan. The radiation dose associated with the low dose scan was negligible (0.0042 mSv, a 2.8% increase), enabled by partially sampling the projection images in consideration of the geometry of the knees and lowering the dose to the minimum needed to visualize the skin-air interface. After detecting over-exposed regions by comparing the line profiles of the two scans row by row on the detector, we combined the line integrals from the AEC and low dose scans. The combined line integrals were reconstructed into a volumetric image using filtered back projection. We evaluated our method using in vivo human subject knee data. The proposed method effectively detected and corrected over-exposure, recovering the visibility of exterior tissues (e.g., the shape and density of the patella, and the patellar tendon) at a negligible increase in radiation exposure.
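The row-wise combination step can be sketched as follows; the saturation threshold and data layout are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: where the AEC scan's line integral collapses toward
# zero (detector saturated by over-exposure) while the low dose scan still
# registers attenuation, substitute the low dose value.

def combine_line_integrals(aec_row, low_dose_row, sat_threshold=0.05):
    """Detect saturated samples in one detector row and patch them."""
    combined = []
    for a, l in zip(aec_row, low_dose_row):
        if a < sat_threshold and l >= sat_threshold:
            combined.append(l)   # over-exposed in AEC scan: take low dose value
        else:
            combined.append(a)   # keep the higher-quality AEC sample
    return combined
```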
Phase Contrast Imaging
X-ray differential phase contrast imaging using a grating interferometer and a single photon counting detector
Yongshuai Ge, Ran Zhang, Ke Li, et al.
The noise performance of grating-interferometer-based x-ray differential phase contrast (DPC) imaging systems depends strongly on both the visibility of the interference fringe pattern and the total number of photons used to acquire and extract the DPC signal. A given interferometer is usually designed to work at a specific x-ray energy, so any deviation from the design energy may result in visibility loss. In this work, a single photon counting detector (PCD) was incorporated into a DPC imaging system, which enabled photons with energies close to the design energy of the interferometer to be selectively used for DPC signal extraction. This approach led to a significant boost in fringe visibility, but it also discarded x-ray photons at other energies incident on the detector and could degrade the overall radiation dose efficiency of the DPC imaging system. This work presents a novel singular value decomposition (SVD)-based method to leverage the entire spectrum of x-ray photons detected by the PCD, enabling both fringe visibility improvement and reduction in image noise. As evidenced by the results of experimental phantom studies, the contrast-to-noise ratio of the final DPC images could be effectively improved by the proposed method.
Potential use of microbubbles (MBs) as contrast material in x-ray dark field (DF) imaging: How does the DF signal change with the characteristic parameters of the MBs?
Ran Zhang, Bin Qin, Yongshuai Ge, et al.
One of the most exciting aspects of the grating based x-ray differential phase contrast (DPC) acquisition method is the concurrent generation of the so-called dark field (DF) signal, along with the classical absorption signal and the novel DPC signal. The DF signal is associated with the local distribution of small angle scatterers in an image object, while the absorption and DPC signals are often used to characterize the relatively uniform structure of the image object. Besides the endogenous image contrast, exogenous contrast media are often used in x-ray imaging to locally enhance the image signal. This paper proposes a potential contrast medium for DF signal enhancement: microbubbles (MBs). MBs have already been developed for clinical use in ultrasound imaging, and recent experimental studies have shown that MBs may also enhance the DF signal, although it has remained unclear how the physical characteristics of the MBs quantitatively impact the DF signal. In this paper, a systematic study was performed to investigate the quantitative relationships between the DF signal and the following properties of MBs: size, concentration, shell thickness, size uniformity, and whether gold nanoparticles were attached. The experimental results demonstrated that an increased MB size (to about 4 μm) may generate a stronger DF signal for our DPC imaging system; additionally, a moderately increased shell thickness and the use of gold nanoparticles on the shell surface resulted in further enhancement of the DF signal. These findings may provide critical information needed for using MBs as a contrast agent in x-ray DF imaging.
High-energy x-ray grating-based phase-contrast radiography of human anatomy
Florian Horn, Christian Hauke, Sebastian Lachner, et al.
X-ray grating-based phase-contrast Talbot-Lau interferometry is a promising imaging technology with the potential to raise soft tissue contrast relative to conventional attenuation-based imaging. Because it is sensitive to attenuation, refraction, and scattering of the radiation, it also provides complementary and otherwise inaccessible information through the dark-field image, which reveals the sub-pixel granularity of the measured object. Until recently, the method had been largely limited to photon energies below 40 keV. Scaling it to photon energies sufficient to penetrate large objects is challenging, owing to the increasing demands on grating fabrication and the broad spectra that come with polychromatic x-ray sources operated at high acceleration voltages. We designed a setup capable of reaching high visibilities in the range from 50 to 120 kV, so that large, dense, highly attenuating parts of the human body, such as the knee, can be measured. We show the resulting attenuation, differential phase-contrast, and dark-field images, which demonstrate experimentally that x-ray grating-based phase-contrast radiography is feasible for highly absorbing parts of the human body containing massive bones.
Single-shot x-ray phase contrast imaging with an algorithmic approach using spectral detection
X-ray phase contrast imaging has been investigated over the last two decades for potential benefits in soft tissue imaging. Long imaging times, high radiation dose, and general measurement complexity involving motion of x-ray optical components have prevented the clinical translation of these methods. In all popular existing phase contrast imaging methods, multiple measurements per projection angle, involving motion of optical components, are required to achieve quantitatively accurate estimation of absorption, phase, and differential phase. Recently we proposed an algorithmic approach that uses spectral detection data in a phase contrast imaging setup to obtain absorption, phase, and differential phase in a single step. Our generic approach has been demonstrated via simulations in all three types of phase contrast imaging: propagation, coded aperture, and grating interferometry. While other groups have used spectral detectors in phase contrast imaging setups, our proposed method is unique in outlining an approach that uses the spectral data to simplify phase contrast imaging. Here we show the first experimental proof of our single-shot phase retrieval using a Medipix3 photon counting detector in an edge illumination (also referred to as coded aperture) phase contrast setup as well as in a free space propagation setup. Our preliminary results validate our new transport equation for edge illumination PCI and our spectral phase retrieval algorithm for both PCI methods investigated. Comparison with simulations also points to excellent performance of the Medipix3's built-in charge-sharing correction mechanism.
Cone Beam CT II: System Optimization, Image Reconstruction
Nonlinear statistical reconstruction for flat-panel cone-beam CT with blur and correlated noise models
Steven Tilley II, Jeffrey H. Siewerdsen, Wojciech Zbijewski, et al.
Flat-panel cone-beam CT (FP-CBCT) is a promising imaging modality, partly due to its potential for high spatial resolution reconstructions in relatively compact scanners. Despite this potential, FP-CBCT can face difficulty resolving important fine-scale structures (e.g., trabecular details in dedicated extremities scanners and microcalcifications in dedicated CBCT mammography). Model-based methods offer one opportunity to improve high-resolution performance without any hardware changes. Previous work, based on a linearized forward model, demonstrated improved performance when both the system blur and the spatially correlated noise characteristic of FP-CBCT systems are modeled. Unfortunately, the linearized model relies on a staged processing approach that complicates tuning parameter selection and can limit the finest achievable spatial resolution. In this work, we present an alternative scheme that leverages a full nonlinear forward model with both system blur and spatially correlated noise. A likelihood-based objective function is derived from this forward model, together with an iterative optimization algorithm for its solution. The proposed approach is evaluated in simulation studies using a digital extremities phantom, and resolution-noise trade-offs are quantitatively evaluated. The correlated nonlinear model outperformed both the uncorrelated nonlinear model and the staged linearized technique, with up to an 86% reduction in variance at matched spatial resolution. Additionally, the nonlinear models achieved finer spatial resolution (correlated: 0.10 mm, uncorrelated: 0.11 mm) than the linear correlated model (0.15 mm) and traditional FDK (0.40 mm). This suggests the proposed nonlinear approach may be an important tool for improving performance in high-resolution clinical applications.
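The general shape of a likelihood-style objective with iterative solution can be illustrated with a generic penalized weighted least-squares (PWLS) sketch; this is a stand-in quadratic toy, not the authors' nonlinear blur/correlated-noise model, and all sizes and step lengths are invented:

```python
# Generic PWLS sketch: minimize (y - A x)^T W (y - A x) + beta ||x||^2
# by plain gradient descent (W diagonal, passed as a weight vector w).
import numpy as np

def pwls(A, y, w, beta, step=0.1, n_iter=500):
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient of the weighted data-fit term plus the quadratic penalty
        grad = 2.0 * A.T @ (w * (A @ x - y)) + 2.0 * beta * x
        x = x - step * grad
    return x
```

For an identity system with unit weights and beta = 1, the minimizer is y/2, which the iteration reaches to high accuracy.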
Automatic intrinsic cardiac and respiratory gating from cone-beam CT scans of the thorax region
Andreas Hahn, Sebastian Sauppe, Michael Lell, et al.
We present a new algorithm for raw data-based automated cardiac and respiratory intrinsic gating in cone-beam CT scans. It can be summarized in three steps. First, a median filter is applied to an initially reconstructed volume. The forward projection of this volume contains less motion information and is subtracted from the original projections. This yields new raw data containing only moving anatomy; static anatomy such as bone, which would otherwise impede extraction of the cardiac or respiratory signal, is removed. All further steps are applied to these modified raw data. Second, the raw data are cropped to a region of interest (ROI). The ROI in the raw data is determined by the forward projection of a binary volume of interest (VOI) that includes the diaphragm for respiratory gating or most of the edge of the heart for cardiac gating. Third, the mean gray value in this ROI is calculated for every projection, and the respiratory/cardiac signal is extracted using a bandpass filter. Steps two and three are carried out simultaneously for 64 or 1440 overlapping VOIs inside the body for the respiratory and cardiac signals, respectively. The signals acquired from each ROI are compared, and the most consistent one is chosen as the desired cardiac or respiratory motion signal; consistency is assessed by the standard deviation of the time between two maxima. The robustness and efficiency of the method are evaluated using simulated and measured patient data by computing the standard deviation of the mean signal difference between the ground truth and the intrinsic signal.
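Steps two and three above can be sketched with a brute-force band-pass filter and the peak-interval consistency metric; the filter implementation and all numbers are illustrative, not the paper's:

```python
# Toy signal extraction: band-pass the per-projection mean ROI value, then
# score consistency as the std of intervals between successive maxima.
import cmath, math

def bandpass(sig, dt, f_lo, f_hi):
    """Crude DFT band-pass: keep only frequencies in [f_lo, f_hi] Hz."""
    n = len(sig)
    spec = [sum(sig[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]
    for k in range(n):
        f = min(k, n - k) / (n * dt)      # two-sided frequency axis
        if not (f_lo <= f <= f_hi):
            spec[k] = 0.0
    return [(sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]

def peak_interval_std(sig):
    """Consistency metric: std of the spacing between successive maxima."""
    peaks = [i for i in range(1, len(sig) - 1)
             if sig[i - 1] < sig[i] >= sig[i + 1]]
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    if len(gaps) < 2:
        return float('inf')
    mean = sum(gaps) / len(gaps)
    return math.sqrt(sum((g - mean) ** 2 for g in gaps) / len(gaps))
```

A clean 0.25 Hz "breathing" signal with a DC offset comes out of the band-pass zero-mean and with perfectly regular peaks, i.e. a near-zero consistency score.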
Design and characterization of a dedicated cone-beam CT scanner for detection of acute intracranial hemorrhage
J. Xu, A. Sisniega, W. Zbijewski, et al.
Purpose: Prompt and reliable detection of intracranial hemorrhage (ICH) has substantial clinical impact in the diagnosis and treatment of stroke and traumatic brain injury (TBI). This paper describes the design, development, and preliminary performance characterization of a dedicated cone-beam CT (CBCT) head scanner prototype for imaging of acute ICH. Methods: A task-based image quality model was used to analyze the detectability index as a function of system configuration, and the hardware design was guided by the results of this model-based optimization. A robust artifact correction pipeline was developed using GPU-accelerated Monte Carlo (MC) scatter simulation and corrections for beam hardening, detector veiling glare, and detector lag. An iterative penalized weighted least-squares (PWLS) reconstruction framework with weights adjusted for artifact-corrected projections was developed. Various bowtie filters were investigated for potential dose and image quality benefits, with an MC-based tool providing estimates of the spatial dose distribution. Results: The initial prototype features a source-detector distance of 1000 mm, a source-axis distance of 550 mm, a 43x43 cm2 flat-panel detector, and a 15° rotating anode x-ray source with 15 kW power and a 0.6 mm focal spot. Artifact correction reduced image nonuniformity by ~250 HU, and PWLS reconstruction with modified weights improved the contrast-to-noise ratio (CNR) by 20%. Inclusion of a bowtie filter can potentially reduce dose by 50% and improve CNR by 25%. Conclusions: A dedicated CBCT system capable of imaging millimeter-scale acute ICH was designed. Preliminary findings support the feasibility of point-of-care applications in TBI and stroke imaging, with clinical studies beginning on a prototype.
C-arm cone beam CT perfusion imaging using the SMART-RECON algorithm to improve temporal sampling density and temporal resolution
In this work, a newly developed reconstruction algorithm, Synchronized MultiArtifact Reduction with Tomographic RECONstruction (SMART-RECON), was applied to C-arm cone beam CT perfusion (CBCTP) imaging. This algorithm contains a special rank regularizer designed to reduce the limited-view artifacts associated with super-short-scan reconstructions. As a result, high temporal sampling and high temporal resolution image reconstructions were achieved using an interventional C-arm x-ray system. The algorithm was evaluated in terms of the fidelity of the dynamic contrast uptake curves and the accuracy of perfusion parameters through numerical simulation studies. Results show that not only were the dynamic curves accurately recovered (relative root mean square error ∈ [3%, 5%], compared with [13%, 22%] for FBP), but the noise in the final perfusion maps was also dramatically reduced. Compared with filtered backprojection, SMART-RECON generated CBCTP maps with much improved capability to differentiate lesions with perfusion deficits from the surrounding healthy brain tissue.
Mask-free intravenous 3D digital subtraction angiography (IV 3D-DSA) from a single C-arm acquisition
Yinsheng Li, Kai Niu, Pengfei Yang, et al.
Currently, clinical acquisition of IV 3D-DSA requires two separate scans: a mask scan without contrast medium and a filled scan with contrast injection. Acquiring two separate scans adds radiation dose to the patient and increases the likelihood of inadvertent patient motion, which induces mis-registration and the associated artifacts in IV 3D-DSA images. In this paper, a new technique, SMART-RECON, is introduced to generate IV 3D-DSA images from a single Cone Beam CT (CBCT) acquisition, eliminating the mask scan. The potential benefits of eliminating the mask scan are: (1) both radiation dose and scan time can be reduced by a factor of 2; (2) intra-sweep motion can be eliminated; and (3) inter-sweep motion can be mitigated. Numerical simulations were used to validate the algorithm in terms of contrast recoverability and the ability to mitigate limited-view artifacts.
Reduction of beam hardening artifacts in cone-beam CT imaging via SMART-RECON algorithm
When an automatic exposure control is introduced in C-arm cone beam CT data acquisition, the spectral inconsistencies between acquired projection data are exacerbated. As a result, conventional water/bone correction schemes are not as effective as in conventional diagnostic x-ray CT acquisitions with a fixed tube potential. In this paper, a new method was proposed to reconstruct several images with different degrees of spectral consistency and thus different levels of beam hardening artifacts. The new method relies neither on prior knowledge of the x-ray beam spectrum nor on prior compositional information of the imaging object. Numerical simulations were used to validate the algorithm.
CT I: Technology, System Characterization, Applications
Fluence-field modulated x-ray CT using multiple aperture devices
J. Webster Stayman, Aswin Mathews, Wojciech Zbijewski, et al.
We introduce a novel strategy for fluence field modulation (FFM) in x-ray CT using multiple aperture devices (MADs). MAD filters permit FFM by blocking or transmitting the x-ray beam on a fine (0.1-1 mm) scale. The filters have a number of potential advantages over other beam modulation strategies, including a highly compact design, modest actuation speed and acceleration requirements, and spectrally neutral filtration due to their essentially binary action. In this work, we present the underlying MAD filtration concept, including a design process to achieve a specific class of FFM patterns. A set of MAD filters was fabricated using a tungsten laser sintering process and integrated into an x-ray CT test bench. A characterization of the MAD filters was conducted and compared to traditional attenuating bowtie filters, and the ability to flatten the fluence profile for a 32 cm acrylic phantom was demonstrated. MAD-filtered tomographic data were acquired on the CT test bench and reconstructed without artifacts associated with the MAD filter. These initial studies suggest that MAD-based FFM is appropriate for integration into clinical CT systems to create patient-specific fluence field profiles and reduce radiation exposure.
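How a binary blocking pattern can approximate a continuous fluence profile can be sketched with simple 1-D error diffusion; the scheme, aperture counts, and targets here are invented for illustration and are not the authors' design process:

```python
# Toy binary MAD pattern: each detector channel is covered by n_micro fine
# binary apertures, and error diffusion makes the local open fraction track
# the target transmission profile.

def mad_pattern(target, n_micro):
    pattern, err = [], 0.0
    for t in target:                     # target transmission in [0, 1]
        row = []
        for _ in range(n_micro):
            open_ap = 1 if t + err >= 0.5 else 0
            err += t - open_ap           # carry quantization error forward
            row.append(open_ap)
        pattern.append(row)
    return pattern
```

Because the carried error stays bounded, each channel's open fraction lands within one aperture (1/n_micro) of its target, which is the "essentially binary action" idea in miniature.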
Development of a realistic, dynamic digital brain phantom for CT perfusion validation
Sarah E. Divel, W. Paul Segars, Soren Christensen, et al.
Physicians rely on CT Perfusion (CTP) images and quantitative image data, including cerebral blood flow, cerebral blood volume, and bolus arrival delay, to diagnose and treat stroke patients. However, the quantification of these metrics may vary depending on the computational method used. Therefore, we have developed a dynamic and realistic digital brain phantom upon which CTP scans can be simulated for a set of ground truth scenarios. Building upon the previously developed 4D extended cardiac-torso (XCAT) phantom containing a highly detailed brain model, this work expanded the intricate vasculature by semi-automatically segmenting existing MRA data and fitting nonuniform rational B-spline surfaces to the new vessels. The contrast enhancement in the vessels changes dynamically according to time attenuation curves input by the user as reference. At each time point, the iodine concentration in the arteries and veins is calculated from the curves, and the material composition of the blood changes to reflect the expected values. CatSim, a CT system simulator, generates simulated data sets of this dynamic digital phantom, which can be further analyzed to validate CTP studies and post-processing methods. This dynamic and realistic digital phantom provides a valuable resource with which current uncertainties and controversies surrounding the quantitative computations generated from CTP data can be examined and resolved.
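The curve-to-composition step can be illustrated with a minimal sketch; the enhancement slope and blood density below are assumed nominal values, not parameters from the paper:

```python
# Toy conversion of a time-attenuation curve (HU enhancement over baseline)
# into an iodine concentration and a blood/iodine mixture composition.

HU_PER_MG_ML = 25.0   # assumed nominal iodine enhancement slope at fixed kVp

def iodine_concentration(tac_hu, slope=HU_PER_MG_ML):
    """Iodine concentration (mg/ml) at each time point of the TAC."""
    return [hu / slope for hu in tac_hu]

def iodine_mass_fraction(conc_mg_ml, blood_density_mg_ml=1060.0):
    """Mass fraction of iodine in the blood/iodine mixture per time point."""
    return [c / (blood_density_mg_ml + c) for c in conc_mg_ml]
```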
Experimental characterization of extra-focal radiation in CT scanners
Bruce R. Whiting, Mariela A. Porras-Chaverri, Joshua D. Evans, et al.
Quantitative computed tomography (CT) applications based on statistical iterative reconstruction algorithms require accurate models of the CT acquisition process, a key component being the x-ray fan beam intensity. We present a method to experimentally determine the extra-focal radiation profile incident on individual CT detectors. Using a tungsten cylinder as a knife edge, a super-sampled signal was created from sinogram data, tracing the “occlusion” of the x-ray source as seen by a detector. By differentiating this signal and correcting for finite detector size and motion blur, the effective source profile can be recovered. Extra-focal scatter was found to be on the order of 1-3 percent of the focal beam intensity; its relative magnitude was lowest at the isocenter, increased towards the edge of the fan beam, and its profile became asymmetric at large angles. The implications for reconstruction algorithms and QCT applications will be discussed.
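The core differentiation step rests on the fact that the knife-edge occlusion signal is the running integral of the source profile; a minimal sketch (omitting the detector-size and motion-blur corrections described above) is:

```python
# Central-difference derivative of the knife-edge occlusion signal recovers
# the source intensity profile (toy data, no blur corrections).

def source_profile(edge_signal, dx=1.0):
    n = len(edge_signal)
    prof = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        prof.append((edge_signal[hi] - edge_signal[lo]) / ((hi - lo) * dx))
    return prof
```

For a linear ramp in the edge signal (a rectangular source profile), the derivative is flat inside the ramp and zero outside, as expected.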
Noise characteristics of CT perfusion imaging: how does noise propagate from source images to final perfusion maps?
Cerebral CT perfusion (CTP) imaging plays an important role in the diagnosis and treatment of acute ischemic stroke. Meanwhile, the reliability of CTP-based ischemic lesion detection has been challenged due to the noisy appearance and low signal-to-noise ratio of CTP maps. To reduce noise and improve image quality, a rigorous study of the noise transfer properties of CTP systems is highly desirable to provide the needed scientific guidance. This paper concerns how noise in the CTP source images propagates to the final CTP maps. Both theoretical derivations and subsequent validation experiments demonstrated that the noise level of the background frames plays a dominant role in the noise of the cerebral blood volume (CBV) maps. This directly contradicts the general belief that noise in the non-background image frames is of greater importance in CTP imaging. The study found that the lowest noise variance in the final CBV maps is achieved when the radiation doses delivered to the background frames and to all non-background frames are equal. This novel equality condition provides a practical means to optimize radiation dose delivery in CTP data acquisition: radiation exposure should be modulated between background and non-background frames so that the equality condition is satisfied. For several typical CTP acquisition protocols, numerical simulations and in vivo canine experiments demonstrated that CBV noise can be effectively reduced using the proposed exposure modulation method.
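The dominance of background-frame noise can be checked with a small Monte Carlo experiment on a simplified CBV model (total enhancement over non-background frames after subtracting the averaged background baseline); this toy model and its numbers are ours, not the paper's derivation:

```python
# Because the same baseline estimate is subtracted from every enhancement
# frame, its noise adds coherently: Var(CBV) ~ n_sig*s^2 + n_sig^2*s^2/n_bg.
import random

def cbv_variance(n_bg, n_sig, sigma=1.0, trials=20000, seed=0):
    rng = random.Random(seed)
    vals = []
    for _ in range(trials):
        baseline = sum(rng.gauss(0.0, sigma) for _ in range(n_bg)) / n_bg
        vals.append(sum(rng.gauss(0.0, sigma) - baseline
                        for _ in range(n_sig)))
    mean = sum(vals) / trials
    return sum((v - mean) ** 2 for v in vals) / trials
```

With only 2 background frames against 20 enhancement frames, the baseline term (400/2 = 200) dwarfs the enhancement term (20); increasing the background frames to 20 collapses the variance, illustrating why balancing dose between the two frame groups pays off.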
Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm
Taly Gilat-Schmidt, Adam Wang, Thomas Coradi, et al.
The overall goal of this work is to develop a rapid, accurate, and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann transport equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained with an automated segmentation algorithm that uses a combination of feature-based and atlas-based methods; a multi-atlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate for organ dose estimation, since random errors at the organ boundaries average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, in which every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates, and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support the potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
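The evaluation step reduces to masking a dose map with a segmentation and comparing statistics; a minimal sketch with an invented flat-voxel-list layout (not the study's actual data format) is:

```python
# Toy organ-dose statistics from a voxelized dose map and a binary mask,
# plus the relative mean-dose error between automated and expert masks.

def organ_dose(dose_map, mask):
    vals = [d for d, inside in zip(dose_map, mask) if inside]
    return sum(vals) / len(vals), max(vals)      # (mean dose, peak dose)

def mean_dose_error_pct(dose_map, auto_mask, expert_mask):
    auto_mean, _ = organ_dose(dose_map, auto_mask)
    ref_mean, _ = organ_dose(dose_map, expert_mask)
    return abs(auto_mean - ref_mean) / ref_mean * 100.0
```

Note how a single swapped boundary voxel in a high-gradient region can still shift the mean dose noticeably, which is why the spinal canal (small region, steep gradients) shows the largest errors.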
Dual-source multi-energy CT with triple or quadruple x-ray beams
Energy-resolved photon-counting CT (PCCT) is promising for material decomposition with multiple contrast agents. However, corrections for the non-idealities of PCCT detectors are required, and these remain active research areas. In addition, PCCT carries very high cost due to the lack of mass production. In this work, we propose an alternative approach to multi-energy CT, achieved by acquiring triple or quadruple x-ray beam measurements on a dual-source CT scanner. This strategy builds on the “Twin Beam” design used on single-source scanners for dual-energy CT. Examples of beam filters and spectra for triple and quadruple x-ray beams are provided. Computer simulation studies were performed to evaluate the accuracy of material decomposition for multi-contrast mixtures using both the triple-beam and quadruple-beam configurations. The proposed strategy can be readily implemented on a dual-source scanner, which may allow material decomposition of multiple contrast agents to be performed on clinical CT scanners with energy-integrating detectors.
Optimized projection binning for improved helical amplitude- and phase-based 4DCT reconstruction in the presence of breathing irregularity
René Werner, Christian Hofmann, Tobias Gauer
Respiration-correlated CT (4DCT) forms the basis of clinical 4D radiotherapy workflows for patients with thoracic and abdominal lesions. 4DCT image data, however, often suffer from motion artifacts due to assumptions that go unfulfilled during reconstruction and image/projection data sorting. Focusing on low-pitch helical scanning protocols, this work addresses two questionable assumptions: (1) regular breathing patterns and (2) a constant correlation between the external breathing signal acquired for image/projection sorting and the internal motion patterns. To counteract (1), a patient-specific upper breathing signal amplitude threshold is introduced to avoid artifacts due to unusually deep inspiration (helpful for both amplitude- and phase-based reconstruction). In addition, a projection data binning algorithm based on a statistical analysis of the patient's breathing signal is proposed to stabilize phase-based sorting. To reduce reliance on (2), an image artifact metric is incorporated into, and minimized during, the reconstruction process. The optimized reconstruction is evaluated using 30 clinical 4DCT data sets and is demonstrated to significantly reduce motion artifacts.
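A patient-specific upper amplitude threshold can be sketched as follows; the percentile cap and binning scheme are invented for illustration, not the paper's statistical analysis:

```python
# Toy amplitude-based sorting: samples above a patient-specific percentile
# cap (unusually deep inspirations) are rejected instead of being allowed
# to stretch the bin edges for everyone else.

def amplitude_bins(signal, n_bins, upper_percentile=95.0):
    ranked = sorted(signal)
    cap = ranked[min(len(ranked) - 1,
                     int(len(ranked) * upper_percentile / 100.0))]
    lo = ranked[0]
    width = (cap - lo) / n_bins
    out = []
    for v in signal:
        if v > cap:
            out.append(None)             # deep-inspiration outlier: rejected
        else:
            out.append(min(int((v - lo) / width), n_bins - 1))
    return out
```

Without the cap, a single deep breath of amplitude 1000 would squeeze all normal samples into the lowest bin; with it, the outlier is flagged and the regular breathing range still spans all bins.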
Detectors
DQE simulation of a-Se x-ray detectors using ARTEMIS
Yuan Fang, Aldo Badano
Detective quantum efficiency (DQE) is one of the most important image quality metrics for evaluating the spatial-frequency-dependent performance of flat-panel x-ray detectors. In this work, we simulate the DQE of amorphous selenium (a-Se) x-ray detectors with a detailed Monte Carlo transport code (ARTEMIS) for modeling semiconductor-based direct x-ray detectors. The transport of electron-hole pairs is achieved with a spatiotemporal model that accounts for recombination and trapping of carriers and the Coulombic effects of space charge and the externally applied electric field. X-ray energies from 10 to 100 keV have been simulated. The DQE results can be used to study the spatial resolution characteristics of detectors at different energies.
Improving detector spatial resolution using pixelated scintillators with a barrier rib structure
Indirect conversion flat panel detectors (FPDs) based on amorphous silicon (a-Si) technology are widely used in digital x-ray imaging. In such FPDs, a scintillator layer converts x-rays into visible light photons. However, the lateral spread of these photons inside the scintillator layer reduces the spatial resolution of the FPD. In this study, FPDs incorporating pixelated scintillators with a barrier rib structure were developed to limit the lateral spread of light photons and thereby improve spatial resolution. For the pixelated scintillator, a two-dimensional barrier rib structure was first manufactured on a substrate layer, coated with reflective materials, and filled to the rim with the scintillating material gadolinium oxysulfide (GOS). Several scintillator samples were fabricated, with pitch varying from 160 to 280 μm and rib height from 200 to 280 μm. The samples were directly coupled to an a-Si flat panel photodiode array with a pitch of 200 μm to convert optical photons to electronic signals. With the pixelated scintillator, the detector modulation transfer function improved significantly (by 94% at 2 cycles/mm) compared to a detector using an unstructured GOS layer. However, the prototype does show lower sensitivity due to the decrease in scintillator fill factor. These preliminary results demonstrate the feasibility of using the barrier-rib structure to improve the spatial resolution of FPDs. Such an improvement would greatly benefit nondestructive testing applications, where spatial resolution is the most important parameter. Further investigation will focus on improving the detector sensitivity and exploring medical applications.
High dynamic range CMOS-based mammography detector for FFDM and DBT
Inge M. Peters, Chiel Smit, James J. Miller, et al.
Digital Breast Tomosynthesis (DBT) requires excellent image quality in a dynamic mode at very low dose levels, while Full Field Digital Mammography (FFDM) is a static imaging modality that requires high saturation dose levels. These opposing requirements can only be met by a dynamic detector with a high dynamic range. This paper discusses a wafer-scale CMOS-based mammography detector with 49.5 μm pixels and a CsI scintillator. Excellent image quality is obtained for FFDM as well as DBT applications, comparing favorably with the a-Se detectors that dominate the x-ray mammography market today. The typical dynamic range of a mammography detector is not high enough to accommodate both the low noise and the high saturation dose requirements of DBT and FFDM, respectively, and an approach based on gain switching does not provide the needed signal-to-noise benefits under low-dose DBT conditions. Our solution is to add frame-summing functionality to the detector: during one x-ray pulse, several image frames are acquired and summed. The requirements for implementing this in a detector are low noise levels, high frame rates, and low lag, all of which are characteristic of CMOS detectors. Results are presented showing that excellent image quality is achieved using a single detector under both DBT and FFDM dose conditions. The frame-summing approach made it possible to optimize the detector noise and saturation level for DBT applications and achieve a high DQE at low dose, without compromising FFDM performance.
Solid-state flat panel imager with avalanche amorphous selenium
Active matrix flat panel imagers (AMFPI) have become the dominant detector technology for digital radiography and fluoroscopy. For low dose imaging, electronic noise from the amorphous silicon thin film transistor (TFT) array degrades imaging performance. We have fabricated the first prototype solid-state AMFPI that uses a uniform layer of avalanche amorphous selenium (a-Se) photoconductor to amplify the signal and thereby overcome the electronic noise. We previously developed a large area solid-state avalanche a-Se sensor structure, referred to as High Gain Avalanche Rushing Photoconductor (HARP), capable of achieving gains of 75. In this work we successfully deposited this HARP structure onto a 24 x 30 cm2 TFT array with a pixel pitch of 85 μm. An electric field (ESe) of up to 105 Vμm-1 was applied across the a-Se layer without breakdown. Using the HARP layer as a direct detector, an x-ray avalanche gain of 15 ± 3 was achieved at ESe = 105 Vμm-1. In indirect mode with a 150 μm thick structured CsI scintillator, an optical gain of 76 ± 5 was measured at ESe = 105 Vμm-1. Image quality at low dose increases with the avalanche gain until the electronic noise is overcome, at a constant exposure level of 0.76 mR. We demonstrate the success of a solid-state HARP x-ray imager, incorporating the largest active-area HARP sensor to date.
A novel x-ray detector design with higher DQE and reduced aliasing: Theoretical analysis of x-ray reabsorption in detector converter material
Tomi Nano, Terenz Escartin, Karim S. Karim, et al.
The ability to improve visualization of structural information in digital radiography without increasing radiation exposure requires improved image quality across all spatial frequencies, especially high frequencies. The detective quantum efficiency (DQE) as a function of spatial frequency quantifies the image quality provided by an x-ray detector. We present a method of increasing DQE at high spatial frequencies by improving the modulation transfer function (MTF) and reducing noise aliasing. The Apodized Aperture Pixel (AAP) design uses a detector with micro-elements to synthesize the desired pixels and provides higher DQE than conventional detector designs. A cascaded systems analysis (CSA) that incorporates x-ray interactions is used to compare the theoretical MTF, noise power spectrum (NPS), and DQE. Signal and noise transfer through the converter material is shown to consist of correlated and uncorrelated terms. The AAP design improved the DQE for both material types: those with predominantly correlated transfer (such as CsI) and those with predominantly uncorrelated transfer (such as Se). When uncorrelated transfer dominates in the converter material, the MTF improves by 50% and the DQE by 100% at the sampling cut-off frequency. Optimizing high-frequency DQE results in improved image contrast and visualization of small structures and fine detail.
CT II: Image Reconstruction, Artifact Reduction
A generalized Fourier penalty in prior-image-based reconstruction for cross-platform imaging
A. Pourmorteza, J. H. Siewerdsen, J. W. Stayman
Sequential CT studies present an excellent opportunity to apply prior-image-based reconstruction (PIBR) methods that leverage high-fidelity prior imaging studies to improve image quality and/or reduce x-ray exposure in subsequent studies. One major obstacle in using PIBR is that the initial and subsequent studies are often performed on different scanners (e.g. diagnostic CT followed by CBCT for interventional guidance); this results in mismatched attenuation values due to hardware and software differences. While improved artifact correction techniques can potentially mitigate such differences, the correction is often incomplete. Here, we present an alternate strategy in which the PIBR itself is used to mitigate these differences. We define a new penalty for the previously introduced PIBR called Reconstruction of Difference (RoD). RoD differs from many other PIBRs in that it reconstructs only changes in the anatomy (vs. reconstructing the current anatomy). Direct regularization of the difference image in RoD provides an opportunity to selectively penalize spatial frequencies of the difference image (e.g. low-frequency differences associated with attenuation offsets and shading artifacts) without interfering with variations in the unchanged background image. We leverage this flexibility, introduce a novel regularization strategy using a generalized Fourier penalty within the RoD framework, and develop the modified reconstruction algorithm. We evaluate the performance of the new approach in both simulation studies and physical CBCT test-bench data. We find that the generalized Fourier penalty can be highly effective in reducing low-frequency x-ray artifacts through selective suppression of spatial frequencies in the reconstructed difference image.
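A minimal sketch of a frequency-selective penalty of this kind (our simplification, not the authors' exact formulation): weight the squared spectrum of the difference image so that low frequencies, which carry attenuation offsets and shading artifacts, are penalized while higher-frequency anatomical change is left alone.

```python
import numpy as np

def fourier_penalty(diff_image, freq_weight):
    """Generalized Fourier penalty: weighted squared magnitude of the
    difference image's spectrum. freq_weight emphasizes the frequencies
    to suppress (e.g. low frequencies carrying shading artifacts)."""
    spectrum = np.fft.fft2(diff_image)
    return np.sum(freq_weight * np.abs(spectrum) ** 2)

def lowfreq_weight(shape, cutoff):
    """Binary weight selecting spatial frequencies below `cutoff`
    (in cycles/pixel); a smooth taper could be used instead."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return (np.hypot(fy, fx) < cutoff).astype(float)
```

A constant (shading-like) difference is penalized heavily, while a purely high-frequency difference incurs no penalty under this weight.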
Reduction of motion artifacts in cardiac CT based on partial angle reconstructions from short scan data
Juliane Hahn, Herbert Bruder, Thomas Allmendinger, et al.
To date, several software-based approaches for increasing the temporal resolution in cardiac computed tomography by estimating motion vector fields (MVFs) have been developed. Most of these are motion compensation algorithms, which estimate the MVFs with a three-dimensional registration routine operating on reconstructions of multiple cardiac phases [2, 6, 7, 12]. We present an algorithm that requires nothing more than the data needed for a short scan reconstruction; both motion estimation and motion-compensated reconstruction are based on the reconstruction of volumes from a limited angular range [2, 3, 7, 8]. These partial angle reconstructions are centered at different time points during the short scan and each have a temporal resolution of about 10 ms. The MVFs are estimated by a constrained cost function optimization routine employing a motion-artifact-measuring cost function. During optimization, the MVFs are applied directly by warping the partial angle reconstructions, and the motion compensation is established by simply adding the shifted images. To enforce smooth vector fields and keep the number of parameters low, the motion is modeled by a low-degree polynomial. Furthermore, to find a good estimate of the MVFs even in phases with rapid cardiac motion, the constrained optimization is re-initialized multiple times. The algorithm is validated in a simulation study and applied to patient data, where motion-compensated reconstructions are performed in various cardiac phases. We show that image quality can be improved, also in more rapid cardiac phases, owing to the re-initialization of the optimization routine.
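The warp-and-sum step with a polynomial motion model can be sketched as follows (integer-pixel translations only; a crude stand-in for the full MVF warping, with function names of our choosing):

```python
import numpy as np

def motion_compensated_sum(pars, times, coeffs):
    """Warp each partial angle reconstruction (PAR) by a translation given
    by a low-degree polynomial in time, then sum the shifted images.
    coeffs: array of shape (deg+1, 2) holding polynomial coefficients for
    the (dy, dx) translation, highest order first (np.polyval convention)."""
    out = np.zeros_like(pars[0], dtype=float)
    for par, t in zip(pars, times):
        dy = np.polyval(coeffs[:, 0], t)
        dx = np.polyval(coeffs[:, 1], t)
        out += np.roll(par, (int(round(dy)), int(round(dx))), axis=(0, 1))
    return out

def entropy(img, bins=32):
    """Image entropy, a common motion-artifact metric: sharper,
    better-compensated images tend to have lower entropy."""
    p, _ = np.histogram(img, bins=bins, density=True)
    p = p[p > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))
```

Optimizing the polynomial coefficients against such a metric, with periodic re-initialization, mirrors the estimation loop the abstract describes.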
An open library of CT patient projection data
Lack of access to projection data from patient CT scans is a major limitation for development and validation of new reconstruction algorithms. To meet this critical need, we are building a library of CT patient projection data in an open and vendor-neutral format, DICOM-CT-PD, which is an extended DICOM format that contains sinogram data, acquisition geometry, patient information, and pathology identification. The library consists of scans of various types, including head scans, chest scans, abdomen scans, electrocardiogram (ECG)-gated scans, and dual-energy scans. For each scan, three types of data are provided, including DICOM-CT-PD projection data at various dose levels, reconstructed CT images, and a free-form text file. Several instructional documents are provided to help the users extract information from DICOM-CT-PD files, including a dictionary file for the DICOM-CT-PD format, a DICOM-CT-PD reader, and a user manual. Radiologist detection performance based on the reconstructed CT images is also provided. So far 328 head cases, 228 chest cases, and 228 abdomen cases have been collected for potential inclusion. The final library will include a selection of 50 head, chest, and abdomen scans each from at least two different manufacturers, and a few ECG-gated scans and dual-source, dual-energy scans. It will be freely available to academic researchers, and is expected to greatly facilitate the development and validation of CT reconstruction algorithms.
Limits to dose reduction from iterative reconstruction and the effect of through-slice blurring
Iterative reconstruction methods have become very popular and show the potential to reduce dose. We present a limit to the maximum dose reduction possible with new reconstruction algorithms obtained by analyzing the information content of the raw data, assuming the reconstruction algorithm does not have a priori knowledge about the object or correlations between pixels. This limit applies to the task of estimating the density of a lesion embedded in a known background object, where the shape of the lesion is known but its density is not. Under these conditions, the density of the lesion can be estimated directly from the raw data in an optimal manner. This optimal estimate will meet or outperform the performance of any reconstruction method operating on the raw data, under the condition that the reconstruction method does not introduce a priori information. The raw data bound can be compared to the lesion density estimate from FBP in order to produce a limit on the dose reduction possible from new reconstruction algorithms. The possible dose reduction from iterative reconstruction varies with the object, but for a lesion embedded in the center of a water cylinder, it is less than 40%. Additionally, comparisons between iterative reconstruction and filtered backprojection are sometimes confounded by the effect of through-slice blurring in the iterative reconstruction. We analyzed the magnitude of the variance reduction brought about by through-slice blurring on scanners from two different vendors and found it to range between 11% and 48%.
Reduction of truncation artifacts in CT images via a discriminative dictionary representation method
When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of a patient, or the patient needs to be intentionally positioned partially outside the SFOV for certain clinical CT scans, truncation artifacts are often observed in the reconstructed CT images. Conventional wisdom to reduce truncation artifacts is to complete the truncated projection data via data extrapolation with different a priori assumptions. This paper presents a novel truncation artifact reduction method that directly works in the CT image domain. Specifically, a discriminative dictionary that includes a sub-dictionary of truncation artifacts and a sub-dictionary of non-artifact image information was used to separate a truncation artifact-contaminated image into two sub-images, one with reduced truncation artifacts, and the other one containing only the truncation artifacts. Both experimental phantom and retrospective human subject studies have been performed to characterize the performance of the proposed truncation artifact reduction method.
Compensation of skull motion and breathing motion in CT using data-based and image-based metrics, respectively
H. Bruder, C. Rohkohl, K. Stierstorfer, et al.
We present a novel reconstruction for motion correction of non-cardiac organs. With non-cooperative patients or in emergency cases, breathing motion or motion of the skull may compromise image quality. Our algorithm is based on the optimization of either motion artifact metrics or data-driven metrics; this approach was successfully applied in cardiac CTA [1]. While motion correction of the coronary vessels requires a local motion model, global motion models are sufficient for organs like the lung or the skull. The parameter vector for the global affine motion is estimated iteratively using the open-source optimization library NLopt. The image is updated using motion-compensated reconstruction in each iteration, and evaluation of the metric value, e.g. the image entropy, provides information for the next iteration loop. After reaching the fixed point of the iteration, the final motion parameters are used for a motion-compensated full-quality reconstruction. In head imaging the motion model is based on translation and rotation; in thoracic imaging the rotation is replaced by non-isotropic scaling in all three dimensions. We demonstrate the efficiency of the method in thoracic imaging by evaluating PET-CT data from free-breathing patients; in neuro imaging, data from stroke patients showing skull tremor were analyzed. It was shown that motion artifacts can be largely reduced and spatial resolution restored. In head imaging, similar results can be obtained using motion artifact metrics or data-driven metrics; among image-based metrics, the entropy of the image proved superior. Breathing motion could also be significantly reduced using the entropy metric. In this case, however, data-driven metrics cannot be applied, because the line integrals associated with the ROI of the lung would have to be computed using the local ROI mechanism [2], and it was shown that the lung signal is corrupted by signals originating from the complement of the lung. Thus a meaningful optimization of a data-driven cost function is not possible.
Photon Counting CT I: Instrumentation
Estimation of signal and noise for a whole-body photon counting research CT system
Photon-counting CT (PCCT) may yield potential value for many clinical applications due to its relative immunity to electronic noise, its increased geometric efficiency relative to current scintillating detectors, and its ability to resolve the energy of detected photons. However, a large number of parameters require optimization, particularly the energy threshold configuration. Fast and accurate estimation of signal and noise in PCCT can benefit the optimization of acquisition parameters for specific diagnostic tasks. Based on the acquisition parameters and detector response of our research PCCT system, we derived mathematical models for both signal and noise. The signal model takes the tube spectrum, beam filtration, object attenuation, water beam hardening, and detector response into account. The noise model considers the relationship between noise and radiation dose, as well as the propagation of noise as threshold data are subtracted to yield energy bin data. To determine absolute noise values, a noise look-up table (LUT) was acquired using a limited number of calibration scans. The noise estimation algorithm then used the noise LUT to estimate noise for scans with a variety of combinations of energy thresholds, dose levels, and object attenuation. Validation of the estimation algorithms was performed on our whole-body research PCCT system using semi-anthropomorphic water phantoms and solutions of calcium and iodine. The algorithms achieved accurate estimation of signal and noise for a variety of scanning parameter combinations. The proposed method can be used to optimize the energy threshold configuration for many clinical applications of PCCT.
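The threshold-to-bin noise propagation mentioned above can be sketched as follows. The nested-sum covariance argument is a standard counting-detector result, and the function names are ours: each threshold counter records all photons above its threshold, so threshold counts are nested sums of (approximately independent, Poisson) bin counts, and the covariance between adjacent threshold counts equals the variance of the higher one.

```python
import numpy as np

def bins_from_thresholds(thresh_counts):
    """Energy-bin counts from threshold counts: bin i = counts above
    threshold i minus counts above threshold i+1 (last bin is open-ended)."""
    t = np.asarray(thresh_counts, dtype=float)
    return np.append(t[:-1] - t[1:], t[-1])

def bin_var_from_threshold_var(thresh_var):
    """Propagate threshold-count variances to bin variances. Since
    cov(T_i, T_{i+1}) = var(T_{i+1}) for nested counts,
    var(bin_i) = var(T_i) - var(T_{i+1})."""
    v = np.asarray(thresh_var, dtype=float)
    return np.append(v[:-1] - v[1:], v[-1])
```

For Poisson counts the recovered bin variances equal the bin means, consistent with the bins being independent Poisson variables.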
Material decomposition and virtual non-contrast imaging in photon counting computed tomography: an animal study
R. Gutjahr, C. Polster, S. Kappler, et al.
The energy-resolving capabilities of Photon Counting Detectors (PCDs) in Computed Tomography (CT) facilitate energy-sensitive measurements, and the resulting image information can be processed with Dual Energy and Multi Energy algorithms. A research PCD-CT system allows, for the first time, image acquisition with a close-to-clinical configuration of both the X-ray tube and the CT detector. In this study, two algorithms, Material Decomposition and Virtual Non-Contrast (VNC) imaging, are applied to a data set acquired from an anesthetized rabbit scanned with the PCD-CT system. Two contrast agents (CAs) are applied: a gadolinium (Gd) based CA used to enhance contrast for vascular imaging, and xenon (Xe) mixed with air used to evaluate local ventilation of the animal's lung. Four different images are generated: a) a VNC image, suppressing any traces of the injected Gd to imitate a native scan; b) a VNC image with a Gd image as an overlay, where contrast enhancements in the vascular system are highlighted using colored labels; c) another VNC image with a Xe image as an overlay; and d) a 3D rendered image of the animal's lung, filled with Xe, indicating local ventilation characteristics. All images are generated from two images based on energy bin information. It is shown that a modified version of a commercially available dual-energy software framework is capable of providing images with diagnostic value from the research PCD-CT system.
On the analogy between pulse-pile-up in energy-sensitive, photon-counting detectors and level-crossing of shot noise
Ewald Roessl, Matthias Bartels, Heiner Daerr, et al.
Shot noise processes are omnipresent in physics and many of their properties have been extensively studied in the past, including the particular problem of level crossing of shot noise. Energy-sensitive, photon-counting detectors using comparators to discriminate pulse heights are currently heavily investigated for medical applications, e.g. for x-ray computed tomography and x-ray mammography. Surprisingly, no mention of the close relation between the two topics can be found in the literature on photon-counting detectors. In this paper, we point out the close analogy between level crossing of shot noise and the problem of determining count rates of photon-counting detectors subject to pulse pile-up. The latter is very relevant for obtaining precise forward models for photon-counting detectors operated under conditions of very high x-ray flux employed in clinical x-ray computed tomography. Although several attempts have been made to provide reasonably accurate, approximative models for the registered number of counts in x-ray detectors under conditions of high flux and arbitrary x-ray spectra, no exact, analytic solution is given in the literature for general continuous pulse shapes. In this paper we present such a solution for arbitrary response functions, x-ray spectra, and continuous pulse shapes based on a result from the theory of level crossing. We briefly outline the theory of level crossing, including the famous Rice theorem, and translate from the language of level crossing to the language of photon-counting detection.
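For reference, the two results the abstract connects can be written in their standard textbook forms (our notation, not necessarily the authors'):

```latex
% Mean registered rate m of an idealized paralyzable counting channel with
% dead time \tau at true rate n -- the classic pile-up loss formula:
m = n \, e^{-n\tau}

% Rice's formula: mean rate of upcrossings of level u by a stationary,
% differentiable process X(t) with joint density p_{X,\dot X}(x,\dot x):
\nu(u) = \int_{0}^{\infty} \dot{x} \, p_{X,\dot X}(u,\dot{x}) \, \mathrm{d}\dot{x}
```

The analogy is that a comparator count corresponds to an upcrossing of the threshold level by the shot-noise pulse train, so the registered count rate under pile-up is a level-crossing rate of shot noise.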
A high-resolution imaging technique using a whole-body, research photon counting detector CT system
S. Leng, Z. Yu, A. Halaweish, et al.
A high-resolution (HR) data collection mode has been introduced to a whole-body, research photon-counting-detector CT system installed in our laboratory. In this mode, 64 rows of 0.45 mm x 0.45 mm detector pixels were used, which corresponded to a pixel size of 0.25 mm x 0.25 mm at the iso-center. Spatial resolution of this HR mode was quantified by measuring the MTF from a scan of a 50 micron wire phantom. An anthropomorphic lung phantom, cadaveric swine lung, temporal bone and heart specimens were scanned using the HR mode, and image quality was subjectively assessed by two experienced radiologists. High spatial resolution of the HR mode was evidenced by the MTF measurement, with 15 lp/cm and 20 lp/cm at 10% and 2% modulation. Images from anthropomorphic phantom and cadaveric specimens showed clear delineation of small structures, such as lung vessels, lung nodules, temporal bone structures, and coronary arteries. Temporal bone images showed critical anatomy (i.e. stapes superstructure) that was clearly visible in the PCD system. These results demonstrated the potential application of this imaging mode in lung, temporal bone, and vascular imaging. Other clinical applications that require high spatial resolution, such as musculoskeletal imaging, may also benefit from this high resolution mode.
Lossless compression of projection data from photon counting detectors
With many attractive attributes, photon counting detectors with many energy bins are being considered for clinical CT systems. In practice, a large amount of projection data acquired for multiple energy bins must be transferred in real time through slip rings and data storage subsystems, causing a bandwidth bottleneck; the higher resolution of these detectors and the need for faster acquisition further contribute to this issue. In this work, we introduce a new approach to lossless compression, specifically for projection data from photon counting detectors, that exploits the dependencies in the multi-energy data. The proposed predictor estimates the value of a projection data sample as a weighted average of its neighboring samples and an approximation from other energy bins, and the prediction residuals are then encoded. Context modeling using three or four quantized local gradients is also employed to detect edge characteristics of the data. Using three simulated phantoms, including a head phantom, compression ratios of 2.3:1 to 2.4:1 were achieved. The proposed predictor with zero, three, and four gradient contexts was compared to JPEG-LS and the ideal predictor (noiseless projection data). Among our proposed predictors, the three-gradient context is preferred, with a compression ratio from Golomb coding 7% higher than JPEG-LS and only 3% lower than the ideal predictor. In encoder efficiency, the Golomb code with the proposed three-gradient contexts achieves higher compression than block floating point. We also propose a lossy compression scheme, which quantizes the prediction residuals with scalar uniform quantization, using quantization boundaries that limit the ratio of quantization error variance to quantum noise variance. Applying our proposed predictor with the three-gradient context, the lossy compression achieved a compression ratio of 3.3:1 while introducing an error with a standard deviation of only 2.1% of that of quantum noise in reconstructed images.
From the initial simulation results, the proposed algorithm shows good control over the bits needed to represent multienergy projection data.
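A minimal sketch of the two ingredients described above, a neighbor-plus-cross-bin predictor and Golomb-Rice residual coding (the blend weight `w` and all function names are hypothetical illustrations, not the paper's exact design):

```python
def zigzag(r):
    """Map a signed residual to a non-negative integer:
    0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_code_length(r, k):
    """Bit length of the Golomb-Rice codeword for residual r with
    parameter k: unary quotient + 1 stop bit + k remainder bits."""
    u = zigzag(r)
    return (u >> k) + 1 + k

def predict(left, above, other_bin, w=0.5):
    """Toy predictor: average of spatial neighbors blended with the
    co-located sample from another energy bin."""
    spatial = (left + above) / 2.0
    return int(round((1 - w) * spatial + w * other_bin))
```

Encoding the residual `sample - predict(...)` with short Rice codes is what yields the compression gain when the cross-bin correlation makes residuals small.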
PET and MR
Initial experience in primal-dual optimization reconstruction from sparse-PET patient data
Zheng Zhang, Jinghan Ye, Buxin Chen, et al.
There exists interest in designing a PET system with a reduced number of detectors due to cost concerns, without significantly compromising the PET utility. Recently developed optimization-based algorithms, which have demonstrated potential clinical utility in image reconstruction from sparse CT data, may enable the design of such innovative PET systems. In this work, we investigate a PET configuration with a reduced number of detectors and carry out preliminary studies on patient data collected with such a sparse-PET configuration. We consider an optimization problem combining a Kullback-Leibler (KL) data fidelity term with an image TV constraint, and solve it using the primal-dual optimization algorithm developed by Chambolle and Pock. Results show that advanced algorithms may enable the design of innovative PET configurations with a reduced number of detectors while retaining practical PET utility.
Short term reproducibility of a high contrast 3-D isotropic optic nerve imaging sequence in healthy controls
Robert L. Harrigan, Alex K. Smith, Louise A. Mawn, et al.
The optic nerve (ON) plays a crucial role in human vision, transporting all visual information from the retina to the brain for higher-order processing. Many diseases affect ON structure, such as optic neuritis, anterior ischemic optic neuropathy, and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher-level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) sheath surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high-contrast, high-resolution, 3-D acquired isotropic imaging sequence optimized for ON imaging. We acquired scan-rescan data using the optimized sequence and a current standard-of-care protocol for 10 subjects. We show that this sequence has a superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that the short-term scan-rescan variability of these ON size measures is lower than both the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.
Crystal timing offset calibration method for time of flight PET scanners
Jinghan Ye, Xiyun Song
In time-of-flight (TOF) positron emission tomography (PET), precise calibration of the timing offset of each crystal of a PET scanner is essential. Conventionally this calibration requires a specially designed tool just for this purpose. In this study a method that uses a planar source to measure the crystal timing offsets (CTO) is developed. The method uses list mode acquisitions of a planar source placed at multiple orientations inside the PET scanner field-of-view (FOV). The placement of the planar source in each acquisition is automatically figured out from the measured data, so that a fixture for exactly placing the source is not required. The expected coincidence time difference for each detected list mode event can be found from the planar source placement and the detector geometry. A deviation of the measured time difference from the expected one is due to CTO of the two crystals. The least squared solution of the CTO is found iteratively using the list mode events. The effectiveness of the crystal timing calibration method is evidenced using phantom images generated by placing back each list mode event into the image space with the timing offset applied to each event. The zigzagged outlines of the phantoms in the images become smooth after the crystal timing calibration is applied. In conclusion, a crystal timing calibration method is developed. The method uses multiple list mode acquisitions of a planar source to find the least squared solution of crystal timing offsets.
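The least-squares step can be sketched as follows: each list-mode event between crystals (i, j) contributes one equation o_i - o_j = d, where d is the measured minus expected time difference, and a mean-zero constraint fixes the arbitrary global offset. This is an illustrative batch solve, not the authors' iterative list-mode implementation:

```python
import numpy as np

def solve_timing_offsets(pairs, deviations, n_crystals):
    """Least-squares crystal timing offsets: one row per event with
    +1/-1 entries for the crystal pair, plus a final mean-zero row
    (offsets are only defined up to an additive constant)."""
    a = np.zeros((len(pairs) + 1, n_crystals))
    b = np.append(np.asarray(deviations, dtype=float), 0.0)
    for row, (i, j) in enumerate(pairs):
        a[row, i], a[row, j] = 1.0, -1.0
    a[-1, :] = 1.0  # mean-zero constraint
    offsets, *_ = np.linalg.lstsq(a, b, rcond=None)
    return offsets
```

With millions of list-mode events, the same normal equations would be accumulated on the fly rather than stored row by row.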
Solving outside-axial-field-of-view scatter correction problem in PET via digital experimentation
Andriy Andreyev, Yang-Ming Zhu, Jinghan Ye, et al.
Unaccounted scatter from unknown activity outside the axial field of view (outside-AFOV) in PET is an important degrading factor for image quality and quantitation. A resource-consuming and unpopular way to account for the outside-AFOV activity is to perform an additional PET/CT scan of the adjacent regions. In this work we investigate a solution to the outside-AFOV scatter problem that does not require a PET/CT scan of the adjacent regions. The main motivation for the proposed method is that the measured random-corrected prompt (RCP) sinogram in the background region surrounding the measured object contains only scattered events, originating from both inside- and outside-AFOV activity. In this method, the scatter correction simulation searches through many randomly chosen outside-AFOV activity estimates along with the known inside-AFOV activity, generating a plethora of scatter distribution sinograms. This digital experimentation iterates until a good match is found between a simulated scatter sinogram (including the assumed outside-AFOV activity) and the measured RCP sinogram in the background region. The combined scatter impact from inside- and outside-AFOV activity can then be used for scatter correction during the final image reconstruction phase. Preliminary results using measured phantom data indicate a successful phantom length estimate with the method and, therefore, an accurate outside-AFOV scatter estimate.
Photon Counting CT II: Spectral Imaging
Improving material decomposition by spectral optimization of photon counting computed tomography
C. Polster, K. Hahn, R. Gutjahr, et al.
Photon counting detectors in computed tomography facilitate measurements of the spectral distribution of detected X-ray quanta in discrete energy bins. Together with the dependency of the mass attenuation coefficient on wavelength and atomic number, this information allows for the reconstruction of CT images of different material bases. Decomposition into two materials is considered standard in today's dual-energy techniques; with photon-counting detectors, decomposition into more than two materials becomes achievable. Efficient detection of CT-typical X-ray spectra is a hard requirement in a clinical environment, fulfilled by only a few sensor materials such as CdTe or CdZnTe. In contrast to energy-integrating CT detectors, the pixel dimensions must be reduced to avoid pulse pile-up at clinically relevant count rates. However, reducing pixel sizes leads to increased K-escape and charge sharing effects. As a consequence, the correlation between incident and detected X-ray energy is reduced; this degradation is quantified by the detector response function. The goal of this study is to improve the achievable material decomposition by adapting the incident X-ray spectrum to the properties (i.e. the detector response function) of a photon counting detector. A significant improvement of a material decomposition equivalent metric is achievable when using specific materials as X-ray pre-filtration (K-edge filtering) while maintaining the applied patient dose and image quality.
Optimal selection of thresholds for photon counting CT
Thomas O'Donnell, Friederike Schoeck, Rabee Cheheltani, et al.
Recent advances in Photon Counting CT (PCCT) have facilitated the simultaneous acquisition of multiple image volumes with differing energy thresholds. This presents the user with several choices for energy threshold combinations. Compared to standard clinical dual-kVp CT, where the user typically has only three choices of kVp pairings (e.g., 80/150Sn, 90/150Sn, 100/150Sn), a "quad" PCCT system with 14 threshold settings has Choose(14,4) = 1001 possible threshold combinations (assuming no restrictions). In this paper we describe a computationally tractable means of ordering threshold combinations, from best (most accurate) to worst (least accurate), for the task of discriminating pure materials of assumed approximate concentrations, using the Bhattacharyya coefficient. We observe that this ordering is not necessarily identical to the ordering for the task of decomposing material mixtures into their components. We demonstrate our approach on phantom data.
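As an illustration of the ranking idea, here is a minimal sketch assuming each material's measurement distribution is approximated as a 1-D Gaussian (the scoring hook passed to the ranking function is hypothetical; the paper's actual distributions and metric evaluation are more involved):

```python
import math
from itertools import combinations

def bhattacharyya_gauss(m1, s1, m2, s2):
    """Bhattacharyya coefficient of two 1-D Gaussians; smaller values mean
    the two materials' measurement distributions are easier to discriminate."""
    bd = (0.25 * (m1 - m2) ** 2 / (s1**2 + s2**2)
          + 0.5 * math.log((s1**2 + s2**2) / (2.0 * s1 * s2)))
    return math.exp(-bd)

def rank_threshold_sets(thresholds, score, n_pick=4):
    """Order all Choose(len(thresholds), n_pick) combinations from best to
    worst. `score` maps a combination to e.g. its largest pairwise
    Bhattacharyya coefficient (worst-case overlap)."""
    return sorted(combinations(thresholds, n_pick), key=score)
```

Ranking by a scalar score over all 1001 quad combinations is cheap, which is what makes the exhaustive ordering computationally tractable.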
“Conventional” CT images from spectral measurements
Spectral imaging systems need to be able to produce "conventional" images, and it has been shown that systems with energy-discriminating detectors can achieve higher CNR than conventional systems through optimal weighting. Combining measured data in energy bins (EBs) and combining basis material images have both been proposed previously, but no study has systematically compared the two methods. In this paper, we analytically evaluate both methods for systems with ideal photon counting detectors, using CNR and beam hardening (BH) artifacts as metrics. For 120-kVp polychromatic simulations of a water phantom with low-contrast inserts, the optimal CNR of the two methods differs by less than 2% for the studied phantom. For a polychromatic spectrum, beam-hardening artifacts are noticeable in EB-weighted images (BH artifact of 3.8% for 8 EBs and 6.9% for 2 EBs), while weighted basis material images are free of such artifacts.
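For independent energy-bin images, the CNR-optimal linear weights take the familiar matched-filter form w_i ∝ C_i/Var_i. A minimal sketch (our notation; bin contrasts and variances are assumed given):

```python
import numpy as np

def optimal_bin_weights(contrast, variance):
    """CNR-optimal linear weights for independent energy-bin images:
    w_i proportional to contrast_i / variance_i, normalized to sum to 1."""
    w = np.asarray(contrast, dtype=float) / np.asarray(variance, dtype=float)
    return w / w.sum()

def cnr(weights, contrast, variance):
    """CNR of a weighted sum of independent bin images."""
    w = np.asarray(weights, dtype=float)
    c = np.asarray(contrast, dtype=float)
    v = np.asarray(variance, dtype=float)
    return (w @ c) / np.sqrt(w**2 @ v)
```

Any other weighting (e.g. uniform, which mimics simple summation into a "conventional" image) gives a CNR at or below the optimum.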
Spatio-energetic cross-talks in photon counting detectors: detector model and correlated Poisson data generator
Katsuyuki Taguchi, Christoph Polster, Okkyun Lee, et al.
An x-ray photon interacts with a photon counting detector (PCD) and generates one or more electron charge clouds. When the interaction occurs near a pixel boundary, the clouds (and thus the photon energy) may be split between two adjacent PCD pixels, producing a count in both pixels. This is called double-counting with charge sharing. The output of an individual PCD pixel is a Poisson-distributed integer count; however, the outputs of adjacent pixels are correlated due to double-counting. The major problems are the lack of a detector noise model for this spatio-energetic cross-talk and the lack of an efficient simulation tool. Monte Carlo simulation can accurately simulate these phenomena and produce noisy data, but it is not computationally efficient. In this study, we developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency and incomplete charge collection; (2) photoelectric effect with total absorption; (3) photoelectric effect with fluorescence x-ray emission and re-absorption; (4) photoelectric effect with fluorescence x-ray emission that leaves the PCD completely; and (5) electronic noise. The model produced a total detector spectrum similar to previous Monte Carlo simulation data and can be used to predict spectra and correlations for various settings. The simulated noisy data demonstrated the expected performance: (a) the data were integers; (b) the mean and covariance matrix were close to the target values; and (c) noisy data generation was very efficient.
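The double-counting mechanism can be mimicked with a toy generator (one-dimensional, sharing only to the right-hand neighbor; a deliberate simplification of the paper's model, with names of our choosing): each primary event registers in its own pixel, and with some probability also deposits a count in the neighbor, which makes adjacent outputs positively correlated while each remains integer-valued.

```python
import numpy as np

def correlated_counts(rng, mean_per_pixel, n_pixels, p_share):
    """Poisson counts with double-counting between neighbors: every primary
    event counts in its own pixel; with probability p_share it also counts
    in the right-hand neighbor (a crude charge-sharing stand-in)."""
    primary = rng.poisson(mean_per_pixel, n_pixels)
    shared = rng.binomial(primary, p_share)  # events that also hit the neighbor
    counts = primary.copy()
    counts[1:] += shared[:-1]
    return counts
```

Because the shared events appear in both pixels, the empirical covariance of adjacent pixels is positive (approximately p_share times the primary mean), unlike independent Poisson pixels.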
Comparison of quantitative k-edge empirical estimators using an energy-resolved photon-counting detector
Using an energy-resolving photon counting detector, the amount of k-edge material in the x-ray path can be estimated through a process known as material decomposition. However, non-ideal effects within the detector make it difficult to perform this decomposition accurately. This work evaluated the k-edge material decomposition accuracy of two empirical estimators: a neural network estimator and a linearized maximum-likelihood estimator with error look-up tables (the A-table method), evaluated through simulations and experiments. Each estimator was trained on system-specific calibration data rather than on explicit models of non-ideal detector effects or the x-ray source spectrum. Projections through a step-wedge calibration phantom consisting of different path lengths through PMMA, aluminum, and a k-edge material were used to train the estimators, which were then tested by decomposing data acquired through different path lengths of the basis materials. The estimators performed similarly in the chest phantom simulations with gadolinium, estimating four of the five gadolinium densities with less than 2 mg/mL bias. The neural network estimates showed lower bias but higher variance than the A-table estimates in the iodine contrast agent simulations. The neural network's experimental variance was lower than the CRLB, indicating that it is a biased estimator. In the experimental study, the k-edge material contribution was estimated with less than 14% bias by the neural network estimator and less than 41% bias by the A-table method.
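As a rough illustration of the calibration-driven idea (a deliberately crude linear stand-in, not the A-table or neural network estimators themselves), an empirical decomposer can be fit from step-wedge data by least squares, with the known material thicknesses as training targets:

```python
import numpy as np

def fit_empirical_decomposer(calib_logdata, calib_thicknesses):
    """Fit a linear map from per-bin log-attenuation measurements to
    basis-material path lengths, using step-wedge calibration data.

    calib_logdata:     (n_measurements, n_bins) log-attenuation values
    calib_thicknesses: (n_measurements, n_materials) known path lengths
    Returns W of shape (n_bins, n_materials); estimate = logdata @ W.
    """
    A = np.asarray(calib_logdata, dtype=float)
    T = np.asarray(calib_thicknesses, dtype=float)
    W, *_ = np.linalg.lstsq(A, T, rcond=None)
    return W
```

With ideal (perfectly linear) data the fit recovers the thicknesses exactly; the point of the paper's estimators is to absorb the detector's non-ideal, nonlinear effects that such a purely linear fit cannot.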
New Systems and Technologies
Multi-gamma-source CT imaging system: a feasibility study with the Poisson noise
This study was performed to test the feasibility of a multi-gamma-source CT imaging system. Gamma-source CT employs radioisotopes that emit monochromatic gamma-rays. Its advantages include immunity to beam hardening artifacts, the capacity for quantitative CT imaging, and higher performance in low-contrast imaging compared to conventional x-ray CT. The radioisotope must be shielded with a pin-hole collimator to produce a fine focal spot. Because gamma-ray flux is generally low, the image reconstructed from a single gamma-source CT would suffer from high noise. To address this problem, we proposed a multi-gamma-source CT imaging system and developed a corresponding iterative image reconstruction algorithm. The conventional imaging model assumes a single linear system, typically represented by Mf = g. In a multi-gamma-source CT system, however, the inversion problem is no longer based on a single linear system, since a detector pixel value cannot be separated into multiple values corresponding to the rays from each source. Instead, the imaging model can be constructed as a set of linear system models, each of which assumes an estimated measurement g. Based on this model, the proposed algorithm has a weighting step that distributes each projection datum among the multiple estimated measurements. In this numerical study we used two gamma sources at various positions and with varying intensities to demonstrate feasibility; the measured projection data (g) are thus separated into two estimated projection data sets (g1, g2). The proposed imaging protocol is expected to contribute to both medical and industrial applications.
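One plausible form of the weighting step is an expectation-style split: under the current image estimate, the single measured value at a detector pixel is shared among the per-source estimated measurements in proportion to their forward projections. This is an illustrative sketch of the distribution step with invented names, not the authors' exact algorithm:

```python
import numpy as np

def split_measurement(g, estimates):
    """Distribute one measured detector value among per-source estimates.

    'estimates' are the forward-projected values M_k f for each source k
    under the current image estimate; the measurement g is shared out in
    proportion to them, so the per-source splits always sum back to g.
    """
    est = np.asarray(estimates, dtype=float)
    total = est.sum()
    if total == 0:
        return np.full_like(est, g / est.size)  # no information: split evenly
    return g * est / total
```

Each split measurement (g1, g2, ...) can then drive an ordinary single-source iterative update for its corresponding system model.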
High-spatial-resolution nanoparticle x-ray fluorescence tomography
Jakob C. Larsson, William Vågberg, Carmen Vogt, et al.
X-ray fluorescence tomography (XFCT) has potential for high-resolution 3D molecular x-ray bio-imaging. In this technique the fluorescence signal from targeted nanoparticles (NPs) is measured, providing information about the spatial distribution and concentration of the NPs inside the object. However, present laboratory XFCT systems typically have limited spatial resolution (>1 mm) and suffer from long scan times and high radiation dose even at high NP concentrations, mainly due to low efficiency and poor signal-to-noise ratio (SNR). We have developed a laboratory XFCT system with high spatial resolution (sub-100 μm), low NP concentration requirements, and vastly decreased scan times and dose, opening up possibilities for in-vivo small-animal imaging research. The system consists of a high-brightness liquid-metal-jet microfocus x-ray source, x-ray focusing optics, and an energy-resolving photon-counting detector. By using the source's characteristic 24 keV line emission together with carefully matched molybdenum nanoparticles, the Compton background is greatly reduced, increasing the SNR. Each measurement provides information about the spatial distribution and concentration of the Mo nanoparticles, and a filtered back-projection method is used to produce the final XFCT image.
Investigation of noise and contrast sensitivity of an electron multiplying charge-coupled device (EMCCD) based cone beam micro-CT system
A small-animal micro-CT system was built using an EMCCD detector featuring complex pre-digitization amplification technology, high resolution, high sensitivity, and low noise. Noise in CBCT reconstructed images behaves differently with pre-digitization amplification than with commonly used detectors and warrants detailed investigation. In this study, noise power and contrast sensitivity were estimated for the newly built system. Noise analysis was performed by scanning a water phantom. Tube voltage was lowered to the minimum delivered by the tube (20 kVp and 0.5 mA) and detector gain was varied. Contrast sensitivity was analyzed using a phantom containing six tubes filled with iodine contrast solutions of different concentrations (20% to 70%). First, we scanned the phantom at various x-ray exposures at 40 kVp while changing the gain to keep the background air value of the projection images constant. Next, the exposure was varied while the detector gain was held constant. Radial NPS plots show that the noise power level increases as gain increases. Contrast sensitivity was analyzed by calculating the ratio of signal-to-noise ratios (SNR) at increased gain to those at low constant gain for each exposure. The SNR at low constant gain was always lower than the SNR at high detector gain for all x-ray settings and iodine contrasts. The largest increase in SNR approached 1.3 for the low-contrast feature at an iodine concentration of 20%. Despite the increase in noise level with gain, the SNR improvement shows that the signal level also increases because of the unique on-chip gain of the detector.
800-MeV magnetic-focused flash proton radiography for high-contrast imaging of low-density biologically-relevant targets using an inverse-scatter collimator
Matthew S. Freeman, Jason Allison, Camilo Espinoza, et al.
Proton radiography shows great promise as a tool to guide proton beam therapy (PBT) in real time. Here, we demonstrate two ways in which the technology may progress toward that goal. First, with an 800 MeV proton beam, the target tissue receives a dose of radiation with very tight lateral constraint. This could be a benefit over the traditional treatment energies of ~200 MeV, where up to 1 cm of lateral tissue receives scattered radiation at the target. At 800 MeV, the beam travels completely through the object with minimal deflection, thus constraining lateral dose to a smaller area. The second novelty of this system is the use of magnetic quadrupole refocusing lenses that mitigate the blur caused by multiple Coulomb scattering within an object, enabling high-resolution imaging of thick objects such as the human body. The system is demonstrated on ex vivo salamander and zebrafish specimens, as well as on a realistic hand phantom. The resulting images provide contrast sufficient to visualize thin tissue and fine detail within the target volumes, and the ability to measure small changes in density. Such a system, combined with PBT, would enable the delivery of a highly specific dose of radiation that is monitored and guided in real time.
Method for dose-reduced 3D catheter tracking on a scanning-beam digital x-ray system using dynamic electronic collimation
Scanning-beam digital x-ray (SBDX) is an inverse-geometry x-ray fluoroscopy system capable of tomosynthesis-based 3D catheter tracking. This work proposes a method of dose-reduced 3D tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. Positions in the 2D focal spot array are selectively activated to create a region-of-interest (ROI) x-ray field around the tracked catheter. The ROI position is updated for each frame based on a motion vector calculated from the two most recent 3D tracking results. The technique was evaluated with SBDX data acquired as a catheter tip inside a chest phantom was pulled along a 3D trajectory. DEC scans were retrospectively generated from the detector images stored for each focal spot position. DEC imaging of a catheter tip in a volume measuring 11.4 cm across at isocenter required 340 active focal spots per frame, versus 4473 spots in full-FOV mode. The dose-area product (DAP) and peak skin dose (PSD) for DEC versus full field-of-view (FOV) scanning were calculated using an SBDX Monte Carlo simulation code. DAP was reduced to 7.4% to 8.4% of the full-FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full-FOV value. The root-mean-squared deviation between DEC-based and full-FOV 3D tracking coordinates was less than 0.1 mm, and the 3D distance between the tracked tip and the sheath centerline averaged 0.75 mm. Dynamic electronic collimation can reduce dose with minimal change in tracking performance.
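The per-frame ROI update from the two most recent tracking results amounts to a linear extrapolation of the motion vector; a minimal sketch (illustrative, using placeholder coordinate tuples):

```python
def next_roi_center(prev_xyz, curr_xyz):
    """Predict the next frame's ROI center by adding the most recent
    motion vector (curr - prev) to the current 3D tracking result."""
    return tuple(c + (c - p) for p, c in zip(prev_xyz, curr_xyz))
```

A stationary catheter yields a stationary ROI, while a moving one shifts the active focal-spot region ahead of the tip so the x-ray field stays centered on it.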
Scatter and Diffraction Imaging
Method to study sample object size limit of small-angle x-ray scattering computed tomography
Mina Choi, Bahaa Ghammraoui, Andreu Badal, et al.
Small-angle x-ray scattering (SAXS) imaging is an emerging medical tool that can be used for detailed in vivo tissue characterization and has the potential to add contrast to conventional x-ray projection and CT imaging. We used the publicly available MC-GPU code to simulate x-ray trajectories in a SAXS-CT geometry for a target material embedded in a water background with varying sample sizes (1, 3, 5, and 10 mm). Our target materials were a water solution of gold nanoparticle (GNP) spheres with a radius of 6 nm and a water solution of dissolved serum albumin (BSA) proteins, chosen for their well-characterized small-angle scatter profiles and strong scattering properties. Our objective was to study how the reconstructed scatter profile degrades at larger target imaging depths and increasing sample sizes. We found that scatter profiles of the GNP solution can still be reconstructed at depths up to 5 mm when embedded at the center of a 10 mm sample. Scatter profiles of BSA in water were also reconstructed at depths up to 5 mm in a 10 mm sample, but with noticeable signal degradation compared to the GNP sample. This work presents a method to study the sample size limits of future SAXS-CT imaging systems.
Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation
Conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3%, based on their linear x-ray attenuation coefficients at 17.5 keV, whereas the coherent scatter signal provides a maximum contrast of 19%, based on their differential coherent scatter cross sections. To exploit this potential contrast, we evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprising a repeating slit pattern with 2-mm periodicity, and a linear array of 128 detector pixels with 6.5-keV energy resolution. The scanned breast tissue comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a realistic, tomosynthesis-based simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification with the ground truth image showed that the coded aperture imaging technique has a cancerous-pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels), and accuracy of 92.4%, 91.9%, and 92.0%, respectively. Our Monte Carlo evaluation shows that the experimental coded aperture coherent scatter imaging system is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
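The pixel-wise classification rule can be sketched directly: each reconstructed scatter profile is assigned the label of the reference material with the highest correlation coefficient. This is a generic implementation of the stated rule; the labels and reference profiles below are placeholders:

```python
import numpy as np

def classify_pixel(recon_profile, reference_profiles):
    """Assign a pixel to the material whose reference scatter profile has
    the highest correlation coefficient with the pixel's reconstructed
    differential coherent scatter cross section.

    reference_profiles: dict mapping material label -> reference profile.
    """
    x = np.asarray(recon_profile, dtype=float)
    best_label, best_r = None, -np.inf
    for label, ref in reference_profiles.items():
        r = np.corrcoef(x, np.asarray(ref, dtype=float))[0, 1]
        if r > best_r:
            best_label, best_r = label, r
    return best_label
```

Because the correlation coefficient is invariant to scale and offset, the classification depends only on the shape of the reconstructed profile, which is why the cross sections are normalized before matching.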
Coded aperture x-ray diffraction imaging with transmission computed tomography side-information
Ikenna Odinaka, Joel A. Greenberg, Yan Kaganovsky, et al.
Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose to account for such non-uniformities by utilizing an X-ray computed tomography (CT) image (a reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction that incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images than their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
Task Driven Imaging, Observers, Detectability, Phantom Studies
Task-driven tube current modulation and regularization design in computed tomography with penalized-likelihood reconstruction
Purpose: This work applies task-driven optimization to the design of CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. Methods: We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and knowledge of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict the local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. Results: The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. Conclusions: This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Task-driven optimization thus provides additional opportunities for improved imaging performance and dose reduction beyond those achievable with conventional acquisition and reconstruction.
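The low-dimensional modulation parameterization can be sketched as follows. The basis centers, width, and weights are free parameters here, chosen for illustration; in the paper they are the quantities the evolutionary search optimizes:

```python
import numpy as np

def tube_current_profile(angles, weights, centers, sigma):
    """Tube current vs. gantry angle as a linear combination of Gaussian
    basis functions. This reduces the search space from one value per
    view to a handful of basis coefficients, which is what makes a
    derivative-free optimizer such as CMA-ES practical."""
    theta = np.asarray(angles, dtype=float)[:, None]    # (n_views, 1)
    mu = np.asarray(centers, dtype=float)[None, :]      # (1, n_basis)
    basis = np.exp(-0.5 * ((theta - mu) / sigma) ** 2)  # (n_views, n_basis)
    return basis @ np.asarray(weights, dtype=float)     # (n_views,)
```

Evaluating the profile on the full set of view angles gives the per-view tube current that feeds the resolution/noise prediction and, in turn, the d' objective.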
Comparison of model and human observer performance in FFDM, DBT, and synthetic mammography
Lynda Ikejimba, Stephen J. Glick, Ehsan Samei, et al.
Reader studies are important in assessing breast imaging systems. The purpose of this work was to assess task-based performance of full field digital mammography (FFDM), digital breast tomosynthesis (DBT), and synthetic mammography (SM) using different phantom types, and to determine an accurate observer model for human readers.

Images were acquired on a Hologic Selenia Dimensions system with a uniform and an anthropomorphic phantom. A contrast-detail insert of small, low-contrast disks was created using an inkjet printer with iodine-doped ink and inserted into the phantoms. The disks varied in diameter from 210 to 630 μm and in contrast from 1.1% to 2.2% in regular increments. Human and model observers performed a 4-alternative forced choice experiment. The models were a non-prewhitening matched filter with eye model (NPWE) and a channelized Hotelling observer with either Gabor channels (Gabor-CHO) or Laguerre-Gauss channels (LG-CHO).

With the given phantoms, reader scores were higher in FFDM and DBT than in SM. The structure in the phantom background had a bigger impact on outcome for DBT than for FFDM or SM. All three model observers showed good correlation with humans in the uniform background, with ρ between 0.89 and 0.93. However, in the structured background only the CHOs showed high correlation, with ρ = 0.92 for Gabor-CHO, 0.90 for LG-CHO, and 0.77 for NPWE.

Because the results of any analysis can depend on the phantom structure, conclusions about modality performance may need to be interpreted in the context of an appropriate model observer and a realistic phantom.
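For reference, a channelized Hotelling observer of the kind compared above can be sketched generically. This is a standard CHO on flattened image patches; the channel matrix (e.g. Gabor or Laguerre-Gauss) is supplied by the study design and is not specified here:

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability index d'.

    Flattened images are projected onto channel templates; the Hotelling
    template in channel space is S^-1 (mean_signal - mean_noise), where S
    is the average channel covariance, and d'^2 is the template's response
    to the mean difference.
    """
    U = np.asarray(channels, dtype=float)           # (n_pixels, n_channels)
    vs = np.asarray(signal_imgs, dtype=float) @ U   # (n_signal, n_channels)
    vn = np.asarray(noise_imgs, dtype=float) @ U    # (n_noise, n_channels)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    S = np.atleast_2d(S)
    dmean = vs.mean(axis=0) - vn.mean(axis=0)
    w = np.linalg.solve(S, dmean)                   # Hotelling template
    return float(np.sqrt(w @ dmean))
```

The same d' machinery underlies both the 4AFC comparisons here and the forced-choice validations in the following papers; only the channel sets and the data differ.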
Predicting detection performance with model observers: Fourier domain or spatial domain?
The use of Fourier domain model observers is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have a noise texture different from that of FBP. A modified Fourier domain model observer that incorporates nonlinear noise and resolution properties has been proposed for IR and needs to be validated against human detection performance. The spatial domain model observer, on the other hand, is theoretically applicable to IR but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer on both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. Human detection performance for the inserts was measured via a 2-alternative forced choice (2AFC) test. In addition, two model observer performances were calculated: a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The two model observers were compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers on FBP images, but overestimated detection performance on IR images.
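To compare a model observer's d' with 2AFC results, the standard conversion is Pc = Φ(d'/√2), with Φ the standard normal CDF; a one-liner makes the relationship concrete:

```python
import math

def pc_2afc(dprime):
    """Expected percent correct in a 2-alternative forced choice test for
    a given detectability index: Pc = Phi(d'/sqrt(2)). Using
    Phi(x) = 0.5*(1 + erf(x/sqrt(2))), this simplifies to erf(d'/2)."""
    return 0.5 * (1.0 + math.erf(dprime / 2.0))
```

At d' = 0 the observer is at chance (Pc = 0.5), and Pc rises monotonically toward 1 as d' grows, which is how model-observer indices are placed on the same axis as measured human percent-correct scores.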
Investigation of optimal parameters for penalized maximum-likelihood reconstruction applied to iodinated contrast-enhanced breast CT
Andrey Makeev, Lynda Ikejimba, Joseph Y. Lo, et al.
Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three breast biopsies performed in the U.S. are unnecessary, resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue, due to the minimal contrast of lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Because of the fully 3-D imaging nature of BCT, as well as the sub-optimal contrast enhancement while the breast is under compression in mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that statistically-based iterative reconstruction in CT yields improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a penalized maximum-likelihood method and comparing its performance with a clinical cone-beam filtered backprojection (FBP) algorithm.
Design, fabrication, and implementation of voxel-based 3D printed textured phantoms for task-based image quality assessment in CT
Justin Solomon, Alexandre Ba, Andrew Diao, et al.
In x-ray computed tomography (CT), task-based image quality studies are typically performed using uniform background phantoms with low-contrast signals. Such studies may have limited clinical relevancy for modern non-linear CT systems due to the possible influence of background texture on image quality. The purpose of this study was to design and implement anatomically informed textured phantoms for task-based assessment of low-contrast detection. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find the CLB parameters most reflective of the liver textures, accounting for the CT system factors of spatial blurring and noise. With the modeled background texture as a guide, a cylindrical phantom (165 mm in diameter and 30 mm in height) was designed, containing 20 low-contrast spherical signals (6 mm in diameter at targeted contrast levels of ~3.2, 5.2, 7.2, 10, and 14 HU, with 4 repeats per contrast). The phantom was voxelized and input into a commercial multi-material 3D printer (Objet Connex 350) with custom software for voxel-based printing. Using principles of digital half-toning and dithering, the 3D printer was programmed to distribute two base materials (VeroWhite and TangoPlus, nominal voxel size of 42x84x30 microns) to achieve the targeted spatial distribution of x-ray attenuation properties. The phantom was used for task-based image quality assessment of a clinically available iterative reconstruction algorithm (Sinogram Affirmed Iterative Reconstruction, SAFIRE) using a channelized Hotelling observer paradigm.
Images of the textured phantom and a corresponding uniform phantom were acquired at six dose levels, and observer model performance was estimated for each condition (5 contrasts x 6 doses x 2 reconstructions x 2 backgrounds = 120 total conditions). Based on the observer model results, the dose reduction potential of SAFIRE was computed and compared between the uniform and textured phantoms: 23% based on the uniform phantom versus 17% based on the textured phantom. This discrepancy demonstrates the need to consider background texture when assessing non-linear reconstruction algorithms.
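The voxel-based half-toning step can be illustrated with a generic error-diffusion (Floyd-Steinberg) sketch that converts a continuous target fraction of one base material into a binary per-voxel assignment. This is a textbook dithering method in the spirit of the printing step described, not the authors' actual printer software:

```python
import numpy as np

def dither_two_materials(target_fraction):
    """Floyd-Steinberg error diffusion on a 2D slice: each voxel is assigned
    base material 1 or 0, and the quantization error is pushed onto
    not-yet-visited neighbors so that the local average of the binary map
    tracks the continuous target material fraction."""
    f = np.asarray(target_fraction, dtype=float).copy()
    out = np.zeros(f.shape, dtype=int)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1 if f[y, x] >= 0.5 else 0
            err = f[y, x] - out[y, x]
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```

Applied slice by slice with the target fraction derived from the desired attenuation map, such a scheme yields locally averaged attenuation between the two base materials.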
Detectable change of lung nodule volume with CT in a phantom study with high and low signal to background contrast
In previous work we developed a method for predicting the minimum detectable change (MDC) in nodule volume based on volumetric CT measurements. MDC was defined as the minimum increase or decrease in nodule volume distinguishable from the baseline measurement at a specified level of detection performance, assessed using the area under the ROC curve (AUC). In this work we derived volume estimates of a set of synthetic nodules and calculated the detection performance for distinguishing them from baseline measurements. Eight spherical objects of 100 HU radiodensity, ranging in diameter from 5.0mm to 5.75mm and from 8.0mm to 8.75mm in 0.25mm increments, were placed in an anthropomorphic phantom with either no background (high-contrast task) or a gelatin background (low-contrast task). The baseline was defined as 5.0mm for the first set of nodules and 8.0mm for the second. The phantom was scanned at varying exposures and reconstructed with slice thicknesses of 0.75, 1.5, and 3.0mm and two reconstruction kernels (standard and smooth). Volume measurements were derived using a previously developed matched-filter approach. Results showed that nodule size, slice thickness, and nodule-to-background contrast affected the detectable change in nodule volume with our volume estimator and the acquisition settings of this study. We also compared our experimental results to the values estimated by our previously developed MDC prediction method, and found that the experimental data for the 8mm baseline nodules matched the predicted MDC values very well. These results support the use of this metric when standardizing imaging protocols for assessing lung nodule size change.
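Under a simple equal-variance Gaussian measurement model (an assumption made here for illustration, not the paper's estimator), the AUC for distinguishing a changed volume from baseline, and the change needed to reach a target AUC, can be sketched as:

```python
import math

def auc_gaussian(delta_volume, sigma):
    """AUC for distinguishing a changed nodule volume from baseline when
    both volume measurements are Gaussian with equal standard deviation:
    AUC = Phi(delta / (sigma * sqrt(2)))."""
    z = delta_volume / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def minimum_detectable_change(sigma, auc_target=0.95):
    """Smallest volume change reaching the target AUC, i.e. one form of an
    MDC criterion (solved by bisection for simplicity)."""
    lo, hi = 0.0, 100.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if auc_gaussian(mid, sigma) < auc_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The sketch makes the qualitative findings plausible: anything that inflates the measurement spread sigma (thicker slices, lower contrast) directly raises the detectable change.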
Poster Session: Breast Imaging
Eigenbreasts for statistical breast phantoms
Gregory M. Sturgeon, Daniel J. Tward, M. Ketcha, et al.
To facilitate rigorous virtual clinical trials using model observers for breast imaging optimization and evaluation, we demonstrate a method of defining statistical models, based on 177 sets of breast CT patient data, to generate tens of thousands of unique digital breast phantoms. To separate anatomical texture from variation in breast shape, each training phantom was deformed to a consistent atlas compressed geometry. Principal component analysis (PCA) was then performed on the shape-matched breast CT volumes to capture the variation of patient breast textures. PCA decomposes the training set of N breast CT volumes into an (N-1)-dimensional space of eigenvectors, which we call eigenbreasts. By summing weighted combinations of eigenbreasts, a large ensemble of new breast phantoms can be created. Different training sets can be used in eigenbreast analysis to design basis models targeting sub-populations defined by breast characteristics such as size or density. In this work, we plan to generate ensembles of 30,000 new phantoms based on glandularity for an upcoming virtual trial of lesion detectability in digital breast tomosynthesis. Our method extends our series of digital and physical breast phantoms based on human subject anatomy, providing the capability to generate new, unique ensembles consisting of tens of thousands or more virtual subjects. This work represents an important step toward conducting future virtual trials for task-based assessment of breast imaging, where a large ensemble of realistic phantoms is vital for statistical power as well as clinical relevance.
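The eigenbreast construction and sampling can be sketched with a compact PCA-by-SVD implementation. This is illustrative: real volumes are far larger, and the sampling distribution for the weights would be fit to the training coefficients rather than assumed Gaussian as here:

```python
import numpy as np

def eigenbreast_basis(training_volumes):
    """PCA of N flattened, shape-matched training volumes via SVD of the
    centered data. Returns the mean volume, the N-1 'eigenbreast'
    components, and their singular values (the Nth singular value of
    centered data is ~0 and is dropped)."""
    X = np.asarray(training_volumes, dtype=float)   # (N, n_voxels)
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:-1], s[:-1]

def sample_phantom(mean, components, sing_vals, rng, n_train):
    """New phantom = mean + weighted sum of eigenbreasts, with weights
    drawn to match the training variance along each component."""
    std = sing_vals / np.sqrt(n_train - 1)
    coeffs = rng.normal(0.0, std)
    return mean + coeffs @ components
```

Because each draw of coefficients is independent, an arbitrarily large ensemble of distinct phantoms can be generated from a fixed training set, which is the mechanism behind the planned 30,000-phantom ensemble.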
Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system
Jens Ziegle, Bernhard H. Müller, Bernd Neumann, et al.
A new 3D breast computed tomography (CT) system is under development, enabling imaging of microcalcifications in a fully uncompressed breast, including posterior chest wall tissue. The system uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work and modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical arrangement of linear detectors held by a case that also hosts the breast. The detectors are separated by gaps that allow the passage of X-rays toward the breast volume; the detectors located directly opposite the gaps detect the incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and can thus cause motion artifacts. A major advantage of the presented design is therefore the combination of fixed detectors with a fast steering electron beam, which greatly reduces scan time; the reduced potential for motion artifacts improves the visualization of small structures such as microcalcifications. The simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors, which would require a flat-field correction; it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image was 0.2mm in size.
Simulation of spiculated breast lesions
Premkumar Elangovan, Faisal Alrehily, R. Ferrari Pinto, et al.
Virtual clinical trials are a promising new approach increasingly used for the evaluation and comparison of breast imaging modalities. A key component in such an assessment paradigm is the use of simulated pathology, in particular the simulation of lesions. Breast mass lesions can generally be classified into two categories based on their appearance: non-spiculated masses and spiculated masses. In our previous work, we successfully simulated non-spiculated masses using a fractal growth process known as diffusion limited aggregation (DLA). In this work, we have extended the DLA model to simulate spiculated lesions using features extracted from patient DBT images containing spiculated lesions. The extracted features included spicule length, width, curvature, and distribution. This information was used to simulate realistic-looking spicules, which were attached to the surface of a DLA mass to produce a spiculated mass. A batch of simulated spiculated masses was inserted into normal patient images and presented to an experienced radiologist for review. The study yielded promising results, with the radiologist rating 60% of simulated lesions in 2D and 50% of simulated lesions in DBT as realistic.
Estimation of mammary gland composition using CdTe series detector developed for photon-counting mammography
Akiko Ihori, Chizuru Okamoto, Tsutomu Yamakawa, et al.
Energy-resolved photon-counting mammography is a new technology that counts the number of photons passing through an object and presents it as a pixel value in an image of the object. Silicon semiconductor detectors are currently used in commercial photon-counting mammography; however, the disadvantage of silicon is its low absorption efficiency at high X-ray energies. A cadmium telluride (CdTe) series detector has high absorption efficiency over a wide energy range. In this study, we proposed a method to estimate the composition of the mammary gland using a CdTe series detector as a photon-counting detector. It is now widely accepted that the detection rate of breast cancer in mammography is affected by mammary gland composition, so assessment of this composition has important implications. An important advantage of our proposed technique is its ability to discriminate photons using three energy bins. We designed the CdTe series detector system using MATLAB simulation software. The phantom contains nine regions in which the ratio of glandular to adipose tissue varies in increments of 10%. The attenuation coefficient for each bin's energy was calculated from the number of input and output photons in that bin. Plotting the attenuation coefficients μ in a three-dimensional (3D) scatter plot showed that the points followed a regular order congruent with the mammary gland composition. Consequently, we believe that our proposed method can be used to estimate the composition of the mammary gland.
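The per-bin attenuation coefficient follows from applying Beer-Lambert to each energy bin's counts; as a minimal sketch:

```python
import math

def bin_attenuation(n_in, n_out, thickness_cm):
    """Linear attenuation coefficient for one energy bin from photon-counting
    data via Beer-Lambert: mu = ln(N_in / N_out) / t, with t the object
    thickness along the ray (cm) and mu in 1/cm."""
    return math.log(n_in / n_out) / thickness_cm
```

Computing this for each of the three bins yields the (μ1, μ2, μ3) triplet plotted as one point in the 3D scatter plot described above.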
Discrimination between normal breast tissue and tumor tissue using CdTe series detector developed for photon-counting mammography
Chizuru Okamoto, Akiko Ihori, Tsutomu Yamakawa, et al.
We propose a new mammography system using a cadmium telluride (CdTe) series photon-counting detector, which has high absorption efficiency over a wide energy range. In a previous study, we showed that the use of high X-ray energy in digital mammography is beneficial from the viewpoint of exposure dose and image quality. In addition, the CdTe series detector can acquire X-ray spectral information after transmission through a subject. This study focused on tissue composition identified using spectral information obtained by the new photon-counting detector. Normal breast tissue consists entirely of adipose and glandular tissues. However, it is very difficult to find tumor tissue within a region of glandular tissue on a conventional mammogram, especially in dense breasts, because the attenuation coefficients of glandular tissue and tumor tissue are very close. As a fundamental examination, we considered a simulation phantom and showed the difference between normal breast tissue and tumor tissue of various thicknesses in a three-dimensional (3D) scatter plot. We were able to discriminate between the two tissue types. In addition, the distribution tended to depend on the thickness of the tumor tissue: thinner tumor tissues appeared closer to normal breast tissue. This study also demonstrated that the difference between these tissues can be made apparent by using a CdTe series detector. We believe that this differentiation is important and therefore expect this technology to be applied to new tumor detection systems in the future.
Biopsy system guided by positron emission tomography in real-time
L. Moliner, J. Álamo, D. Hellingman, et al.
In this work we present the MAMMOCARE prototype, a PET-guided biopsy system. The system is composed of an examination table on which the patient lies in the prone position, a PET detector, and a biopsy device. The PET detector consists of two rings, which can be separated mechanically to allow needle insertion. The first acquisition is performed in the closed-ring configuration to obtain a high-quality image for locating the lesion. The software then calculates the optimal path for the biopsy and moves the biopsy and PET systems to the desired position. At this point, two compression paddles are used to hold the breast. The PET system then opens and the biopsy procedure starts. Images are acquired at several steps to confirm the correct location of the needle during the procedure. The performance of the system was evaluated by measuring spatial resolution and sensitivity according to the NEMA standard; the uniformity of the reconstructed images was also estimated. The radial resolution is 1.62 mm at the center of the FOV and 3.45 mm at 50 mm off-center in the radial direction using the closed configuration. In the open configuration the resolution is 1.85 mm at the center and 3.65 mm at 50 mm. The sensitivity using an energy window of 250-750 keV is 3.6% for the closed configuration and 2.5% for the open configuration. The uniformity measured at the center of the FOV is 14% and 18% for the closed and open configurations, respectively.
Feasibility of generating quantitative composition images in dual energy mammography: a simulation study
Breast cancer is one of the most common malignancies in women. For years, mammography has been the gold standard for localizing breast cancer, despite its limitations in determining cancer composition. The purpose of this simulation study is therefore to confirm the feasibility of obtaining tumor composition using dual energy digital mammography. To generate X-ray sources for dual energy mammography, 26 kVp and 39 kVp spectra were generated for the low- and high-energy beams, respectively. Energy subtraction and inverse mapping functions were then applied to produce compositional images. The resultant images showed that the breast composition obtained by the inverse mapping function with cubic fitting achieved the highest accuracy and the least noise. Furthermore, breast density analysis with cubic fitting showed less than 10% error compared to true values. In conclusion, this study demonstrated the feasibility of creating individual compositional images and the capability of analyzing breast density effectively.
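The inverse-mapping-with-cubic-fitting idea can be sketched with a toy calibration: fit a cubic polynomial that maps the low/high-energy signal difference back to glandular fraction. The signal models below are invented for illustration and are not the study's calibration data:

```python
import numpy as np

# hypothetical calibration: known glandular fractions and the
# corresponding low/high-energy log-signals from phantom scans
g_true = np.linspace(0.0, 1.0, 11)
s_low = 1.0 + 0.80 * g_true + 0.05 * g_true**2   # illustrative model
s_high = 1.0 + 0.30 * g_true + 0.02 * g_true**2

# inverse mapping: cubic fit from (s_low - s_high) back to glandularity
coeffs = np.polyfit(s_low - s_high, g_true, deg=3)

def glandular_fraction(low, high):
    """Estimate glandular fraction from a dual-energy signal pair."""
    return np.polyval(coeffs, low - high)
```

Applying `glandular_fraction` pixel-wise to a pair of dual-energy images would yield a compositional image in the spirit of the approach described above.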
Comparison of contrast enhancement methods using photon counting detector in spectral mammography
Hyemi Kim, Su-Jin Park, Byungdu Jo, et al.
A photon counting detector with energy discrimination capabilities provides spectral information and the energy of each photon in a single exposure. The energy-resolved photon counting detector makes it possible to improve the visualization of a contrast agent by selecting an appropriate energy window. In this study, we simulated a photon counting spectral mammography system using a Monte Carlo method and compared three contrast enhancement methods: K-edge imaging, projection-based energy weighting imaging, and dual energy subtraction imaging. For quantitative comparison, we used a homogeneous cylindrical breast phantom as a reference and a heterogeneous XCAT breast phantom. To evaluate the K-edge imaging method, we obtained images while increasing the energy window width around the K-edge absorption energy of iodine; iodine, which has a K-edge discontinuity in its attenuation coefficient curve, can thereby be separated from the background. The projection-based energy weighting factor was defined as the difference in transmission between the contrast agent and the background; a weighting factor was calculated as a function of photon energy and applied to each energy bin. For dual energy subtraction imaging, we acquired two images, below and above the iodine K-edge energy, from a single exposure. To suppress the breast tissue in the high-energy image, a weighting factor was applied equal to the ratio of the linear attenuation coefficients of breast tissue at the high and low energies. Our results demonstrated that the CNR improvement of K-edge imaging was the highest among the three methods. These imaging techniques based on the energy-resolved photon counting detector improved image quality by exploiting the spectral information.
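The dual-energy subtraction weighting described above can be sketched in log-signal space: weighting the low-energy log-signal by the ratio of tissue attenuation coefficients cancels the tissue term exactly, leaving only the iodine contribution. The attenuation values below are invented for illustration, not tabulated data:

```python
# illustrative linear attenuation coefficients (cm^-1); not tabulated values
MU = {"tissue": {"low": 0.80, "high": 0.45},
      "iodine": {"low": 2.0,  "high": 6.0}}   # K-edge jump above the edge

def de_subtraction(t_tissue, t_iodine):
    """Weighted log-subtraction of high/low-energy signals.

    The weight w = mu_tissue(high)/mu_tissue(low) cancels the tissue term,
    so the result depends only on the iodine path length.
    """
    log_low = MU["tissue"]["low"] * t_tissue + MU["iodine"]["low"] * t_iodine
    log_high = MU["tissue"]["high"] * t_tissue + MU["iodine"]["high"] * t_iodine
    w = MU["tissue"]["high"] / MU["tissue"]["low"]
    return log_high - w * log_low
```

With a tissue-only path the subtraction is zero by construction, while the iodine K-edge jump leaves a positive residual proportional to the iodine thickness.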
Grid-less imaging with antiscatter correction software in 2D mammography: the effects on image quality and MGD under a partial virtual clinical validation study
Nelis Van Peteghem, Frédéric Bemelmans, Xenia Bramaje Adversalo, et al.
This work investigated the effect of a grid-less acquisition mode with scatter correction software developed by Siemens Healthcare (PRIME mode) on image quality and mean glandular dose (MGD) in a comparative study against a standard mammography system with a grid. Image quality was quantified technically with contrast-detail (c-d) analysis and by calculating detectability indices (d') using a non-prewhitening model observer with eye filter (NPWE). MGD was estimated technically using slabs of PMMA and clinically on a set of 11,439 patient images. The c-d analysis gave similar results for all mammography systems examined, although the d' values were slightly lower for the system in PRIME mode compared to the same system in standard mode (-2.8% to -5.7%, depending on the PMMA thickness). The MGD values corresponding to the PMMA measurements with automatic exposure control indicated a dose reduction of 11.0% to 20.8% for PRIME mode compared to standard mode, with the largest dose reductions at the thinnest PMMA thicknesses. The clinical dosimetry study showed an overall population-averaged dose reduction of 11.6% (up to 27.7% for thinner breasts) for PRIME mode compared to standard mode for breast thicknesses from 20 to 69 mm. These technical image quality measures were then supported by a clinically oriented study in which simulated clusters of microcalcifications and masses were inserted into patient images and read by radiologists in an AFROC study to quantify their detectability. In line with the technical investigation, no significant difference was found between the two imaging modes (p-value 0.95).
DICOM organ dose does not accurately represent calculated dose in mammography
This study aims to analyze the agreement between the mean glandular dose estimated by the mammography unit (organ dose) and the mean glandular dose calculated using the published method of Dance et al. (calculated dose). Anonymised digital mammograms from 50 BreastScreen NSW centers were downloaded, and the exposure information required for the dose calculation was extracted from the DICOM header along with the organ dose estimated by the system. Data from annual quality assurance tests for the included centers were collected and used to calculate the mean glandular dose for each mammogram. Bland-Altman analysis and a two-tailed paired t-test were used to study the agreement between calculated and organ dose and the significance of any differences. A total of 27,869 dose points from 40 centers were included in the study; mean calculated dose and mean organ dose (± standard deviation) were 1.47 (±0.66) and 1.38 (±0.56) mGy, respectively. Bland-Altman analysis showed a statistically significant bias of 0.09 mGy (t = 69.25; p < 0.0001), with 95% limits of agreement between calculated and organ doses ranging from -0.34 to 0.52 mGy, indicating a small yet highly significant difference between the two means. Using organ dose for dose audits therefore risks over- or underestimating the calculated dose; further work is needed to identify the causes of the differences between organ and calculated doses and to derive a correction factor for organ dose.
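The Bland-Altman quantities used above (bias, 95% limits of agreement, paired t-statistic) are straightforward to compute; a minimal sketch with invented dose values (not the study's data):

```python
import numpy as np

def bland_altman(calculated, organ):
    """Bias, 95% limits of agreement, and paired t-statistic for two dose sets."""
    diff = np.asarray(calculated) - np.asarray(organ)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    t = bias / (sd / np.sqrt(diff.size))         # paired t-statistic
    return bias, loa, t

# illustrative doses in mGy (not the study's 27,869 data points)
bias, loa, t = bland_altman([1.5, 1.3, 1.7, 1.6], [1.4, 1.3, 1.5, 1.4])
```

The p-value is then read from the t-distribution with n-1 degrees of freedom (e.g., via `scipy.stats`); with the study's large n, even a small bias becomes highly significant.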
Analysis of the scatter effect on detective quantum efficiency of digital mammography
Jiwoong Park, Seungman Yun, Dong Woon Kim, et al.
The scatter effect on the detective quantum efficiency (DQE) of digital mammography is investigated using a cascaded-systems model that includes the scatter-reduction device as a binomial selection stage. Under quantum-noise-limited operation, the system DQE factorizes into the product of the scatter-reduction device DQE and the conventional detector DQE. The developed DQE model is validated by comparison with results measured using a CMOS flat-panel detector in scatter environments. Among the scatter-reduction devices investigated, the slot-scan method shows the best scatter-cleanup performance in terms of DQE, while the scatter-cleanup performance of the conventional one-dimensional grid is worse than that of the air gap. The developed model can also be applied to general radiography and should be useful for better design of the imaging chain.
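The multiplicative factorization can be sketched numerically. One common form for the DQE of a binomial-selection scatter-rejection stage under quantum-limited operation is Tp²/(Tp + Ts·SPR), where Tp and Ts are the primary and scatter transmission fractions and SPR is the scatter-to-primary ratio; this is an assumed illustrative form, not necessarily the exact model of the paper, and the numbers below are invented:

```python
def grid_dqe(tp, ts, spr):
    """DQE of a scatter-reduction device modeled as binomial selection.

    Signal is carried by transmitted primary (tp * N_p); quantum-limited
    noise variance is the total transmitted fluence (tp + ts*spr) * N_p,
    giving DQE = tp^2 / (tp + ts * spr).
    """
    return tp**2 / (tp + ts * spr)

def system_dqe(tp, ts, spr, detector_dqe):
    """Quantum-limited cascade: device DQE times detector DQE."""
    return grid_dqe(tp, ts, spr) * detector_dqe
```

Note that with no scatter (SPR = 0) the device DQE reduces to Tp, the familiar penalty for absorbed primary quanta.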
Optimal exposure techniques for iodinated contrast enhanced breast CT
Stephen J. Glick, Andrey Makeev
Screening for breast cancer using mammography has been very successful in the effort to reduce breast cancer mortality, and its use has largely driven the 30% reduction in breast cancer mortality observed since 1990 [1]. However, diagnostic mammography remains an area of breast imaging in great need of improvement. One imaging modality proposed for improving the accuracy of diagnostic workup is iodinated contrast-enhanced breast CT [2]. In this study, a mathematical framework is used to evaluate optimal exposure techniques for contrast-enhanced breast CT. The ideal observer signal-to-noise ratio (i.e., d') figure-of-merit is used to provide a task-based assessment of optimal acquisition parameters under the assumptions of a linear, shift-invariant imaging system. A parallel-cascade model was used to estimate signal and noise propagation through the detector, and a realistic lesion model with iodine uptake was embedded into a structured breast background. Ideal observer performance was investigated across kVp settings, filter materials, and filter thicknesses. Results indicate that many kVp spectrum/filter combinations can improve performance over currently used x-ray spectra.
Estimation of adipose compartment volumes in CT images of a mastectomy specimen
Anthropomorphic software breast phantoms have been utilized for preclinical quantitative validation of breast imaging systems. The efficacy of simulation-based validation depends on the realism of the phantom images. Anatomical measurements of the breast tissue, such as the size and distribution of adipose compartments or the thickness of Cooper's ligaments, are essential for realistic simulation of breast anatomy; such measurements are, however, not readily available in the literature. In this study, we assessed the statistics of adipose compartments as visualized in CT images of a total mastectomy specimen. The specimen was preserved in formalin and imaged using a standard body CT protocol at high X-ray dose. A human operator manually segmented adipose compartments in the reconstructed CT images using the ITK-SNAP software and calculated the volume of each compartment. In addition, the time needed for the manual segmentation and the operator's confidence were recorded. The average volume, standard deviation, and probability distribution of compartment volumes were estimated from 205 segmented adipose compartments. We also examined potential correlations between segmentation time, operator confidence, and compartment volume. Statistical tests indicated that the estimated compartment volumes do not follow a normal distribution. The compartment volumes were found to be correlated with segmentation time; no significant correlation was found between volume and operator confidence. The study is limited by the positioning of the mastectomy specimen. The analysis of compartment volumes will inform the development of more realistic breast anatomy simulation.
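A correlation between compartment volume and segmentation time, as reported above, is typically checked with a rank correlation since the volumes are not normally distributed. A minimal Spearman implementation (assuming no tied values; the example data are invented):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no ties assumed in this sketch)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

volumes = [1.2, 0.4, 3.1, 0.9, 2.2]   # ml, illustrative
times = [35, 12, 80, 30, 55]          # seconds, illustrative
rho = spearman_rho(volumes, times)
```

In practice `scipy.stats.spearmanr` (which also handles ties and returns a p-value) would be the usual choice; the sketch shows the rank-based mechanics.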
A new paradigm of dielectric relaxation spectroscopy for non-invasive detection of breast abnormalities: a preliminary feasibility analysis
Sreeram Dhurjaty, Yuchen Qiu, Maxine Tan, et al.
In order to improve the efficacy of screening mammography, in recent years we have been investigating the feasibility of applying a resonance-frequency based electrical impedance spectroscopy (REIS) technology to noninvasively detect breast abnormalities that may lead to the development of cancer in the near term. Despite promising study results, we found that REIS suffered from relatively poor reproducibility due to perturbations in electrode placement, variation of contact pressure on the breast, and variation of the resonating inductor. To overcome this limitation, in this study we propose and analyze a new paradigm of dielectric relaxation spectroscopy (DRS) that measures the polarization lag of dielectric signals in the breast capacitance when excited by pulses or sine waves. Unlike conventional DRS, which operates at very high frequencies (GHz) to examine changes in polarization, our new method detects and characterizes the dielectric properties of tissue at low frequencies (≤10 MHz), enabled by the advent of inexpensive oscillators that are accurate to 1 picosecond (used in GPS receivers) and by amplitude measurements of 1 ppm or better. From theoretical analysis, we show that the sensitivity of the new DRS in detecting the permittivity of water is increased by a factor of 80 or more compared to conventional DRS operating at frequencies around 4 GHz. By analyzing and comparing the relationship between the new DRS and REIS, we found that this DRS has potential advantages in enhancing repeatability across readings, including temperature-insensitive detection, and in yielding higher resolution or sensitivity (up to 100 femtofarads).
Reduction of artifacts in computer simulation of breast Cooper's ligaments
Anthropomorphic software breast phantoms have been introduced as a tool for quantitative validation of breast imaging systems. The efficacy of the validation results depends on the realism of the phantom images. The recursive partitioning algorithm based on octree simulation has been demonstrated to be versatile and capable of efficiently generating large numbers of phantoms to support virtual clinical trials of breast imaging. Previously, we observed specific artifacts (here labeled "dents") on the boundaries of simulated Cooper's ligaments. In this work, we demonstrate that these dents result from the approximate determination of the closest simulated ligament to an examined subvolume (i.e., octree node) of the phantom. We propose a modification of the algorithm that determines the closest ligament by considering a pre-specified number of neighboring ligaments, selected based upon the functions that govern the shape of the ligaments simulated in the subvolume. We have demonstrated qualitatively and quantitatively that the modified algorithm can eliminate or reduce dent artifacts in software phantoms. In a proof-of-concept example, we simulated a 450 ml phantom with 333 compartments at 100 micrometer resolution. With the proposed modification, we corrected 148,105 dents, with an average size of 5.27 voxels (5.27 nl). We have also qualitatively analyzed the corresponding improvement in the appearance of simulated mammographic images. The proposed algorithm reduces the linear and star-like artifacts in simulated phantom projections that can be attributed to dents. Analysis of a larger number of phantoms is ongoing.
Radiation dose differences between digital mammography and digital breast tomosynthesis are dependent on breast thickness
Maram M. Alakhras, Claudia Mello-Thoms, Roger Bourne, et al.
Purpose: To evaluate the radiation dose from digital mammography (DM) and digital breast tomosynthesis (DBT) at different tube current-exposure time products (mAs) and at six phantom thicknesses from 10 to 60 mm. Materials and Methods: A total of 240 DM and DBT cranio-caudal (CC) phantom images were acquired at each thickness and at four exposure levels (the baseline mAs and 50%, 25%, and 12.5% of the baseline mAs). The incident air kerma (K) at the surface of the phantoms was measured using a solid-state dosimeter, and mean glandular doses (MGD) were calculated for both modalities. Results: The DBT dose was greater than the DM dose for all mAs at each phantom thickness. For a breast thickness of 50 mm (close to an average-sized breast), the dose for DBT (2.32 mGy) was 13% higher than that for DM (2.05 mGy). The difference in MGD between DM and DBT was smaller for the thicker phantoms: approximately a factor of 2.58 at 10 mm compared with a factor of 1.08 at 60 mm. While the MGD increased with phantom thickness for both modalities, the increase was smaller for DBT than for DM, the difference between 10 and 60 mm being a factor of 7 for DM and 3 for DBT. Conclusion: The radiation dose from DBT was higher than that from DM, and the difference in dose between the two decreases as phantom thickness increases.
Poster Session: Cone Beam CT
Scatter correction in CBCT with an offset detector through a deconvolution method using data consistency
Changhwan Kim, Miran Park, Hoyeon Lee, et al.
Our earlier work demonstrated that the data consistency condition can be used as a criterion for scatter kernel optimization in deconvolution methods for full-fan mode cone-beam CT [1]. However, this scheme cannot be directly applied to a CBCT system with an offset detector (half-fan mode) because of transverse data truncation in the projections. In this study, we propose a modified scatter kernel optimization scheme that can be used in half-fan mode cone-beam CT and demonstrate its feasibility. Using a first-pass volume image reconstructed from the half-fan projection data, we synthesized full-fan projection data by forward projection. The synthesized full-fan projections were used in part to fill the truncated regions of the half-fan data, allowing us to apply the existing data-consistency-driven scatter kernel optimization method. The proposed method was validated in a simulation study using the XCAT numerical phantom and in an experimental study using the ACS head phantom.
Deblurring in iterative reconstruction of half CBCT for image guided brain radiosurgery
SayedMasoud Hashemi, Young Lee, Markus Eriksson, et al.
A high spatial resolution iterative reconstruction algorithm is proposed for a half cone beam CT (HCBCT) geometry. The proposed algorithm improves spatial resolution by explicitly accounting for image blurriness caused by different factors, such as extended X-ray source and detector response. The blurring kernel is estimated using the MTF slice of the Catphan phantom. The proposed algorithm is specifically optimized for the new Leksell Gamma Knife Icon (Elekta AB, Stockholm, Sweden) which incorporates the HCBCT geometry to accommodate the existing treatment couch while covering down to the base-of-skull in the reconstructed field-of-view. Image reconstruction involves a Fourier-based scaling simultaneous algebraic reconstruction technique (SART) coupled with total variation (TV) minimization and non-local mean denoising, solved using a split Bregman separation technique that splits the reconstruction problem into a gradient based updating step and a TV-based deconvolution algorithm. This formulation preserves edges and reduces the staircase effect caused by regular TV-penalized iterative algorithms. Our experiments indicate that our proposed method outperforms the conventional filtered back projection and TV penalized SART methods in terms of line pair resolution and retains the favorable properties of the standard TV-penalized reconstruction.
A high spatial resolution iterative reconstruction algorithm is proposed for a half cone beam CT (HCBCT) geometry. The proposed algorithm improves spatial resolution by explicitly accounting for the image blur caused by factors such as the extended X-ray source and the detector response. The blurring kernel is estimated using the MTF slice of the Catphan phantom. The algorithm is specifically optimized for the new Leksell Gamma Knife Icon (Elekta AB, Stockholm, Sweden), which uses the HCBCT geometry to accommodate the existing treatment couch while covering down to the base of skull in the reconstructed field-of-view. Image reconstruction involves a Fourier-based scaling simultaneous algebraic reconstruction technique (SART) coupled with total variation (TV) minimization and non-local means denoising, solved using a split Bregman technique that separates the reconstruction problem into a gradient-based update step and a TV-based deconvolution step. This formulation preserves edges and reduces the staircase effect caused by standard TV-penalized iterative algorithms. Our experiments indicate that the proposed method outperforms conventional filtered back projection and TV-penalized SART in terms of line-pair resolution while retaining the favorable properties of standard TV-penalized reconstruction.
Imaging characteristics of distance-driven method in a prototype cone-beam computed tomography (CBCT)
Cone-beam computed tomography (CBCT) has been widely used and studied in both medical imaging and radiation therapy. The aim of this study was to evaluate our newly developed CBCT system by implementing a distance-driven system modeling technique to produce accurate cross-sectional images. To compare the performance of the distance-driven method, we also implemented pixel-driven and ray-driven techniques for the forward- and back-projection steps. We used the Feldkamp-Davis-Kress (FDK) algorithm and the simultaneous algebraic reconstruction technique (SART) to recover volumetric information for a scanned chest phantom. The contrast-to-noise ratios (CNR) of the images reconstructed with FDK and SART were 8.02 and 15.78 for the distance-driven scheme, compared with 4.02 and 5.16 for the pixel-driven scheme and 7.81 and 13.01 for the ray-driven scheme, respectively. This demonstrates that the distance-driven method modeled the chest phantom more faithfully than the pixel- and ray-driven methods. However, both the time to build the system matrix and the reconstruction time were longer with the distance-driven scheme. Future work will therefore be directed toward reducing the computational time to limits acceptable for real applications.
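The distance-driven idea — mapping pixel and detector-cell boundaries onto a common axis and weighting by footprint overlap — can be sketched in 1D. This is a generic illustration of the technique, not the prototype's implementation; the geometry is assumed already projected onto the common axis:

```python
import numpy as np

def distance_driven_project(pixel_vals, pixel_edges, det_edges):
    """1D distance-driven forward projection.

    Pixel and detector-cell boundaries are assumed already mapped onto a
    common axis; each detector cell accumulates pixel values weighted by
    the overlap length between cell and pixel footprints.
    """
    det = np.zeros(len(det_edges) - 1)
    for i, v in enumerate(pixel_vals):
        p0, p1 = pixel_edges[i], pixel_edges[i + 1]
        for j in range(len(det_edges) - 1):
            d0, d1 = det_edges[j], det_edges[j + 1]
            overlap = max(0.0, min(p1, d1) - max(p0, d0))
            det[j] += v * overlap / (p1 - p0)
    return det
```

Because contributions are weighted by exact overlap, total signal is preserved when the detector cells cover the pixel grid — the property that avoids the interpolation artifacts of pure pixel- or ray-driven schemes.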
Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head
The aim of this work is to develop a method to calculate the dose to the lens of the eye for fluoroscopically-guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the lens dose for the projection geometry and exposure parameters used in these procedures; this information is provided by a digital CAN bus on the Toshiba Infinix C-arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra of this machine were simulated using BEAMnrc, compared to those determined by SpekCalc, and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom, comparing the surface dose distribution with that measured with Gafchromic film, and for an SK150 head phantom, comparing the lens dose with that measured with an ionization chamber; both comparisons showed good agreement. Organ doses calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with those calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp settings. The total lens dose is then calculated by summing the values over all beam projections and can be included in the DTS report at the end of the procedure.
A fast GPU-based approach to branchless distance-driven projection and back-projection in cone beam CT
Daniel Schlifske, Henry Medeiros
Modern CT image reconstruction algorithms rely on projection and back-projection operations to refine an image estimate in iterative image reconstruction. A widely used state-of-the-art technique is distance-driven projection and back-projection. While the distance-driven technique yields superior image quality in iterative algorithms, it is computationally demanding, which limits the relevance of these algorithms in clinical settings. A few methods have been proposed to accelerate the distance-driven technique on modern computer hardware. This paper explores a two-dimensional extension of the branchless method proposed by Samit Basu and Bruno De Man. The extension is named "pre-integration" because it achieves a significant performance boost by integrating the data before the projection and back-projection operations. It was written with Nvidia's CUDA platform and carefully designed for massively parallel GPUs. The performance and image quality of the pre-integration method were analyzed: both projection and back-projection are significantly faster with pre-integration. Image quality was analyzed using cone beam image reconstruction algorithms within Jeffrey Fessler's Image Reconstruction Toolbox; images produced by regularized iterative reconstruction using the pre-integration method show no significant impact on image quality.
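The branchless trick can be sketched in 1D: pre-integrate the signal once (a cumulative sum at the cell edges, the 1D analogue of an integral image), after which the overlap integral for any detector-cell footprint is a difference of two interpolated samples — no per-pixel branching on boundary cases. This is a generic illustration of the idea, not the paper's CUDA implementation:

```python
import numpy as np

def overlap_integrals(values, edges, sample_pts):
    """Branchless overlap integration via pre-integration.

    `values` are piecewise-constant over cells bounded by `edges`.
    Pre-integrating gives the running integral at each edge; the integral
    over [a, b] is then interp(b) - interp(a) on that integral table.
    """
    widths = np.diff(edges)
    integral = np.concatenate([[0.0], np.cumsum(values * widths)])
    samples = np.interp(sample_pts, edges, integral)
    return np.diff(samples)  # integral over each consecutive interval
```

On a GPU, the interpolated lookups are uniform, branch-free memory reads, which is what makes the pre-integration formulation fast; the 2D extension uses a summed-area table in the same way.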
A system to track skin dose for neuro-interventional cone-beam computed tomography (CBCT)
The skin-dose tracking system (DTS) provides a color-coded illustration of the cumulative skin-dose distribution on a closely-matching 3D graphic of the patient in real time during fluoroscopic interventions, giving immediate feedback to the interventionist. The skin-dose tracking utility of the DTS has been extended to include cone-beam computed tomography (CBCT) in neuro-interventions. While the DTS was developed to track the entrance skin dose including backscatter, a significant part of the dose in CBCT is contributed by exit primary radiation and scatter due to the many overlapping projections during the rotational scan. The variation of backscatter inside and outside the collimated beam was measured with radiochromic film, and a curve was fit to obtain a scatter spread function that could be applied in the DTS. Likewise, the exit dose distribution was measured with radiochromic film for a single projection, and a correction factor was determined as a function of path length through the head. Both of these sources of skin dose are added for every projection in the CBCT scan to obtain a total dose mapping over the patient graphic. Results show that the backscatter follows a sigmoidal falloff near the edge of the beam, extending as far as 8 cm outside the beam. The exit dose measured for a cylindrical CTDI phantom was nearly 10% of the entrance peak skin dose for the central ray. The dose mapping performed by the DTS for a CBCT scan agreed well with that measured with radiochromic film and a CTDI head phantom.
Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction
Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and the best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
Properties of the ellipse-line-ellipse trajectory with asymmetrical variations
Zijia Guo, Frédéric Noo, Andreas Maier, et al.
Three-dimensional cone-beam (CB) imaging using a multi-axis floor-mounted (or ceiling-mounted) C-arm system has become an important tool in interventional radiology. This success motivates new developments to improve image quality; one direction in which advancement is sought is the data acquisition geometry and the related CB artifacts. Currently, data acquisition is performed using the circular short-scan trajectory, which yields limited axial coverage and provides incomplete data for accurate reconstruction. To improve image quality and to increase coverage in the longitudinal direction of the patient, we recently introduced the ellipse-line-ellipse (ELE) trajectory and showed that it provides full R-line coverage within the field-of-view, which is a key property for accurate reconstruction from truncated data. An R-line is any line segment that connects two source positions. Here, we examine how asymmetrical variations in the definition of the ELE trajectory affect the R-line coverage. This question is significant for understanding how much flexibility is available in implementing the ELE trajectory, particularly to adapt the scan to the patient anatomy and the imaging task of interest. Two types of asymmetrical variations, called axial and angular variations, are investigated.
Library-based scatter correction for dedicated cone beam breast CT: a feasibility study
Purpose: Scatter errors are detrimental to cone-beam breast CT (CBBCT) accuracy and obscure the visibility of calcifications and soft-tissue lesions. In this work, we propose a practical yet effective scatter correction for CBBCT using a library-based method and investigate its feasibility via small-group patient studies. Method: Based on a simplified breast model with varying breast sizes, we generate a scatter library using Monte Carlo (MC) simulation. Breasts are approximated as semi-ellipsoids with a homogeneous glandular/adipose tissue mixture. For each patient CBBCT projection dataset, an initial estimate of the scatter distribution is selected from the pre-computed scatter library by measuring the corresponding breast size on the raw projections and the glandular fraction on a first-pass CBBCT reconstruction. The selected scatter distribution is then modified by estimating the spatial translation of the breast between the MC simulation and the clinical scan. Scatter correction is finally performed by subtracting the estimated scatter from the raw projections. Results: On two sets of clinical patient CBBCT data with different breast sizes, the proposed method effectively reduces the cupping artifact and improves the image contrast by an average factor of 2, with an efficient processing time of 200 ms per cone-beam projection. Conclusion: Compared with existing scatter correction approaches for CBBCT, the proposed library-based method is clinically advantageous in that it requires no additional scans or hardware modifications. As the MC simulations are pre-computed, our method achieves high computational efficiency on each patient dataset. The library-based method shows great promise as a practical tool for effective scatter correction in clinical CBBCT.
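The library lookup-and-subtract step can be sketched with a toy library keyed by breast size and glandular fraction. The library contents, keys, and weighting of the nearest-entry metric below are all hypothetical placeholders, not the authors' MC data:

```python
import numpy as np

# hypothetical pre-computed MC scatter library keyed by
# (breast diameter in cm, glandular fraction); flat profiles for brevity
scatter_library = {
    (10, 0.2): np.full(8, 4.0),   # scatter profiles in arbitrary units
    (10, 0.4): np.full(8, 5.0),
    (14, 0.2): np.full(8, 7.0),
}

def correct_projection(raw, diameter_cm, glandular_frac):
    """Pick the nearest library entry and subtract it from the raw projection."""
    key = min(scatter_library,
              key=lambda k: abs(k[0] - diameter_cm)
                            + 10 * abs(k[1] - glandular_frac))
    return raw - scatter_library[key]
```

In the actual method the selected distribution is additionally shifted to account for breast translation before subtraction; the sketch shows only the selection and subtraction logic.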
Ring artifacts removal via spatial sparse representation in cone beam CT
Zhongyuan Li, Guang Li, Yi Sun, et al.
This paper presents a ring artifact removal method for cone beam CT. Cone beam CT images often suffer from ring artifacts caused by the non-uniform responses of detector elements. Conventional removal methods focus on the correlation of the elements and the structural characteristics of the ring artifacts in either the sinogram domain or the cross-sectional image. The challenge for these methods is distinguishing the artifacts from intrinsic structures; hence they often produce blurred results due to over-processing. In this paper, we investigate the characteristics of the ring artifacts in spatial space: unlike the continuous 3D texture of the scanned object, the ring artifacts appear discontinuously in spatial space, specifically along the z-axis. We can therefore recognize the ring artifacts more easily in spatial space than in the cross-section. Accordingly, we choose dictionary representation for ring artifact removal because of its high sensitivity to structural information. We verified our approach both in spatial space and in the coronal section; the experimental results demonstrate that our method removes the artifacts efficiently while preserving image details.
Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies
Wenlei Liu, Junyan Rong, Peng Gao, et al.
X-ray scatter, along with beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and inaccurate CT numbers. Meanwhile, the x-ray radiation dose is also non-negligible. Many scatter and beam hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce radiation. First, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead blocker shadow is attributable only to x-ray scatter. Second, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction was carried out for sparse-view CT reconstruction to reduce the radiation dose. Preliminary Monte Carlo simulated experiments indicate that, with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4, and increases the CNR by a factor of 15. The proposed method provides good reconstructed images from a few projection views, effectively suppressing artifacts caused by scatter and beam hardening while reducing the radiation dose. This framework and modeling may provide a new way forward for low-dose CT imaging.
Poster Session: CT: Artifact Corrections
MADR: metal artifact detection and reduction
Metal in CT-imaged objects drastically reduces image quality due to the severe artifacts it can cause. Most metal artifact reduction (MAR) algorithms treat the metal-affected sinogram portions as corrupted data and replace them via sophisticated interpolation methods. While these schemes are successful in removing the metal artifacts, they fail to recover some of the edge information. To address these problems, the frequency shift metal artifact reduction algorithm (FSMAR) was recently proposed. It exploits the information hidden in the uncorrected image and combines the high-frequency (edge) components of the uncorrected image with the low-frequency components of the corrected image. Although this can effectively transfer the edge information of the uncorrected image, it also introduces some unwanted artifacts. The essential problem of these algorithms is that they lack the capability to detect the artifacts and, as a result, cannot discriminate between desired and undesired edges. We propose a scheme that does better in these respects. Our Metal Artifact Detection and Reduction (MADR) scheme constructs a weight map that stores whether a pixel in the uncorrected image belongs to an artifact region or a non-artifact region. This weight matrix is optimal in the linear minimum mean square error (LMMSE) sense. Our results demonstrate that MADR outperforms the existing algorithms and ensures that the anatomical structures close to metal implants are better preserved.
Metal artifact reduction in CT via ray profile correction
In computed tomography (CT), metal implants increase the inconsistencies between the measured data and the linear assumption made by the analytical CT reconstruction algorithm. The inconsistencies appear in the form of dark and bright bands and streaks in the reconstructed image, collectively called metal artifacts. The standard method for metal artifact reduction (MAR) replaces the inconsistent data with interpolated data. However, sinogram interpolation not only introduces new artifacts but also suffers from the loss of detail near the implanted metals. With the help of a prior image, usually estimated from the metal artifact-degraded image via computer vision techniques, improvements are feasible, but still no MAR method exists that is widely accepted and utilized. We propose a technique that utilizes a prior image from a CT scan taken of the patient before implanting the metal objects, so that there is a sufficient amount of structural similarity to cover the loss of detail around the metal implants. Using the prior scan and a segmentation or model of the metal implant, our method then replaces sinogram interpolation with ray profile matching and estimation, which yields much more reliable data estimates for the affected sinogram regions. As preliminary work, we built a new MAR framework for fan-beam geometry and tested it on simulated metal artifacts in a thorax phantom. The comparison with two representative sinogram-correction-based MAR methods shows very promising results.
Line-ratio based ring artifact correction method using transfer function
Daejoong Oh, Dosik Hwang, Younguk Kim
Computed tomography (CT) is widely used for medical purposes, but CT images suffer from various artifacts that distort the image. Ring artifacts are caused by the non-uniform sensitivity of the detector elements and appear as ring-shaped structures. The line-ratio method was proposed to solve this problem, but it fails in some specific cases. We therefore propose an improved ring artifact correction method that uses a transfer function, so that ring artifacts can be removed in more general cases. Simulation data show that the proposed method outperforms the conventional line-ratio method.
Beam hardening and motion artifacts in cardiac CT: evaluation and iterative correction method
For myocardial perfusion CT exams, beam hardening (BH) artifacts may degrade the accuracy of myocardial perfusion defect detection. Meanwhile, cardiac motion may make the BH process inconsistent, rendering conventional BH correction (BHC) methods ineffective. The aims of this study were to assess the severity of BH and motion artifacts and to propose a projection-based iterative BHC method that has the potential to handle the motion-induced inconsistency better than conventional methods. In this study, four sets of forward projection data were first acquired using both cylindrical phantoms and cardiac images as objects: (1) monochromatic x-rays without motion; (2) polychromatic x-rays without motion; (3) monochromatic x-rays with motion; and (4) polychromatic x-rays with motion. From each dataset, images were reconstructed using filtered back projection; for datasets 2 and 4, one of the following BHC methods was also performed: (A) no BHC; (B) BHC for water only; and (C) BHC that takes both water and iodine into account, an iterative method we developed in this work. Image bias was quantified by the mean absolute difference (MAD). The MAD of images with BH artifacts alone (dataset 2, without BHC) was comparable to or larger than that of images with motion artifacts alone (dataset 3): in the cardiac image study, BH artifacts accounted for over 80% of the total artifacts. The use of BHC was effective: with dataset 4, MAD values were 170 HU with no BHC, 54 HU with water BHC, and 42 HU with the proposed BHC. Qualitative improvements in image quality were also noticeable in the reconstructed images.
Poster Session: CT: Technology, System Characterization, Dose, Applications
A geometric calibration method for inverse geometry computed tomography using P-matrices
Accurate and artifact-free reconstruction of tomographic images requires precise knowledge of the imaging system geometry. This work proposes a novel projection matrix (P-matrix) based calibration method to enable C-arm inverse geometry CT (IGCT). The method is evaluated for scanning-beam digital x-ray (SBDX), a C-arm mounted inverse geometry fluoroscopic technology. A helical configuration of fiducials is imaged at each gantry angle in a rotational acquisition. For each gantry angle, digital tomosynthesis is performed at multiple planes and a composite image analogous to a cone-beam projection is generated from the plane stack. The geometry of the C-arm, source array, and detector array is determined at each angle by constructing a parameterized 3D-to-2D projection matrix that minimizes the sum-of-squared deviations between measured and projected fiducial coordinates. Simulations were used to evaluate calibration performance with translations and rotations of the source and detector. In a geometry with 1 mm translation of the central ray relative to the axis-of-rotation and 1 degree yaw of the detector and source arrays, the maximum error in the recovered translational parameters was 0.4 mm and maximum error in the rotation parameter was 0.02 degrees. The relative root-mean-square error in a reconstruction of a numerical thorax phantom was 0.4% using the calibration method, versus 7.7% without calibration. Changes in source-detector-distance were the most challenging to estimate. Reconstruction of experimental SBDX data using the proposed method eliminated double contour artifacts present in a non-calibrated reconstruction. The proposed IGCT geometric calibration method reduces image artifacts when uncertainties exist in system geometry.
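The core of the calibration — projecting known 3D fiducial positions through a candidate 3×4 P-matrix and scoring the sum of squared deviations against the measured 2D coordinates — can be sketched as follows. This is a minimal illustration with a hypothetical identity-like camera; the actual method parameterizes P by the C-arm geometry and minimizes this cost at each gantry angle.

```python
import numpy as np

def project(P, points3d):
    """Apply a 3x4 projection matrix to Nx3 world points, returning
    Nx2 detector coordinates after the homogeneous divide."""
    homog = np.hstack([points3d, np.ones((len(points3d), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_sse(P, points3d, measured2d):
    """Sum of squared deviations between measured and projected fiducial
    coordinates -- the quantity the calibration minimizes over P."""
    return float(np.sum((project(P, points3d) - measured2d) ** 2))

# hypothetical identity-like camera: u = x/z, v = y/z
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
fiducials = np.array([[1.0, 2.0, 2.0],
                      [0.0, 1.0, 1.0]])
measured = np.array([[0.5, 1.0],
                     [0.0, 1.0]])
cost = reprojection_sse(P, fiducials, measured)
```

In practice a nonlinear least-squares solver would adjust the geometric parameters that generate P until this cost is minimized.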
CT dose minimization using personalized protocol optimization and aggressive bowtie
In this study, we propose to use patient-specific x-ray fluence control to reduce the radiation dose to sensitive organs while still achieving the desired image quality (IQ) in the region of interest (ROI). The mA modulation profile is optimized view by view, based on the sensitive organs and the ROI, which are obtained from an ultra-low-dose volumetric CT scout scan [1]. We use a clinical chest CT scan to demonstrate the feasibility of the proposed concept: the breast region is selected as the sensitive organ region while the cardiac region is selected as the IQ ROI. Two groups of simulations are performed based on the clinical CT dataset: (1) a constant mA scan adjusted based on the patient attenuation (120 kVp, 300 mA), which serves as the baseline; (2) an optimized scan with an aggressive bowtie and ROI centering combined with patient-specific mA modulation. The results show that the combination of the aggressive bowtie and the optimized mA modulation can yield a 40% dose reduction in the breast region while the IQ in the cardiac region is maintained. More generally, this paper demonstrates the concept of using a 3D scout scan for optimal scan planning.
Segmentation-free x-ray energy spectrum estimation for computed tomography
The x-ray energy spectrum plays an essential role in imaging and related tasks. Due to the high photon flux of clinical CT scanners, most spectrum estimation methods are indirect and usually suffer from various limitations. The recently proposed indirect transmission measurement-based method requires segmentation of at least one material, which is difficult for CT images that are highly noisy or contain artifacts. To overcome this bottleneck of spectrum estimation from segmented CT images, we develop a segmentation-free indirect transmission measurement-based energy spectrum estimation method using dual-energy material decomposition. The general principle of the method is to compare the polychromatic forward projection with the raw projection to calibrate a set of unknown weights, which express the unknown spectrum as a combination of a set of model spectra. After applying dual-energy material decomposition using high- and low-energy raw projection data, polychromatic forward projection is conducted on the material-specific images. The unknown weights are then iteratively updated to minimize the difference between the raw projection and the estimated projection. Both numerical simulations and an experimental head phantom study are used to evaluate the proposed method. The results indicate that the method provides an accurate estimate of the spectrum, and it may be attractive for dose calculation, artifact correction, and other clinical applications.
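Because the transmitted intensity is linear in the model-spectrum weights, the weight calibration can be illustrated with a tiny noise-free example. All energies, attenuation values, and model spectra below are made-up numbers; the actual method iterates on real dual-energy material-decomposed projections rather than solving a single linear system.

```python
import numpy as np

energies = np.array([40.0, 60.0, 80.0])      # keV bins (illustrative)
mu_water = np.array([0.27, 0.21, 0.18])      # 1/cm, illustrative values
# two normalized model spectra; the unknown spectrum is a weighted sum of these
models = np.array([[0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
true_w = np.array([0.3, 0.7])
true_spec = true_w @ models

# transmitted intensity through thickness t of water:
#   I(t) = sum_i w_i * sum_E m_i(E) exp(-mu(E) t)  -- linear in the weights
thicknesses = np.linspace(0.0, 20.0, 10)     # cm
A = np.array([[m @ np.exp(-mu_water * t) for m in models]
              for t in thicknesses])
measured = A @ true_w                        # noise-free "raw projection" data

# recover the weights by matching estimated to measured projections
w_hat, *_ = np.linalg.lstsq(A, measured, rcond=None)
spec_hat = w_hat @ models
```

With noisy data and more model spectra, the paper's iterative update with a nonnegativity constraint takes the place of this one-shot least-squares solve.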
Noise power spectrum studies of CT systems with off-centered image object and bowtie filter
Daniel Gomez-Cardona, Juan P. Cruz-Bastida, Ke Li, et al.
In previous studies of the noise power spectrum (NPS) of multi-detector CT (MDCT) systems, the image object was usually placed at the iso-center of the CT system; therefore, the bowtie filter had negligible impact on the shape of the two-dimensional (2D) NPS of MDCT. This work characterized the NPS of off-centered objects when a bowtie filter is present. It was found that the interplay between the bowtie filter and the object position has a significant impact on the rotational symmetry of the 2D NPS. Depending on the size of the bowtie filter, the degree of object off-centering, and the location of the region of interest (ROI) used for the NPS measurements, the symmetry of the 2D NPS can be circular, dumbbell-shaped, or a peculiar cloverleaf shape. An anisotropic NPS corresponds to structured noise texture, which may directly influence detection performance for certain low-contrast detection tasks.
Prototype adaptive bow-tie filter based on spatial exposure time modulation
In recent years, there has been an increased interest in the development of dynamic bow-tie filters that are able to provide patient-specific x-ray beam shaping. We introduce the first physical prototype of a new adaptive bow-tie filter design based on the concept of “spatial exposure time modulation.” While most existing bow-tie filters operate by attenuating the radiation beam differently in different locations using partially attenuating objects, the presented filter shapes the radiation field using two movable, completely radio-opaque collimators. The aperture and speed of the collimators are modulated in synchrony with the x-ray exposure to selectively block the radiation emitted to different parts of the object. This mode of operation does not allow the reproduction of every possible attenuation profile, but it can reproduce the profile of any object with an attenuation profile monotonically decreasing from the center to the periphery, such as an object with an elliptical cross section. Therefore, the new adaptive filter provides the same advantages as existing static bow-tie filters, which are typically designed to work for a pre-determined cylindrical object at a fixed distance from the source, and provides the additional capability to adapt its performance at image acquisition time to better compensate for the actual diameter and location of the imaged object. A detailed description of the prototype filter, the implemented control methods, and a preliminary experimental validation of its performance are presented.
Estimation of breast dose saving potential using a breast positioning technique for organ-based tube current modulated CT
Wanyi Fu, Xiaoyu Tian, Gregory Sturgeon, et al.
In thoracic CT, organ-based tube current modulation (OTCM) reduces breast dose by lowering the tube current in the 120° anterior dose reduction zone. However, in practice the breasts usually extend beyond the dose reduction zone. This work aims to simulate a breast positioning technique (BPT) that constrains the breast tissue to the dose reduction zone for OTCM and to evaluate the corresponding potential reduction in breast dose. Thirteen female anthropomorphic computational phantoms were studied (age range: 27-65 y.o., weight range: 52-105.8 kg). Each phantom was modeled in the supine position with and without application of the BPT. Attenuation-based tube current (ATCM, reference mA) was generated by a ray-tracing program, taking into account the patient attenuation change in the longitudinal and angular planes (CAREDose4D, Siemens Healthcare). OTCM was generated by reducing the mA to 20% between ±60° anterior of the patient and increasing the mA in the remaining projections correspondingly (X-CARE, Siemens Healthcare) to maintain the mean tube current. Breast tissue dose was estimated using a validated Monte Carlo program for a commercial scanner (SOMATOM Definition Flash, Siemens Healthcare). Compared to standard tube current modulation, breast dose was significantly reduced using OTCM, by 19.8±4.7%. With the BPT, breast dose was reduced by an additional 20.4±6.5%, to 37.1±6.9%, at the same CTDIvol. The BPT was more effective for phantoms simulating women with larger breasts, with average breast dose reductions of 30.2%, 39.2%, and 49.2% for OTCM with BPT relative to ATCM, at the same CTDIvol, for phantoms with 0.5, 1.5, and 2.5 kg breasts, respectively. This study shows that a specially designed BPT improves the effectiveness of OTCM.
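The OTCM tube-current rule described above (20% mA inside the ±60° anterior zone, with the remaining views boosted to keep the mean tube current) can be sketched as follows. The 12-view profile and 300 mA reference are illustrative toy values, not the vendor implementation.

```python
import numpy as np

def organ_based_tcm(mA_ref, view_angles_deg, zone_halfwidth=60.0, factor=0.2):
    """Scale the reference mA profile to 20% inside the anterior dose
    reduction zone and redistribute the removed tube output over the
    remaining views so the mean mA is unchanged."""
    mA_ref = np.asarray(mA_ref, dtype=float)
    mA = mA_ref.copy()
    # wrap angles to (-180, 180] and test against the anterior zone (0 deg = anterior)
    wrapped = (np.asarray(view_angles_deg) + 180.0) % 360.0 - 180.0
    anterior = np.abs(wrapped) <= zone_halfwidth
    mA[anterior] *= factor
    deficit = np.sum(mA_ref) - np.sum(mA)
    mA[~anterior] += deficit / np.count_nonzero(~anterior)
    return mA

angles = np.arange(0, 360, 30)                 # 12 views per rotation
profile = organ_based_tcm(np.full(12, 300.0), angles)
```

A real profile would start from the attenuation-based (ATCM) mA values rather than a constant 300 mA, but the zone logic and mean-preserving redistribution are the same.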
Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization
Changfei Gong, Dong Zeng, Zhaoying Bian, et al.
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for the diagnosis and risk stratification of coronary artery disease by assessing myocardial perfusion hemodynamic maps (MPHM). However, repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed 'MPD-AwTTV', to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge properties of MPCT images, mitigating the drawbacks of conventional total variation (TV) regularization. An effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the MPD-AwTTV algorithm outperforms existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation, and accurate MPHM estimation.
Organ dose conversion coefficients for tube current modulated CT protocols for an adult population
Wanyi Fu, Xiaoyu Tian, Pooyan Sahbaee, et al.
In computed tomography (CT), patient-specific organ dose can be estimated using a database of pre-calculated organ dose conversion coefficients (organ dose normalized by CTDIvol, the h factor), taking into account patient size and scan coverage. The conversion coefficients have previously been estimated for routine body protocol classes, grouped by scan coverage, across an adult population for fixed tube current CT. Those coefficients, however, do not cover the widely utilized tube current (mA) modulation schemes, which significantly impact organ dose. This study aims to extend the h factors, and the corresponding dose length product (DLP) normalized effective dose conversion coefficients (k factors), to a database incorporating various tube current modulation strengths. Fifty-eight extended cardiac-torso (XCAT) phantoms were included in this study, representing the anatomical variation of the population in clinical practice. Four mA profiles, representing weak to strong mA dependency on body attenuation, were generated for each phantom and protocol class. A validated Monte Carlo program was used to simulate the organ dose. The organ dose and effective dose were further normalized by CTDIvol and DLP to derive the h factors and k factors, respectively. The h factors and k factors were summarized in an exponential regression model as a function of body size. Such a population-based mathematical model can provide comprehensive organ dose estimates given body size and CTDIvol. The model was integrated into version 2 of the iPhone app XCATdose, enhancing the first version, which was based on fixed tube current. With this organ dose calculator, physicists, physicians, and patients can conveniently estimate organ dose.
A technique for multi-dimensional optimization of radiation dose, contrast dose, and image quality in CT imaging
Pooyan Sahbaee, Ehsan Abadi, Jeremiah Sanders, et al.
The purpose of this study was to substantiate the interdependency of image quality, radiation dose, and contrast material dose in CT, towards patient-specific optimization of imaging protocols. The study deployed two phantom platforms. First, a variable-sized phantom containing an iodinated insert was imaged on a representative CT scanner at multiple CTDI values. The contrast and noise were measured from the reconstructed images for each phantom diameter. The contrast-to-noise ratio (CNR), which is linearly related to iodine concentration, was calculated for different iodine concentration levels. Second, the analysis was extended to a recently developed suite of 58 virtual human models (5D-XCAT) with added contrast dynamics. Emulating a contrast-enhanced abdominal imaging procedure targeting peak enhancement in the aorta, each XCAT phantom was “imaged” using a CT simulation platform. 3D surfaces for each patient/size established the relationship between iodine concentration, dose, and CNR. The sensitivity ratio (SR), defined as the ratio of the change in iodine concentration to the change in dose required to yield a constant change in CNR, was calculated and compared at high and low radiation dose for both phantom platforms. The results show that the sensitivity of CNR to iodine concentration is larger at high radiation dose (by up to 73%). The SR results were highly affected by the choice of radiation dose metric, CTDI or organ dose. Furthermore, the results showed that the presence of contrast material can have a profound impact on optimization results (up to 45%).
Experimental demonstration of a dynamic bowtie for region-based CT fluence optimization
Vance Robinson, Walt Smith, Xue Rui, et al.
Technology development in Computed Tomography (CT) is driven by clinical needs, for example the need for image quality sufficient for the clinical task, and the need to obtain the required image quality using the lowest possible radiation dose to the patient. One approach to manage dose without compromising image quality is to spatially vary the X-ray flux such that regions of high interest receive more radiation while regions of low interest or regions sensitive to radiation receive less dose. If the region of interest (ROI) is centered at the CT system’s axis of rotation, a simple stationary bowtie mounted between the X-ray tube and the patient is sufficient to reduce the X-ray flux outside the central region. If the ROI is off center, then a dynamic bowtie that can track the ROI as the gantry rotates is preferred. We experimentally demonstrated the dynamic bowtie using a design that is relatively simple, low cost, requires no auxiliary power supply, and can be retrofitted to an existing clinical CT scanner. We installed our prototype dynamic bowtie on a clinical CT scanner, and we scanned a phantom with a pre-selected off-center ROI. The dynamic bowtie reduced the X-ray intensity outside the targeted ROI tenfold. As a result, the reconstructed image shows significantly lower noise within the dynamic bowtie ROI compared to regions outside it. Our preliminary results suggest that a dynamic bowtie could be an effective solution for further reducing CT radiation dose.
Hybrid deterministic and stochastic x-ray transport simulation for transmission computed tomography with advanced detector noise model
Lucretiu M. Popescu
We present a model for the simulation of noisy X-ray computed tomography data sets. The model has two main components: a photon transport simulation component that generates the noiseless photon field incident on the detector, and a detector response model that takes as input the incident photon field parameters and, given the X-ray source intensity and exposure time, generates noisy data sets accordingly. The photon transport simulation component combines direct ray-tracing of polychromatic X-rays for the calculation of transmitted data with Monte Carlo simulation for the calculation of scattered-photon data. The Monte Carlo scatter simulation is accelerated by implementing particle splitting and importance sampling variance reduction techniques. The detector-incident photon field data are stored as energy expansion coefficients on a refined grid that covers the detector area. From these data, the detector response model generates noisy detector data realizations by reconstituting the main parameters that describe each detector element's response in statistical terms, including spatial correlations. The model can generate CT data sets very quickly, on the fly, corresponding to different radiation doses and detector response characteristics, facilitating data management in extensive optimization studies by reducing computation time and storage space demands.
An automated technique for estimating patient-specific regional imparted energy and dose in TCM CT exams
Jeremiah W. Sanders, Xiaoyu Tian, W. Paul Segars, et al.
Currently, computed tomography (CT) dosimetry relies on the CT dose index (CTDI) and size-specific dose estimates (SSDE). Organ dose is a better metric of radiation burden; however, organ dose estimation requires precise knowledge of organ locations. Regional imparted energy and dose can also be used to quantify radiation burden. Estimating the imparted energy from CT exams is beneficial in that it does not require precise estimates of organ size or location. This work investigated an automated technique for retrospectively estimating the imparted energy from chest and abdominopelvic tube current modulated (TCM) CT exams. Monte Carlo simulations of chest and abdominopelvic TCM CT examinations across various tube potentials and TCM strengths were performed on 58 adult computational extended cardiac-torso (XCAT) phantoms to develop relationships between scanned mass and imparted energy normalized by dose length product (DLP). An automated algorithm for calculating the scanned patient volume was further developed using an open-source mesh generation toolbox. The scanned patient volume was then used to estimate the scanned mass, accounting for the varying density within the scan region. The scanned mass and the DLP from the exam were used to estimate the imparted energy to the patient using the knowledgebase developed from the Monte Carlo simulations. Patient-specific imparted energy estimates were made from 20 chest and 20 abdominopelvic clinical CT exams. The average imparted energy was 274 ± 141 mJ for the chest exams and 681 ± 376 mJ for the abdominopelvic exams. This method can be used to estimate the regional imparted energy and/or regional dose in chest and abdominopelvic TCM CT exams across clinical operations.
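A minimal sketch of the final estimation step: once the Monte Carlo knowledgebase supplies imparted energy per unit DLP as a function of scanned mass, the patient-specific estimate is a lookup and a multiplication. The exponential form and the coefficients `a` and `b` below are hypothetical placeholders for the fitted relationship, not values from the paper.

```python
import math

def imparted_energy_mJ(scanned_mass_kg, dlp_mGy_cm, a=30.0, b=-0.02):
    """Estimate regional imparted energy as (E/DLP) * DLP, where the
    normalized coefficient E/DLP is modeled as an exponential function
    of scanned mass.  a and b are hypothetical fit coefficients; the
    real ones would come from the Monte Carlo knowledgebase."""
    energy_per_dlp = a * math.exp(b * scanned_mass_kg)  # mJ per mGy*cm
    return energy_per_dlp * dlp_mGy_cm
```

The appeal of this formulation is that both inputs are easy to obtain retrospectively: the DLP from the dose report and the scanned mass from the automated volume segmentation.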
A framework for analytical estimation of patient-specific CT dose
The authors introduce an algorithm to estimate the spatial dose distribution in computed tomography (CT) images. The algorithm calculates the dose distributions due to the primary and scattered photons separately. It requires only the CT data set, i.e., the patient CT images and the scanner acquisition parameters; if the acquisition parameters are not available separately, they are extracted from the CT images. Using the developed algorithm, the dose distributions for head and chest phantoms were computed, and the results show excellent agreement with dose distributions obtained using a commercial Monte Carlo code. The developed algorithm can be applied to patient-specific CT dose estimation based on the CT data.
Modulation transfer function determination using the edge technique for cone-beam micro-CT
Evaluating spatial resolution is an essential task for cone-beam computed tomography (CBCT) manufacturers, prototype designers, and equipment users. To investigate the cross-sectional spatial resolution of different transaxial slices in CBCT, the slanted edge technique with a 3D slanted edge phantom is proposed and implemented on a prototype cone-beam micro-CT. Three transaxial slices with different cone angles are investigated. An over-sampled edge response function (ERF) is first generated from the intensity of the slightly tilted air-to-plastic edge in each row of the transaxial reconstruction image. The oversampled ERF is then binned and smoothed. The derivative of the binned and smoothed ERF gives the line spread function (LSF). Finally, the presampled modulation transfer function (MTF) is calculated by taking the modulus of the Fourier transform of the LSF. The spatial resolution is quantified by the spatial frequency at the 10% MTF level and the full-width-at-half-maximum (FWHM) value. The spatial frequencies at 10% MTF are 3.1±0.08 mm-1, 3.0±0.05 mm-1, and 3.2±0.04 mm-1 for the three transaxial slices at cone angles of 3.8°, 0°, and -3.8°, respectively. The corresponding FWHMs are 252.8 μm, 261.7 μm, and 253.6 μm. The results indicate that the cross-sectional spatial resolution differs little for transaxial slices up to 3.8° away from the z=0 plane for the prototype cone-beam micro-CT.
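The ERF → LSF → MTF pipeline described above can be sketched on a synthetic Gaussian-blurred edge (σ = 0.05 mm is an arbitrary test value; the binning and smoothing of the oversampled ERF are omitted because the synthetic profile is noise-free):

```python
import numpy as np
from math import erf, sqrt

def mtf_from_erf(erf_profile, dx):
    """Differentiate the oversampled edge response to get the LSF, then
    take the modulus of its FFT, normalized to unity at zero frequency."""
    lsf = np.gradient(erf_profile, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf

def freq_at_level(freqs, mtf, level=0.10):
    """First frequency at which the MTF falls below `level`
    (linear interpolation between the bracketing samples)."""
    i = int(np.argmax(mtf < level))
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)

# synthetic air-to-plastic edge blurred by a Gaussian PSF (sigma in mm)
dx, sigma = 0.005, 0.05
x = np.arange(-2.0, 2.0, dx)
edge = np.array([0.5 * (1.0 + erf(xi / (sigma * sqrt(2.0)))) for xi in x])
freqs, mtf = mtf_from_erf(edge, dx)
f10 = freq_at_level(freqs, mtf)
```

For this Gaussian case the MTF is analytically exp(-2π²σ²f²), so the recovered 10% frequency should land near 6.8 mm⁻¹; the FWHM quoted in the abstract would be measured on the LSF directly.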
An approach for quantitative image quality analysis for CT
Amir Rahimi, Joe Cochran, Doug Mooney, et al.
An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end, we have designed, developed, and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a modified set of components with sparse loadings, compared to standard principal component analysis (PCA), used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
Low dose CT perfusion using k-means clustering
Francesco Pisana, Thomas Henzler, Stefan Schönberg, et al.
We aim to improve the quality of low-dose CT perfusion functional parameter maps and CT images while preserving quantitative information. In a dynamic CT perfusion dataset, each voxel is measured T times, where T is the number of acquired time points. A voxel can therefore be thought of as a point in a T-dimensional space whose coordinates are the values of its time attenuation curve (TAC). Starting from this idea, a k-means algorithm was designed to group voxels into K classes. A modified guided time-intensity profile similarity (gTIPS) filter was implemented and applied only to voxels belonging to the same class (k-gTIPS). The approach was tested on a digital brain perfusion phantom as well as on clinical brain and body perfusion datasets, and compared to the original TIPS implementation. The TIPS filter showed the highest CNR improvement but the lowest spatial resolution. gTIPS proved to have the best combination of spatial resolution and CNR improvement for CT images, while k-gTIPS was superior to both gTIPS and TIPS in terms of perfusion map image quality. We demonstrate that k-means clustering can be applied to denoise dynamic CT perfusion data and to improve functional maps. Besides the promising results, this approach has the major benefit of being independent of the perfusion model employed for functional parameter calculation. No similar approaches were found in the literature.
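A minimal sketch of the clustering step, treating each voxel's TAC as a point in T-dimensional space. This is a plain Lloyd's k-means stand-in; the paper's exact initialization and the gTIPS filter applied within each class are not reproduced here:

```python
import numpy as np

def kmeans_tacs(tacs, k=3, n_iter=20, seed=0, init=None):
    """Cluster time-attenuation curves (rows of `tacs`, shape (N, T))
    with plain Lloyd's k-means. Returns (labels, centers).
    `init` may list starting-row indices; otherwise rows are drawn
    at random. A class-restricted filter would then only mix voxels
    sharing a label."""
    rng = np.random.default_rng(seed)
    tacs = np.asarray(tacs, dtype=float)
    if init is None:
        init = rng.choice(len(tacs), size=k, replace=False)
    centers = tacs[np.asarray(init)].copy()
    labels = np.zeros(len(tacs), dtype=int)
    for _ in range(n_iter):
        # Assign each TAC to its nearest center (Euclidean in R^T)
        d = np.linalg.norm(tacs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers as the mean TAC of each class
        for j in range(k):
            if np.any(labels == j):
                centers[j] = tacs[labels == j].mean(axis=0)
    return labels, centers
```

In a real perfusion volume the (T, X, Y) data would be reshaped to (X·Y, T) before clustering.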
Poster Session: Detectors
Quantitative comparison using generalized relative object detectability (G-ROD) metrics of an amorphous selenium detector with high resolution microangiographic fluoroscopes (MAF) and standard flat panel detectors (FPD)
A novel amorphous selenium (a-Se) direct detector with CMOS readout has been designed, and its relative performance investigated. The detector features a 25 μm pixel pitch and a 1000 μm thick a-Se layer operating at a 10 V/μm bias field. A simulated detector DQE was determined and used in comparative calculations of the Relative Object Detectability (ROD) family of prewhitening matched-filter (PWMF) and non-prewhitening matched-filter (NPWMF) observer model metrics, to gauge a-Se detector performance against existing high-resolution micro-angiographic fluoroscopic (MAF) detectors and a standard flat panel detector (FPD). The PWMF-ROD (or ROD) metric compares the relative abilities of two x-ray imaging detectors to image a given object by taking the integral over spatial frequencies of the detector DQE weighted by an object function, divided by the comparable integral for the other detector. The generalized ROD (G-ROD) metric incorporates clinically relevant parameters (focal-spot size, magnification, and scatter) to show the degradation in imaging performance for detectors that are part of an imaging chain. Preliminary ROD calculations using simulated spheres as the object predicted superior imaging performance by the a-Se detector compared to existing detectors. New PWMF-G-ROD and NPWMF-G-ROD results still indicate better performance by the a-Se detector in an imaging chain over all sphere sizes for various focal spot sizes and magnifications, although the a-Se performance advantages were degraded by focal spot blurring. Nevertheless, the a-Se technology has great potential to provide breakthrough abilities such as visualization of fine details, including neuro-vascular perforator vessels and small vascular devices.
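The ROD ratio described above can be sketched as a discrete integral. The squared object-spectrum weighting below follows the prewhitening matched-filter observer and should be treated as an assumption about the exact form, not the authors' implementation:

```python
import numpy as np

def rod(dqe_a, dqe_b, obj_ft, freqs):
    """Relative Object Detectability sketch: the ratio of the two
    detectors' DQE integrals, each weighted by the squared magnitude
    of the object's Fourier transform. Inputs are sampled on the
    uniform frequency grid `freqs`."""
    w = np.abs(obj_ft) ** 2
    df = freqs[1] - freqs[0]          # uniform spacing assumed
    num = np.sum(dqe_a * w) * df
    den = np.sum(dqe_b * w) * df
    return num / den
```

A ROD above 1 means detector A outperforms detector B for that object; for the spheres in the abstract, `obj_ft` would be the sphere's analytic Fourier transform.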
Exploration of maximum count rate capabilities for large-area photon counting arrays based on polycrystalline silicon thin-film transistors
Albert K. Liang, Martin Koniczek, Larry E. Antonuk, et al.
Pixelated photon counting detectors with energy discrimination capabilities are of increasing clinical interest for x-ray imaging. Such detectors, presently in clinical use for mammography and under development for breast tomosynthesis and spectral CT, usually employ in-pixel circuits based on crystalline silicon – a semiconductor material that is generally not well-suited to economic manufacture of large-area devices. One interesting alternative semiconductor is polycrystalline silicon (poly-Si), a thin-film technology capable of creating very large-area, monolithic devices. Similar to crystalline silicon, poly-Si allows implementation of the type of fast, complex, in-pixel circuitry required for photon counting – operating at processing speeds that are not possible with amorphous silicon (the material currently used for large-area, active matrix, flat-panel imagers). The pixel circuits of two-dimensional photon counting arrays are generally composed of four stages: amplifier, comparator, clock generator and counter. The analog front-end (in particular, the amplifier) strongly influences performance and is therefore of interest to study. In this paper, the relationship between incident and output count rate of the analog front-end is explored under diagnostic imaging conditions for a promising poly-Si based design. The input to the amplifier is modeled in the time domain assuming a realistic input x-ray spectrum. Simulations of circuits based on poly-Si thin-film transistors are used to determine the resulting output count rate as a function of input count rate, energy discrimination threshold and operating conditions.
Indirect-detection single-photon-counting x-ray detector for breast tomosynthesis
Hao Jiang, Joerg Kaercher, Roger Durst
X-ray mammography is a crucial screening tool for early identification of breast cancer. However, the overlap of anatomical features present in projection images often complicates the task of correctly identifying suspicious masses. As a result, there has been increasing interest in acquisition of volumetric information through digital breast tomosynthesis (DBT) which, compared to mammography, offers the advantage of depth information. Since DBT requires acquisition of many projection images, it is desirable that the noise in each projection image be dominated by the statistical noise of the incident x-ray quanta and not by the additive noise of the imaging system (referred to as quantum-limited imaging), and that the cumulative dose be as low as possible (e.g., no more than for a mammogram). Unfortunately, the electronic noise (~2000 electrons) present in current DBT systems based on active matrix, flat-panel imagers (AMFPIs) is still relatively high compared with the modest x-ray gain of the a-Se and CsI:Tl x-ray converters often used. To overcome the modest signal-to-noise ratio (SNR) limitations of current DBT systems, we have developed a large-area x-ray imaging detector combining an extremely low noise (~20 electrons) active-pixel CMOS readout with a specially designed high-resolution scintillator. The high sensitivity and low noise of this system provide an SNR at least an order of magnitude better than current state-of-the-art AMFPI systems and enable indirect-detection single photon counting (SPC) at mammographic energies, with the potential for dose reduction.
SWAD: inherent photon counting performance of amorphous selenium multi-well avalanche detector
Photon counting detectors (PCDs) have the potential to improve x-ray imaging; however, they are still hindered by several performance limitations and high production cost. Using amorphous selenium (a-Se), the cost of PCDs can be significantly reduced compared to crystalline materials, and large-area detector fabrication becomes feasible. To overcome the problems of low carrier mobility and low charge conversion gain in a-Se, we are developing a novel direct-conversion a-Se field-Shaping multi-Well Avalanche Detector (SWAD). SWAD circumvents the charge transport limitation by using a Frisch grid built within the readout circuit, reducing charge collection time to ~200 ns. Field shaping permits depth-independent avalanche gain in the wells, resulting in a total conversion gain comparable to Si and CdTe. In the present work we investigate the effects of charge sharing and energy loss to understand the inherent photon counting performance of SWAD at x-ray energies used in breast imaging applications (20-50 keV). The energy deposition profile for each interacting x-ray was determined with Monte Carlo simulation. For the energy range of interest, photoelectric interaction dominates, with a k-fluorescence yield of approximately 60%. Using a monoenergetic 45 keV beam incident on a target pixel in 400 μm of a-Se, our results show that only 20.42% and 22.4% of primary interacting photons have k-fluorescence emissions that escape the target pixel for 100 μm and 85 μm pixel sizes, respectively, demonstrating SWAD's potential for high spatial resolution applications.
Focal spot deblurring for high resolution direct conversion x-ray detectors
Small-pixel high-resolution direct x-ray detectors have the advantages of higher spatial sampling and decreased blurring characteristics. The limiting factor for such systems then becomes the degradation due to the focal spot size. One solution is a smaller focal spot; however, this can limit the output of the x-ray tube. Here a software solution, deconvolving with an estimated focal spot blur, is presented. To simulate images from a direct detector affected by focal-spot blur, a set of high-resolution stent images (FRED from Microvention, Inc., Tustin, CA) was first acquired using a 75 μm pixel size Dexela-Perkin-Elmer detector and frame-averaged to reduce quantum noise. The averaged image was then blurred with a known Gaussian blur. To add noise to the blurred image, a flat-field image was multiplied with the blurred image. Both the ideal and the noisy blurred images were then deconvolved with the known Gaussian function using either threshold-based inverse filtering or Wiener deconvolution. The blur in the ideal image was removed and the details were recovered successfully; however, the inverse-filtering deconvolution process is extremely susceptible to noise. The Wiener deconvolution process was able to recover more of the details of the stent from the noisy blurred image, but for noisier images stent details are still lost in the recovery process.
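A frequency-domain Wiener deconvolution of the kind described can be sketched as follows; the noise-to-signal ratio `nsr` is an assumed regularization constant, not the paper's value, and the PSF is taken to be image-sized and centered:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener deconvolution in the Fourier domain (sketch).
    `psf`: same shape as `blurred`, peak at the array center.
    `nsr`: assumed noise-to-signal power ratio; nsr -> 0 recovers
    plain (noise-amplifying) inverse filtering."""
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF -> OTF
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

With a known Gaussian blur and low noise, the recovered image is substantially closer to the sharp original than the blurred input; as `nsr` grows, high frequencies are progressively suppressed rather than amplified.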
Noise power spectrum measurements under nonuniform gains and their compensations
Fixed pattern noise, which is due to nonuniform amplifier gains and scintillator sensitivities, should be alleviated in radiographic imaging and should have minimal influence on measurements of the noise power spectrum (NPS) of a radiography detector. To reduce this influence, background trend removal methods based on low-pass filtering, polynomial fitting, or subtracting the average of a set of uniform-exposure images are traditionally employed in the literature. In terms of removing fixed pattern noise, the subtraction method performs well. However, the number of images averaged is finite in practice, so the noise contained in the average image contaminates the difference image and inflates the NPS curve. In this paper, an image formation model that accounts for nonuniform gain is constructed, and two measurement methods, based on subtraction and on gain correction respectively, are considered. To accurately measure the normalized NPS (NNPS) with these methods, the number of averaged images is taken into account in the NNPS compensations. For several flat-panel radiography detectors, NNPS measurements are conducted and compared with conventional approaches that have no compensation stage. Experiments show that the compensation provides accurate NNPS measurements that are less influenced by fixed pattern noise.
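The gain-correction idea can be sketched for a stack of uniform-exposure ROIs. Detrending is reduced to mean removal here, so this is a sketch of the basic NPS estimator, not the paper's compensation method:

```python
import numpy as np

def nps_2d(rois, pixel_pitch, gain_map=None):
    """2D NPS estimate from a stack of square uniform-exposure ROIs.
    If a gain map (e.g. the mean of many flat fields) is supplied,
    each ROI is gain-corrected first, suppressing fixed pattern noise
    without the residual-noise inflation of image subtraction.
    Units: (signal^2) * (length^2), with `pixel_pitch` in that length."""
    rois = np.asarray(rois, dtype=float)
    n = rois.shape[-1]
    if gain_map is not None:
        rois = rois / gain_map * gain_map.mean()
    spectra = [np.abs(np.fft.fft2(r - r.mean())) ** 2 for r in rois]
    return pixel_pitch ** 2 / n ** 2 * np.mean(spectra, axis=0)
```

Dividing by the squared mean signal gives the NNPS; by Parseval's theorem the NPS integrated over frequency reproduces the ROI's pixel variance, which is a useful sanity check on the normalization.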
A comparison of quantum limited dose and noise equivalent dose
Quantum-limited dose (QLD) and noise-equivalent dose (NED) are performance metrics often used interchangeably. Although the metrics are related, they are not equivalent unless the treatment of electronic noise is carefully considered. These metrics are increasingly important for properly characterizing the low-dose performance of flat panel detectors (FPDs). A system can be said to be quantum-limited when the signal-to-noise ratio (SNR) is proportional to the square root of the x-ray exposure. Recent experiments utilizing three methods to determine the quantum-limited dose range yielded inconsistent results. To investigate the deviation in results, generalized analytical equations are developed to model the image processing and analysis of each method. We test the generalized expressions for both radiographic and fluoroscopic detectors. The resulting analysis shows that the total noise content of the images processed by each method is inherently different, depending on the readout scheme. Finally, it is shown that the NED is equivalent to the instrumentation-noise-equivalent exposure (INEE), and furthermore that the NED is derived from the quantum-noise-only method of determining QLD. Future investigations will measure the quantum-limited performance of radiographic panels with a modified readout scheme that allows noise improvements similar to those measured with fluoroscopic detectors.
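The sqrt-law definition of quantum-limited operation can be made concrete with a toy signal/noise model; the gain and electronic-noise numbers below are illustrative only, not values from the paper:

```python
import numpy as np

def snr(exposure, gain=1.0, sigma_e=50.0):
    """Mean signal over total noise for a simple detector model:
    quantum noise variance gain^2 * q plus additive electronic noise
    variance sigma_e^2, with detected quanta q proportional to exposure."""
    q = np.asarray(exposure, dtype=float)
    return gain * q / np.sqrt(gain ** 2 * q + sigma_e ** 2)

def is_quantum_limited(exposure, tol=0.05, **kw):
    """True where the SNR stays within `tol` of the pure
    sqrt(exposure) law, i.e. where electronic noise is negligible."""
    return snr(exposure, **kw) / snr(exposure, sigma_e=0.0, **kw) > 1 - tol
```

Below the quantum-limited range the ratio falls off as electronic noise dominates; the exposure where the flag flips is one way to read off a QLD, while NED-style metrics instead locate the exposure at which electronic and quantum noise contribute equally.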
Optimizing the CsI thickness for chest dual-shot dual-energy detectors
Dong Woon Kim, Junwoo Kim, Hanbean Youn, et al.
The dual-energy imaging method has been introduced to improve the conspicuity of abnormalities in radiographs. The method typically uses the fast kilovoltage-switching approach, which acquires low- and high-energy projections in successive x-ray exposures with the same detector. However, it is known that there exists an optimal detector thickness for a specific imaging task or set of energies. In this study, dual-energy detectability has been theoretically addressed for various combinations of detector thicknesses for the low- and high-energy spectra using cascaded-systems analysis. Cesium iodide (CsI) serves as the x-ray converter in the hypothetical detector. The simple prewhitening model shows that a larger CsI thickness (250 mg cm⁻², for example) would be preferred to the typical CsI thickness of 200 mg cm⁻² for better detectability. On the other hand, the typical CsI thickness is acceptable for the prewhitening model that includes a human-eye filter. The theoretical strategy performed in this study will be useful for a better design of detectors for dual-energy imaging.
Physical properties of a new flat panel detector with cesium-iodide technology
Andreas Hahn, Petar Penchev, Martin Fiebich
Flat panel detectors have become the standard technology in projection radiography, and further progress in detector technology results in improvements in MTF and DQE. The new detector (DX-D45C; Agfa, Mortsel, Belgium) is based on cesium-iodide crystals and features changes in the detector material and the readout electronics. The detector has a size of 30 cm x 24 cm and a pixel matrix of 2560 x 2048 with a pixel pitch of 124 μm. The system includes an automatic exposure detector, which enables use of the detector without a connection to the x-ray generator. The physical properties of the detector were determined following IEC 62220-1-1 in a laboratory setting. The MTF showed an improvement compared to the previous version of cesium-iodide based flat-panel detectors, and the DQE is thereby also improved, especially at higher frequencies. The new detector showed improved physical properties compared to the previous versions, which opens the potential for further dose reductions in clinical imaging.
Detective quantum efficiency: a standard test to ensure optimal detector performance and low patient exposures
The detective quantum efficiency (DQE), expressed as a function of spatial frequency, describes the ability of an x-ray detector to produce high signal-to-noise ratio (SNR) images. While regulatory and scientific communities have used the DQE as a primary metric for optimizing detector design, the DQE is rarely used by end users to ensure that high system performance is maintained. Of concern is that image quality varies across different systems at the same exposures, with no current measures available to describe system performance. Therefore, we conducted an initial DQE measurement survey of clinical x-ray systems using a DQE-testing instrument to identify their range of performance. Following laboratory validation, experiments revealed that the DQE of five different systems under the same exposure level (8.0 μGy) ranged from 0.36 to 0.75 at low spatial frequencies, and from 0.02 to 0.4 at high spatial frequencies (3.5 cycles/mm). Furthermore, the DQE dropped substantially with decreasing detector exposure, by a factor of up to 1.5x at the lowest spatial frequency and a factor of 10x at 3.5 cycles/mm, due to the effect of detector readout noise. It is concluded that DQE specifications in purchasing decisions, combined with periodic DQE testing, are important factors to ensure patients receive the health benefits of high-quality images at low x-ray exposures.
MTF and NPS of single-shot dual-energy sandwich detectors
Junwoo Kim, Dong Woon Kim, Hanbean Yun, et al.
The actual meaning of the modulation transfer function (MTF) and the noise power spectrum (NPS) is ambiguous for dual-energy images obtained from the single-shot sandwich detector, and their behavior for various detector design parameters is also in question. In this study, the authors regard the sandwich detector, including the weighted logarithmic subtraction operation, as a single black-box detector, and measure the single-shot dual-energy MTF and NPS. Subtraction of the two images obtained from the sub-detector layers, which have x-ray converters of different thicknesses (and hence different spatial-resolution performance), yields a band-pass filter characteristic in the MTF. The NPS, on the other hand, is the weighted sum of the NPS of each sub-detector layer. The MTF characteristic is reflected in the DQE, so the DQE shows a similar band-pass characteristic. The sandwich detector may therefore lose contrast performance for large-area objects, but it may emphasize contrast for objects whose important information lies at mid-frequencies.
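The weighted logarithmic subtraction at the heart of the black-box view can be sketched directly; the attenuation coefficients in the accompanying check are illustrative numbers, not measured values:

```python
import numpy as np

def weighted_log_subtraction(low_img, high_img, w):
    """Dual-energy signal from the two layer images of a sandwich
    detector: S = ln(high) - w * ln(low). Choosing the weight as
    w = mu_high / mu_low of a given material nulls that material
    (the usual tissue-cancellation choice)."""
    return np.log(high_img) - w * np.log(low_img)
```

In a two-material transmission model, the tissue-cancelling weight makes the subtracted signal independent of tissue thickness while retaining the bone signal, which is exactly why the combined system behaves like a band-pass, material-selective detector.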
Poster Session: Dual and Multi Energy CT
Computed tomography with single-shot dual-energy sandwich detectors
A single-shot dual-energy sandwich detector can produce sharp images through subtraction of the images from its two sub-detector layers, which have x-ray converters of different thicknesses. Inspired by this observation, the authors have developed a microtomography system with the sandwich detector in pursuit of high-resolution bone-enhanced small-animal imaging. The preliminary results show that the bone-enhanced images reconstructed from the subtracted projection data are better in visibility of bone details than the conventionally reconstructed images. In addition, the bone-enhanced images obtained from the sandwich detector are relatively immune to the artifacts caused by photon starvation. Microtomography with the single-shot dual-energy sandwich detector will be useful for high-resolution bone imaging.
Theoretical and Monte Carlo optimization of a stacked three-layer flat-panel x-ray imager for applications in multi-spectral diagnostic medical imaging
We propose a new design of a stacked three-layer flat-panel x-ray detector for dual-energy (DE) imaging. Each layer consists of its own scintillator of individual thickness and an underlying thin-film-transistor-based flat panel. Three images are obtained simultaneously in the detector during the same x-ray exposure, thereby eliminating any motion artifacts. The detector operation is two-fold: a conventional radiography image can be obtained by combining all three layers' images, while a DE subtraction image can be obtained from the front and back layers' images, with the middle layer acting as a mid-filter that helps achieve spectral separation. We proceed to optimize the detector parameters for two sample imaging tasks that could particularly benefit from this new detector, by obtaining the best possible signal-to-noise ratio per root entrance exposure using well-established theoretical models adapted to our new design. These results are compared to a conventional DE temporal-subtraction detector and a single-shot DE subtraction detector with a copper mid-filter, both of which underwent the same theoretical optimization. The findings are then validated using advanced Monte Carlo simulations for all optimized detector setups. Given the performance expected from initial results and the recent decrease in price of digital x-ray detectors, the simplicity of the three-layer stacked imager approach appears promising to usher in a new generation of multi-spectral digital x-ray diagnostics.
Noise suppression for energy-resolved CT using similarity-based non-local filtration
Joe Harms, Tonghe Wang, Michael Petrongolo, et al.
In energy-resolved CT, images are reconstructed independently at different energy levels, resulting in images with different qualities but the same structures. We propose a similarity-based non-local filtration method to extract structural information from these images for noise suppression. For each pixel, we calculate its similarity to other pixels based on CT number. The calculation is repeated on each image at the different energy levels and the similarity values are averaged to generate a similarity matrix. Noise suppression is achieved by multiplying the image vector by the similarity matrix. Multiple scans on a tabletop CT system are used to simulate 6-channel energy-resolved CT, with tube potentials ranging from 75 to 125 kVp. Phantom studies show that the proposed method improves the average contrast-to-noise ratio (CNR) of seven materials on the 75 kVp image by a factor of 22. Compared with averaging CT images for noise suppression, our method achieves a higher CNR and reduces the CT number error of iodine solutions from 16.5% to 3.5% and the overall image root-mean-square error (RMSE) from 3.58% to 0.93%. On the phantom with line-pair structures, our algorithm reduces the noise standard deviation (STD) by a factor of 23 while maintaining 7 lp/cm spatial resolution. Additionally, anthropomorphic head phantom studies show noise STD reduction by a factor of 26 with no loss of spatial resolution. The noise suppression achieved by the similarity-based method is clinically attractive, especially for CNRs of iodine in contrast-enhanced CT.
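The similarity-matrix construction can be sketched for flattened images; the Gaussian kernel width `h` is an assumed parameter, and the dense N×N matrix limits this sketch to small examples:

```python
import numpy as np

def similarity_filter(imgs, h=5.0):
    """imgs: (E, N) array — N pixels at E energy levels.
    Build a Gaussian similarity matrix from pixel-value differences,
    accumulated over energies, row-normalize it, and apply it to
    every energy image (each output pixel is a weighted average of
    pixels that look similar across all channels)."""
    E, N = imgs.shape
    S = np.zeros((N, N))
    for e in range(E):
        d = imgs[e][:, None] - imgs[e][None, :]
        S += np.exp(-(d ** 2) / (h ** 2))
    S /= S.sum(axis=1, keepdims=True)   # rows sum to 1
    return imgs @ S.T
```

Because the matrix is shared across channels, structure that is consistent over energies is preserved while channel-independent noise is averaged down.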
Dual energy x-ray imaging and scoring of coronary calcium: physics-based digital phantom and clinical studies
Bo Zhou, Di Wen, Katelyn Nye, et al.
Coronary artery calcification (CAC), as assessed with the CT calcium score, is the best biomarker of coronary artery disease. Dual-energy x-ray provides an inexpensive, low-radiation-dose alternative. A two-shot system (GE Revolution-XRd) is used, raw images are processed with a custom algorithm, and a dual-energy coronary calcium image (DECCI) is created, similar to the bone image but optimized for CAC visualization rather than lung visualization. In this report, we developed a physics-based digital phantom containing heart, lung, CAC, spine, rib, pulmonary artery, and adipose elements; examined effects on the DECCI; suggested physics-inspired algorithms to improve CAC contrast; and evaluated the correlation between CT calcium scores and a proposed DE calcium score. In the simulation experiment, beam hardening from increasing adipose thickness (2 cm to 8 cm) reduced Cg by 19% and 27% in the 120 kVp and 60 kVp images, but by less than 7% in the DECCI. If a pulmonary artery moves or its blood filling pulsates between exposures, it can give rise to a significantly confounding PA signal in the DECCI, similar in amplitude to CAC. These observations suggest modifications to DECCI processing, which can further improve CAC contrast by a factor of 2 in clinical exams. The DE score had the best correlation with the "CT mass score" among three commonly used CT scores. The results suggest that DE x-ray is a promising tool for imaging and scoring CAC, and that opportunities remain for further DECCI processing improvements.
Enhanced diagnostic value for coronary CT angiography of calcified coronary arteries using dual energy and a novel high-Z contrast material: a phantom study
Jack W. Lambert, Karen G. Ordovas, Yuxin Sun, et al.
Dual-energy CT is emerging as a dose-saving tool for coronary CT angiography that allows calcium scoring without the need for a separate unenhanced scan acquisition. Unfortunately, the similar attenuation coefficient profiles of iodine and calcium limit the accuracy of their decomposition in the material basis images. We evaluate a tungsten-based contrast material with an attenuation profile more distinct from calcium, and compare its performance to a conventional iodinated agent. We constructed a custom thorax phantom containing simulated sets of vessels 3, 6 and 9 mm in diameter. The vessel sets were walled with concentric and eccentric calcifications ("plaque") with concentrations of 0, 20, 30 and 40% weight calcium hydroxyapatite (HAP). The phantom was filled sequentially with iodine and tungsten contrast material, and scanned helically using a fast-kV-switching DECT scanner. After material decomposition, both the iodine and tungsten vessel lumens were separable from the HAP vessel walls, but separation was superior with tungsten, which showed minimal false-positive signal in the HAP image. Assessing their relative performance using line profiles, the HAP signal was greater in the tungsten separation for 6 of the 9 vessel sets, and within 15% of the iodine separation for the remaining 3. The robust phantom design enabled systematic evaluation of dual-energy material separation for calcium and a candidate non-iodinated vascular contrast element. This approach can be used to screen further agents and also to refine dual-energy CT material decomposition approaches.
Reconstruction of limited-angle dual-energy CT using mutual learning and cross-estimation (MLCE)
Dual-energy CT (DECT) imaging has attracted much attention because of its capability to discriminate materials. We propose a flexible DECT scan strategy that can be realized on a system with general x-ray sources and detectors. To lower dose and scanning time, our DECT acquires two projection data sets on two arcs of limited angular coverage (one for each energy). Meanwhile, a certain number of rays from the two data sets form conjugate sampling pairs. Our reconstruction method for such a DECT scan mainly tackles the consequent limited-angle problem. Borrowing the idea of an artificial neural network, we exploit the connection between projections at the two energies by constructing a relationship between the linear attenuation coefficients at the high energy and those at the low energy. We use this relationship to cross-estimate the missing projections and, for each energy, reconstruct attenuation images from an augmented data set that includes the projections at views covered by that energy (collected in the scan) and by the other energy (estimated). Validated by a numerical experiment on a dental phantom with rather complex structures, our DECT is effective in recovering small structures in severe limited-angle situations. This scanning strategy can greatly broaden practical DECT system design.
Calcium scoring with dual-energy CT in men and women: an anthropomorphic phantom study
Qin Li, Songtao Liu M.D., Kyle Myers, et al.
This work aimed to quantify and compare the potential impact of gender differences on coronary artery calcium scoring with dual-energy CT. An anthropomorphic thorax phantom with four synthetic heart vessels (diameter 3-4.5 mm; female/male left main and left circumflex arteries) was scanned with and without female breast plates. Ten repeat scans were acquired in both single- and dual-energy modes and reconstructed at six reconstruction settings: two slice thicknesses (3 mm, 0.6 mm) and three reconstruction algorithms (FBP, IR3, IR5). Agatston and calcium volume scores were estimated from the reconstructed data using a segmentation-based approach. The total calcium score (sum over the four vessels) and the male/female calcium scores (sums over the male/female vessels scanned without/with breast plates) were calculated accordingly. Both Agatston and calcium volume scores were comparable between single- and dual-energy scans (Pearson r = 0.99, p < 0.05). The total calcium scores were larger for the thinner slice thickness. Among the scores obtained from the three reconstruction algorithms, FBP yielded the highest and IR5 the lowest scores. The total calcium scores from the phantom without breast plates were significantly larger than those from the phantom with breast plates, and the difference increased with stronger denoising in the iterative algorithms and with thicker slices. Both gender-based anatomical differences and vessel size affected the calcium scores; the calcium volume scores tended to be underestimated for smaller vessels. These findings are valuable for understanding inconsistencies between women and men in calcium scoring, and for standardizing imaging protocols for improved gender-specific calcium scoring.
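For reference, a simplified Agatston computation (threshold at 130 HU, lesion area times a peak-HU weight, summed over slices) looks as follows; the paper's segmentation-based approach with per-lesion grouping and minimum-area criteria is not reproduced here:

```python
import numpy as np

def agatston_weight(peak_hu):
    """Density weight from the peak HU (standard Agatston bins)."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_score(slices, pixel_area_mm2, thr=130):
    """Simplified Agatston score over a stack of HU slices: threshold
    at `thr` HU, then area (mm^2) times the slice's peak-HU weight,
    summed over slices. Real scoring segments connected lesions and
    weights each lesion separately; this sketch treats each slice's
    suprathreshold region as one lesion."""
    score = 0.0
    for sl in slices:
        sl = np.asarray(sl, dtype=float)
        mask = sl >= thr
        if mask.any():
            area = mask.sum() * pixel_area_mm2
            score += area * agatston_weight(sl[mask].max())
    return score
```

The calcium volume score mentioned in the abstract is simpler still: the suprathreshold voxel count times the voxel volume, with no density weighting.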
Dual-energy computed tomography of the head: a phantom study assessing axial dose distribution, eye lens dose, and image noise level
Kosuke Matsubara, Hiroki Kawashima, Takashi Hamaguchi, et al.
The aim of this study was to propose a calibration method for small dosimeters used to measure absorbed dose during dual-source dual-energy computed tomography (DECT), and to compare the axial dose distribution, eye lens dose, and image noise level between DE and standard single-energy (SE) head CT angiography. Three DE (100/Sn140 kVp, 80/Sn140 kVp, and 140/80 kVp) and one SE (120 kVp) acquisitions were performed using a second-generation dual-source CT device and a female head phantom, at an equivalent volumetric CT dose index. The axial absorbed dose distribution at the orbital level and the absorbed doses to the eye lens were measured using radiophotoluminescent glass dosimeters. CT attenuation numbers were obtained from the DE composite images and the SE images of the phantom at the orbital level. The doses absorbed at the orbital level and in the eye lens were lower, and the standard deviations of the CT attenuation numbers slightly higher, in the DE acquisitions than in the SE acquisition. The anterior surface dose in particular was higher in the SE acquisition than in the DE acquisitions. Thus, DE head CT angiography can be performed at a radiation dose lower than that required for standard SE head CT angiography, with a slight increase in the image noise level. The 100/Sn140 kVp acquisition showed the most balanced axial dose distribution. In addition, our proposed method was effective for calibrating small dosimeters to measure absorbed doses in DECT.
Multi-energy method of digital radiography for imaging of biological objects
V. D. Ryzhikov, S. V. Naydenov, O. D. Opolonin, et al.
This work is dedicated to the search for new possibilities to use multi-energy digital radiography (MER) for medical applications. Our work includes both theoretical and experimental investigations of 2-energy (2E) and 3-energy (3E) radiography for imaging the structure of biological objects. Using special simulation methods and digital analysis based on the energy dependence of X-ray interactions for each element of importance to medical applications, in the X-ray energy range up to 150 keV, we have implemented a quasi-linear approximation for the energy dependence of the X-ray mass attenuation coefficient μm(E) that permits us to determine the intrinsic structure of biological objects. Our measurements utilize multiple X-ray tube voltages (50, 100, and 150 kV) with Al and Cu filters of different thicknesses to achieve 3-energy X-ray examination of objects. By doing so, we are able to achieve significantly improved imaging of the structure of the subject biological objects. To reconstruct and visualize the final images, we use both two-dimensional (2D) and three-dimensional (3D) identification palettes. The result is a 2E and/or 3E representation of the object with color coding of each pixel according to the data outputs. Following the experimental measurements and post-processing, we produce a 3E image of the biological object – in the case of our trials, fragments or parts of chicken and turkey.
Poster Session: Image Reconstruction
Iterative image reconstruction for multienergy computed tomography via structure tensor total variation regularization
Dong Zeng, Zhaoying Bian, Changfei Gong, et al.
Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with the whole energy window, the MECT images reconstructed by analytical approaches often suffer from a poor signal-to-noise ratio (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme incorporating the new concept of structure tensor total variation (STV) regularization to improve MECT image quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as `PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor of every point in the MECT images. It can thus provide more robust measures of image variation, which can eliminate the patchy artifacts often observed with total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.
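The STV penalty described above sums the eigenvalues of the (smoothed) structure tensor at every pixel. A minimal numerical sketch, using a simple 3×3 box smoothing in place of the usual Gaussian window (a simplifying assumption for brevity):

```python
import numpy as np

def box3(a):
    """3x3 box smoothing with edge padding (stand-in for the Gaussian
    smoothing usually applied to the structure tensor field)."""
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def stv_penalty(img):
    """Schatten-1 structure tensor TV: sum over pixels of the eigenvalues
    of the smoothed 2x2 structure tensor (a simplified sketch)."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jxy, jyy = box3(gx * gx), box3(gx * gy), box3(gy * gy)
    # Eigenvalues of a symmetric 2x2 matrix from its trace and determinant.
    tr = jxx + jyy
    det = jxx * jyy - jxy * jxy
    disc = np.sqrt(np.maximum(tr * tr - 4.0 * det, 0.0))
    lam1, lam2 = 0.5 * (tr + disc), 0.5 * (tr - disc)
    return float(np.sum(lam1 + lam2))

rng = np.random.default_rng(0)
flat = np.ones((16, 16))
noisy = flat + 0.1 * rng.standard_normal((16, 16))
```

The penalty vanishes on a constant image and grows with noise, while the tensor smoothing is what distinguishes oriented structure (one dominant eigenvalue) from isotropic noise (two comparable eigenvalues).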
Iterative CT reconstruction using coordinate descent with ordered subsets of data
F. Noo, K. Hahn, H. Schöndube, et al.
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
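One way to picture the combination of ordered subsets with coordinate descent is to update one pixel at a time against only an interleaved subset of the data rows, cycling through the subsets. The sketch below applies this idea to a toy weighted least-squares problem; it illustrates the general idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 8))     # toy system matrix
x_true = rng.random(8)
b = A @ x_true                       # consistent, noise-free data
w = np.ones(40)                      # statistical weights

def os_icd(A, b, w, n_subsets=4, n_iters=10):
    """Coordinate descent on the WLS cost, cycling over ordered
    (interleaved) subsets of the data rows."""
    m, n = A.shape
    x = np.zeros(n)                  # constant (zero) initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:
            As, ws = A[rows], w[rows]
            r = b[rows] - As @ x     # subset residual
            for j in range(n):       # exact 1-D minimization per pixel
                aj = As[:, j]
                denom = aj @ (ws * aj)
                if denom == 0.0:
                    continue
                step = aj @ (ws * r) / denom
                x[j] += step
                r -= step * aj       # keep the residual in sync
    return x

x_hat = os_icd(A, b, w)
```

Each coordinate update is cheap and exact for the subset cost, which is what allows useful approximations within few outer iterations.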
Optimization-based reconstruction for reduction of CBCT artifact in IGRT
Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT) by providing 3D spatial information about the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for these artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm show a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.
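At its core, ASD-POCS alternates projection-onto-convex-sets steps that enforce data consistency (here, ART sweeps plus a positivity constraint) with steepest-descent steps on the image total variation. A heavily simplified sketch on a 4×4 toy problem, using a numerical TV gradient and a fixed descent step for brevity (the full algorithm adapts the step sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0                      # piecewise-constant object
x_true = img.ravel()
A = rng.standard_normal((12, 16))        # underdetermined "sparse-view" system
b = A @ x_true

def tv(x, eps=1e-8):
    """Smoothed anisotropic TV of a 4x4 image."""
    u = x.reshape(4, 4)
    return (np.sum(np.sqrt(np.diff(u, axis=0) ** 2 + eps))
            + np.sum(np.sqrt(np.diff(u, axis=1) ** 2 + eps)))

def tv_grad(x, h=1e-6):
    """Numerical gradient of the TV term -- fine for a 16-pixel sketch."""
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h
        g[k] = (tv(x + e) - tv(x - e)) / (2.0 * h)
    return g

def asd_pocs(A, b, n_iters=50, beta=0.02, n_tv=5):
    """Simplified ASD-POCS: an ART (POCS) data pass and positivity
    projection, followed by a few steepest-descent steps on TV."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):      # ART pass over all rays
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
        x = np.maximum(x, 0.0)           # positivity constraint
        for _ in range(n_tv):            # TV descent steps
            g = tv_grad(x)
            gn = np.linalg.norm(g)
            if gn > 0:
                x -= beta * g / gn
    return x

x_rec = asd_pocs(A, b)
```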
Resolution-enhancing hybrid, spectral CT reconstruction
D. P. Clark, C. T. Badea
Spectral x-ray imaging based on photon-counting x-ray detectors (PCXD) is an area of growing interest. By measuring the energy of x-ray photons, a spectral CT system can better differentiate elements using a single scan. However, the spatial resolution achievable with most PCXDs limits their application, particularly in preclinical CT imaging. Consequently, our group is developing a hybrid micro-CT scanner based on a high-resolution, energy-integrating detector (EID) and a lower-resolution PCXD. To complement this system, we propose and demonstrate a hybrid, spectral CT reconstruction algorithm which robustly combines the spectral contrast of the PCXD with the spatial resolution of the EID. Specifically, the high-resolution, spectrally resolved data (X) is recovered as the sum of two matrices: one with low column rank (XL) determined from the EID data and one with intensity-gradient-sparse columns (XS) corresponding to the upsampled spectral contrast obtained from the PCXD data. We test the proposed algorithm in a feasibility study focused on molecular imaging of atherosclerotic plaque using activatable iodine and gold nanoparticles. The results show accurate estimation of material concentrations at increased spatial resolution for a voxel size ratio between the PCXD and the EID of 500 μm³:100 μm³. Specifically, regularized, iterative reconstruction of the MOBY mouse phantom around the K-edges of iodine (33.2 keV) and gold (80.7 keV) reduces the reconstruction error by more than a factor of three relative to least-squares, algebraic reconstruction. Likewise, the material decomposition accuracy into iodine, gold, calcium, and water improves by more than a factor of two.
Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering
Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As the potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).
Joint regularization for spectro-temporal CT reconstruction
D. P. Clark, C. T. Badea
X-ray CT is widely used, both clinically and preclinically, for fast, high-resolution, anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. In previous work, we proposed and demonstrated a projection acquisition and reconstruction strategy for 5D CT (3D + dual-energy + time) which recovered spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. The approach relied on the approximate separability of the temporal and spectral reconstruction sub-problems, which enabled substantial projection undersampling and effective regularization. Here, we extend this previous work to more general, nonseparable 5D CT reconstruction cases (3D + multi-energy + time) with applicability to K-edge imaging of exogenous contrast agents. We apply the newly proposed algorithm in phantom simulations using a realistic system and noise model for a photon counting x-ray detector with six energy thresholds. The MOBY mouse phantom used contains realistic concentrations of iodine, gold, and calcium in water. Relative to weighted least-squares reconstruction, the proposed 5D reconstruction algorithm improved reconstruction and material decomposition accuracy by 3-18 times. Furthermore, by exploiting joint, low-rank image structure between time points and energies, ~80 HU of contrast associated with the K-edge of gold and ~35 HU of contrast associated with the blood pool and myocardium were recovered from more than 400 HU of noise.
Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling
Hao Han, Hao Zhang, Xinzhou Wei, et al.
In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method that draws on prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least-squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to preserve image textures more effectively than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
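The tissue-specific gMRF coefficients above are obtained by regressing each pixel on its neighborhood within a segmented region. A minimal sketch of that regression step for a first-order (3×3, 8-neighbor) model on a synthetic texture (the paper's FVQ segmentation and order selection are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for one segmented tissue region of an NdCT image: noise with
# some spatial correlation so that neighbors carry predictive information.
tex = rng.standard_normal((64, 64))
tex = (tex + np.roll(tex, 1, 0) + np.roll(tex, 1, 1)) / 3.0

# Regression: center pixel ~ linear combination of its 8 neighbors.
offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]
inner = tex[1:-1, 1:-1].ravel()
X = np.stack([tex[1 + di:63 + di, 1 + dj:63 + dj].ravel()
              for di, dj in offs], axis=1)
coef, *_ = np.linalg.lstsq(X, inner, rcond=None)
```

The fitted `coef` vector is the set of tissue-specific weights that would then parameterize the gMRF penalty during PWLS reconstruction; a higher-order model simply enlarges the neighborhood offsets.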
Regularized CT reconstruction on unstructured grid
Computed tomography (CT) is an ill-posed problem. Reconstruction on an unstructured grid reduces the computational cost and alleviates the ill-posedness by decreasing the dimension of the solution space. However, there has been no systematic study of edge-preserving regularization methods for CT reconstruction on unstructured grids. In this work, we propose a novel regularization method for CT reconstruction on unstructured grids, such as triangular or tetrahedral meshes generated from initial images reconstructed via an analytic reconstruction method (e.g., filtered backprojection). The proposed regularization method is modeled as a three-term optimization problem, containing a weighted least-squares fidelity term motivated by the simultaneous algebraic reconstruction technique (SART). The associated cost function contains two non-differentiable terms, which complicate the development of a fast solver. A fixed-point proximity algorithm with SART is developed to solve the optimization problem and accelerate convergence. Finally, we compare the regularized CT reconstruction method to SART combined with different regularization methods. Numerical experiments demonstrate that the proposed regularization method on unstructured grids effectively suppresses noise and preserves edge features.
A new look at signal sparsity paradigm for low-dose computed tomography image reconstruction
Yan Liu, Hao Zhang, William Moore, et al.
Signal sparsity in the computed tomography (CT) image reconstruction field is routinely interpreted as sparse angular sampling around the patient body whose image is to be reconstructed. For clinical CT applications, while the normal tissues may be known and treated as sparse signals, the abnormalities inside the body are usually unknown signals and may not be treated as sparse signals. Furthermore, the locations and structures of abnormalities are also usually unknown, and this uncertainty adds further challenges to interpreting signal sparsity for clinical applications. In this exploratory experimental study, we assume that once the projection data around the continuous body are discretized, regardless of the sampling rate, the image reconstruction of the continuous body from the discretized data becomes a sparse signal problem. We hypothesize that a dense prior model describing the continuous body is a desirable choice for achieving an optimal solution for a given clinical task. We tested this hypothesis by adapting the total variation stroke (TVS) model to describe the continuous body signals and showing the gain over classic filtered backprojection (FBP) across a wide range of angular sampling rates. For the given clinical task of detecting lung nodules of size 5 mm and larger, a consistent improvement of TVS over FBP in nodule detection was observed by an experienced radiologist from low to high sampling rates. This experimental outcome concurs with the expectation of the TVS model. Further investigation into theoretical insights and task-dependent evaluations is needed.
Texture-preserving Bayesian image reconstruction for low-dose CT
Hao Zhang, Hao Han, Yifan Hu, et al.
The Markov random field (MRF) model has been widely used in Bayesian image reconstruction to reconstruct piecewise smooth images in the presence of noise, such as in low-dose X-ray computed tomography (LdCT). While it can preserve edge sharpness via an edge-preserving potential function, its regional smoothing may sacrifice tissue image textures, which have been recognized as useful imaging biomarkers, and thus it compromises clinical tasks such as differentiating malignant from benign lesions, e.g., lung nodules or colon polyps. This study aims to shift the edge-preserving regional noise smoothing paradigm to a texture-preserving framework for LdCT image reconstruction while retaining the advantage of the MRF neighborhood system for edge preservation. Specifically, we adapted the MRF model to incorporate the image textures of lung, bone, fat, muscle, etc., from a previous full-dose CT scan as a priori knowledge for texture-preserving Bayesian reconstruction of current LdCT images. To show the feasibility of the proposed reconstruction framework, experiments using clinical patient scans (with lung nodules or colon polyps) were conducted. The experimental outcomes showed a noticeable gain from the a priori knowledge for LdCT image reconstruction in terms of the well-known Haralick texture measures. Thus, it is conjectured that texture-preserving LdCT reconstruction has advantages over the edge-preserving regional smoothing paradigm for texture-specific clinical applications.
Fast conjugate gradient algorithm extension for analyzer-based imaging reconstruction
Oriol Caudevilla, Jovan G. Brankov
This paper presents an extension of the classic conjugate gradient algorithm. Motivated by the analyzer-based imaging (ABI) inverse problem, the novel method maximizes the Poisson regularized log-likelihood with a non-linear transformation of parameters faster than other solutions. The new approach takes advantage of the special properties of the Poisson log-likelihood to conjugate each ascent direction with respect to all the previous directions taken by the algorithm. Our solution is compared with the general solution for non-quadratic unconstrained problems, the Polak-Ribière formula. Both methods are applied to the ABI reconstruction problem.
Efficient iterative image reconstruction algorithm for dedicated breast CT
Natalia Antropova, Adrian Sanchez, Ingrid S. Reiser, et al.
Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contain high levels of noise. Iterative image reconstruction (IIR) algorithms may be well suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and the positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced by a linear combination of two separately reconstructed images: one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.
Direct reconstruction of enhanced signal in computed tomography perfusion
Bin Li, Qingwen Lyu, Jianhua Ma, et al.
High imaging dose has been a concern in computed tomography perfusion (CTP), as repeated scans are performed at the same location of a patient. On the other hand, signal changes occur only in limited regions of the CT images acquired at different time points. In this work, we propose a new reconstruction strategy that effectively utilizes the initial-phase high-quality CT to reconstruct the later-phase CT acquired with a low-dose protocol. In the proposed strategy, the initial high-quality CT is treated as a base image, and the enhanced signal (ES) is reconstructed directly by minimizing the penalized weighted least-squares (PWLS) criterion. The proposed PWLS-ES strategy converts the conventional CT reconstruction into a sparse signal reconstruction problem. Digital and anthropomorphic phantom studies were performed to evaluate the performance of the proposed PWLS-ES strategy. Both phantom studies show that the proposed PWLS-ES method outperforms the standard iterative CT reconstruction algorithm based on the same PWLS criterion according to various quantitative metrics, including the root mean squared error (RMSE) and the universal quality index (UQI).
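Because the enhanced signal differs from zero only in limited regions, reconstructing it directly is a sparse recovery problem. The sketch below illustrates that idea with a generic L1-penalized least-squares solver (ISTA) on a toy system; the paper's actual PWLS-ES criterion and optimizer may differ:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 32, 24
A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy system matrix
x_base = rng.random(n)                         # initial-phase high-quality image
es_true = np.zeros(n)
es_true[[3, 17]] = [0.8, -0.5]                 # sparse enhancement signal
b = A @ (x_base + es_true)                     # later-phase (noise-free) data

# ISTA on the enhanced signal d:
#   min_d  0.5*||A d - r0||^2 + lam*||d||_1,   r0 = b - A x_base
r0 = b - A @ x_base
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
lam, d = 0.01, np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ d - r0)                     # gradient step
    z = d - g / L
    d = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
```

The reconstructed image is then `x_base + d`; sparsity of `d` is what allows the later-phase scan to be acquired at much lower dose.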
A method for investigating system matrix properties in optimization-based CT reconstruction
Sean D. Rose, Emil Y. Sidky, Xiaochuan Pan
Optimization-based iterative reconstruction methods have shown much promise for a variety of applications in X-ray computed tomography (CT). In these reconstruction methods, the X-ray measurement is modeled as a linear mapping from a finite-dimensional image space to a finite-dimensional data space. This mapping depends on a number of factors, including the basis functions used for image representation and the method by which the matrix representing the mapping is generated. Understanding the properties of this linear mapping and how it depends on our choice of parameters is fundamental to optimization-based reconstruction. In this work, we confine our attention to a pixel basis and propose a method to investigate the effect of pixel size in optimization-based reconstruction. The proposed method provides insight into the tradeoff between higher-resolution image representation and matrix conditioning. We demonstrate this method for a particular breast CT system geometry. We find that the images obtained from accurate solution of a least-squares reconstruction optimization problem are highly sensitive to pixel size within certain regimes. We propose two methods by which this sensitivity can be reduced and demonstrate their efficacy. Our results indicate that the choice of pixel size in optimization-based reconstruction can have a great impact on the quality of the reconstructed image, and that understanding the properties of the linear mapping modeling the X-ray measurement can help guide this choice.
A comparative study of the effects of using normalized patches for penalized likelihood tomographic reconstruction
Patch-based regularization methods, which have proven useful not only for image denoising but also for tomographic reconstruction, penalize image roughness based on the intensity differences between two nearby patches. However, when two patches are not similar in the general sense but still share features in a scaled domain after normalization, the difference between the two patches in the scaled domain is smaller than the intensity difference measured in the standard way. Standard patch-based methods tend to ignore such similarities because of the large intensity differences between the two patches. In this work, for patch-based penalized-likelihood tomographic reconstruction, we propose a new approach to the similarity measure that uses normalized patch differences as well as intensity-based patch differences. A normalized patch difference is obtained by normalizing and scaling the intensity-based patch difference. To selectively take advantage of the standard patch (SP) and normalized patch (NP), we use switching schemes that select either SP or NP based on the gradient of the reconstructed image. In this case, the SP is selected for restoring large-scale piecewise-smooth regions, while the NP is selected for preserving the contrast of fine details. Numerical experiments using a software phantom demonstrate that our proposed methods not only improve overall reconstruction accuracy in terms of the percentage error, but also recover fine details better in terms of the contrast recovery coefficient.
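The distinction between the standard and normalized patch differences can be seen on two patches that share a fine-detail pattern at different contrast. A small sketch, using zero-mean unit-norm scaling as one plausible normalization (the paper's exact normalization may differ):

```python
import numpy as np

def patch_diffs(p, q):
    """Standard (SP) and normalized (NP) differences between two
    flattened patches. The NP compares the patches after zero-mean,
    unit-norm scaling, so shared structure at different contrast
    yields a small difference."""
    sp = np.linalg.norm(p - q)
    def nrm(v):
        v = v - v.mean()
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    npd = np.linalg.norm(nrm(p) - nrm(q))
    return sp, npd

# Two patches with the same fine-detail pattern at different contrast:
base = np.array([0., 1., 0., 1., 0., 1., 0., 1., 0.])
p, q = base, 10.0 + 3.0 * base      # shifted and rescaled copy
sp, npd = patch_diffs(p, q)
```

SP sees a large difference (the patches differ greatly in raw intensity), while NP sees almost none; a switching scheme keyed to the local gradient would choose NP here to preserve the shared fine detail.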
High-resolution and large-volume tomography reconstruction for x-ray microscopy
This paper presents a method of X-ray image acquisition for high-resolution tomography reconstruction that uses a synchrotron radiation light source to reconstruct a three-dimensional tomographic volume dataset for a nanoscale object. For large objects, because of the limited field of view, a projection image of an object has to be taken in several shots from different locations, and an image stitching method used to combine these image blocks. In this study, the overlap between image blocks should be small because our light source is synchrotron radiation and the X-ray dosage should be minimized as much as possible. We use the properties of synchrotron radiation to make image stitching and alignment succeed even when the overlaps between adjacent image blocks are small. In this study, the overlap can be as small as 15% of the size of each image block. During reconstruction, mechanical stability must also be considered because it leads to misalignment problems in tomography. We adopt the feature-based alignment
An adaptive method for weighted median priors in transmission tomography reconstruction
We present an adaptive method of selecting the center weight in the weighted-median prior for penalized-likelihood (PL) transmission tomography reconstruction. While the well-known median filter, which is closely related to the median prior, preserves edges, it has the unfortunate effect of removing fine details because it tends to eliminate any structure that occupies less than half of the window elements. On the other hand, center-weighted median filters can preserve fine details by using relatively large center weights, but large center weights can degrade monotonic regions due to insufficient noise suppression. In this work, to adaptively select the center weight, we first calculate the pixelwise standard deviation over the 3×3 neighbors of each pixel at every PL iteration and measure its cumulative histogram, which is a monotonically non-decreasing 1-D function. We then normalize the resulting function so that its range spans [1, 9]. In this case, the domain of the normalized function represents the standard deviation at each pixel, and the range can be used as the center weight of a 3×3 median window. We implemented the median prior within the PL framework and used an alternating joint minimization algorithm based on a separable paraboloidal surrogates algorithm. The experimental results demonstrate that our proposed method not only strikes a balance between the two extreme cases (the largest and smallest center weights), yielding a good reconstruction over the entire image in terms of the percentage error, but also outperforms the standard method in terms of the contrast recovery coefficient measured in several regions of interest.
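The center-weight selection described above can be sketched directly: compute the local standard deviation map, pass it through its own cumulative histogram, and rescale the result onto [1, 9]. A minimal version (details such as the per-iteration update within the PL loop are omitted):

```python
import numpy as np

def adaptive_center_weights(img):
    """Center weight of a 3x3 weighted-median window, chosen per pixel by
    passing the local (3x3) standard deviation through the cumulative
    histogram of all such deviations, rescaled to the range [1, 9]."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='edge')
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    sd = stack.std(axis=0)                     # pixelwise 3x3 std deviation
    srt = np.sort(sd.ravel())
    cum = np.searchsorted(srt, sd, side='right') / sd.size  # cumulative histogram
    cw = 1.0 + 8.0 * (cum - cum.min()) / (cum.max() - cum.min())
    return np.rint(cw).astype(int)

img = np.zeros((8, 8))
img[4:, :] = 1.0                               # a single horizontal edge
cw = adaptive_center_weights(img)
```

Flat regions map to small center weights (strong smoothing), while pixels near the edge map to large weights (detail preservation), which is exactly the adaptive behavior the method aims for.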
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
Kiyoko Tateishi, Yusaku Yamaguchi, Omar M. Abou Al-Ola, et al.
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter as the time step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
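Discretizing the multiplicative continuous-time system with Euler step γ reproduces the familiar BI-MART update x_j ← x_j · Π_i (b_i/[Ax]_i)^(γ a_ij) over each block. A toy sketch on a small consistent system, with the weighted Kullback-Leibler divergence used to check that the Lyapunov-style decrease actually occurs (illustrative parameters, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((6, 4)) + 0.1        # nonnegative system matrix
x_true = rng.random(4) + 0.5
b = A @ x_true                      # consistent, strictly positive data

def kl(p, q):
    """(Generalized) Kullback-Leibler divergence KL(p || q)."""
    return float(np.sum(p * np.log(p / q) + q - p))

def bi_mart(A, b, n_blocks=2, n_iters=500, gamma=0.1):
    """Block-iterative MART; gamma plays the role of the Euler time step
    of the underlying continuous-time system."""
    m, n = A.shape
    x = np.ones(n)                  # positive initial state
    blocks = [np.arange(s, m, n_blocks) for s in range(n_blocks)]
    for _ in range(n_iters):
        for rows in blocks:
            ratio = b[rows] / (A[rows] @ x)
            # Multiplicative (geometric) Euler step for this block.
            x = x * np.exp(gamma * (A[rows].T @ np.log(ratio)))
    return x

x_hat = bi_mart(A, b)
```

Positivity of the iterates is automatic in the multiplicative form, and shrinking γ corresponds to following the continuous-time flow more faithfully.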
Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views
Huiqiao Xie, Tianye Niu, Yi Yang, et al.
The optimization-based image reconstruction (OBIR) approach has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) by acquiring sparse projection views. However, OBIR usually generates images with a noise texture quite different from that of the widely used clinical reconstruction method (i.e., filtered backprojection, FBP). This may make radiologists/physicians less confident when making clinical decisions. Recognizing that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam-forming devices (e.g., bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In texture-enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized noise projection views. The TxE-OBIR image is then generated by adding the texture image to the OBIR reconstruction. As confirmed qualitatively by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold-standard FBP images.
Acceleration of iterative tomographic image reconstruction by reference-based back projection
The purpose of this paper is to design and implement an efficient iterative reconstruction algorithm for computed tomography. We accelerate the reconstruction speed of the algebraic reconstruction technique (ART), an iterative reconstruction method, by using the result of filtered backprojection (FBP), a widely used analytical reconstruction algorithm, as the initial guess for the first iteration and as the reference at each backprojection stage, respectively. Both improvements reduce the error between the forward projection of each iteration and the measurements. We use three methods of quantitative analysis, root-mean-square error (RMSE), peak signal-to-noise ratio (PSNR), and structural content (SC), to show that our method can reduce the number of iterations by more than half while producing better results than the original ART.
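The benefit of a good initial guess for ART can be seen even on a toy system: starting from an FBP-like approximation of the object leaves less error after the same number of sweeps than starting from zero. A sketch in which the FBP image is simulated as the true object plus a small perturbation (an assumption standing in for an actual FBP reconstruction):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 10))   # toy projection matrix
x_true = rng.random(10)
b = A @ x_true                      # consistent measurements

def art(A, b, x0, n_iters):
    """Kaczmarz/ART row-action sweeps starting from the guess x0."""
    x = x0.copy()
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

# Stand-in for an FBP image: close to the truth, mildly perturbed.
x_fbp = x_true + 0.02 * rng.standard_normal(10)

err_zero = np.linalg.norm(art(A, b, np.zeros(10), 1) - x_true)
err_ref = np.linalg.norm(art(A, b, x_fbp, 1) - x_true)
```

Since each ART step is an orthogonal projection onto a measurement hyperplane, the error norm never increases for consistent data, so a closer starting point directly translates into fewer sweeps for a given accuracy.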
Noise reduction in computed tomography using a multiplicative continuous-time image reconstruction method
In clinical X-ray computed tomography (CT), filtered backprojection as a transform method and iterative reconstruction such as the maximum-likelihood expectation-maximization (ML-EM) method are well-known approaches to reconstructing tomographic images. As an alternative reconstruction method, we have presented a continuous-time image reconstruction (CIR) system described by a nonlinear dynamical system, based on the idea of continuous methods for solving tomographic inverse problems. Recently, we also proposed a multiplicative CIR system described by differential equations based on the minimization of a weighted Kullback-Leibler divergence. We prove theoretically that the divergence measure decreases along the solution to the CIR system for consistent inverse problems. Given the noisy nature of projections in clinical CT, the inverse problem belongs to the category of ill-posed problems. The noise-reduction performance of the proposed multiplicative CIR system was investigated by means of numerical experiments using a circular phantom image. Compared to the conventional CIR and ML-EM methods, the proposed CIR method has an advantage on noisy projections with lower signal-to-noise ratios in terms of the divergence measure on the actual image under the same common measure observed via the projection data. The results lead to the conclusion that the multiplicative CIR method is more effective and robust for noise reduction in CT than the ML-EM and conventional CIR methods.
Assessment of tomographic reconstruction performance using the Mojette transform
The Mojette transform is a discrete and exact Radon transform based on the discrete geometry of the projection and reconstruction lattice. The specific sampling scheme of the Mojette transform yields theoretically exact image reconstruction. In this paper, we compare the reconstructions obtained with the Mojette transform to those obtained with several standard digitized projection/backprojection Radon transforms. These experiments validate and demonstrate the performance of the Mojette transform sampling over classical implementations based on continuous space.
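A Dirac-Mojette projection along a coprime direction (p, q) simply accumulates pixel (k, l) into bin b = -q·k + p·l, which is what makes the transform discrete and exact. A minimal sketch:

```python
import numpy as np

def mojette_projection(img, p, q):
    """Dirac-Mojette projection of a 2-D array along direction (p, q),
    with p and q coprime: pixel (k, l) contributes to bin -q*k + p*l."""
    rows, cols = img.shape
    k, l = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
    bins = -q * k + p * l
    bins -= bins.min()                         # shift bins to start at 0
    proj = np.zeros(bins.max() + 1)
    np.add.at(proj, bins.ravel(), img.ravel()) # exact (unbuffered) accumulation
    return proj

img = np.arange(16.0).reshape(4, 4)
p10 = mojette_projection(img, 1, 0)            # column sums
p11 = mojette_projection(img, 1, 1)            # diagonal sums
```

For a P×Q image the direction (p, q) yields (Q-1)|p| + (P-1)|q| + 1 bins, so the sampling adapts to the direction, in contrast with the fixed detector grid of a digitized Radon transform.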
Poster Session: Measurements
A wide-acceptance Compton spectrometer for spectral characterization of a medical x-ray source
Michelle A. Espy, A. Gehring, A. Belian, et al.
Accurate knowledge of the X-ray spectra used in medical treatment and radiography is important for dose calculations and material decomposition analysis. Indirect measurements via transmission through materials are possible. However, such spectra are challenging to measure directly because of the high photon fluxes. One method of direct measurement is the Compton spectrometer (CS) approach, in which the X-rays are converted to a much lower flux of electrons via Compton scattering on a converter foil (typically beryllium or aluminum). The electrons are then momentum-selected by bending in a magnetic field. With a tight angular acceptance of ~1 deg for electrons entering the magnet, there is a linear correlation between incident photon energy and the electron position recorded on an image plate. Here we present measurements of the bremsstrahlung spectrum from a medical therapy machine, a Scanditronix M22 Microtron. Spectra with energy endpoints from 6 to 20 MeV are directly measured using a CS with a wide energy range from 0.5 to 20 MeV. We discuss the sensitivity of the device and the effects of converter material and collimation on the accuracy of the reconstructed spectra. Approaches to improving the sensitivity, including the use of coded apertures, and potential future applications to spectrum characterization are also discussed.
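The mapping from electron energy to position on the image plate follows from the magnetic rigidity relation r = p/(eB); at the relativistic energies involved here, pc ≈ T, which is why the photon-energy-to-position correlation is nearly linear. A sketch with an illustrative field strength (not the instrument's actual value):

```python
import math

def electron_position_cm(T_mev, B_tesla=0.2):
    """Image-plate position (diameter of the half-circle orbit, in cm) for
    a Compton electron of kinetic energy T; B is an illustrative value."""
    mc2 = 0.511                                      # electron rest energy, MeV
    pc = math.sqrt(T_mev ** 2 + 2.0 * T_mev * mc2)   # relativistic momentum, MeV
    r_m = pc * 1e6 / (B_tesla * 2.998e8)             # r = p/(eB), with pc in eV
    return 2.0 * r_m * 100.0

# Forward-scattered electrons carry nearly the full photon energy at
# these MeV-scale energies, so position tracks photon energy.
x_low, x_high = electron_position_cm(0.5), electron_position_cm(20.0)
```

At 20 MeV the rest-mass term is a small correction, so doubling the energy very nearly doubles the plate position, consistent with the linear correlation exploited by the spectrometer.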
X-ray spectrum estimation from transmission measurements by an exponential of a polynomial model
There has been much recent research effort directed toward spectral computed tomography (CT). An important step in realizing spectral CT is determining the spectral response of the scanning system so that the relation between material thicknesses and X-ray transmission intensity is known. We propose a few parameter spectrum model that can accurately model the X-ray transmission curves and has a form which is amenable to simultaneous spectral CT image reconstruction and CT system spectrum calibration. While the goal is to eventually realize the simultaneous image reconstruction/spectrum estimation algorithm, in this work we investigate the effectiveness of the model on spectrum estimation from simulated transmission measurements through known thicknesses of known materials. The simulated transmission measurements employ a typical X-ray spectrum used for CT and contain noise due to the randomness in detecting finite numbers of photons. The proposed model writes the X-ray spectrum as the exponential of a polynomial (EP) expansion. The model parameters are obtained by use of a standard software implementation of the Nelder-Mead simplex algorithm. The performance of the model is measured by the relative error between the predicted and simulated transmission curves. The estimated spectrum is also compared with the model X-ray spectrum. For reference, we also employ a polynomial (P) spectrum model and show performance relative to the proposed EP model.
A dual-energy medical instrument for measurement of x-ray source voltage and dose rate
V. D. Ryzhikov, S. V. Naydenov, V. G. Volkov, et al.
An original dual-energy detector and medical instrument have been developed to measure the output voltages and dose rates of X-ray sources. Theoretical and experimental studies were carried out to characterize the parameters of a new scintillator-photodiode sandwich detector based on specially prepared zinc selenide crystals, in which the low-energy detector (LED) works both as the detector of the low-energy radiation and as an absorption filter allowing the high-energy fraction of the radiation to pass through to the high-energy detector (HED). The use of the LED as a low-energy filter in combination with a separate HED opens broad possibilities for such sandwich structures. In particular, it becomes possible to analyze and process the sum, difference and ratio of signals coming from these detectors, ensuring a broad (up to 10⁶) measurement range of X-ray intensity from the source and a leveling of the energy dependence. We have chosen an optimum design of the detector and a geometry of the component LED and HED parts that allow energy-dependence leveling to within specified limits. The deviation in energy dependence of the detector does not exceed about 5% in the energy range from 30 to 120 keV. The developed detector and instrument allow contactless measurement of the anode voltage of an X-ray emitter from 40 to 140 kV with an error no greater than 3%. The dose rate measurement range is from 1 to 200 R/min. The instrument has passed clinical testing and was recommended for use in medical institutions for X-ray diagnostics.
Poster Session: New Systems and Technologies
In vivo small animal lung speckle imaging with a benchtop in-line XPC system
A. B. Garson III, S. Gunsten, S. Vasireddi, et al.
X-ray phase-contrast (XPC) images of mouse lungs were acquired in vivo with a benchtop XPC system employing a conventional microfocus source. A strong speckled intensity pattern was present in the lung regions of the XPC radiographs, previously observed only in synchrotron experiments and in situ benchtop studies. We show how the texture characteristics of the speckle are influenced by the amount of air present in the lungs at different points in the breathing cycle.
Seventh-generation CT
A new dual-drum CT system architecture has recently been introduced with the potential to achieve significantly higher temporal resolution than is currently possible in medical CT imaging. The concept relies only on known technologies; in particular, rotation speeds several times higher than what is possible today could be achieved leveraging typical x-ray tube designs and capabilities. Moreover, the architecture lends itself to the development of a new arrangement of x-ray sources in a toroidal vacuum envelope containing a rotating cathode ring and an (optionally rotating) shared anode ring, to potentially obtain increased individual beam power as well as increased total exposure per rotation. The new x-ray source sub-system design builds on previously described concepts and could make the provision of multiple conventional high-power cathodes in a CT system practical by distributing the anode target between the cathodes. In particular, relying on known magnetic-levitation technologies, it is in principle possible to more than double the relative speed of the electron beam with respect to the target, thus potentially leading to significant individual beam power increases as compared to today's state of the art. In one embodiment, the proposed design can be naturally leveraged by the dual-drum CT concept previously described to alleviate the problem of arranging a number of conventional rotating anode-stem x-ray tubes and power conditioners in the limited space of a CT gantry. In another embodiment, a system with three cathodes is suggested, leveraging the architecture previously proposed by Franke.
A glass-sealed field emission x-ray tube based on carbon nanotube emitter for medical imaging
Seung Jun Yeo, Jaeik Jeong, Jeung Sun Ahn, et al.
We report the design and fabrication of a glass-sealed field emission x-ray tube, based on a carbon nanotube emitter, that operates without a vacuum pump. The x-ray tube consists of four electrodes: an anode, a focuser, a gate, and a cathode. The cathode is rectangular in shape to produce an isotropic focal spot at the anode target. The obtained x-ray images clearly resolve micrometer-scale features.
Organ radiation exposure with EOS: GATE simulations versus TLD measurements
A. H. Clavel, P. Thevenard-Berger, F. R. Verdun, et al.
EOS® is an innovative X-ray imaging system allowing the acquisition of two simultaneous images of a patient in the standing position during the vertical scan of two orthogonal fan beams. This study aimed to compute the radiation exposure of a patient's organs in the particular geometry of this system. Two different positions of the patient in the machine were studied, corresponding to postero-anterior plus left lateral projections (PA-LLAT) and antero-posterior plus right lateral projections (AP-RLAT). To achieve this goal, a Monte Carlo simulation was developed based on the GATE environment. To model the physical properties of the patient, a computational phantom was produced based on computed tomography scan data of an anthropomorphic phantom. The simulations provided doses for several organs, which were compared to previously published dose results measured with thermoluminescent detectors (TLD) under the same conditions and with the same phantom. The simulation results showed good agreement with measured doses at the TLD locations for both AP-RLAT and PA-LLAT projections. This study also showed that assessing organ dose from only a sample of locations, rather than considering the whole organ, introduced significant bias, depending on the organ and projection.
Scattering-compensated cone beam x-ray luminescence computed tomography
Peng Gao, Junyan Rong, Huangsheng Pu, et al.
X-ray luminescence computed tomography (XLCT) opens new possibilities for performing molecular imaging with x-rays. It is a dual-modality imaging technique based on the principle that some nanophosphors can emit near-infrared (NIR) light when excited by x-rays. The x-ray scattering effect is a significant issue in both CT and XLCT reconstruction. It has been shown that if the scattering effect is compensated, the average relative error of reconstruction can be reduced from 40% to 12% in pencil beam XLCT. However, the scattering effect in cone beam XLCT has not been investigated. To verify and reduce the scattering effect, we propose scattering-compensated cone beam x-ray luminescence computed tomography, using added lead shielding to block stray x-rays outside the irradiated phantom and thereby decrease the scattering effect. Phantom experiments with two tubes filled with Y2O3:Eu3+ indicated that the proposed method could reduce the scattering by about 30% and reduce the location error from 1.8 mm to 1.2 mm. Hence, the proposed method is applicable to general cases and actual experiments, and it is easy to implement.
Microstructure analysis of the pulmonary acinus by a synchrotron radiation CT
K. Minami, K. Maeda, Y. Kawata, et al.
Micro-level imaging of the normal lung and of lungs with very early stage disease, together with quantitative analysis of the morphology in the images, can contribute to the next generation of thoracic image diagnosis. Acquiring very fine CT images with high-luminance synchrotron radiation CT is necessary for such imaging. The purpose of this study is to analyze the structure of secondary pulmonary lobules. We also show the structure of the secondary pulmonary lobule over a wider field of view through image reconfiguration from the projection images of the synchrotron radiation CT.
Ultrasound waveform tomography with the second-order total-generalized-variation regularization
Ultrasound waveform tomography with the total-variation regularization could improve reconstructions of tumor margins, but the reconstructions usually contain unwanted blocky artifacts. We develop a new ultrasound waveform tomography method with a second-order total-generalized-variation regularization scheme to improve tomographic reconstructions of breast tumors and remove blocky artifacts in reconstruction results. We validate our new method using numerical phantom data and real phantom data acquired using our synthetic-aperture breast ultrasound tomography system with two parallel transducer arrays. Compared to reconstructions of ultrasound waveform tomography with modified total-variation regularization, our new ultrasound waveform tomography yields accurate sound-speed reconstruction results with significantly reduced artifacts.
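The "blocky artifact" motivation can be illustrated numerically: first-order total variation cannot distinguish a smooth ramp from a staircase with the same endpoints, while a second-order difference term (of the kind TGV-type regularizers bring in, shown here in simplified form) can. A minimal sketch, not taken from the paper:

```python
def tv(u):
    # First-order total variation: sum of |first differences|
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def tv2(u):
    # Simplified second-order penalty: sum of |second differences|
    return sum(abs(u[i + 2] - 2 * u[i + 1] + u[i])
               for i in range(len(u) - 2))

n = 8
ramp = [i / (n - 1) for i in range(n)]   # smooth linear ramp
stair = [0, 0, 0, 0, 1, 1, 1, 1]         # blocky approximation

# Both signals are monotone with the same endpoints, so their TV is
# identical: first-order TV cannot prefer the ramp over the staircase.
# The second-order term vanishes on the ramp but not on the staircase.
```

This is why a second-order total-generalized-variation term helps remove staircase (blocky) artifacts while still allowing sharp tumor margins.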
Poster Session: PET, SPECT, MR, Ultrasound
Quantitative evaluation of susceptibility effects caused by dental materials in head magnetic resonance imaging
S. Strocchi, M. Ghielmi, F. Basilico, et al.
This work quantitatively evaluates the effects induced by the susceptibility characteristics of materials commonly used in dental practice on the quality of head MR images in a clinical 1.5 T device. The proposed evaluation procedure measures the image artifacts induced by susceptibility in MR images by providing an index consistent with the global degradation as perceived by experts. Susceptibility artifacts were evaluated in a near-clinical setup, using a phantom with susceptibility and geometric characteristics similar to those of a human head. We tested different dental materials, namely PAL Keramit, Ti6Al4V-ELI, Keramit NP, ILOR F, and Zirconia, and used different clinical MR acquisition sequences, such as "classical" SE as well as fast, gradient, and diffusion sequences. The evaluation is designed as a matching process between reference and artifact-affected images recording the same scene. The extent of the degradation induced by susceptibility is then measured in terms of similarity with the corresponding reference image. The matching process involves a multimodal registration task and the use of an adequate, psychophysically validated similarity index based on the correlation coefficient. The proposed analyses are integrated within a computer-supported procedure that interactively guides the users through the different phases of the evaluation method. 2-dimensional and 3-dimensional indexes are used for each material and each acquisition sequence. From these, we drew a ranking of the materials, averaging the results obtained. Zirconia and ILOR F appear to be the best choice from the susceptibility artifact point of view, followed, in order, by PAL Keramit, Ti6Al4V-ELI and Keramit NP.
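A correlation-coefficient similarity index of the kind mentioned above can be sketched as follows (a toy 1D version with made-up pixel values; the actual procedure operates on registered 2D/3D images):

```python
import math

def correlation_index(ref, img):
    # Pearson correlation coefficient between a reference image and an
    # artifact-affected image, both flattened to equal-length lists.
    n = len(ref)
    mr = sum(ref) / n
    mi = sum(img) / n
    cov = sum((r - mr) * (x - mi) for r, x in zip(ref, img))
    sr = math.sqrt(sum((r - mr) ** 2 for r in ref))
    si = math.sqrt(sum((x - mi) ** 2 for x in img))
    return cov / (sr * si)

ref = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
distorted = [0.1, 1.2, 1.8, 2.9, 2.2, 0.9]  # mildly artifact-affected (toy)
score = correlation_index(ref, distorted)   # close to 1 for mild artifacts
```

A value near 1 indicates little susceptibility-induced degradation; stronger artifacts pull the index down.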
Effects of tissue heterogeneity on single-coil, scanning MIT imaging
J. R. Feldkamp, S. Quirk
We recently reported on the use of a single induction coil to accomplish imaging of the electrical conductivity in human tissues via magnetic induction tomography (MIT). A key to the method was the development of a mapping equation that quantitatively relates an arbitrary electrical conductivity distribution to ohmic loss in a coil consisting of concentric circular loops in a plane. By making multiple coil loss measurements at a number of locations in the vicinity of the target (a scan), this mapping equation can be used to build an algorithm for 3D image construction of electrical conductivity. Important assumptions behind the mathematical formula include uniform relative permittivity throughout all space and continuous variation in conductivity. In this paper, these two assumptions were tested in a series of experiments involving human tissue phantoms created from agarose, doped with sufficient sodium chloride to yield physiological conductivities. Inclusions of doped agarose were scanned both while isolated and while embedded in a matrix of agarose gel having lowered conductivity, to help evaluate the effects of abrupt permittivity change. The effects of discontinuous conductivity change were simulated by filling 5 cm diameter petri dishes with 1.4% aqueous KCl and placing them in a much larger, 14 cm diameter petri dish, with the gap distance varied from about 3 mm to 30 mm. In both cases, we show that these effects are minimal in the resultant images, helping to further validate the mapping equation used to construct MIT images. Because of their simplicity, the scans reported here did not include coil rotation. To acknowledge the importance of rotation, however, we have devoted a section of this work to illustrating the profound benefits of coil rotation during a scan, though virtual data are used, where coil rotation is more easily specified.
Quantitative evaluation of PET image using event information bootstrap
Hankyeol Song, Shin Hye Kwak, Kyeong Min Kim, et al.
The purpose of this study was to evaluate the improvement in PET image quality achievable by event-level bootstrap resampling of small-animal PET data. In order to investigate the time difference condition, realigned sinograms were generated from randomly sampled data sets using the bootstrap. List-mode data were obtained from a small-animal PET scanner for Ge-68 (30 sec), Y-90 (20 min), and Y-90 (60 min). PET images were reconstructed by ordered subset expectation maximization (OSEM) 2D from the list-mode format. Image analysis was performed using the signal-to-noise ratio (SNR) of the Ge-68 and Y-90 images. The SNR percent change of the non-parametrically resampled PET image for Ge-68 30 sec, Y-90 60 min, and Y-90 20 min was 1.69%, 7.03%, and 4.78%, respectively. The SNR percent change of the non-parametrically resampled PET image with the time difference condition was 1.08% for Ge-68 30 sec, 6.74% for Y-90 60 min, and 10.94% for Y-90 20 min. The results indicated that bootstrap with the time difference condition has the potential to improve noisy Y-90 PET image quality. This method is expected to reduce Y-90 PET measurement time and to enhance its accuracy.
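Nonparametric bootstrap of a list-mode event stream can be sketched as below. This is illustrative only: the event values, the choice of per-resample statistic, and the SNR definition are assumptions, not the authors' exact pipeline (which resamples events into realigned sinograms before OSEM reconstruction).

```python
import random
import statistics

def bootstrap_snr(events, n_resamples, seed=0):
    # Nonparametric bootstrap: draw len(events) events with replacement
    # per resample and track a per-resample statistic (here, the mean).
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        sample = rng.choices(events, k=len(events))
        means.append(sum(sample) / len(sample))
    # Bootstrap SNR: statistic mean over its bootstrap standard error
    return statistics.mean(means) / statistics.pstdev(means)

gen = random.Random(1)
events = [gen.gauss(100.0, 15.0) for _ in range(500)]  # toy event values
snr = bootstrap_snr(events, 200)
```

Comparing such bootstrap SNR estimates before and after adding a time-difference condition mirrors the percent-change comparison reported in the abstract.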
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
Andriy Andreyev
There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used by the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative origin ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode, and we investigate whether it is a viable alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along the corresponding lines-of-response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts following the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, proving the feasibility of the proposed approach.
In vitro flow assessment: from PC-MRI to computational fluid dynamics including fluid-structure interaction
Jonas Kratzke, Fabian Rengier, Christian Weis, et al.
Initiation and development of cardiovascular diseases can be highly correlated with specific biomechanical parameters. To examine and assess biomechanical parameters, numerical simulation of cardiovascular dynamics has the potential to complement and enhance medical measurement and imaging techniques. Computational fluid dynamics (CFD) has been shown to be suitable for evaluating blood velocity and pressure in scenarios where vessel wall deformation plays a minor role. However, there is a need for further validation studies and for the inclusion of vessel wall elasticity for morphologies subject to large displacement. In this work, we consider a fluid-structure interaction (FSI) model including the full elasticity equation to take the deformability of aortic wall soft tissue into account. We present a numerical framework in which either a CFD study can be performed for less deformable aortic segments or an FSI simulation for regions of large displacement, such as the aortic root and arch. Both methods are validated by means of an aortic phantom experiment. The computational results are in good agreement with 2D phase-contrast magnetic resonance imaging (PC-MRI) velocity measurements as well as catheter-based pressure measurements. The FSI simulation shows a characteristic vessel compliance effect on the flow field induced by the elasticity of the vessel wall, which the CFD model cannot capture. The in vitro validated FSI simulation framework can enable the computation of complementary biomechanical parameters such as the stress distribution within the vessel wall.
Quantitative analysis of L-SPECT system for small animal brain imaging
Tasneem Rahman, Murat Tahtali, Mark R. Pickering
This paper aims to investigate the performance of a newly proposed L-SPECT system for small animal brain imaging. The L-SPECT system consists of a 100 × 100 array of pinholes with micrometer-range diameters. The proposed detector module has a 48 mm by 48 mm active area, and the system is based on a pixelated array of NaI crystals (10 × 10 × 10 mm elements) coupled with an array of position-sensitive photomultiplier tubes (PSPMTs). The performance of this system was evaluated with pinhole radii of 50 μm, 60 μm and 100 μm. Monte Carlo simulation studies using the Geant4 Application for Tomographic Emission (GATE) software package validate the performance of this novel dual-head L-SPECT system, where a geometric mouse phantom is used to investigate its performance. All SPECT data were obtained using 120 projection views from 0° to 360° with a 3° step. Slices were reconstructed using a conventional filtered back projection (FBP) algorithm. We evaluated image quality in terms of spatial resolution (FWHM) based on the line spread function, system sensitivity, the point source response function, and overall image quality. The sensitivity of our newly proposed L-SPECT system was about 4500 cps/μCi at 6 cm, along with excellent full width at half maximum (FWHM) using a 50 μm pinhole aperture at several radii of rotation. The analysis results show a combination of excellent spatial resolution and high detection efficiency over an energy range between 20 and 160 keV. The results demonstrate that SPECT imaging using a pixelated L-SPECT detector module is applicable to quantitative studies of mouse brain imaging.
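Estimating FWHM from a sampled line spread function, as used for the spatial-resolution figures above, might look like this (an illustrative sketch with a synthetic Gaussian LSF, not the authors' code):

```python
import math

def fwhm(xs, ys):
    # FWHM via linear interpolation at the two half-maximum crossings
    half = max(ys) / 2.0

    def cross(i):
        # Interpolate the crossing between samples i and i+1
        x0, y0, x1, y1 = xs[i], ys[i], xs[i + 1], ys[i + 1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = next(cross(i) for i in range(len(ys) - 1)
                if ys[i] < half <= ys[i + 1])
    right = next(cross(i) for i in reversed(range(len(ys) - 1))
                 if ys[i] >= half > ys[i + 1])
    return right - left

# Synthetic Gaussian LSF: its FWHM should be close to 2.3548 * sigma
sigma = 0.8
xs = [0.05 * i - 4.0 for i in range(161)]
ys = [math.exp(-x * x / (2.0 * sigma ** 2)) for x in xs]
width = fwhm(xs, ys)
```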
Characterization of various tissue mimicking materials for medical ultrasound imaging
Audrey Thouvenot, Tamie Poepping, Terry M. Peters, et al.
Tissue-mimicking materials are physical constructs exhibiting certain desired properties, which are used in machine calibration, medical imaging research, surgical planning, training, and simulation. For medical ultrasound, those specific properties include acoustic propagation speed and attenuation coefficient over the diagnostic frequency range. We investigated the acoustic characteristics of polyvinyl chloride (PVC) plastisol, polydimethylsiloxane (PDMS), and isopropanol using a time-of-flight technique, where a pulse was passed through a sample of known thickness contained in a water bath. The propagation speed in PVC is approximately 1400 m s⁻¹ depending on the exact chemical composition, with the attenuation coefficient ranging from 0.35 dB cm⁻¹ at 1 MHz to 10.57 dB cm⁻¹ at 9 MHz. The propagation speed in PDMS is in the range of 1100 m s⁻¹, with an attenuation coefficient of 1.28 dB cm⁻¹ at 1 MHz to 21.22 dB cm⁻¹ at 9 MHz. At room temperature (22 °C), a mixture of water and isopropanol (7.25% isopropanol by volume) exhibits a propagation speed of 1540 m s⁻¹, making it an excellent and inexpensive tissue-mimicking liquid for medical ultrasound imaging.
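A substitution-style time-of-flight calculation consistent with the technique described can be sketched as below (the sample thickness, water speed, and sign convention for the time shift are assumptions for illustration):

```python
def sample_speed(thickness_m, c_water, delta_t_s):
    # Substitution time-of-flight: inserting a sample of thickness d into
    # the water path shifts the pulse arrival by
    #   delta_t = d / c_sample - d / c_water,
    # so c_sample = d / (d / c_water + delta_t).
    return thickness_m / (thickness_m / c_water + delta_t_s)

# PVC-like round-trip example: 10 mm sample, water at ~1482 m/s (22 C)
d = 0.010
c_w = 1482.0
dt = d / 1400.0 - d / c_w          # the shift a 1400 m/s sample would cause
c_pvc = sample_speed(d, c_w, dt)   # recovers 1400 m/s
```

A slower-than-water sample delays the pulse (positive delta_t), a faster one advances it.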
Poster Session: Phase Contrast Imaging
Construction and evaluation of a high-energy grating-based x-ray phase-contrast imaging setup
Christian Hauke, Florian Horn, Georg Pelzer, et al.
Interferometric x-ray imaging is becoming increasingly attractive for applications such as medical imaging or non-destructive testing, because it provides the opportunity to obtain additional information on the internal structure of radiographed objects [1, 2]. To this end, three types of images are acquired: an attenuation image as in conventional x-ray imaging, an image of the differential phase shift generated by the object, and the so-called dark-field image, which contains information about the object's granularity even on a sub-pixel scale [3]. However, most experiments addressing grating-based x-ray phase-contrast imaging with polychromatic sources are restricted to energies up to about 40 keV. For the application of this imaging method to thicker objects, such as human specimens or dense components, higher tube voltages are required. For this reason, we designed and constructed a laboratory setup for high energies, which is able to image larger objects [4]. To evaluate the performance of the setup, the mean visibility over the field of view was measured for several tube voltages. The result shows that the mean visibility has a peak value of 23% at a tube voltage of 60 kV and remains greater than 16% up to a tube voltage of 120 kV. Thus, good image quality is provided even at high energies. To further substantiate the performance of the setup at high energies, a human ex-vivo foot was examined at a tube voltage of 75 kV. The interferometric x-ray images show good image quality and promising diagnostic power.
Feasibility of using energy-resolving detectors in differential phase-contrast imaging
In a common clinical setting, conventional absorption-based imaging provides relatively good contrast between bone-like and soft-tissue materials. The reliability of material differentiation, however, is hampered when materials with similar absorption properties are scanned. This problem can be addressed by utilizing a spectral imaging technique whereby multiple X-ray measurements are taken at different beam conditions. In this work, we discuss the possibility of using a spectral imaging approach in a grating-based differential phase-contrast imaging (DPCI) modality. Two approaches, dual exposure with a conventional flat-panel detector (FPD) and single exposure with a photon-counting energy-resolving detector (PCD), were reviewed. The feasibility of a single-exposure DPCI and two-bin PCD setup was assessed quantitatively by a least-squares minimization algorithm applied to an X-ray diffraction pattern. It was shown that a two-peak-shaped X-ray spectrum can allow PCDs to be placed unambiguously at single Talbot distances, making it possible to simultaneously detect photons in each energy bin with comparable efficiencies. The results of this work can help build a bridge between two rapidly developing imaging modalities, X-ray spectral imaging and X-ray DPCI.
Joint reconstruction of absorption and refractive properties in propagation-based x-ray phase-contrast tomography via a non-linear image reconstruction algorithm
Propagation-based X-ray phase-contrast tomography (XPCT) provides the opportunity to image weakly absorbing objects and is being explored actively for a variety of important pre-clinical applications. Quantitative XPCT image reconstruction methods typically involve a phase retrieval step followed by application of an image reconstruction algorithm. Most approaches to phase retrieval require either acquiring multiple images at different object-to-detector distances or introducing simplifying assumptions, such as a single-material assumption, to linearize the imaging model. In order to overcome these limitations, a non-linear image reconstruction method has been proposed previously that jointly estimates the absorption and refractive properties of an object from XPCT projection data acquired at a single propagation distance, without the need to linearize the imaging model. However, the numerical properties of the associated non-convex optimization problem remain largely unexplored. In this study, computer simulations are conducted to investigate the feasibility of the joint reconstruction problem in practice. We demonstrate that the joint reconstruction problem is ill-posed and sensitive to system inconsistencies. Particularly, the method can generate accurate refractive index images only if the object is thin and has no phase-wrapping in the data. However, we also observed that, for weakly absorbing objects, the refractive index images reconstructed by the joint reconstruction method are, in general, more accurate than those reconstructed using methods that simply ignore the object’s absorption.
Improvement of the visibility for x-ray phase contrast imaging using photon counting detector
S. Sano, K. Tanabe, T. Yoshimuta, et al.
When a Talbot interferometer is employed for medical imaging, a practical X-ray tube must be combined with the interferometer. Practical X-ray tubes radiate continuous X-rays, and the interference intensity (the so-called visibility) degrades because of the wide spectrum of continuous X-rays. In order to achieve high visibility, we have estimated the visibility improvement obtainable with a photon counting detector (PCD). The spectra detected with a 2D imaging PCD are distorted by charge sharing and pileup, which would make the visibility worse. First, we built a Monte Carlo model to calculate the distorted spectra and the point spread function (PSF) for charge sharing. The calculation model is based on the summation of the monochromatic response function, which is the charge detected on the pixel of interest for a single photon injection. Distortion of the spectra was calculated taking into account the charge sharing effect and pulse pileup. We then obtained an estimate of the visibility improvement using a CdTe PCD. The visibilities of a CdTe energy integrating detector (EID) and the PCD were calculated and compared, where the Talbot interferometer is of the fringe-scanning type using a phase grating and an absorption grating. The visibility of the EID is 36% and that of the PCD is 60% without the pileup effect. Under high dose-rate conditions, the CNR degradation is pronounced. The visibility-decreasing effect and the quantum-noise-increasing effect are correlated, and both effects worsen the CNR.
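The visibility figures quoted above are conventionally computed from the phase-stepping curve as V = (Imax - Imin) / (Imax + Imin). A minimal sketch with a synthetic stepping curve (the offset and amplitude are assumed values):

```python
import math

def visibility(intensities):
    # Fringe visibility of a phase-stepping curve
    hi, lo = max(intensities), min(intensities)
    return (hi - lo) / (hi + lo)

# Synthetic stepping curve I_k = a0 + a1 * cos(2*pi*k/N + phi)
N, a0, a1, phi = 8, 1000.0, 600.0, 0.0
curve = [a0 + a1 * math.cos(2.0 * math.pi * k / N + phi) for k in range(N)]
v = visibility(curve)   # a1/a0 = 0.6 when the extrema are sampled
```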
Quantification of signal detection performance degradation induced by phase-retrieval in propagation-based x-ray phase-contrast imaging
In propagation-based X-ray phase-contrast (PB XPC) imaging, the measured image contains a mixture of absorption- and phase-contrast. To obtain separate images of the projected absorption and phase (i.e., refractive) properties of a sample, phase retrieval methods can be employed. It has been suggested that phase-retrieval can always improve image quality in PB XPC imaging. However, when objective (task-based) measures of image quality are employed, this is not necessarily true and phase retrieval can be detrimental. In this work, signal detection theory is utilized to quantify the performance of a Hotelling observer (HO) for detecting a known signal in a known background. Two cases are considered. In the first case, the HO acts directly on the measured intensity data. In the second case, the HO acts on either the retrieved phase or absorption image. We demonstrate that the performance of the HO is superior when acting on the measured intensity data. The loss of task-specific information induced by phase-retrieval is quantified by computing the efficiency of the HO as the ratio of the test statistic signal-to-noise ratio (SNR) for the two cases. The effect of the system geometry on this efficiency is systematically investigated. Our findings confirm that phase-retrieval can impair signal detection performance in XPC imaging.
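The Hotelling observer test statistic SNR and the efficiency ratio described above can be sketched for a toy two-pixel image (the signal and covariance values are invented for illustration; the study uses full image covariances):

```python
def hotelling_snr(delta_s, cov):
    # Hotelling observer SNR for a known signal in a known background:
    # SNR^2 = ds^T K^{-1} ds, here for a 2-pixel image with a 2x2
    # covariance matrix K given as [[a, b], [c, d]].
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    kds = [inv[0][0] * delta_s[0] + inv[0][1] * delta_s[1],
           inv[1][0] * delta_s[0] + inv[1][1] * delta_s[1]]
    return (delta_s[0] * kds[0] + delta_s[1] * kds[1]) ** 0.5

# Observer acting on raw intensity data vs. on processed (e.g.
# phase-retrieved) data with a different, correlated covariance.
snr_raw = hotelling_snr([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]])
snr_proc = hotelling_snr([3.0, 4.0], [[2.0, 0.5], [0.5, 2.0]])

# Efficiency = ratio of squared test-statistic SNRs; values below 1
# quantify the task-specific information lost by the processing step.
efficiency = (snr_proc / snr_raw) ** 2
```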
Quantitative imaging of the microbubble concentrations by using an in-line phase contrast tomosynthesis prototype: a preliminary phantom study
The purpose of this study is to demonstrate the feasibility of using a high-energy in-line phase contrast tomosynthesis system to quantitatively image microbubbles in a tissue-simulating phantom under a limited radiation dose. The imaging system used in the investigation was a benchtop in-line phase contrast tomosynthesis prototype operated at 120 kVp tube voltage and 0.5 mA tube current. A primary beam filter made of 2.3 mm Cu, 0.8 mm Pb and 1.0 mm Al was employed to maximize the portion of x-ray photons with energies above 60 keV. The tissue-simulating phantom was built from three acrylic slabs and a wax slab to mimic a 40 mm thick compressed breast. Two tiny structures with an average depth of 1 mm were engraved on two different layers. Microbubble suspensions with different concentrations were injected into those tiny structures. The acquired in-line phase contrast angular projections were used to reconstruct the in-plane slices of the tiny structures on different layers. The CNRs versus microbubble concentrations were investigated. As a result, the microbubble suspensions were clearly visible, showing higher CNR when compared with the areas with no microbubbles. Furthermore, a monotonically increasing relation between CNRs and microbubble concentrations was observed after calculating the area CNR of the phase contrast tomosynthesis slices.
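A CNR calculation of the kind used in such phantom studies might look like the following (toy ROI values, not measured data):

```python
import statistics

def cnr(signal_pixels, background_pixels):
    # Contrast-to-noise ratio of a region of interest:
    # CNR = (mean_signal - mean_background) / std_background
    contrast = (statistics.mean(signal_pixels)
                - statistics.mean(background_pixels))
    return contrast / statistics.pstdev(background_pixels)

# Toy ROIs: a higher microbubble concentration raises the ROI mean,
# and hence the CNR, against the same background region.
background = [100, 102, 98, 101, 99, 100, 103, 97]
low_conc = [106, 108, 104, 107, 105, 106]
high_conc = [118, 121, 117, 120, 119, 118]
cnr_low = cnr(low_conc, background)
cnr_high = cnr(high_conc, background)
```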
Evaluation of a new reconstruction algorithm for x-ray phase-contrast imaging
Maria Seifert, Christian Hauke, Florian Horn, et al.
X-ray grating-based phase-contrast imaging might open up entirely new opportunities in medical imaging. However, when transferring the interferometer technique from laboratory setups to conventional imaging systems, the necessary rigidity of the system is difficult to achieve. Vibrations or distortions of the system therefore lead to inaccuracies within the phase-stepping procedure. Given insufficient stability of the phase-step positions, artifacts occur in the phase-contrast images, which lower the image quality. This is a problem with regard to the intended use of phase-contrast imaging in clinical routine, as, for example, tiny structures of the human anatomy cannot be observed. In this contribution we evaluate an algorithm proposed by Vargas et al. [1] and applied to X-ray imaging by Pelzer et al., which enables us to reconstruct a differential phase-contrast image without knowledge of the specific phase-step positions. This method was tested in comparison to the standard reconstruction by Fourier analysis. The quality of the phase-contrast images remains stable, even if the phase-step positions are completely unknown and not uniformly distributed. To also obtain attenuation and dark-field images, the proposed algorithm has been combined with a further algorithm of Vargas et al. [3]. Using this algorithm, the phase-step positions can be reconstructed. With the proper phase-step positions it is possible to obtain the phase, the amplitude, and the offset of the measured data. We evaluated this algorithm for the measurement of thick objects with high absorbency.
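The "standard reconstruction by Fourier analysis" referenced above recovers the offset, amplitude, and phase of the stepping curve from its first Fourier coefficient, assuming known, uniformly spaced phase steps. A minimal sketch (synthetic curve; all values assumed):

```python
import cmath
import math

def phase_stepping_fit(curve):
    # Fourier analysis of an N-point phase-stepping curve
    #   I_k = a0 + a1 * cos(2*pi*k/N + phi)
    # The first DFT coefficient carries amplitude and phase.
    n = len(curve)
    c1 = sum(curve[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    a0 = sum(curve) / n            # offset (attenuation image input)
    a1 = 2.0 * abs(c1) / n         # amplitude (dark-field image input)
    phi = cmath.phase(c1)          # phase (differential phase image input)
    return a0, a1, phi

N, a0, a1, phi = 8, 500.0, 120.0, 0.7
curve = [a0 + a1 * math.cos(2.0 * math.pi * k / N + phi) for k in range(N)]
est_a0, est_a1, est_phi = phase_stepping_fit(curve)
```

This is exactly the procedure that breaks down when the step positions drift, which motivates the position-free algorithm evaluated in the paper.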
Poster Session: Photon Counting CT
Photon counting CT of the liver with dual-contrast enhancement
Daniela Muenzel, Roland Proksa, Heiner Daerr, et al.
The diagnostic potential of photon counting computed tomography (PCCT) is one of the unexplored areas in medical imaging; at the same time, it promises a fast and highly sensitive diagnostic tool. Today, conventional computed tomography (CT) is the standard imaging technique for diagnostic evaluation of the liver parenchyma. However, radiation dose remains an important consideration in CT liver imaging, especially with regard to multi-phase contrast-enhanced CT. In this work we report a feasibility study of multi-contrast PCCT for simultaneous liver imaging at different contrast phases. PCCT images of the liver were simulated for a contrast-enhanced examination performed with two different contrast agents (CA), iodine (CA 1) and gadolinium (CA 2). PCCT image acquisition was performed at the time point of portal venous contrast distribution for CA 1 and arterial contrast phase for CA 2. A contrast injection protocol was therefore planned with sequential injection of CA 1 and CA 2 to provide a time-dependent difference in the contrast distribution of the two CAs in the vessels and parenchyma of the liver. Native, arterial, and portal venous contrast-enhanced images were calculated based on the spectral separation of PCCT. In the simulated PCCT images, we were able to differentiate between the tissue enhancement of CA 1 and CA 2. The distribution of both CAs within the liver parenchyma was illustrated with perfusion maps for CA 1 and CA 2. In addition, virtual non-contrast-enhanced images were calculated. In conclusion, multi-phase PCCT imaging of the liver based on a single scan is a novel approach for spectral PCCT imaging, offering detailed contrast information in a single scan volume and a significant reduction of radiation dose.
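Separating two contrast agents spectrally reduces, per voxel, to solving a small linear system: the attenuation measured in each energy bin is a weighted sum of basis-material contributions. A toy sketch with made-up per-bin attenuation coefficients (real values come from detector calibration, not from the paper):

```python
import numpy as np

# Hypothetical per-bin attenuation values (arbitrary units) for the
# basis materials iodine, gadolinium, and water in four spectral bins.
A = np.array([[4.0, 2.5, 1.0],
              [2.8, 3.6, 0.9],
              [1.9, 2.9, 0.8],
              [1.2, 1.7, 0.7]])

def decompose(measured, basis=A):
    """Least-squares decomposition of multi-bin attenuation measurements
    into basis-material amounts (iodine, gadolinium, water)."""
    coeffs, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    return coeffs

true = np.array([2.0, 1.5, 1.0])   # e.g. iodine, gadolinium, water amounts
measured = A @ true                # ideal noise-free bin measurements
print(np.round(decompose(measured), 3))
```

With more bins than materials the system is overdetermined, which is what makes simultaneous iodine/gadolinium separation from a single PCCT scan possible.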
Feasibility study of sparse-angular sampling and sinogram interpolation in material decomposition with a photon-counting detector
Dohyeon Kim, Byungdu Jo, Su-Jin Park, et al.
Spectral computed tomography (SCT) is a promising technique for obtaining contrast-enhanced images and distinguishing different materials. We focused on developing an analytic reconstruction algorithm for material decomposition with lower radiation exposure and shorter acquisition time. Sparse-angular sampling can reduce patient dose and scanning time when acquiring reconstruction images. In this study, a sinogram interpolation method was used to improve the quality of material-decomposed images under sparse angular sampling. A prototype spectral CT system with a 64-pixel CZT-based photon-counting detector was used. The source-to-detector distance and the source-to-center-of-rotation distance were 1200 and 1015 mm, respectively. An x-ray spectrum at 90 kVp with a tube current of 110 μA was used. Two energy bins (23-33 keV and 34-44 keV) were set to obtain the two images for decomposing iodine and calcification. We used a PMMA phantom with a height of 50 mm and a radius of 17.5 mm. The phantom contained four materials: iodine, gadolinium, calcification, and liquid-state lipid. We evaluated the signal-to-noise ratio (SNR) of the materials to examine the significance of the sinogram interpolation method. The decomposed iodine and calcification images were obtained by a projection-based subtraction method using the two energy bins with 36 projection data. The SNR in the decomposed images was improved by sinogram interpolation, indicating that the signal of the decomposed material was increased and its noise reduced. In conclusion, the sinogram interpolation method can be used for material decomposition with sparse-angular sampling.
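The core idea of sinogram interpolation is to synthesize missing angular views between the sparse measured ones before reconstruction. A minimal sketch using simple linear interpolation along the angle axis (the paper does not specify its interpolation kernel; linear is assumed here for illustration):

```python
import numpy as np

def interpolate_sinogram(sparse, factor):
    """Linearly interpolate extra angular views between sparse projections.
    Axis 0 = projection angle, axis 1 = detector pixel."""
    n_sparse, n_det = sparse.shape
    n_full = n_sparse * factor
    ang_sparse = np.arange(n_sparse) * factor   # indices of measured views
    ang_full = np.arange(n_full)                # indices of the dense grid
    full = np.empty((n_full, n_det))
    for p in range(n_det):
        full[:, p] = np.interp(ang_full, ang_sparse, sparse[:, p])
    return full

# Hypothetical sinogram: smooth angular variation sampled at 36 views
# (as in the study), upsampled toward a dense 180-view set.
angles = np.linspace(0, np.pi, 36, endpoint=False)
sparse = np.sin(angles)[:, None] * np.ones((1, 8))
full = interpolate_sinogram(sparse, 5)
print(full.shape)  # -> (180, 8)
```

Each energy bin's sinogram would be interpolated this way before the projection-based subtraction and reconstruction steps.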
Novel approaches to address spectral distortions in photon counting x-ray CT using artificial neural networks
M. Touch, D. P. Clark, W. Barber, et al.
Spectral CT using a photon-counting x-ray detector (PCXD) can potentially increase the accuracy of measuring tissue composition. However, PCXD spectral measurements suffer from distortion due to charge sharing, pulse pileup, and K-escape energy loss. This study proposes two novel artificial neural network (ANN)-based algorithms: one to model and compensate for the distortion, and one to correct for the distortion directly. The ANN-based distortion model was obtained by training it to learn the distortion from a set of projections from a calibration scan; the model was then applied in the forward statistical model to compensate for distortion during projection decomposition. An ANN was also used to learn to correct distortions directly in the projections. The corrected projections were used for image reconstruction, denoising via joint bilateral filtration, and decomposition into three material basis functions: Compton scattering, the photoelectric effect, and iodine. The ANN-based distortion model proved more robust to noise and performed better than an imperfect parametric distortion model: in the presence of noise, the mean relative errors in iodine concentration estimation were 11.82% (ANN distortion model) and 16.72% (parametric model). With distortion correction, the mean relative error in iodine concentration estimation improved by 50% over direct decomposition of distorted data. With joint bilateral filtration, the resulting material image quality and iodine detectability, as measured by the contrast-to-noise ratio, were greatly enhanced, allowing iodine concentrations as low as 2 mg/ml to be detected. Future work will be dedicated to experimental evaluation of our ANN-based methods using 3D-printed phantoms.
Low-dose performance of a whole-body research photon-counting CT scanner
Photon-counting CT (PCCT) is an emerging technique that may bring new possibilities to clinical practice. Compared to conventional CT, PCCT is able to exclude electronic noise that may severely impair image quality at low photon counts. This work focused on assessing the low-dose performance of a whole-body research PCCT scanner consisting of two subsystems, one equipped with an energy-integrating detector, and the other with a photon-counting detector. Evaluation of the low-dose performance of the research PCCT scanner was achieved by comparing the noise performance of the two subsystems, with an emphasis on examining the impact of electronic noise on image quality in low-dose situations.
Poster Session: Radiation Therapy
Estimation of the influence of radical effect in the proton beams using a combined approach with physical data and gel data
The purpose of this study was to estimate the impact of the radical effect in proton beams using a combined approach with physical data and gel data. The study used two dosimeters: ionization chambers and polymer gel dosimeters. Polymer gel dosimeters have specific advantages over other dosimeters: they measure a chemical reaction, and they are at the same time a phantom that can map dose in three dimensions continuously and easily. First, a depth-dose curve for a 210 MeV proton beam was measured using an ionization chamber and a gel dosimeter. Second, the spatial distribution of the physical dose was calculated with the Monte Carlo code system PHITS; to verify the accuracy of the Monte Carlo calculation, the results were compared with the experimental ionization chamber data. Last, the rate of the radical effect relative to the physical dose was evaluated. The simulation results agreed well with the measured depth-dose distribution. The spatial distribution of the gel dose with a threshold LET value for the proton beam was calculated with the same simulation code. The relative distribution of the radical effect was then calculated at each depth as the quotient of the relative doses obtained from the physical and gel measurements. The agreement between the relative distributions of the gel dosimeter and the radical effect was good for the proton beams.
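The depth-wise quotient described above is simple to compute once both depth-dose curves are normalized; a sketch with entirely hypothetical Bragg-curve values (the paper's actual curves and normalization convention are not reproduced):

```python
import numpy as np

def relative_radical_effect(gel_dose, physical_dose):
    """Depth-wise quotient of the (max-normalized) gel dose to the
    (max-normalized) physical dose."""
    g = np.asarray(gel_dose, float)
    p = np.asarray(physical_dose, float)
    return (g / g.max()) / (p / p.max())

# Hypothetical depth samples along a Bragg curve: entrance -> peak -> tail.
physical = np.array([0.30, 0.35, 0.45, 0.70, 1.00, 0.10])
# Gel response assumed to fall off with depth (LET quenching).
gel = physical * np.array([1.00, 1.00, 0.95, 0.90, 0.80, 0.70])
ratio = relative_radical_effect(gel, physical)
print(np.round(ratio, 3))
```

The depth dependence of this ratio is what the study uses to characterize the radical effect against the physical dose.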
Spot scanning proton therapy plan assessment: design and development of a dose verification application for use in routine clinical practice
Kurt E. Augustine, Timothy J. Walsh, Chris J. Beltran, et al.
The use of radiation therapy for the treatment of cancer has been carried out clinically since the late 1800s. Early on, however, it was discovered that a radiation dose sufficient to destroy cancer cells can also cause severe injury to surrounding healthy tissue. Radiation oncologists continually strive to find the right balance between a dose high enough to destroy the cancer and one that avoids damage to healthy organs. Spot scanning or “pencil beam” proton radiotherapy offers another option to improve on this. Unlike traditional photon therapy, proton beams stop in the target tissue, better sparing organs beyond the targeted tumor. In addition, the beams are far narrower and can thus be more precisely “painted” onto the tumor, avoiding exposure of surrounding healthy tissue. To safely treat patients with proton beam radiotherapy, dose verification should be carried out for each plan prior to treatment. Because proton dose verification systems are not currently commercially available, the Department of Radiation Oncology at the Mayo Clinic developed its own, called DOSeCHECK, which offers two distinct dose simulation methods: GPU-based Monte Carlo and CPU-based analytical. The three major components of the system are the web-based user interface, the Linux-based dose verification simulation engines, and the supporting services and components. The architecture integrates multiple applications, libraries, platforms, programming languages, and communication protocols, and was successfully deployed in time for Mayo Clinic’s first proton beam therapy patient. Having a simple, efficient application for dose verification greatly reduces staff workload and provides additional quality assurance, ultimately improving patient safety.
Poster Session: Scatter and Diffraction Imaging
Validation of coded aperture coherent scatter spectral imaging for normal and neoplastic breast tissues via surgical pathology
R. E. Morris, K. E. Albanese, M. N. Lakshmanan, et al.
This study intends to validate the sensitivity and specificity of coded aperture coherent scatter spectral imaging (CACSSI) by comparison to standard histological preparation and pathologic analysis methods used to differentiate normal and neoplastic breast tissues. A composite overlay of the CACSSI rendered image and pathologist interpreted stained sections validate the ability of CACSSI to differentiate normal and neoplastic breast structures ex-vivo. Via comparison to pathologist annotated slides, the CACSSI system may be further optimized to maximize sensitivity and specificity for differentiation of breast carcinomas.
Poster Session: Task Driven Imaging, Observers, Detectability, Phantom Studies
Development of a Hausdorff distance based 3D quantification technique to evaluate the CT imaging system impact on depiction of lesion morphology
The purpose of this study was to develop a 3D quantification technique to assess the impact of the imaging system on the depiction of lesion morphology. A Regional Hausdorff Distance (RHD) was computed from two 3D volumes: virtual mesh models of synthetic nodules ("virtual nodules") and CT images of physical nodules ("physical nodules"). The method can be described in the following steps. First, the synthetic nodule was inserted into an anthropomorphic Kyoto thorax phantom and scanned on a Siemens scanner (Flash); the nodule was then segmented from the image. Second, to match the orientations of the nodules, the digital models of the "virtual" and "physical" nodules were both translated to the origin, and the "physical" nodule was rotated in 10-degree increments. Third, the Hausdorff Distance was calculated for each pair of "virtual" and "physical" nodules; the minimum HD value identified the best-matching pair. Finally, the 3D RHD map and the distribution of RHD were computed for the matched pair, and the technique was scalarized using the FWHM of the RHD distribution. The analysis was conducted for nodules of various shapes (spherical, lobular, elliptical, and spiculated). The calculated FWHM values of the RHD distribution for the 8-mm spherical, lobular, elliptical, and spiculated "virtual" and "physical" nodule pairs were 0.23, 0.42, 0.33, and 0.49, respectively.
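The rotate-and-match step above can be illustrated with point clouds: compute the symmetric Hausdorff distance at each trial rotation and keep the minimum. A brute-force sketch (the paper works on surface meshes and a regional map; a single global distance over synthetic points is shown here):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point clouds
    (n x 3 and m x 3 arrays of surface vertices)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def rot_z(deg):
    t = np.deg2rad(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def best_match(virtual, physical, step=10):
    """Rotate the 'physical' cloud in 10-degree increments about z and
    keep the orientation giving the minimum Hausdorff distance."""
    return min(hausdorff(virtual, physical @ rot_z(d).T)
               for d in range(0, 360, step))

# Hypothetical clouds: the same random points, one copy rotated 40 degrees.
rng = np.random.default_rng(1)
virtual = rng.uniform(-1, 1, size=(50, 3))
physical = virtual @ rot_z(40).T
print(round(best_match(virtual, physical), 6))  # -> 0.0 (exact match found)
```

In the actual method the distance is computed regionally (per surface patch) and its distribution's FWHM serves as the scalar summary.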
Development and comparison of projection and image space 3D nodule insertion techniques
This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques: projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
Evaluation of a projection-domain lung nodule insertion technique in thoracic CT
Chi Ma, Baiyu Chen, Chi Wan Koo, et al.
Task-based assessment of computed tomography (CT) image quality requires a large number of cases with ground truth. Inserting lesions into existing cases to simulate positive cases is a promising alternative approach. The aim of this study was to evaluate a recently developed raw-data-based lesion insertion technique in thoracic CT. Lung lesions were segmented from patient CT images, forward projected, and reinserted into the same patient's CT projection data. In total, 32 nodules of various attenuations were segmented from 21 CT cases. Two experienced radiologists and 2 residents blinded to the process independently evaluated the inserted nodules in two sub-studies. First, the 32 inserted and the 32 original nodules were presented in a randomized order and each received a rating score from 1 to 10 (1 = absolutely artificial to 10 = absolutely realistic). Second, the inserted and the corresponding original lesions were presented side-by-side to each reader, who identified the inserted lesion and provided a confidence score (1 = no confidence to 5 = completely certain). For the randomized evaluation, discrimination of real versus artificial nodules was poor, with areas under the receiver operating characteristic curves of 0.69 (95% CI: 0.58-0.78), 0.57 (95% CI: 0.46-0.68), and 0.62 (95% CI: 0.54-0.69) for the 2 radiologists, 2 residents, and all 4 readers, respectively. For the side-by-side evaluation, although all 4 readers correctly identified inserted lesions in 103/128 pairs, the confidence score was moderate (2.6). Our projection-domain lung nodule insertion technique provides a robust method to artificially generate clinical cases that prove difficult to differentiate from real cases.
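The reported AUCs summarize how well realism ratings separate real from inserted nodules; the nonparametric AUC is equivalent to the Mann-Whitney statistic over all rating pairs. A sketch with hypothetical ratings (not the study's data):

```python
import numpy as np

def auc_from_ratings(real, artificial):
    """Nonparametric AUC from ordinal realism ratings, computed as the
    Mann-Whitney statistic: P(real rated higher) + 0.5 * P(tie)."""
    real = np.asarray(real, float)
    art = np.asarray(artificial, float)
    wins = (real[:, None] > art[None, :]).sum()
    ties = (real[:, None] == art[None, :]).sum()
    return (wins + 0.5 * ties) / (real.size * art.size)

# Hypothetical 1-10 ratings with weak separation, as in the study.
real_scores = [7, 8, 6, 9, 5, 7]
fake_scores = [6, 7, 5, 8, 6, 7]
print(round(auc_from_ratings(real_scores, fake_scores), 3))  # -> 0.611
```

An AUC near 0.5 means readers cannot distinguish inserted from real nodules, which is the desired outcome for a lesion insertion technique.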
Synthesized interstitial lung texture for use in anthropomorphic computational phantoms
Marc F. Becchetti, Justin B. Solomon, W. Paul Segars, et al.
A realistic model of the anatomical texture of the pulmonary interstitium was developed with the goal of extending the capability of anthropomorphic computational phantoms (e.g., XCAT, Duke University), allowing for more accurate image quality assessment. Contrast-enhanced, high-dose thorax images of a healthy patient from a clinical CT system (Discovery CT750HD, GE Healthcare) with thin (0.625 mm) slices and filtered back-projection (FBP) were used to inform the model. The interstitium, which gives rise to the texture, was defined using 24 volumes of interest (VOIs) selected manually to avoid vasculature, bronchi, and bronchioles. A small-scale Hessian-based line filter was applied to minimize partial-volumed supernumerary vessels and bronchioles within the VOIs. The texture in the VOIs was characterized using 8 Haralick and 13 gray-level run-length features. A clustered lumpy background (CLB) model, with noise and blurring added to match the CT system, was optimized to resemble the texture in the VOIs using a genetic algorithm with the Mahalanobis distance as the similarity metric between the texture features. The most similar CLB model was then used to generate the interstitial texture filling the lung. The optimization improved the similarity by 45%. This will substantially enhance the capabilities of anthropomorphic computational phantoms, allowing for more realistic CT simulations.
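The Mahalanobis distance used as the genetic algorithm's fitness accounts for correlations between texture features, unlike plain Euclidean distance. A minimal sketch with a hypothetical 3-feature distribution (the study uses 8 Haralick + 13 run-length features):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of a texture-feature vector x from a
    reference feature distribution with the given mean and covariance."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Hypothetical reference distribution of 3 texture features over 200 VOIs.
rng = np.random.default_rng(2)
feats = rng.normal(size=(200, 3)) @ np.diag([1.0, 2.0, 0.5]) + [5.0, -1.0, 0.0]
mu, cov = feats.mean(axis=0), np.cov(feats, rowvar=False)
print(round(mahalanobis(mu, mu, cov), 3))  # -> 0.0 (the mean itself)
```

During optimization, candidate CLB textures with smaller distance to the real-VOI feature distribution are ranked as fitter.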
Second generation anthropomorphic physical phantom for mammography and DBT: Incorporating voxelized 3D printing and inkjet printing of iodinated lesion inserts
Dhiraj Sikaria, Stephanie Musinsky, Gregory M. Sturgeon, et al.
Physical phantoms are needed for the evaluation and optimization of new digital breast tomosynthesis (DBT) systems. Previously, we developed an anthropomorphic phantom based on human subject breast CT data and fabricated using commercial 3D printing. We now present three key advancements: voxelized 3D printing, photopolymer material doping, and 2D inkjet printing of lesion inserts. First, we bypassed the printer’s control software in order to print in voxelized form instead of conventional STL surfaces, improving resolution and allowing dithering to mix the two photopolymer materials in arbitrary proportions. We demonstrated the ability to print details as small as 150 μm, and dithering to combine VeroWhitePlus and TangoPlus in 10% increments. Second, to address the limited attenuation difference among commercial photopolymers, we evaluated a beta sample from Stratasys with TiO2 doping concentration increased up to 2.5%, corresponding to 98% breast density. By spanning 36% to 98% breast density, this doubles our previous contrast range. Third, using inkjet printers modified to print with iopamidol, we created 2D lesion patterns on paper that can be sandwiched into the phantom. Inkjet printing has the advantages of being inexpensive and easy, and more contrast can be delivered through overprinting. Printing resolution was maintained at 210 μm horizontally and 330 μm vertically even after 10 overprints, and contrast increased linearly with overprinting at 0.7% per overprint. Together, these three new features provide the basis for a new anthropomorphic physical breast phantom with improved resolution and contrast, as well as the ability to insert 2D lesions for task-based assessment of performance.
Poster Session: Tomosynthesis and Digital Radiography
kV x-ray dual digital tomosynthesis for image guided lung SBRT
Larry Partain, Douglas Boyd, Namho Kim, et al.
Two simulated sets of digital tomosynthesis images of the lungs, each acquired at a 90-degree angle from the other, with 19 projection images per set and SART iterative reconstruction, give dual-tomosynthesis slice image quality approaching that of spiral CT, with a data acquisition time that is 3% of that of cone beam CT. This fast kV acquisition should allow near-real-time tracking of lung tumors in patients receiving SBRT, based on a novel TumoTrakTM multi-source x-ray tube design. Until the TumoTrakTM prototype is completed over the next year, its projected performance was simulated from DRR images created from a spiral CT data set of a lung cancer patient. The resulting dual digital tomosynthesis reconstructions of the lung tumor were exceptional and approached the gold-standard Feldkamp CT reconstruction of breath-hold, diagnostic, spiral, multirow CT data. The relative dose at 46 mAs was less than 10% of what it would have been had the digital tomosynthesis been done at the 472 mAs of the CT data set. This is for a 0.77 fps imaging rate, sufficient to resolve respiratory motion in many free-breathing patients during SBRT. Such image guidance could decrease targeting error margins by as much as 20 mm or more in the craniocaudal direction for lower-lobe lesions while markedly reducing dose to normal lung, heart, and other critical structures. These initial results suggest a wide range of topics for future work.
Design, optimization and evaluation of a “smart” pixel sensor array for low-dose digital radiography
Kai Wang, Xinghui Liu, Hai Ou, et al.
Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel x-ray detectors for digital radiography (DR). As the demand for low-dose x-ray imaging grows, so does the need for a detector with a high signal-to-noise-ratio (SNR) pixel architecture. The “smart” pixel uses a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from conventional passive pixel sensors (PPS) and active pixel sensors (APS) in that all three functions are combined into one device instead of three separate units per pixel. It is thus expected to have a high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS with a potentially high SNR. This paper addresses the design, optimization, and evaluation of the smart pixel sensor and array for low-dose DR. We design and optimize the smart pixel from the scintillator to the TFT level and validate it through optical and electrical simulations and experiments on a 4x4 sensor array.
Modeling acquisition geometries with improved super-resolution in digital breast tomosynthesis
Raymond J. Acciavatti, E. Paul Wileyto, Andrew D. A. Maidment
In digital breast tomosynthesis (DBT), a reconstruction is created from multiple x-ray projection images. Our previous work demonstrated that the reconstruction is capable of super-resolution (i.e., subpixel resolution) relative to the detector. In order for super-resolution to yield a reliable improvement in image quality, it should be achievable at all positions in the reconstruction. This paper demonstrates that super-resolution is not achievable at all depths, or at all heights above the breast support. For this purpose, a bar pattern phantom was imaged using a commercial DBT system. A goniometry stand was used to orient the long axis of the parallel bars along an oblique plane relative to the breast support. This setup allowed a single test frequency to be visualized over a continuous range of depths. The orientation of the test frequency was parallel to the direction of x-ray tube motion. An oblique reconstruction in the plane of the bar pattern phantom showed that the existence of super-resolution is depth-dependent. To identify design strategies for optimizing super-resolution, a theoretical model was then developed in which a test frequency higher than the alias frequency of the detector was simulated. Two design modifications that improve super-resolution are identified. In particular, it is shown that reducing the spacing between the x-ray source positions minimizes the number of depths lacking super-resolution. Additionally, introducing detector motion along the direction perpendicular to the breast support allows for more uniform super-resolution throughout the image volume. In conclusion, this work presents strategies for optimizing super-resolution in DBT.
Scatter estimation and removal of anti-scatter grid-line artifacts from anthropomorphic head phantom images taken with a high resolution image detector
In radiography, one of the best methods to eliminate image-degrading scatter radiation is the use of anti-scatter grids. However, with high-resolution dynamic imaging detectors, stationary anti-scatter grids can leave grid-line shadows and moiré patterns on the image, depending on the line density of the grid and the sampling frequency of the x-ray detector. Such artifacts degrade image quality and may mask small but important details such as small vessels and interventional device features. These artifacts become increasingly severe as detector spatial resolution improves. We have previously demonstrated that, to remove these artifacts by dividing out a reference grid image, one must first subtract the residual scatter that penetrates the grid; however, for objects with anatomic structure, scatter varies throughout the FOV and a spatially varying amount of scatter must be subtracted. In this study, a standard stationary Smit-Rontgen x-ray grid (line density: 70 lines/cm; grid ratio: 13:1) was used with a high-resolution CMOS detector, the Dexela 1207 (pixel size: 75 μm), to image anthropomorphic head phantoms. For a 15 x 15 cm FOV, scatter profiles of the head phantoms were estimated and then iteratively modified to minimize the structured noise due to the varying grid-line artifacts across the FOV. Images of the phantoms taken with the grid, before and after correction, were compared, demonstrating almost total elimination of the artifact over the full FOV. Hence, with proper computational tools, anti-scatter grid artifacts can be corrected, even during dynamic sequences.
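The scatter-then-divide correction described above can be sketched on synthetic data: subtract the estimated residual scatter from the object image, then divide by the (scatter-subtracted) reference grid image. The 1D grid pattern and constant scatter below are hypothetical stand-ins for the measured quantities:

```python
import numpy as np

def remove_grid_lines(image, grid_ref, scatter_img, scatter_ref=0.0):
    """Divide out the grid-line pattern after subtracting the estimated
    residual scatter from both the object image and the object-free
    reference grid image."""
    num = image - scatter_img
    den = np.clip(grid_ref - scatter_ref, 1e-6, None)  # avoid divide-by-zero
    return num / den

# Hypothetical data: a smooth "anatomy" modulated by a grid-line pattern,
# plus a constant residual scatter level.
x = np.linspace(0, 4 * np.pi, 128)
grid = 1.0 + 0.2 * np.cos(35 * x)[None, :]          # grid-line shadows
primary = np.outer(np.ones(32), 50 + 10 * np.sin(x))  # scatter-free signal
image = primary * grid + 8.0                          # object image
ref = 1.0 * grid + 0.5                                # reference grid image
corrected = remove_grid_lines(image, ref, 8.0, 0.5)
print(np.allclose(corrected, primary))  # -> True
```

In practice the scatter term varies across the FOV, which is why the study estimates and iteratively refines a spatial scatter profile rather than a constant.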
Optical geometry calibration method for free-form digital tomosynthesis
Digital tomosynthesis is a type of limited-angle tomography that reconstructs 3D information from a set of x-ray projection images taken at various angles using an x-ray tube, a mechanical arm that rotates the tube about the object, and a digital detector. Tomosynthesis reconstruction requires the precise location of the detector with respect to each x-ray source position, forcing all current clinical tomosynthesis systems to use a physically coupled source and detector so that the geometry is always known and always the same. This limits the possible imaging geometries, and the large size of such systems is impractical for mobile or field operations. To counter this, we have developed a free-form tomosynthesis system with a decoupled, free-moving source and detector that uses a novel optical method for accurate, real-time geometry calibration, allowing manual, hand-held tomosynthesis and even CT imaging. We accomplish this with a camera, attached to the source, that tracks the motion of the source relative to the detector. An optical pattern is attached to the detector; the image captured by the camera is used to determine the relative camera/pattern position and orientation by analyzing the pattern distortion, yielding the source position for each projection, as required for 3D reconstruction. This allows portable imaging in the field and offers an inexpensive upgrade path for existing 2D systems, for example in developing countries, to provide 3D image data. Here we report the first feasibility demonstrations of free-form digital tomosynthesis systems using this method.
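The "analyze the pattern distortion" step is, at its core, estimating a planar homography between the known pattern and its camera image; camera pose can then be extracted from that homography. A minimal direct-linear-transform (DLT) sketch of the homography estimation, with a made-up distortion (not the authors' calibration pipeline):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate H (dst ~ H @ src in homogeneous
    coordinates) from >= 4 planar point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    h = vt[-1].reshape(3, 3)     # null-space vector = flattened H
    return h / h[2, 2]

def apply_h(h, pts):
    p = np.c_[pts, np.ones(len(pts))] @ h.T
    return p[:, :2] / p[:, 2:3]

# Hypothetical: pattern corner points seen under a known projective warp.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
H_true = np.array([[1.1, 0.05, 0.2], [-0.03, 0.95, 0.1], [0.02, 0.01, 1.0]])
dst = apply_h(H_true, src)
H_est = homography_dlt(src, dst)
print(np.allclose(apply_h(H_est, src), dst))  # -> True
```

Given the camera intrinsics, decomposing such a homography yields the rotation and translation of the camera (and hence the source) relative to the detector pattern for each projection.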
Initial clinical evaluation of stationary digital chest tomosynthesis
Allison E. Hartman, Jing Shan, Gongting Wu, et al.
Computed Tomography (CT) is the gold standard for image evaluation of lung disease, including lung cancer and cystic fibrosis. It provides detailed information of the lung anatomy and lesions, but at a relatively high cost and high dose of radiation. Chest radiography is a low dose imaging modality but it has low sensitivity. Digital chest tomosynthesis (DCT) is an imaging modality that produces 3D images by collecting x-ray projection images over a limited angle. DCT is less expensive than CT and requires about 1/10th the dose of radiation. Commercial DCT systems acquire the projection images by mechanically scanning an x-ray tube. The movement of the tube head limits acquisition speed. We recently demonstrated the feasibility of stationary digital chest tomosynthesis (s-DCT) using a carbon nanotube (CNT) x-ray source array in benchtop phantom studies. The stationary x-ray source allows for fast image acquisition. The objective of this study is to demonstrate the feasibility of s-DCT for patient imaging. We have successfully imaged 31 patients. Preliminary evaluation by board certified radiologists suggests good depiction of thoracic anatomy and pathology.
Anatomical decomposition in dual energy chest digital tomosynthesis
Lung cancer is the leading cause of cancer death worldwide, and early diagnosis of lung cancer has recently become more important. For early lung cancer screening, computed tomography (CT) has been used as the gold standard [1]. The major advantage of CT is that it is not susceptible to misdiagnosis caused by anatomical overlap, but it carries an extremely high radiation dose and cost compared to chest radiography. Chest digital tomosynthesis (CDT) is a recently introduced modality for lung cancer screening with relatively low radiation dose compared to CT [2], showing high sensitivity and specificity by avoiding the anatomical overlap that occurs in chest radiography. A dual-energy material decomposition method has been proposed for better detection of pulmonary nodules by reducing anatomical noise [3]. In this study, the possibility of material decomposition in CDT was tested in a simulation study and in actual experiments using a prototype CDT system; in addition, organ absorbed dose and effective dose were compared with single-energy CDT. The GATE v6 (Geant4 Application for Tomographic Emission) and TASMIP (tungsten anode spectral model using interpolating polynomials) codes were used for the simulation study, with a simulated cylindrical phantom containing four inner beads filled with spine-, rib-, muscle-, and lung-equivalent materials. Patient dose was estimated with the PCXMC 1.5 Monte Carlo simulation tool [4]. The tomosynthesis scan was performed with a linear movement, and 21 projection images were obtained over a 30-degree angular range at 1.5-degree intervals. The prototype CDT system had the same geometry as the simulation study and was composed of an E7869X (Toshiba, Japan) x-ray tube and an FDX3543RPW (Toshiba, Japan) detector. The resulting dual-energy reconstructions clearly visualized the lung field by removing obscuring bony structures. Furthermore, dual-energy CDT effectively enhanced spine bone hidden by the heart. The effective dose in dual-energy CDT was slightly higher than in single-energy CDT, but still only about 10% of an average thoracic CT [5]. Dual-energy tomosynthesis is a new technique with little guidance for its integration into clinical practice; this study can be used to improve the diagnostic efficiency of lung field screening using CDT.
Optimization of exposure parameters in digital tomosynthesis considering effective dose and image quality
A digital tomosynthesis system (DTS), which scans an object over a limited angle, has been considered an innovative imaging modality that can deliver lower patient dose than computed tomography while solving the poor depth resolution of conventional digital radiography. Despite these advantages, only breast tomosynthesis systems have been widely adopted in hospitals. In order to reduce patient dose while maintaining image quality, the acquisition conditions need to be studied. In this study, we analyzed the effective dose and image quality of a chest phantom using a commercial universal chest digital tomosynthesis (CDT) R/F system to determine optimized exposure parameters. We set 10 different acquisition conditions, including the default condition from the Shimadzu user manual (100 kVp with 0.5 mAs). The effective dose was calculated with PCXMC software version 1.5.1 from the total x-ray exposure measured by an ion chamber. Image quality was evaluated by the signal difference to noise ratio (SDNR) in regions of interest (ROIs) over the pulmonary arteries in different axial in-planes. We analyzed a figure of merit (FOM) that considers both the effective dose and the SDNR in order to determine the optimal acquisition condition. The results indicated that the most suitable acquisition parameters among the 10 conditions were conditions 7 and 8 (120 kVp with 0.04 mAs and 0.1 mAs, respectively), which gave lower effective dose while maintaining reasonable SDNRs and FOMs for the three specified regions. Further studies are needed for more detailed outcomes on CDT acquisition conditions.
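The abstract does not state its FOM formula; the common dose-efficiency definition FOM = SDNR²/dose is assumed in this sketch, with hypothetical SDNR and dose values illustrating how a lower-dose condition can win:

```python
import numpy as np

def sdnr(signal_roi, background_roi):
    """Signal-difference-to-noise ratio between an ROI and background."""
    return abs(np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)

def figure_of_merit(sdnr_value, effective_dose_msv):
    """Common dose-efficiency figure of merit: FOM = SDNR^2 / dose."""
    return sdnr_value ** 2 / effective_dose_msv

# Hypothetical comparison of two acquisition conditions.
print(round(figure_of_merit(4.0, 0.10), 1))  # condition A: higher SDNR
print(round(figure_of_merit(3.5, 0.05), 1))  # condition B: half the dose, higher FOM
```

Ranking conditions by such a FOM is how one trades a small SDNR loss for a large dose saving, mirroring the study's preference for the low-mAs 120 kVp settings.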
Digital breast tomosynthesis reconstruction using spatially weighted non-convex regularization
Regularization is an effective strategy for reducing noise in tomographic reconstruction. This paper proposes a spatially weighted non-convex (SWNC) regularization method for digital breast tomosynthesis (DBT) image reconstruction. With a non-convex cost function, this method can suppress noise without blurring microcalcifications (MC) and spiculations of masses. To minimize the non-convex cost function, we apply a majorize-minimize separable quadratic surrogate algorithm (MM-SQS) that is further accelerated by ordered subsets (OS). We applied the new method to a heterogeneous breast phantom and to human subject DBT data, and observed improved image quality in both situations. A quantitative study also showed that the SWNC method can significantly enhance the contrast-to-noise ratio of MCs. By properly selecting its parameters, the SWNC regularizer can preserve the appearance of the mass margins and breast parenchyma.
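The abstract above describes a spatially weighted non-convex (SWNC) regularizer that suppresses noise while preserving microcalcifications. The exact potential function and weighting scheme of the SWNC method are not given here; the bounded Lorentzian-type potential below is a common non-convex choice and is purely illustrative:

```python
import numpy as np

def swnc_penalty(image, weights, delta=1.0):
    """Spatially weighted sum of a bounded (non-convex) potential applied
    to horizontal neighbor differences. Because the potential saturates
    for large differences, strong edges (e.g. microcalcifications) incur
    a capped penalty and are not blurred away."""
    diffs = np.diff(image, axis=1)                            # neighbor differences
    pot = (diffs / delta) ** 2 / (1.0 + (diffs / delta) ** 2)  # bounded potential
    return float(np.sum(weights[:, 1:] * pot))

# Toy image: one strong edge (top row) and small noise-like variations.
img = np.array([[0.0, 0.0, 5.0],
                [0.0, 0.1, 0.2]])
w = np.ones_like(img)  # uniform spatial weights for illustration
val = swnc_penalty(img, w, delta=1.0)
```

A convex quadratic penalty would charge the 5.0 jump a cost of 25, heavily discouraging it; the saturating potential caps its cost near 1, which is the mechanism by which non-convex regularization avoids blurring high-contrast detail.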
Optimal kVp in chest computed radiography using visual grading scores: a comparison between visual grading characteristics and ordinal regression analysis
Xiaoming Zheng, Myeongsoo Kim, Sook Yang
The purposes of this work were to determine the optimal peak voltage for chest computed radiography (CR) using visual grading scores, and to compare visual grading characteristics (VGC) and ordinal regression in visual grading analysis. An Agfa CR system was used to acquire images of an anthropomorphic chest phantom. Both the entrance surface dose and the detector surface dose were measured using a Piranha 657 dosimeter. Images were acquired at tube voltages from 80 to 120 kVp and exposures from 0.5 to 12.5 mAs. Image quality was evaluated by 5 experienced radiologists/radiographers based on modified European imaging criteria using a 1-5 visual grading scale. VGC, ordinal regression, and conventional visual grading analysis (VGA) were employed for the image quality analysis. Both VGC and ordinal regression yielded the same result, with 100 kVp and 120 kVp producing the best image quality. Image quality at 120 kVp was slightly higher than at 100 kVp, but so was the dose. Balancing image quality against dose, 100 kVp should be the optimal voltage for chest imaging with the Agfa CR system. Ordinal regression is a powerful tool for analyzing image quality from visual grading scores, and VGC can be handled within the ordinal regression framework.
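VGC analysis treats the ordinal grading scores of two acquisition settings like ROC rating data and summarizes them by the area under the VGC curve, computed nonparametrically as the probability that a score from one setting exceeds a score from the other (ties counted as one half). A minimal sketch with hypothetical scores:

```python
def vgc_auc(scores_a, scores_b):
    """Nonparametric area under the VGC curve: P(score_a > score_b),
    with ties contributing 1/2 (Mann-Whitney-style counting)."""
    wins = 0.0
    for a in scores_a:
        for b in scores_b:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(scores_a) * len(scores_b))

# Hypothetical 1-5 grading scores for two exposure settings.
auc = vgc_auc([4, 4, 5, 3, 4], [3, 3, 4, 2, 3])
```

An AUC of 0.5 means the two settings are visually indistinguishable; values near 1 indicate a clear quality advantage for the first setting. Ordinal (proportional-odds) regression reaches equivalent conclusions by modeling the cumulative probabilities of the same graded scores.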
On the properties of artificial neural network filters for bone-suppressed digital radiography
Dual-energy imaging can enhance lesion conspicuity. However, conventional (fast kilovoltage switching) dual-shot dual-energy imaging is vulnerable to patient motion, and the single-shot method requires a specially designed detector system. Alternatively, single-shot bone-suppressed imaging is possible using post-processing with a filter obtained by training an artificial neural network. In this study, the authors investigate the general properties of artificial neural network filters for bone-suppressed digital radiography. The filter properties are characterized in terms of parameters such as the size of the input vector, the number of hidden units, and the learning rate. Preliminary results show that the bone-suppressed image obtained from a filter designed with 5,000 teaching images from a single radiograph is about 95% similar to a commercial bone-enhanced image.
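The patch-based filter idea above can be sketched as a small one-hidden-layer network mapping an N×N patch of the radiograph to the soft-tissue value at the patch center. The patch size, hidden-unit count, and learning rate are exactly the parameters the study characterizes; the synthetic data and the specific numbers below are illustrative stand-ins, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN = 25      # size of the input vector (5x5 patch, flattened)
N_HIDDEN = 10  # number of hidden units
LR = 0.01      # learning rate

# Synthetic stand-ins for (radiograph patch, bone-suppressed target) pairs;
# the study trains on teaching images derived from a radiograph instead.
X = rng.normal(size=(5000, N_IN))
true_w = rng.normal(size=N_IN)
y = np.tanh(X @ true_w)  # stand-in "soft-tissue" target values

W1 = rng.normal(scale=0.1, size=(N_IN, N_HIDDEN))  # input-to-hidden weights
w2 = rng.normal(scale=0.1, size=N_HIDDEN)          # hidden-to-output weights

def forward(X):
    return np.tanh(X @ W1) @ w2

mse_init = float(np.mean((forward(X) - y) ** 2))

for epoch in range(200):                 # full-batch gradient descent
    h = np.tanh(X @ W1)                  # hidden activations
    err = h @ w2 - y                     # output error
    grad_w2 = h.T @ err / len(X)         # gradient, output layer
    grad_W1 = X.T @ ((err[:, None] * w2) * (1 - h ** 2)) / len(X)  # backprop
    w2 -= LR * grad_w2
    W1 -= LR * grad_W1

mse_final = float(np.mean((forward(X) - y) ** 2))
```

Once trained, the network is applied as a sliding-window filter over the whole radiograph, which is what makes single-shot bone suppression possible without special detector hardware.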
Effects of angular range on image quality of chest digital tomosynthesis
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that is expected to improve clinical diagnosis over conventional chest radiography. We investigated the effect of the angular range of data acquisition on image quality using a newly developed CDT system. Four acquisition sets were studied, using ±15°, ±20°, ±30°, and ±35° angular ranges with 21 projection views (PVs). The point spread function (PSF), modulation transfer function (MTF), artifact spread function (ASF), and normalized contrast-to-noise ratio (CNR) were used to evaluate image quality. We found that increasing the angular range improved vertical resolution. The results also indicated opposite relationships between CNR and angular range for the two tissue types: CNR for heart tissue increased with increasing angular range, while CNR for spine bone decreased. These results show that the angular range is an important parameter for the CDT exam.
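The CNR comparison above can be illustrated with a generic definition: the absolute difference of the ROI means divided by the background standard deviation. The abstract does not give the exact normalization or ROI placement used in the study, so this is a minimal sketch with hypothetical pixel values:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    signal_roi = np.asarray(signal_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

# Hypothetical ROIs: a tissue structure against a noisy background.
rng = np.random.default_rng(1)
tissue = rng.normal(loc=120.0, scale=5.0, size=400)
background = rng.normal(loc=100.0, scale=5.0, size=400)
value = cnr(tissue, background)
```

Evaluating this metric per tissue type at each angular range is what reveals the opposing trends reported above: wider sweeps redistribute overlying anatomy differently for heart tissue than for spine bone.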
Quantitative comparison of spatial resolution in step-and-shoot and continuous motion digital breast tomosynthesis
This study compares the spatial resolution of the step-and-shoot and continuous motion acquisition modes of digital tomosynthesis using a bench-top prototype designed for breast phantom imaging. The prototype employs a flat panel detector with a 50 μm pixel pitch, a microfocus x-ray tube, and a motorized stage. A sharp metal edge with a thickness of 0.2 mm was used to measure the modulation transfer function (MTF). The edge was rotated from −7.5° to +7.5° in 1.5° increments to acquire 11 angular projections at 40 kVp and 500 μA, with 5.55 s per projection. In continuous motion mode, the motorized stage moved the test object throughout the exposure at a speed of 0.377 mm/s. The impact of acquisition speed in continuous DBT was also investigated using a higher speed of 0.753 mm/s. In step-and-shoot mode, the cutoff frequencies (10% MTF) in the projection view (0°) and reconstructed DBT slices were 5.55 lp/mm and 4.95 lp/mm, respectively. Spatial resolution dropped in continuous motion mode due to the blur caused by the rotation of the stage: the cutoff frequencies fell to 3.6 lp/mm and 3.18 lp/mm in the projection view (0°) and reconstructed DBT slices, respectively. At the higher rotational speed in continuous motion mode, the cutoff frequency in the DBT slices dropped by a further 17% to 2.65 lp/mm. The rotational speed of the stage and the spatial resolution are thus interconnected; reducing motion blur in the continuous acquisition mode is important to maintain the high spatial resolution required for diagnostic purposes.
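The edge-based MTF workflow used above follows a standard recipe: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), Fourier-transform it, normalize to unity at zero frequency, and read off the 10% cutoff. The sketch below applies that recipe to a synthetic Gaussian-blurred edge standing in for measured detector data; the blur width is hypothetical:

```python
import numpy as np
from math import erf

PIXEL_PITCH_MM = 0.05  # 50 um detector pixel pitch, as in the prototype

x = np.arange(256) * PIXEL_PITCH_MM
sigma = 0.06  # mm, hypothetical blur width of the imaging chain

# Error-function profile: the ESF of an ideal edge under Gaussian blur.
esf = np.array([0.5 * (1 + erf((xi - x.mean()) / (sigma * np.sqrt(2))))
                for xi in x])

lsf = np.gradient(esf, PIXEL_PITCH_MM)               # LSF = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                        # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(lsf), d=PIXEL_PITCH_MM)  # cycles/mm = lp/mm

cutoff = freqs[np.argmax(mtf < 0.10)]  # first frequency below 10% MTF
```

In the continuous-motion measurements, the stage motion adds an extra blur term to the ESF, which is why the 10% cutoff frequency drops relative to step-and-shoot mode.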
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
Nuhad Malalla, Pengfei Sun, Ying Chen, et al.
Distance-driven projection is a state-of-the-art method for reconstruction in x-ray imaging. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving a C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dose and short examination time. This paper presents a new simulation study of two reconstruction methods based on the distance-driven model: the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). The distance-driven method has low computational cost and is free of the artifacts associated with other approaches such as ray-driven and pixel-driven projection. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
Ray tracing reconstruction investigation for C-arm tomosynthesis
C-arm tomosynthesis is a three-dimensional imaging technique in which both the x-ray source and the detector are mounted on a wheeled C-arm structure that provides a wide range of movement around the object. In this paper, C-arm tomosynthesis was introduced to provide three-dimensional information over a limited view angle (less than 180°), reducing radiation exposure and examination time. Reconstruction algorithms based on ray tracing, including ray tracing back projection (BP), the simultaneous algebraic reconstruction technique (SART), and maximum likelihood expectation maximization (MLEM), were developed for C-arm tomosynthesis. Projection images of a simulated spherical object were generated with a virtual geometric configuration spanning a total view angle of 40 degrees. This study demonstrated the sharpness of in-plane reconstructed structures and the effectiveness of removing out-of-plane blur for each reconstruction algorithm. Results showed the ability of ray-tracing-based reconstruction algorithms to provide three-dimensional information from limited-angle C-arm tomosynthesis.
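The SART algorithm named in the two abstracts above can be sketched on a toy linear system A x = b standing in for the projection geometry: each update distributes ray-normalized residuals back onto the pixels, weighted by the system matrix columns. A real implementation would use a ray-tracing or distance-driven projector rather than the dense random matrix used here for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(40, 16))  # toy projection matrix (40 rays)
x_true = rng.uniform(0.0, 1.0, size=16)   # toy 4x4 object, flattened
b = A @ x_true                            # simulated noiseless projections

x = np.zeros(16)                # initial image estimate
row_sums = A.sum(axis=1)        # per-ray normalization (forward weights)
col_sums = A.sum(axis=0)        # per-pixel normalization (back weights)
lam = 1.0                       # relaxation factor, stable for 0 < lam < 2

for it in range(5000):
    residual = (b - A @ x) / row_sums       # ray-normalized data mismatch
    x += lam * (A.T @ residual) / col_sums  # SART back-distribution step
    np.clip(x, 0.0, None, out=x)            # enforce non-negativity

rel_err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With noiseless, consistent data the iterates approach the true object; with a 40-degree limited-angle geometry the system is far more ill-conditioned, which is why the abstracts evaluate in-plane sharpness and out-of-plane blur rather than exact recovery.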
Integration of kerma-area product and cumulative air kerma determination into a skin dose tracking system for fluoroscopic imaging procedures
The skin dose tracking system (DTS) that we developed provides a color-coded mapping of the cumulative skin dose distribution on a 3D graphic of the patient in real time during fluoroscopic procedures. The DTS has now been modified to also calculate the kerma-area product (KAP) and cumulative air kerma (CAK) for fluoroscopic interventions using data obtained in real time from the digital bus of a Toshiba Infinix system. KAP is the integral of air kerma over the beam area and is typically measured with a large-area transmission ionization chamber incorporated into the collimator assembly. In this software, KAP is automatically determined for each x-ray pulse as the product of the air kerma per mAs (from a calibration file for the given kVp and beam filtration), the mAs per pulse, the length and width of the beam, and a field nonuniformity correction factor. Field nonuniformity is primarily the result of the heel effect, and the correction factor was determined from the beam profile measured with radiochromic film. Dividing the KAP by the beam area at the interventional reference point gives the area-averaged CAK. The KAP and CAK per x-ray pulse are summed after each pulse to obtain running totals for the procedure in real time. The calculated KAP and CAK agreed closely with the values displayed by the fluoroscopy machine. The DTS is thus able to calculate both KAP and CAK automatically without the need for an add-on transmission ionization chamber.
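The per-pulse accumulation described above can be sketched directly from the stated product. The calibration value, pulse parameters, field size, and nonuniformity factor below are hypothetical; in the DTS they come from calibration files and the imaging-system digital bus:

```python
def pulse_kap(air_kerma_per_mas, mas_per_pulse, length_cm, width_cm,
              nonuniformity=1.0):
    """Kerma-area product for one x-ray pulse (e.g. mGy*cm^2):
    calibration air kerma per mAs, times mAs per pulse, times beam
    length and width, times the field nonuniformity correction."""
    return air_kerma_per_mas * mas_per_pulse * length_cm * width_cm * nonuniformity

total_kap = 0.0
total_cak = 0.0
for pulse in range(100):          # 100 identical hypothetical pulses
    length_cm, width_cm = 15.0, 15.0
    area = length_cm * width_cm   # beam area at the reference point, cm^2
    kap = pulse_kap(0.1, 2.0, length_cm, width_cm, nonuniformity=0.95)
    total_kap += kap
    total_cak += kap / area       # area-averaged cumulative air kerma
```

Summing per pulse, rather than integrating after the fact, is what lets the DTS display running KAP and CAK totals during the intervention.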