Proceedings Volume 10573

Medical Imaging 2018: Physics of Medical Imaging


Volume Details

Date Published: 26 June 2018
Contents: 30 Sessions, 216 Papers, 61 Presentations
Conference: SPIE Medical Imaging 2018
Volume Number: 10573

Table of Contents

  • Front Matter: Volume 10573
  • Breast Imaging
  • Breast Phantoms
  • Tomosynthesis
  • Detectors
  • CT Systems and Algorithms
  • Keynote and Innovations in Imaging Systems
  • Photon Counting Detectors
  • CT Image Quality and Dose
  • Photon Counting Imaging
  • Multi-Energy CT
  • Deep Learning for CT
  • Neuroimaging
  • Cardiothoracic
  • Phase Contrast Imaging
  • Image Reconstruction
  • Poster Session: New Imaging Systems
  • Poster Session: Quantitative Imaging
  • Poster Session: Phantoms
  • Poster Session: Image Reconstruction
  • Poster Session: Algorithms
  • Poster Session: CT Image Quality and Dose
  • Poster Session: Cone-Beam CT
  • Poster Session: Phase Contrast Imaging
  • Poster Session: Multi-Energy X-Ray and CT
  • Poster Session: Photon-Counting Imaging
  • Poster Session: Breast Imaging
  • Poster Session: Tomosynthesis
  • Poster Session: Detectors
  • Poster Session: Radiography and Fluoroscopy
Front Matter: Volume 10573
This PDF file contains the front matter associated with SPIE Proceedings Volume 10573, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Breast Imaging
Comparison of direct-conversion a-Se and CsI scintillator-based CMOS FFDM/DBT flat-panel detectors using an anthropomorphic breast phantom with embedded microcalcification signals
Andrey Makeev, Lynda Ikejimba, Stephen J. Glick
Digital flat-panel detectors (FPDs) used in x-ray breast imaging [full-field digital mammography (FFDM) or digital breast tomosynthesis (DBT)] employ either semiconductor-based sensors, which convert incident x-ray photons directly into charge pulses, or scintillator-based sensors, which first absorb x-rays and then re-radiate the energy in the form of optical light that is subsequently collected by photodiodes. Direct-conversion detectors lack the optical light stage and hence potentially allow better spatial resolution, which can be beneficial for imaging small signals such as microcalcifications. Some recent scintillator detectors use columnar CsI to contain optical photon dispersion via total internal reflection inside the columns.

In this study we tested two detectors: an amorphous selenium (a-Se) direct-conversion FPD with a pixel size of 85 μm, integrated in a number of FFDM and DBT systems, and a new columnar CsI scintillator detector with a 49.5 μm pixel and CMOS active-pixel readout technology.

The detectors were evaluated using 1) linear assessment methods, i.e., the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE); and 2) a human-observer reader study in which microcalcification signals were detected in FFDM images of an anthropomorphic breast phantom.
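The linear-systems metrics named above combine into a single frequency-dependent figure of merit. A minimal sketch of that combination, assuming the NPS has already been normalized by the squared mean signal (NNPS) and that `q` stands for the incident photon fluence; the ideal-detector check below is illustrative, not data from this study:

```python
import numpy as np

def dqe(mtf, nnps, q):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)).

    mtf  : presampled MTF sampled on a frequency axis
    nnps : noise power spectrum normalized by the squared mean signal
    q    : incident photon fluence (photons / mm^2)
    """
    return np.asarray(mtf) ** 2 / (q * np.asarray(nnps))

# An ideal photon-counting detector: MTF = 1 and NNPS = 1/q at all
# frequencies, so the DQE is 1 everywhere.
f = np.linspace(0.0, 5.0, 6)
ideal = dqe(np.ones_like(f), np.full_like(f, 1 / 1000.0), 1000.0)
```

In practice each of the three inputs is itself estimated from edge images (MTF) and flat-field images (NNPS), so the sketch only shows the final assembly step.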
A linear systems approach to modeling anatomic noise in digital mammography
Lesion detectability in digital mammography (DM) is limited by spatial variations in breast tissue composition, commonly referred to as anatomic noise. Quantification of anatomic noise and subsequent incorporation into task-based assessments of DM image quality currently requires an empirical approach, in which the anatomic noise power spectrum (NPS) is extracted from clinical images or images of physical phantoms. This limitation precludes fully theoretical modeling of novel approaches for suppressing anatomic noise. We show theoretically that the anatomic NPS in DM is linearly related to the NPS of the thickness of fibroglandular tissue. We validated this relationship using a digital model of a three-dimensional structured breast. We simulated breasts with power-law exponents of 3, thicknesses ranging from 5 cm to 7 cm, and fibroglandular tissue fractions ranging from 40% to 60%. The fibroglandular component of each simulated breast was projected onto a theoretical image plane. For each set of parameters, the fibroglandular NPS was extracted from the ensemble average of 100 fibroglandular projections and fit to a power-law model. The magnitude and power-law exponent of the fibroglandular NPS were then used to predict the system-dependent anatomic NPS over a wide range of tube voltages. Theoretical predictions were then compared with the anatomic NPS extracted from ensembles of simulated x-ray projection images. In all cases, good agreement was observed between the predictions of the linear theory and the simulated anatomic NPS. The linear systems approach developed here can therefore be used to theoretically optimize and evaluate novel breast-imaging techniques without the requirement for empirical input parameters.
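The power-law fit described above (anatomic NPS proportional to alpha / f^beta) reduces to linear regression in log-log space. A minimal sketch, with a synthetic radially averaged spectrum standing in for measured data; the alpha and beta values are invented for illustration:

```python
import numpy as np

def fit_power_law(freq, nps):
    """Fit NPS(f) = alpha / f**beta by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(freq), np.log(nps), 1)
    return np.exp(intercept), -slope   # (alpha, beta)

# Synthetic spectrum with alpha = 2.0 and beta = 3.0, an exponent in the
# range typically reported for breast parenchymal texture.
f = np.linspace(0.05, 1.0, 50)         # spatial frequency, cycles/mm
alpha, beta = fit_power_law(f, 2.0 / f**3)
```

With measured spectra the fit is usually restricted to the low-frequency band where anatomic noise dominates quantum noise.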
Towards determination of individual glandular dose
Hannie Förnvik, Pontus Timberg, Magnus Dustler, et al.
Due to variations in the amount and distribution of glandular breast tissue among women, the mean glandular dose (MGD) can be a poor measure of the individual glandular dose. Therefore, to improve the basis for risk assessment related to radiation dose from breast x-ray examinations, the distribution should be considered. Breast tomosynthesis (BT) is an imaging technique that may be used as an alternative or complement to standard mammography in breast cancer screening, and it could provide the 3D localisation of glandular tissue required for estimating the individual glandular dose. In this study, we investigated the possibility of localizing glandular tissue from BT data and using a Monte Carlo simulation routine to estimate the glandular dose for software breast phantoms with different amounts and distributions of glandular breast tissue. As an initial evaluation of the method, the local energy absorption in glandular tissue was estimated for seven breast phantoms and the corresponding phantoms recreated from reconstructed BT data. As expected, the normalized glandular dose was found to differ substantially with glandular distribution. This emphasizes the importance of glandular tissue localization for estimation of the individual glandular dose. The results showed good accuracy for estimation of the normalized glandular dose using breast phantoms recreated from reconstructed BT image volumes (relative differences between –7.3% and +9.5%). Following this initial study, the method will be evaluated for more phantoms and potentially developed for patient cases. In the future it could become a useful tool in breast dosimetry as a step towards the individual glandular dose.
Classification of breast microcalcifications using dual-energy mammography
Purpose: To investigate the potential of spectral mammography to distinguish between type I microcalcifications, consisting of calcium oxalate dihydrate compounds typically associated with benign lesions, and type II microcalcifications containing hydroxyapatite, which are predominantly detected in malignant tumors. Methods: Simulation and phantom studies were carried out using polyenergetic spectra from a tungsten-anode x-ray tube at low and high voltage (kVp) settings. Low- and high-energy attenuation ratios were calculated for microcalcifications of different types and thicknesses. Classification as type I or II was performed by using the attenuation ratio as a criterion. The results were evaluated using receiver operating characteristic (ROC) analysis. Results: In simulation studies, four combinations of the dual-energy (low and high kVp) exposure technique were investigated: 1) 30 kVp, 1 mm aluminum filtration and 50 kVp, 0.1 mm copper filtration; 2) 30 kVp, 1 mm Al and 50 kVp, 0.2 mm Cu; 3) 30 kVp, 2 mm Al and 50 kVp, 0.1 mm Cu; and 4) 30 kVp, 2 mm Al and 50 kVp, 0.2 mm Cu. In the ROC curve analysis, area under the curve (AUC) values were above 0.95 for all of the spectra with the exposure equivalent to 1.5 mGy of mean glandular dose delivered to a 4 cm thick breast phantom. The 30 kVp/2 mm Al and 50 kVp/0.1 mm Cu tube-voltage/filter combination demonstrated better performance than the other combinations. In phantom studies, a 40 kVp setting paired with a 1 mm Al filter and a 70 kVp setting paired with a 0.1 mm Cu filter, operated at a 20 mAs technique, were used. The obtained AUC value was also high (0.91), supporting the accuracy of the proposed classification method. Conclusion: The results of the current study suggest that dual-energy mammography systems have the potential to be used for discrimination between type I and type II microcalcifications, to improve early breast cancer diagnosis and reduce the number of unnecessary breast biopsies.
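Classification by a scalar attenuation ratio, evaluated by the area under the ROC curve, can be sketched with a rank-statistic AUC (the Mann-Whitney form). The ratio distributions below are invented stand-ins for the study's simulated values; only the evaluation machinery is generic:

```python
import numpy as np

def roc_auc(scores_type2, scores_type1):
    """AUC via the Mann-Whitney statistic: the probability that a random
    type II score exceeds a random type I score (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_type2 for n in scores_type1)
    return wins / (len(scores_type2) * len(scores_type1))

rng = np.random.default_rng(1)
# Hypothetical low/high-energy attenuation ratios: hydroxyapatite (type II)
# attenuates low-energy photons more strongly, shifting its ratio upward.
type1 = rng.normal(1.60, 0.05, 200)   # calcium oxalate dihydrate
type2 = rng.normal(1.75, 0.05, 200)   # hydroxyapatite
auc = roc_auc(type2, type1)
```

Thresholding the ratio at any operating point then trades sensitivity against specificity along the same ROC curve.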
Virtual clinical trial of lesion detection in digital mammography and digital breast tomosynthesis
Predrag R. Bakic, Bruno Barufaldi, David Higginbotham, et al.
We have designed and conducted 35 virtual clinical trials (VCTs) of breast lesion detection in digital mammography (DM) and digital breast tomosynthesis (DBT) using a novel open-source simulation pipeline, OpenVCT. The goal of the VCTs is to test in silico reports that DBT provides substantial improvements in the detectability of masses, while the detectability of microcalcifications remains comparable to DM. For this test, we generated 12 software breast phantoms (volume 700 ml, compressed thickness 6.33 cm), varying the number of simulated tissue compartments and their shape. Into each phantom, we inserted multiple lesions located 2 cm apart in the plane parallel to the detector at the level of the nipple. Simulated ellipsoidal masses (oblate spheroids 7 mm in diameter and of various thicknesses) and single calcifications of various size and composition were inserted; a total of 17,640 lesions were simulated for this project. DM and DBT projections of phantoms with and without lesions were synthesized assuming a clinical acquisition geometry. Exposure parameters (mAs and kVp) were selected to match AEC settings. Processed DM images and reconstructed DBT slices were obtained using a commercially available software library. Lesion detection was simulated by channelized Hotelling observers, with 15 LG channels and a spread of 22, using independent sets of 480 image samples (150×150 pixel ROIs) for training and 480 samples for testing. Our VCTs showed an average AUC improvement for DBT vs DM of 0.027 for microcalcifications and 0.103 for masses, in close agreement (within 1%) with clinical data reported in the literature.
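The channelized Hotelling observer with Laguerre-Gauss (LG) channels mentioned above can be sketched generically: build rotationally symmetric LG channels, project ROIs into channel space, and compute the Hotelling detectability index. The ROI size, noise model, and Gaussian test signal below are assumed for illustration and are not the OpenVCT configuration:

```python
import numpy as np

def lg_channels(n_channels, size, spread):
    """Rotationally symmetric Laguerre-Gauss channels on a size x size grid."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    g = 2.0 * np.pi * r2 / spread**2
    chans = []
    for n in range(n_channels):
        # Laguerre polynomial L_n(g) via the standard three-term recurrence.
        L_prev, L = np.ones_like(g), 1.0 - g
        if n == 0:
            L = L_prev
        else:
            for k in range(2, n + 1):
                L_prev, L = L, ((2 * k - 1 - g) * L - (k - 1) * L_prev) / k
        u = np.sqrt(2.0) / spread * np.exp(-np.pi * r2 / spread**2) * L
        chans.append(u.ravel())
    return np.array(chans)                      # (n_channels, size*size)

def cho_dprime(present, absent, channels):
    """Hotelling detectability index computed in channel space."""
    vs, vn = present @ channels.T, absent @ channels.T
    dv = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

rng = np.random.default_rng(0)
size, n_img = 32, 400
y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
signal = 2.0 * np.exp(-(x**2 + y**2) / (2 * 3.0**2))       # Gaussian blob
absent = rng.normal(0.0, 1.0, (n_img, size * size))
present = rng.normal(0.0, 1.0, (n_img, size * size)) + signal.ravel()
dprime = cho_dprime(present, absent, lg_channels(15, size, 22.0))
```

The channel projection is what makes the observer practical: the covariance matrix is estimated in a 15-dimensional channel space rather than over all ROI pixels.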
Breast Phantoms
Evaluation of statistical breast phantoms with higher resolution
In previous work, we generated computational breast phantoms using a principal component analysis (PCA), or "eigenbreast," technique. For this study, we sought to address resolution limitations in the previously synthesized breast phantoms by analyzing a new human-subject data set with higher resolution.

We apply PCA to a set of input breast cases; weighted sums along the different eigenvectors, or "eigenbreasts," can then generate any number of new cases. While breasts vary in structure and form, we used a series of compressed breasts derived from human-subject breast CT volumes to create the eigenbreasts. We used an initial set of thirty-five phantoms from a new CT patient population with 155×155×155 μm³ voxel size. The training set and synthesized phantoms were evaluated by the power-law exponent β and by changes in volumetric breast density resulting from the PCA process.

The synthetic phantoms were found to have β values and fibroglandular density distributions similar to those of the training dataset. Individual synthetic phantoms appeared to capture glandular features present in the training phantoms but had visually different texture features. This work shows that the earlier eigenbreast technique can be extended to newer, higher-resolution datasets and can produce synthetic phantoms that retain the quantitative properties of the training data.
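The eigenbreast idea, PCA on a training set followed by weighted sums of eigenvectors, can be sketched on flattened toy phantoms. Sampling the weights from a Gaussian fit to the training coefficients is one plausible choice, not necessarily the authors'; the data here are random stand-ins:

```python
import numpy as np

def eigenbreasts(training, n_modes):
    """PCA of flattened training phantoms: mean, modes, training coefficients."""
    mean = training.mean(axis=0)
    U, s, Vt = np.linalg.svd(training - mean, full_matrices=False)
    modes = Vt[:n_modes]                      # the "eigenbreasts"
    coeffs = U[:, :n_modes] * s[:n_modes]     # projections of training cases
    return mean, modes, coeffs

def synthesize(mean, modes, coeffs, rng):
    """New case = mean + weighted sum of eigenbreasts, with weights drawn
    from a Gaussian fit to the training coefficients."""
    w = rng.normal(coeffs.mean(axis=0), coeffs.std(axis=0))
    return mean + w @ modes

rng = np.random.default_rng(2)
training = rng.random((35, 500))              # 35 flattened toy "phantoms"
mean, modes, coeffs = eigenbreasts(training, n_modes=34)
# With all 34 non-trivial modes of a 35-case set kept, each training case
# is reproduced exactly by its coefficients.
recon = mean + coeffs @ modes
new_case = synthesize(mean, modes, coeffs, rng)
```

Truncating `n_modes` below the full rank trades fidelity for smoothness, which is one way resolution and texture properties of the synthesized phantoms are controlled.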
Anthropomorphic breast phantoms for evaluation of FFDM/DBT and breast CT using inkjet printing
Lynda C. Ikejimba, Jesse Salad, Andrey Makeev, et al.
The breast phantoms currently available for evaluating full field digital mammography (FFDM), digital breast tomosynthesis (DBT), and breast CT (bCT) systems often lack the complexity present in real breasts. In this work we present a new methodology for creating physical anthropomorphic breast phantoms for use in FFDM, DBT, and dedicated bCT systems using zinc acetate-doped ink. First, an uncompressed virtual phantom was created through analytical modeling. The model represented a breast with 28% fibroglandular density with 13 tissue classes and contained a 5 mm lesion. The breast was binarized to two tissue classes: adipose and fibroglandular tissue. The phantom was then realized through inkjet printing using dye ink doped with zinc acetate for the fibroglandular components and three candidate materials for the adipose background: parchment paper, organic paper, and office paper. The fabrication process was evaluated in terms of material realism and reproducibility using spectroscopy, a clinical FFDM system, and a benchtop bCT system. The linear attenuation coefficient of the doped ink plus parchment paper and parchment paper alone closely matched those of the fibroglandular and adipose tissues, respectively. A methodology for generating anthropomorphic breast phantoms was developed using a novel inkjet printing technique for use in FFDM/DBT, as well as dedicated breast CT systems. A novel uncompressed breast phantom for bCT was fabricated using inexpensive, easily available materials with realistic tissue properties.
Improved simulation of Cooper ligaments in breast phantoms
David D. Pokrajac, Marko D. Petkovic, Andrew D. A. Maidment, et al.
Computer simulation of breast anatomy plays a crucial role in virtual clinical trials (VCTs) for preclinical optimization of breast imaging systems. Software breast phantoms provide ground truth about tissue distribution and the flexibility to cover anatomical variations. We have experience designing software phantoms based upon recursive partitioning using octrees; these phantoms simulate tissue compartments and fibrous ligaments, which contribute to the parenchymal texture. Realistic simulation critically affects the image quality and the VCT accuracy. Our simulation method may produce artifacts (bumps and dents) due to prematurely stopped partitioning of octrees. These artifacts compromise the image quality by reducing ligament smoothness and distorting parenchymal texture. In this study, we discuss the phenomenology of the artifacts and propose using a spherical approximation of the cubes corresponding to the octree nodes to assess the minimal and maximal distances from a cube to the median surface of the ligament. We demonstrate that the proposed technique is complementary to our earlier method for improving the smoothness of simulated Cooper's ligament surfaces. We show that the proposed technique leads to observable changes in simulated phantom projections. The computational overhead introduced by the proposed method may be compensated by an efficient implementation. The proposed method may also be applied to the simulation of quasi-planar structures in other organs and in other (biological or non-biological) domains.
Development of a physical 3D anthropomorphic breast texture model using selective laser sintering rapid prototype printing
Anthropomorphic breast phantoms are useful for the development and characterization of breast x-ray imaging systems. Rapid prototyping (RP) opens a new way of generating complex shapes similar to real breast tissue patterns at reasonably high resolution and with a high degree of reproducibility. Such a phantom should have x-ray attenuation properties similar to adipose and fibroglandular tissue across a broad x-ray energy range. However, material selection is limited to materials compatible with the printing system, which often requires adding non-organic dopants. Fortunately, some off-the-shelf materials may be suitable for breast phantoms. Here a polyamide-12/water texture phantom is investigated, which can be used for mammography, tomosynthesis, and breast CT. Polyamide-12 (PA-12) is shown to have linear attenuation coefficients across an energy range of 15–40 keV matching adipose tissue to within 10% effective breast density. A selective laser sintering (SLS) printer is used for manufacturing the phantom. The phantom was imaged on the Senographe Pristina (GE Healthcare, Chicago, IL), while an initial assessment of 3D fidelity with the original design was performed by acquiring volume images of the phantom on a micro-CT system. A root mean distance error of 0.22 mm was observed between the micro-CT volume and the original design. The PA-12 structures appeared to be slightly smaller than in the original, possibly due to infiltration of water into the PA-12 surfaces. Power spectra measurements for mammograms of the simulated and physical phantoms both demonstrated an inverse power-law spectrum shape, with exponents β = 3.72 and 3.76, respectively.
Design and validation of biologically inspired spiculated breast lesion models utilizing structural tissue distortion
Premkumar Elangovan, Elena Mihalas, Majdi Alnowami, et al.
The use of conventional clinical trials to optimise technology and techniques in breast cancer screening carries with it issues of dose, high cost, and delay. This has motivated the development of Virtual Clinical Trials (VCTs) as an alternative in-silico assessment paradigm. However, such an approach requires a set of modelling tools that can realistically represent the key biological and technical components within the imaging chain. The OPTIMAM image simulation toolbox provides a complete, validated end-to-end solution for VCTs, wherein commonly found regular and irregular lesions can be successfully and realistically simulated. As spiculated lesions are the second most common form of solid mass, we report on our latest developments to produce realistic spiculated lesion models, with particular application in Alternative Forced Choice trials. We make use of sets of spicules drawn using manually annotated landmarks and interpolated by a fitted 3D spline for each spicule. Once combined with a solid core, these are inserted into 2D and tomosynthesis image segments and blended using a combination of elongation, rotational alignment with the background, spicule twisting, and core radial contraction effects. A mixture of real and simulated images (86 2D and 86 DBT images) with spiculated lesions was presented to an experienced radiologist in an observer study. The latest observer-study results demonstrated that 88.4% of simulated lesions in 2D and 67.4% of simulated lesions in DBT were rated as definitely or probably real on a six-point scale. This represents a significant improvement on our previous work, which did not employ any background blending algorithms to simulate spiculated lesions in clinical images.
3D printed anthropomorphic physical phantom for mammography and DBT with high contrast custom materials, lesions, and uniform chest wall region
Andrea Rossman, Matthew Catenacci, Anne M. Li, et al.
Anthropomorphic breast phantoms mimic anatomy to evaluate the performance of clinical mammography and digital breast tomosynthesis (DBT) systems. Our goal is to make a phantom that mimics clinically relevant appearance of a patient to allow for improved imaging systems and lesion detection. We previously presented a voxelized 3D printed physical phantom with breast tissue anatomy and a uniform chest wall for evaluating standard QC metrics. In the current study, metal ink resolution patterns were designed for the uniform chest wall spanning 1.5 to 10 lp/mm to cover the resolution range of mammography and DBT systems, and including test objects and fiducial markers for future automated processing. The previous phantom had a limited range of 36%-64% breast density using the commercial photopolymer inks TangoPlus and VeroWhite. Several doped materials were tested with the aim of increasing the contrast of the fibroglandular breast tissue in the previous phantom. We created custom-made photopolymers doped with several materials, including tungsten, to increase breast density, as well as iodine to simulate contrast-enhanced lesions. We also measured a new, commercial photopolymer ink, VeroPureWhite, which corresponds to 92% breast density. The tungsten-doped material allows for 33-100% breast density range in the phantom, more than double the density range in our previous phantom. To our phantom with normal anatomy, we also added lesion inserts in the form of 3D-printed mass lesions with varying sizes and contrasts and uniform, commercially produced iodine inserts to investigate interactions of lesions without and with contrast in breast tissue.
Tomosynthesis
Geometric calibration for a next-generation digital breast tomosynthesis system using virtual line segments
Chloe J. Choi, Trevor L. Vent, Raymond J. Acciavatti, et al.
Our next-generation tomosynthesis (NGT) system prototype introduces additional geometric movements to conventional digital breast tomosynthesis (DBT) acquisition geometries to provide isotropic super-resolution. These movements include x-ray source movement in the posteroanterior (PA) direction and detector movement in the z-direction (perpendicular to the breast support). The desired benefits of the NGT system are achievable only with precise geometric calibration. In our previous work, a geometric phantom with 24 point-like ball bearings (BBs) at four different magnifications was designed, and a geometric calibration method that minimizes the difference between the projected and calculated BB locations was tested. This study investigates a new calibration method using the same phantom, utilizing projected 2D equations of virtual line segments created by any two BBs, for more precise reconstruction of the various acquisition modes of the NGT system. The geometric parameters were solved with two approaches: (1) solving each projection individually and (2) solving all projections simultaneously. Furthermore, two algorithms to compensate for possible inaccuracies in the BB locations within the phantom, presumably due to limited manufacturing precision, were developed and compared: (1) manually identifying and removing poorly positioned BBs and (2) iterating to re-calculate the BB locations. Magnification digital breast tomosynthesis was also performed to test the calibration method further. Tomographic image reconstructions successfully demonstrated isotropic super-resolution and magnified super-resolution.
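At its core, fitting acquisition geometry to known marker positions and their detected image locations is a projection-matrix estimation problem. A generic direct-linear-transform (DLT) sketch of that problem, under an assumed pinhole model and an invented geometry; this is one standard formulation, not the authors' exact parameterization:

```python
import numpy as np

def dlt(pts3d, pts2d):
    """Fit a 3x4 projection matrix from 3D -> 2D correspondences (>= 6 points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)        # null-space solution, defined up to scale

def project(P, pts3d):
    ph = np.c_[pts3d, np.ones(len(pts3d))] @ P.T
    return ph[:, :2] / ph[:, 2:3]      # perspective divide

# Invented geometry: focal length and detector offsets are arbitrary.
P_true = np.array([[1000.0, 0, 256, 0],
                   [0, 1000.0, 256, 0],
                   [0, 0, 1, 600.0]])
rng = np.random.default_rng(3)
bbs = rng.uniform([-50, -50, 50], [50, 50, 150], (24, 3))   # 24 BB positions
P_est = dlt(bbs, project(P_true, bbs))
err = np.abs(project(P_est, bbs) - project(P_true, bbs)).max()
```

With noisy BB detections the same system is solved in a least-squares sense, and per-projection versus joint solutions correspond to fitting one matrix per view or a shared parameterization across all views.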
Stationary digital intraoral tomosynthesis: demonstrating the clinical potential of the first-generation system
Connor Puett, Christina Inscoe, Robert Hilton, et al.
Stationary intraoral tomosynthesis (sIOT) is an experimental imaging approach using a fixed array of carbon nanotube-enabled x-ray sources to produce a series of projections from which three-dimensional information can be reconstructed and displayed. Customized to the dental workspace, the first-generation sIOT tube is compact, easy to operate, and designed to interface with standard digital intraoral detectors. The purpose of this work was to explore the utility of the sIOT device across a range of dental pathologies and thereby identify limitations potentially amenable to correction through post-acquisition processing. Phantoms, extracted human teeth, and cadaveric specimens containing caries, fractures, and dilacerated roots, often associated with amalgam restorations, were imaged using tube settings that match the kVp and mA used in conventional clinical 2D intraoral imaging. An iterative reconstruction approach generated a stack of image slices through which the reader scrolls to appreciate depth relationships. Initial experience demonstrated an improved ability to visualize occlusal caries, interproximal caries, crown and root fractures, and root dilacerations when compared to 2D imaging. However, artifacts around amalgam restorations and metal implants proved problematic, leading to the incorporation of an artifact reduction step in the post-acquisition processing chain. These findings support the continued study of sIOT as a viable limited-angle tomography tool for dental applications and provide a foundation for the ongoing development of image processing steps to maximize the diagnostic utility of the displayed images.
Application of three-pass metal artefact reduction to photon-counting breast tomosynthesis
Harald S. Heese, Frank Bergner, Klaus Erhard, et al.
Digital breast tomosynthesis is a rising modality in breast cancer screening and diagnosis. As such, there is also increasing interest in employing breast tomosynthesis in diagnostic tasks like tomosynthesis-guided stereotactic breast biopsy, which includes imaging in presence of metal objects. Since reconstruction techniques in tomosynthesis operate on projection data from a limited angular range, highly attenuating metal objects create strong streak-like tomosynthesis artefacts, which are accompanied by strong undershoots at the object boundaries in the focal and adjacent slices. These artefacts can significantly hamper image quality by obscuring anatomical detail in the vicinity of the metal object.

In this contribution, we therefore present an approach for reducing such metal artefacts by means of a three-pass reconstruction method. The method analyzes the reconstructed tomosynthesis volume for metal contributions, determines the corresponding pixels in the projection data, and decomposes the projections accordingly into metal and non-metal projections. After each projection set is reconstructed independently, the final, enhanced tomosynthesis volume is obtained by a non-linear blending operation.

The proposed approach was evaluated on a set of eight clinical cases. Each breast contained a metal clip, of the kind typically left as a marker after biopsy. The proposed method retained the appearance of the metal object in the focal slice and its adjacent slices while completely removing the streak artefacts in all distant slices. The efficacy of the method in the presence of larger objects was demonstrated in phantom studies, where the visibility of microcalcifications was completely restored.
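The decomposition step of such a pipeline, splitting each projection into a metal part and an inpainted non-metal part before the two are reconstructed and blended, can be sketched in one dimension. The linear interpolation and the simple weighted-sum blend below are illustrative stand-ins for the paper's actual operators:

```python
import numpy as np

def split_projection(proj, metal_mask):
    """Decompose one detector row: pixels flagged as metal are inpainted by
    linear interpolation from their neighbors, and the residual is
    attributed to the metal object."""
    x = np.arange(proj.size)
    nonmetal = proj.astype(float).copy()
    nonmetal[metal_mask] = np.interp(x[metal_mask],
                                     x[~metal_mask], proj[~metal_mask])
    return proj - nonmetal, nonmetal          # (metal, nonmetal)

def blend(vol_metal, vol_nonmetal, weight=1.0):
    """Final pass: recombine the two reconstructions (a simple weighted sum
    here, standing in for the non-linear blending operation)."""
    return vol_nonmetal + weight * vol_metal

proj = np.array([1.0, 1.0, 10.0, 1.0, 1.0])   # a clip spikes one pixel
mask = np.array([False, False, True, False, False])
metal, nonmetal = split_projection(proj, mask)
```

Reconstructing the non-metal projections alone is what removes the limited-angle streaks, since the highly attenuating object never enters that backprojection.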
Parallel-shift tomosynthesis for orthopedic applications
Christoph Luckner, Stefan Sesselmann, Thomas Mertelmeier, et al.
The upsurge of interest in digital tomosynthesis has been driven mainly by breast imaging; however, the technique is attracting more and more attention in orthopedic imaging as well. Offering superior in-plane resolution compared to CT and additional depth information compared to conventional 2D x-ray images, tomosynthesis may be an interesting complement to the other two imaging modalities. Additionally, a tomosynthesis scan is likely to be faster, and the radiation dose is considerably below that of CT. Usually, a tomosynthetic acquisition focuses on only one body part, as common acquisition techniques restrict the field of view. We propose a method that can perform full-body acquisitions with a standard x-ray system by shifting the source and detector simultaneously in parallel planes, without the need to calibrate the system beforehand. Furthermore, a novel aliasing filter is introduced that addresses the impact of the non-isotropic resolution during reconstruction. We provide images obtained by filtered as well as unfiltered backprojection and discuss the influence of the scanning angle and the reconstruction filter on the reconstructed images. Our experiments show promising results, especially for imaging anatomical structures that usually obscure one another, since the depth resolution allows these structures to be distinguished. Additionally, owing to the high isotropic in-plane spatial resolution of the tomographic volume, precise measurements can easily be performed, which is a crucial task, e.g., during the planning of orthopedic surgeries or the assessment of pathologies such as scoliosis or subtle fractures.
Investigating the contributions of anatomical variations and quantum noise to image texture in digital breast tomosynthesis
Our previous work on DBT image texture indicates that certain texture features may impact human observer performance for the task of low-contrast mass detection. Despite this, little is yet known about these texture statistics in the context of medical imaging. In this study, we investigate the factors that influence texture features in simulated DBT images. Specifically, we explore whether or not changes in quantum noise and anatomical variations are reflected in image texture curves. Our findings concerning the effects of Wiener filtration and changes in DBT system parameters indicate that texture statistics are affected by both anatomical variations and quantum noise.
Initial clinical evaluation of gated stationary digital chest tomosynthesis
Yueh Z. Lee, E. Taylor Gunnell, Christy R. Inscoe, et al.
High-resolution imaging of the chest depends on a breath hold maintained throughout the imaging time. However, pediatric and comatose patients are unable to follow respiration commands. Gated tomosynthesis could offer a lower-dose imaging modality with high in-plane resolution, but current systems are unable to gate prospectively within a reasonable scan time. In contrast, a carbon nanotube (CNT) based linear x-ray source array offers both the angular span and the precise control necessary to generate x-ray projections for gated tomosynthesis. The goal of this study was to explore the first clinical use of the CNT linear x-ray source array for gated chest imaging. Eighteen pediatric cystic fibrosis patients were recruited for this study, yielding 13 usable image sets. The s-GDCT system consists of a CNT linear x-ray source array coupled with a digital detector. A respiration signal derived from a respiratory belt served as the gating signal, with sources fired sequentially when the imaging window and the maximum-inspiration window coincided. Images were reconstructed and reviewed for motion blur and the ability to identify major anatomical structures. Image quality was highly dependent on the quality of the respiration gating signal, and a correlation was found between qualitative image quality and the height of the respiration peak. We demonstrate the first prospectively gated evaluation of stationary digital chest tomosynthesis in pediatric patients. Though further refinements in projection selection and respiratory gating approaches are necessary, this study demonstrates the potential utility of this low-dose imaging approach.
Application of neural networks to model the signal-dependent noise of a digital breast tomosynthesis unit
Fabrício A. Brito, Lucas R. Borges, Igor Guerrero, et al.
This work presents a practical method for estimating the spatially varying gain of the signal-dependent portion of the noise from a digital breast tomosynthesis (DBT) system. A number of image processing algorithms require prior knowledge of the noise properties of a DBT unit. However, this information is not easily available and thus must be estimated. The estimation of such parameters requires a large number of calibration images, as the noise often changes with acquisition angle, spatial position, and radiographic factors. This could represent a barrier to the algorithm's deployment, mainly for clinical applications. Thus, we modeled the gain of the Poisson noise of a commercially available DBT unit as a function of the radiographic factors, acquisition angle, and pixel position. First, we measured the noise parameters of a clinical DBT unit by acquiring 36 sets of calibration images (raw projections) using uniform phantoms of different thicknesses, within a range of radiographic factors commonly used in clinical practice. With this information, we trained a multilayer perceptron artificial neural network (MLP-ANN) to predict the gain of the Poisson noise automatically as a function of the acquisition setup. Furthermore, we varied the number of calibration images in the learning step of the MLP-ANN to determine the minimum number of images necessary to obtain an accurate model. Results show that the MLP-ANN was able to yield the desired parameters with an average error of less than 2%, using a learning dataset limited to only seven sets of calibration images. The accuracy of the model, along with its computational efficiency, makes this method an attractive tool for clinical image-based applications.
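The mapping from acquisition setup to Poisson-noise gain is a small regression problem. A self-contained numpy sketch of a one-hidden-layer perceptron fitted to an invented gain law; the "true" gain function, the network size, and the training settings are all illustrative stand-ins for the paper's MLP-ANN and measured calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented calibration set: normalized (kVp, mAs, angle) -> noise gain.
X = rng.uniform(0.0, 1.0, (300, 3))
y = 0.5 + 0.3 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2]

# One hidden layer of 16 tanh units, full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    dh = (err[:, None] * W2.T) * (1.0 - h**2)   # backprop through tanh
    W2 -= lr * (h.T @ err[:, None]) / len(y)
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dh) / len(y)
    b1 -= lr * dh.mean(axis=0)

mae = np.abs((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y).mean()
```

Once trained on calibration data, such a network replaces a lookup over all acquisition setups with a single evaluation per setup, which is what makes the approach attractive clinically.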
Detectors
Evaluation of novel multilayer x-ray detector designs using rapid Monte Carlo simulation
Direct and indirect active matrix flat-panel imagers (AMFPI) have become the dominant technology in digital radiography and fluoroscopy, and further improvements in imaging performance are being sought through novel detector designs. Two novel multilayer x-ray detectors are proposed to improve the DQE of existing AMFPI in R/F and CBCT applications that require high DQE and wide dynamic range. Both detectors utilize a back-irradiation (BI) geometry and incorporate both a-Se and scintillators in their designs. The first design, the Hybrid-AMFPI, is a composite direct/indirect detector that aims to improve the quantum efficiency of a-Se (with a maximum thickness of 1 mm due to carrier trapping) by adding a scintillator. The second design, the BI-SHARP-AMFPI (Back-Irradiated Scintillator HARP AMFPI), uses a High Gain Avalanche Rushing Photoconductor (HARP) a-Se layer to detect and amplify optical photons from an x-ray scintillator. This work uses the Fujita-Lubberts-Swank (FLS) Monte Carlo (MC) framework proposed by Star-Lack et al. to investigate the potential improvements in imaging performance of these detectors and the optimal detector configuration. Simulations were carried out at RQA5 and RQA9 standard beam qualities. Both front-irradiation (FI) and BI geometries were evaluated to demonstrate the advantage of BI. Our simulations confirm that the DQE of the Hybrid-AMFPI is substantially improved at low spatial frequencies compared to an otherwise identical direct AMFPI. Additionally, the role of gain matching of the direct and indirect signals (a consideration unique to multilayer AMFPI) is investigated in the imaging performance of both the Hybrid-AMFPI and BI-SHARP-AMFPI.
Empirical and theoretical examination of the noise performance of a prototype polycrystalline silicon active pixel array
Martin Koniczek, Larry E. Antonuk, Youcef El-Mohri, et al.
Active matrix flat-panel imagers (AMFPIs), which typically incorporate a single a-Si:H thin-film transistor (TFT) in each pixel, have become ubiquitous in diagnostic x-ray imaging by virtue of many advantages, including good radiation damage resistance and the economic availability of monolithic, large area backplanes. However, under conditions of low exposure per image frame, such as encountered in fluoroscopy, digital breast tomosynthesis and breast cone-beam CT, AMFPI performance degrades due to the effect of additive noise primarily originating from the acquisition electronics. To overcome this limitation, while retaining the advantages of AMFPIs, large area imagers can be fabricated using polycrystalline silicon (poly-Si) TFTs configured to form in-pixel amplifiers. Such active pixel (AP) circuits provide signal enhancement prior to readout, thereby largely overcoming the effect of additive noise, as well as facilitating correlated multiple sampling (CMS). In this paper, early results of an examination of the noise performance of a poly-Si AP prototype array are reported. The array consists of pixel circuit designs incorporating a single-stage amplifier with three TFTs and was operated at 25 fps using CMS techniques. Noise performance is compared to results obtained from sophisticated circuit simulations which account for TFT thermal and flicker noise. Noise is found to depend on many variables, including the size of the source-follower TFT, the reset voltage, the addressing time and the sampling technique – with noise levels from individual pixels as low as 715 e−. The circuit simulations were found to reproduce the trends for noise as a function of the aforementioned variables.
Imaging performance of CMOS and a-Si:H flat-panel detectors for C-arm fluoroscopy and cone-beam CT
N. M. Sheth, M. W. Jacobson, W. Zbijewski, et al.
Purpose: CMOS detectors are a potentially advantageous sensor technology for indirect-detection flat-panel detectors (FPDs), offering finer pixel pitch, faster frame rate, and lower electronic noise compared to a-Si:H sensors. This work presents a preliminary analysis of the 2D and 3D imaging performance of both detector technologies.

Methods: Two mobile C-arms were equipped with CMOS (Xineos 3030HS) and a-Si:H (PaxScan 3030X) FPDs. Technical assessment includes measurement of spatial resolution (MTF), image noise (NPS), and detective quantum efficiency (DQE). Evaluation of CBCT performance considers soft tissue visibility including axial image MTF, NPS, and noise-equivalent quanta (NEQ).

Results: The CMOS detector exhibited lower readout noise and slightly higher spatial resolution, as expected. The a-Si:H detector showed about 10-15% higher DQE at low spatial frequencies, while the CMOS detector showed greater resilience in DQE at higher spatial frequencies. In matched-resolution CBCT, both detectors showed roughly equivalent performance.

Conclusion: CMOS detectors benefit performance with respect to high-frequency tasks, but the current work did not demonstrate a strong advantage with respect to low-contrast soft-tissue visualization, in part due to light losses in scintillator-semiconductor coupling. Additional advantages include improved frame rate (reduced CBCT scan time). Ongoing work includes further investigation of modified bandwidth filters to take better advantage of underlying noise-resolution properties.
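The NPS measurements referenced in this assessment are commonly obtained from ensembles of flat-field images via the periodogram method. A minimal sketch (toy white-noise input, illustrative pixel pitch; not the authors' measurement pipeline):

```python
import numpy as np

def radial_nps(flats, pitch_mm):
    """1D radial noise power spectrum from an ensemble of flat-field ROIs
    (standard periodogram estimate)."""
    flats = np.asarray(flats, dtype=float)
    n, ny, nx = flats.shape
    noise = flats - flats.mean(axis=0)              # remove mean / fixed pattern
    nps2d = (np.abs(np.fft.fft2(noise)) ** 2).mean(axis=0) * pitch_mm ** 2 / (nx * ny)
    fy, fx = np.fft.fftfreq(ny, pitch_mm), np.fft.fftfreq(nx, pitch_mm)
    fr = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))   # radial frequency
    bins = np.linspace(0.0, fr.max(), 20)
    idx = np.digitize(fr.ravel(), bins)
    prof = np.array([nps2d.ravel()[idx == i].mean() for i in range(1, len(bins))])
    return bins[1:], prof

# Toy input: uncorrelated noise (sigma = 10) gives a flat NPS of
# sigma^2 * pitch^2 at every frequency.
rng = np.random.default_rng(1)
flats = 1000.0 + rng.normal(0.0, 10.0, size=(32, 64, 64))
f, nps = radial_nps(flats, pitch_mm=0.194)
```

Real detector noise is correlated (scintillator blur, electronic noise), so the measured NPS is not flat; the flat toy spectrum just checks the normalization.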
X-ray performance of new high dynamic range CMOS detector
Arundhuti Ganguly, P. Gerhard Roos, Tom Simak, et al.
A new high dynamic range CMOS x-ray detector is described. This sensor was designed specifically for x-ray imaging as opposed to the common approach of modifying a 3T optical sensor design. This allowed for a highly linear, wide dynamic range operation that has otherwise been a major drawback of CMOS x-ray detectors. The design is scalable from small tiles to large wafer-scale imagers fabricated on 300 mm wafers. The performance of such a detector built using a 9.4 cm × 9.4 cm tile is reported. The pixel size of this detector is 76 μm and it can be operated in the native resolution or 2x2 binned mode. Measurements were performed with a thallium-doped cesium iodide (CsI(Tl)) scintillator deposited on a reflective aluminum substrate. The imager was operated at 30 frames/second. The linearity, dynamic range, sensitivity, MTF, NPS and DQE at RQA5 were measured using the standard protocols. Linearity was measured to be better than 0.2%. Using a 600 μm CsI(Tl) scintillator, the maximum linear dose was 9 μGy with high gain and 56 μGy with low gain settings. This is comparable to conventional amorphous silicon flat panel detectors. The MTF is dominated by the scintillator and is 58% at 1 lp/mm and 28% at 2 lp/mm. The DQE is 70% at 0 lp/mm and 12% at the Nyquist frequency of 6.6 lp/mm. The high resolution combined with the large dynamic range and excellent DQE makes this CMOS detector particularly suitable for dynamic imaging including fluoroscopy, angiography and cone-beam CT.
Investigation of random gain variations in columnar CsI:Tl using single x-ray photon imaging
Adrian Howansky, A. R. Lubinsky, S. K. Ghose, et al.
The x-ray imaging performance of an indirect flat panel detector (I-FPD) is degraded by random variations in its scintillator’s conversion gain. At energies below the K-edge, these variations are caused by depth-dependence in light collection from within the scintillator, and intrinsic fluctuations in the number of optical photons (Nph) emitted per absorbed x-ray. At fixed energy, the former effect can be quantified by the average depth-dependent gain N̄ph(z). The latter effect can be evaluated using a Fano factor FN, defined as the variance in Nph divided by its mean at fixed interaction depth. Neither phenomenon has been directly measured in non-transparent scintillators used in medical I-FPDs, namely columnar CsI:Tl. This work presents experimental measurements of N̄ph(z) and FN in a columnar CsI:Tl scintillator with 1000 μm thickness. X-ray interactions were localized to fixed depths (±10 μm, 100 μm intervals) in the scintillator using a microslit beam of parallel synchrotron radiation (32 keV). Light bursts from single interactions at each depth were imaged using an II-EMCCD optical camera, and their magnitude was characterized by 2D summation of their image pixel values. The II-EMCCD camera was calibrated to convert summed pixel values to numbers of optical photons detected per event. The number distributions of photons collected per event were represented in histograms as “depth-localized pulse height spectra” (DLPHS), from which N̄ph(z) and FN were derived. The II-EMCCD’s noise contribution to these measurements was estimated and removed from FN. Depth-dependent and intrinsic variations in the gain of columnar CsI:Tl are compared.
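The Fano factor used above is simply the variance-to-mean ratio of the per-event photon counts. A minimal sketch with Poisson toy data (for which F = 1); in the actual measurement the camera's noise contribution is estimated and subtracted first, a step omitted here:

```python
import numpy as np

def fano_factor(photons_per_event):
    """Fano factor: variance of the per-event optical photon count
    divided by its mean (F = 1 for Poisson statistics)."""
    x = np.asarray(photons_per_event, dtype=float)
    return x.var() / x.mean()

# Toy pulse-height data: Poisson light bursts (~2000 photons/event).
rng = np.random.default_rng(2)
bursts = rng.poisson(lam=2000, size=50000)
F = fano_factor(bursts)
```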
CT Systems and Algorithms
CT metal artifact reduction using MR image patches
Jonathan S. Nielsen, Jens M. Edmund, Koen Van Leemput
Metal implants give rise to metal artifacts in computed tomography (CT) images, which may lead to diagnostic errors and erroneous CT number estimates when the CT is used for radiation therapy planning. Methods for reducing metal artifacts by exploiting the anatomical information provided by coregistered magnetic resonance (MR) images are of great potential value, but remain technically challenging due to the poor contrast between bone and air on the MR image. In this paper, we present a novel MR-based algorithm for automatic CT metal artifact reduction (MAR), referred to as kerMAR. It combines kernel regression on known CT value/MR patch pairs in the uncorrupted patient volume with a forward model of the artifact-corrupted values to estimate CT replacement values. In contrast to pseudo-CT generation that builds on multi-patient modelling, the algorithm requires no MR intensity normalisation or atlas registration. Image results for 7 head-and-neck radiation therapy patients with T1-weighted images acquired in the same fixation as the RT planning CT suggest a potential for more complete MAR close to the metal implants than the oMAR algorithm (Philips) used clinically. Our results further show improved performance in air and bone regions as compared to other MR-based MAR algorithms. In addition, we experimented with using kerMAR to define a prior for iterative reconstruction with the maximum likelihood transmission reconstruction algorithm, but observed no apparent improvement.
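The core kernel-regression step (estimating a CT value as a similarity-weighted average of CT values whose MR patches resemble the query patch) can be sketched as Nadaraya-Watson regression. This toy version with synthetic patches is an illustration of the technique, not the kerMAR implementation:

```python
import numpy as np

def kernel_regression(query_patches, train_patches, train_ct, bandwidth):
    """Nadaraya-Watson estimate: CT value as a Gaussian-weighted average of
    training CT values, weighted by MR-patch similarity."""
    q = np.asarray(query_patches)[:, None, :]       # (Q, 1, D)
    t = np.asarray(train_patches)[None, :, :]       # (1, T, D)
    d2 = ((q - t) ** 2).sum(axis=2)                 # squared patch distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w * train_ct).sum(axis=1) / w.sum(axis=1)

# Toy data: flattened 3x3 "MR" patches and a made-up CT/MR relationship.
rng = np.random.default_rng(3)
train_p = rng.uniform(size=(200, 9))
train_ct = 1000.0 * train_p.mean(axis=1)
est = kernel_regression(train_p, train_p, train_ct, bandwidth=0.3)
```

In kerMAR this regression supplies replacement values only inside artifact-corrupted regions, combined with the forward model of the corrupted data.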
Metal artifact reduction for radiation therapy: a simulation study
Yannan Jin, Drosoula Giantsoudi, Lin Fu, et al.
Metal artifacts have been a challenge in computed tomography (CT) for nearly four decades. Despite intensive research in this area, challenges still exist in commercial metal artifact reduction (MAR) solutions. MAR is particularly important for radiation therapy and proton therapy treatment planning because metal artifacts not only degrade the outline of tumors and sensitive organs, but also introduce errors in stopping power estimation, compromising dose prediction accuracy. In this study, we developed a MAR approach that combines hardware and algorithmic innovations to systematically tackle the challenge of metal artifacts in radiation therapy. We propose to operate the X-ray tube at exceptionally high voltage and the detector DAS with adaptive triggering rate to prevent photon starvation in the CT raw data, followed by physics-based sinogram domain precorrection and model-based iterative reconstruction to correct the metal artifacts. We performed an end-to-end simulation of the integrated MAR approach with advanced hardware and algorithmic solutions. We simulated 700 mAs/140 kVp and 550 mAs/180 kVp CT scans, 984 views, with and without adaptive triggering, of an image volume based on the Visible Human Project CT data set, and after inserting two titanium hip prostheses. The results demonstrated that the proposed MAR scheme can effectively eliminate metal artifacts and improve the accuracy of proton therapy planning. The dosimetric evaluation showed that with the proposed MAR solution, the error in range calculation was reduced from 7 mm to <1 mm.
High-resolution extremity cone-beam CT with a CMOS detector: evaluation of a clinical prototype in quantitative assessment of bone microarchitecture
Q. Cao, M. Brehler, A. Sisniega, et al.
Purpose: A prototype high-resolution extremity cone-beam CT (CBCT) system based on a CMOS detector was developed to support quantitative in vivo assessment of bone microarchitecture. We compare the performance of CMOS CBCT to an amorphous silicon (a-Si:H) FPD extremity CBCT in imaging of trabecular bone.

Methods: The prototype CMOS-based CBCT involves a DALSA Xineos3030 detector (99 μm pixels) with 400 μm-thick CsI scintillator and a compact 0.3 FS rotating anode x-ray source. We compare the performance of CMOS CBCT to an a-Si:H FPD scanner built on a similar gantry, but using a Varian PaxScan2530 detector with 0.137 mm pixels and a 0.5 FS stationary anode x-ray source. Experimental studies include measurements of Modulation Transfer Function (MTF) for the detectors and in 3D image reconstructions. Image quality in clinical scenarios is evaluated in scans of a cadaver ankle. Metrics of trabecular microarchitecture (BV/TV, Bone Volume/Total Volume, TbSp, Trabecular Spacing, and TbTh, trabecular thickness) are obtained in a human ulna using CMOS CBCT and a-Si:H FPD CBCT and compared to gold standard μCT.

Results: The CMOS detector achieves ~40% increase in the f20 value (frequency at which MTF reduces to 0.20) compared to the a-Si:H FPD. In the reconstruction domain, the FWHM of a 127 μm tungsten wire is also improved by ~40%. Reconstructions of a cadaveric ankle reveal enhanced modulation of trabecular structures with the CMOS detector and soft-tissue visibility that is similar to that of the a-Si:H FPD system. Correlations of the metrics of bone microarchitecture with gold-standard μCT are improved with CMOS CBCT: from 0.93 to 0.98 for BV/TV, from 0.49 to 0.74 for TbTh, and from 0.9 to 0.96 for TbSp.

Conclusion: Adoption of a CMOS detector in extremity CBCT improved spatial resolution and enhanced performance in metrics of bone microarchitecture compared to a conventional a-Si:H FPD. The results support development of clinical applications of CMOS CBCT in quantitative imaging of bone health.
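Of the microarchitecture metrics above, BV/TV is the simplest: the fraction of voxels classified as bone within the analyzed volume. A minimal sketch with an assumed global threshold (real analyses use calibrated segmentation of the trabecular compartment):

```python
import numpy as np

def bone_volume_fraction(volume, threshold):
    """BV/TV: fraction of voxels at or above the bone threshold."""
    vol = np.asarray(volume)
    return (vol >= threshold).sum() / vol.size

# Toy volume: uniform random values, so ~30% exceed a 0.7 threshold.
rng = np.random.default_rng(4)
toy = rng.uniform(size=(40, 40, 40))
bvtv = bone_volume_fraction(toy, threshold=0.7)
```

TbTh and TbSp additionally require distance-transform or sphere-fitting analysis of the binarized trabecular network, which is beyond this sketch.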
X-ray cone-beam imaging of the entire spine in the weight-bearing position
F. Noo, A. Fieselmann, M. B. Oktay, et al.
X-ray cone-beam (CB) imaging is moving towards playing a large role in diagnostic radiology. Recently, an innovative, versatile X-ray system (Multitom Rax, Siemens Healthcare GmbH, Forchheim, Germany) was introduced for diagnostic radiology. This system enables taking X-ray radiographs with high flexibility in patient positioning, as well as acquiring semi-circular short CB scans in a variety of orientations. We show here that this system can be further programmed to accurately scan the entire spine in the weight-bearing position. Such a diagnostic imaging capability has not been demonstrated before. We expect it to play an important clinical role, as clinicians agree that spine diseases would be more accurately interpreted in the weight-bearing position. We implemented a geometry that provides complete data so that CB artifacts may be avoided. This geometry consists of two circular arcs connected by a line segment. We assessed immediate and short-term motion reproducibility, as well as the ability to image the entire spine within a Rando phantom. Strongly encouraging results were obtained: reproducibility with sub-mm accuracy was observed, and the entire spine was accurately reconstructed.
Implementation of a piecewise-linear dynamic attenuator
Picha Shunhavanich, N. Robert Bennett, Scott S. Hsieh, et al.
A dynamic bowtie filter can modulate flux along both fan and view angles for reduced dose, scatter, and required detector dynamic range. Reducing the dynamic range requirement is crucial for photon counting detectors. One approach, the piecewise-linear attenuator (Hsieh and Pelc, Med Phys 2013), has shown promising results both in simulations and in an initial prototype. Multiple wedges, each covering a different range of fan angle, are moved in the axial direction to change their attenuating thickness as seen in an axial slice. We report on an implementation of the filter with precision components and a control algorithm targeted for operation on a table-top system. Algorithms for optimizing wedge position and mA modulation, and for correcting bowtie-specific beam-hardening artifacts, are proposed. In experiments, the error between expected and observed bowtie transmission was ~2% on average and ~7% at maximum for a chest phantom. Within object boundaries, observed flux dynamic ranges of 42 (chest phantom) and 25 (abdomen phantom) were achieved, corresponding to reductions by factors of 5 and 11, respectively, relative to the object scans without the bowtie. With beam hardening correction, the mean CT number in soft tissue regions was improved by 79 HU on average, and deviated by 7 HU on average from clinical scanner CT images. The implemented piecewise-linear attenuator is able to dynamically adjust its thickness with high precision to achieve flexible flux control.
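The piecewise-linear thickness profile at the heart of this attenuator can be sketched as linear interpolation between breakpoint thicknesses, followed by monoenergetic Beer-Lambert transmission. The breakpoint angles, thicknesses, and attenuation coefficient below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def wedge_thickness(fan_angles, breakpoints, knot_thicknesses):
    """Attenuator thickness as a piecewise-linear function of fan angle,
    defined by thickness values at a set of breakpoint angles."""
    return np.interp(fan_angles, breakpoints, knot_thicknesses)

def transmission(thickness_cm, mu_per_cm=0.5):
    """Monoenergetic Beer-Lambert transmission through the wedge material."""
    return np.exp(-mu_per_cm * np.asarray(thickness_cm))

breaks = np.array([-25.0, -10.0, 0.0, 10.0, 25.0])   # fan angle (deg)
thick  = np.array([  3.0,   1.0, 0.0,  1.0,  3.0])   # thickness (cm)
angles = np.linspace(-25.0, 25.0, 101)
T = transmission(wedge_thickness(angles, breaks, thick))
```

Moving each wedge axially changes the knot thicknesses over its fan-angle range, which is how the filter adapts view by view.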
Dynamic beam filtering for miscentered patients
Andrew Mao, William Shyr, Grace J. Gang, et al.
Traditional CT image acquisitions use bowtie filters to enable dose reduction. However, accurate patient centering within the bore of the CT scanner takes time and is often difficult to achieve precisely. Patient miscentering can result in significant dose, reconstruction noise, and CT number variations, raising overall exposure requirements. Approaches to estimate patient position from scout scans and perform dynamic spatial beam filtration during acquisition are developed and applied in physical experiments on a CT test bench using different beam filtration strategies. While various dynamic beam modulation strategies have been proposed, we focus on two approaches: 1) simple attenuation-based beam modulation using a translating bowtie filter, and 2) dynamic beam modulation using multiple aperture devices, an emerging beam filtration strategy based on binary filtration of the x-ray beam using variable-width slits in a high-density beam blocker. Improved dose utilization and more consistent image performance are demonstrated for miscentered objects with dynamic beam filtration as compared to static filtration. The new methodology has the potential to relax patient centering requirements within the scanner, reduce set-up time, and facilitate additional CT dose reductions.
Keynote and Innovations in Imaging Systems
Clinical applications of optical and optoacoustic imaging techniques in the breast
Exploitation of the optical properties of tissue to characterize biologic composition has driven continuous growth in optical imaging over the past decades. These advances enable the identification of functional abnormalities in conjunction with structural changes of biologic tissue. There is currently a wide array of technologies and applications in development and clinical use. The range of optical hardware choices has led to systems that utilize optical tissue contrast to address specific clinical needs.
Design, construction, and initial results of a prototype multi-contrast x-ray breast imaging system
Ke Li, Ran Zhang, John Garrett, et al.
By integrating a grating-based interferometer with a clinical full field digital mammography (FFDM) system, a prototype multi-contrast (absorption, phase, and dark field) x-ray breast imaging system was developed in this work. Unlike previous benchtop-based multi-contrast x-ray imaging systems that usually have relatively long source-to-detector distance and vibration isolators or dampers for the interferometer, the FFDM hardware platform is subject to mechanical vibration and the constraint of compact system geometry. Current grating fabrication technology also imposes additional constraints on the design of the grating interferometer. Based on these technical constraints and the x-ray beam properties of the FFDM system, three gratings were designed and integrated with the FFDM system. When installing the gratings, no additional vibration damping device was used in order to test the robustness of multi-contrast imaging system against mechanical vibration. The measured visibility of the diffraction fringes was 23±3%, and two images acquired 60 minutes apart demonstrated good system reproducibility with no visible signal drift. Preliminary results generated from the prototype system demonstrate the multi-contrast imaging capability of the system. The three contrast mechanisms provide mutually complementary information of the phantom object. This prototype system provides a much needed platform for evaluating the true clinical utility of the multi-contrast x-ray imaging method for the diagnosis of breast cancer.
Lung cancer, respiratory 3D motion imaging, with a 19 focal spot kV x-ray tube and a 60 fps flat panel imager
Larry Partain, Douglas Boyd, Samuel Song, et al.
The combination of a 60 fps kV x-ray flat panel imager, a 19-focal-spot kV x-ray tube enabled by a steered electron beam, and SART or SIRT sliding reconstruction via GPUs allows real-time 6 fps 3D-rendered digital tomosynthesis tracking of the respiratory motion of lung cancer lesions. The tube consists of a “U” shaped vacuum chamber with 19 tungsten anodes, spread uniformly over 3 sides of a 30 cm x 30 cm square, each attached to a cylindrical copper heat sink cooled by flowing water. The beam from an electron gun was steered and focused onto each of the 19 anodes in a predetermined sequence by a series of dipole, quadrupole and solenoid magnets. The imager consists of 0.194 mm pixels laid out in 1576 rows by 2048 columns, binned 4x4 to achieve 60 fps projection image operation with 16 bits dynamic range. These are intended for application with free-breathing patients during ordinary linac C-arm radiotherapy with modest modifications to typical system hardware or to standard clinical treatment delivery protocols. The sliding digital tomosynthesis reconstruction is completed after every 10 projection images acquired at 60 fps, but using the last 19 such projection images for each such reconstruction at less than 8 mAs exposure per 3D-rendered frame. Comparisons to “ground truth” optical imaging and to diagnostic 4D CT (10 phase) images are being used to determine the accuracy and limitations of the various versions of this new “19 projection image x-ray tomosynthesis fluoroscopy” motion tracking technique.
Photon Counting Detectors
Count statistics and pileup correction for nonparalyzable photon counting detectors with finite pulse length
Photon counting detectors are expected to be the next big step in the development of medical computed tomography. Accurate modeling of the behavior of photon counting detectors in the high count rate regime is therefore important for detector performance evaluations and the development of accurate image reconstruction methods. The commonly used ideal nonparalyzable detector model is based on the assumption that photon interactions are converted to pulses with zero extent in time, which is too simplistic to accurately predict the behavior of photon counting detectors in both low and high count rate regimes. In this work we develop a statistical count model for a nonparalyzable detector with finite pulse length and use it to derive the asymptotic mean and variance of the output count distribution using tools from renewal theory. We use the statistical moments of the distribution to construct an estimator of the true number of counts for pileup correction. We confirm the accuracy of the model and evaluate the pileup correction using Monte Carlo simulations. The results show that image quality is preserved for surprisingly high count rates.
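For context, the classic zero-pulse-length nonparalyzable model that this paper generalizes relates true and recorded count rates by m = n/(1 + nτ), which inverts exactly for pileup correction. The dead time and count rate below are assumed illustrative values:

```python
def recorded_rate(true_rate, tau):
    """Ideal nonparalyzable model: recorded rate m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * tau)

def pileup_corrected(measured_rate, tau):
    """Exact inverse of the model: n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * tau)

tau = 20e-9                      # assumed 20 ns effective dead time
n_true = 5e6                     # 5 Mcps incident on one pixel
m = recorded_rate(n_true, tau)   # recorded rate is lower due to pileup
n_hat = pileup_corrected(m, tau)
```

The paper's contribution is a statistical model for pulses of finite length, whose asymptotic mean and variance (from renewal theory) yield an estimator that remains accurate where this idealized zero-length model breaks down.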
Spatio-energetic cross-talk in photon counting detectors: numerical detector model (PcTK) and workflow for CT image quality assessment
Katsuyuki Taguchi, Karl Stierstorfer, Christoph Polster, et al.
We developed and reported an analytical model (version 2.1) of inter-pixel cross-talk of energy-sensitive photon counting detectors (PCDs) in 2016 [1]. Since then, we have identified four problems that are inherent to the design of model version 2.1. In this study, we have developed a new model (version 3.2; “PcTK” for Photon Counting Toolkit) based on a completely different design concept. Comparison with the previous model version 2.1 and a Monte Carlo (MC) simulation has shown that the new model version 3.2 addresses the four problems successfully. A workflow script for computed tomography (CT) image quality assessment has demonstrated the utility of the model and its potential value to the CT community. The software package, including the workflow script and built using Matlab 2016a, has been made available to academic researchers free of charge (PcTK; https://pctk.jhu.edu).
A count rate-dependent method for spectral distortion correction in photon counting CT
Jannis Dickmann, Joscha Maier, Stefan Sawall, et al.
Raw-data-based material decomposition in spectral CT using photon-counting energy-selective detectors relies on a precise forward model that predicts a count rate given intersection lengths for each material. This requires extensive system-specific measurements or calibration techniques. Existing calibrations either estimate a detected spectrum, and are thus able to account for spectral distortions, or correct the predicted count rate using a correction function, and can thus accommodate count-rate-dependent effects such as pulse pileup. We propose a calibration method that uses transmission measurements to optimize a correction function that, unlike existing methods, depends on both the photon energy and the count rate. It is thus able to correct for both kinds of distortion. In a simulated material decomposition into water and iodine, the error was reduced by 96% compared to the best-performing reference method if only pulse pileup was present, and by 23% if spectral distortions were additionally taken into account. In phantom measurements using a Dectris SANTIS prototype detector, the proposed method reduced the error by 29% compared to the best-performing reference method. Artifacts were below the noise level for the proposed method, while the reference methods showed either an offset in the water region or ring artifacts.
Frequency dependent DQE of photon counting detector with spectral degradation and cross-talk
Charge sharing and migration of scattered and fluorescence photons in photon counting detectors (PCDs) degrade the detector's energy response and cause a single photon to be potentially counted as multiple events in neighboring pixels, leading to correlations of signal and noise. Signal and noise correlations in conventional linear, space-invariant imaging can be usefully characterized by the frequency-dependent detective quantum efficiency, DQE(f). The situation is complicated in PCDs by the spectral dimension. We analyze DQE(f) of CdTe PCDs using a spatial domain method starting from a previously described computation of spatio-energetic cross-talk. DQE(f) is estimated as the squared signal-to-noise ratio of the estimate of the amplitude of a small-signal sinusoidal modulation in the object at a frequency f by a given system compared to that with an ideal detector. DQE(f) for spectral and effective monoenergetic imaging are estimated using a multi-pixel Cramer-Rao lower bound for CdTe detectors of different pixel pitch. For a 120 kVp incident spectrum, DQE(0) for a spectral task was ~18%, 25% and 34% for 250 μm, 500 μm and 1 mm pixels, respectively. Positive correlation between same basis material estimates in neighboring pixels from the spatio-energetic cross-talk causes this effect to have the least impact at the detector's Nyquist frequency. For effective monoenergetic imaging, DQE(0) at the optimal energy is relatively tolerant of spectral degradation (85-92% depending on pixel size), but is highly dependent on effective energy, with maximum variation (in 250 μm pixels) of 25-85% for effective energies between 30 and 120 keV.
CT Image Quality and Dose
Joint optimization of fluence field modulation and regularization for multi-task objectives
This work investigates task-driven optimization of fluence field modulation (FFM) and regularization for model-based iterative reconstruction (MBIR) when different imaging tasks are presented by different organs. Example applications of the design framework were demonstrated in an abdomen phantom where the task of interest in the liver is a low-contrast, low-frequency detection task while that in the kidney is a high-contrast, high-frequency discrimination task. The global performance objective is based on maximizing local detectability index (d') at a discrete set of locations. Two objective functions were formulated based on different imaging needs: 1) a maximin objective where all tasks are equally important, and 2) a region-of-interest (ROI) objective to maximize imaging performance in an ROI while maintaining a minimum level of performance elsewhere. The FFM pattern for the maximin objective is determined by the most challenging task in the liver, where both angular and spatial modulation resulted in a ~35% improvement in d' compared to an unmodulated case. The FFM for the ROI objective prescribes the most fluence to the organs of interest, boosting d' by ~59%, while still achieving the minimum d' target elsewhere. A spatially varying regularization was found to be important when tasks of different frequency content are present in different parts of the image: the optimal regularization strength for the two studied tasks differed by two orders of magnitude. Initial investigations in this work demonstrated that a multi-task objective is potentially important in shaping the optimal FFM and MBIR regularization, and that these tools may help to generalize task-based acquisition and reconstruction design for more complex diagnostic scenarios.
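The local detectability index d' driving such optimizations can be illustrated with a prewhitening-observer computation from a task function, MTF, and NPS. All curves below are toy assumptions, used only to show the structure of the calculation:

```python
import numpy as np

def detectability_pw(f, task_w, mtf, nps):
    """Prewhitening-observer detectability (1D sketch, uniform grid):
    d'^2 = integral of |W(f)|^2 * MTF(f)^2 / NPS(f) df."""
    df = f[1] - f[0]
    return np.sqrt(((task_w ** 2) * (mtf ** 2) / nps).sum() * df)

f = np.linspace(0.01, 5.0, 500)             # spatial frequency (cycles/mm)
mtf = np.exp(-f / 2.0)                       # toy system MTF
nps = 1e-4 * (0.5 + mtf ** 2)                # toy NPS (arbitrary units)
task = np.exp(-((f - 1.0) ** 2) / 0.1)       # mid-frequency discrimination task
d_prime = detectability_pw(f, task, mtf, nps)
```

In the task-driven framework, the MTF and NPS become local, parameter-dependent quantities, and the FFM and regularization are chosen to maximize the resulting local d' values.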
Can image-domain filtering of FBP CT reconstructions match low-contrast performance of iterative reconstructions?
Driven in large part by concerns about radiation exposure from computed tomography (CT), iterative reconstruction (IR) has emerged as a popular technique for dose reduction. Although IR clearly reduces image noise and improves resolution, its ability to maintain or improve low-contrast detectability over (possibly post-processed) filtered backprojection (FBP) reconstructions is unclear. In this work, we scanned a low-contrast phantom encased in an acrylic oval using two vendors’ scanners at 120 kVp at three dose levels for axial and helical acquisitions with and without automatic exposure control. Using the local noise power spectra of the FBP and IR images to guide the filter design, we developed a two-dimensional, angularly-dependent Gaussian filter in the frequency domain that can be optimized to minimize the root-mean-square error between the image-domain filtered FBP and IR reconstructions. The filter is extended to three dimensions by applying a through-slice Gaussian filter in the image domain. Using this three-dimensional, non-isotropic filtering approach on data with non-uniform statistics from both scanners, we were able to process the FBP reconstructions to closely match the low-contrast performance of IR images reconstructed from the same raw data. From this, we conclude that most or all of the benefits of noise reduction and low-contrast performance of advanced reconstruction can be achieved with adaptive linear filtering of FBP reconstructions in the image domain.
From patient-informed to patient-specific organ dose estimation in clinical computed tomography
Many hospitals keep a record of dose after each patient's CT scan to monitor and manage radiation risks. To facilitate risk management, it is essential to use the most relevant metric, which is the patient-specific organ dose. The purpose of this study was to develop and validate a patient-specific and automated organ dose estimation framework. This framework includes both patient and radiation exposure modeling. From patient CT images, major organs were automatically segmented using convolutional neural networks (CNNs). Smaller organs and structures that were not otherwise segmented were automatically filled in by deforming a matched XCAT phantom from an existing library of models. The organ doses were then estimated using a validated Monte Carlo (PENELOPE) simulation. The segmentation and deformation components of the framework were validated independently. The segmentation methods were trained and validated using 50-patient CT datasets that were manually delineated. The deformation methods were validated using a leave-one-out technique across 50 existing XCAT phantoms that were deformed to create a patient-specific XCAT for each of the 50 targets. Both components were evaluated in terms of Dice similarity coefficients (DSC) and organ dose. For dose comparisons, a clinical chest-abdomen-pelvis protocol was simulated under fixed tube current (mA). The organ doses were estimated by a validated Monte Carlo package and compared between automated and manual segmentation, and between patient-specific XCAT phantoms and their corresponding XCAT targets. Organ dose for phantoms from automated vs. manual segmentation showed a ~2% difference, and organ dose for phantoms deformed by the study vs. their targets showed a variation of ~5% for most organs. These results demonstrate the potential to assess organ doses in a highly patient-specific manner.
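The Dice similarity coefficient used above to evaluate segmentation and deformation quality can be computed as follows; this is a generic sketch on toy voxel sets, not the study's data.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, given as
    sets of voxel coordinates: DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

# toy example: automated vs. manual liver masks (illustrative voxels)
liver_auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
liver_manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(liver_auto, liver_manual)   # 2*3 / (4 + 4) = 0.75
```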
Automated exposure control for CT using a task-based image quality metric
P. Khobragade, Jiahua Fan, Franco Rupcich, et al.
Selecting the tube current when using iterative reconstruction is challenging due to the varying relationship between contrast, noise, spatial resolution, and dose across different algorithms. This study proposes a task-based automated exposure control (AEC) method using a generalized detectability index (d'gen). The proposed method leverages existing AEC methods that are based on a prescribed noise level. The generalized d'gen metric is calculated using look-up tables of task-based modulation transfer function and noise power spectrum. Look-up tables were generated by scanning a 20-cm-diameter American College of Radiology (ACR) phantom and reconstructing with a reference reconstruction algorithm and four levels of an in-house iterative reconstruction algorithm (IR1-4). This study tested the validity of the assumption that the look-up tables can be approximated as being independent of dose level. Preliminary feasibility of the proposed d'gen-AEC method to provide a desired image quality level for different iterative reconstruction algorithms was evaluated for the ACR phantom. The image quality (d'gen) resulting from the proposed d'gen-AEC method was 3.8 (IR1), 3.9 (IR2), 3.9 (IR3), and 3.8 (IR4), compared to the desired d'gen of 3.9 for the reference image. For comparison, images acquired to match the noise standard deviation of the reference image demonstrated reduced image quality (d'gen of 3.3 for IR1, 3.0 for IR2, 2.5 for IR3, and 1.8 for IR4). For all four iterative reconstruction methods, the d'gen-AEC method resulted in consistent image quality in terms of detectability index at lower dose than the reference scan. The results provide preliminary evidence that the proposed d'gen-AEC can provide consistent image quality across different iterative reconstruction approaches.
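A detectability index of this kind is commonly computed from sampled task function, MTF, and NPS. The sketch below uses a generic non-prewhitening-style discrete form with flat example values; it is not the paper's exact d'gen definition or data.

```python
def d_prime(task_w, mtf, nps, df):
    """Non-prewhitening detectability index from sampled 1-D radial
    task function W(f), MTF(f), and NPS(f) (illustrative discrete form:
    d'^2 = [sum W^2 MTF^2 df]^2 / sum W^2 MTF^2 NPS df)."""
    num = sum(w * w * m * m for w, m in zip(task_w, mtf)) * df
    den = sum(w * w * m * m * n for w, m, n in zip(task_w, mtf, nps)) * df
    return (num * num / den) ** 0.5

# flat example values over 10 frequency samples (illustrative only)
w = [1.0] * 10
mtf = [1.0] * 10
nps = [0.01] * 10
d = d_prime(w, mtf, nps, 0.1)
```

In an AEC application, the MTF and NPS arrays would come from the dose- and algorithm-specific look-up tables, and the tube current would be adjusted until d' reaches the prescribed target.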
Virtual clinical trial in action: textured XCAT phantoms and scanner-specific CT simulator to characterize noise across CT reconstruction algorithms
Ehsan Abadi, W. Paul Segars, Brian Harrawood, et al.
Although non-linear CT systems offer improved image quality over conventional linear systems, they disrupt certain assumptions about the dependency of noise and resolution on radiation dose that hold for linear systems. As such, simplistic phantoms do not fully represent the actual performance of current systems in the clinic. Assessing image quality from clinical images addresses this limitation, but full realization of image quality attributes, particularly noise, requires knowledge of the exact heterogeneous anatomy of the patient (not knowable) and/or repeated imaging (ethically unattainable). This limitation can be overcome through realistic simulations enabled by virtual clinical trials (VCTs). This study aimed to characterize the noise properties of CT images reconstructed with filtered back-projection (FBP) and non-linear iterative reconstruction (IR) algorithms through a VCT. The study deployed a new-generation version of the Extended Cardio-Torso (XCAT) phantom enhanced with anatomically based intra-organ heterogeneities. The phantom was virtually “imaged” using a scanner-specific simulator, with fifty repeats, and reconstructed using clinical FBP and IR algorithms. The FBP and IR noise magnitude maps and the relative noise reduction maps were calculated to quantify the amount of noise reduction achieved by IR. Moreover, the 2D noise power spectra were measured for both FBP and IR images. The noise reduction maps showed that IR images have lower noise magnitude in uniform regions but higher noise magnitude at edge voxels; thus the noise reduction attributed to IR is less than what would be expected from uniform phantoms (29% versus 60%). This work demonstrates the utility of our CT simulator and “textured” XCAT phantoms in performing VCTs that would be otherwise infeasible.
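The noise magnitude and relative noise reduction maps described above can be sketched as voxel-wise statistics across repeated reconstructions; the toy two-voxel "images" below are invented for illustration.

```python
from statistics import pstdev

def noise_map(repeats):
    """Voxel-wise noise magnitude: population standard deviation across
    repeated reconstructions. `repeats` is a list of images, each a flat
    list of voxel values; returns one std per voxel."""
    return [pstdev(vox) for vox in zip(*repeats)]

def relative_noise_reduction(fbp_noise, ir_noise):
    """Per-voxel noise reduction of IR relative to FBP, in percent."""
    return [100.0 * (f - i) / f for f, i in zip(fbp_noise, ir_noise)]

# three repeats of a two-voxel image (illustrative values)
fbp_repeats = [[10.0, 52.0], [14.0, 48.0], [12.0, 50.0]]
ir_repeats = [[11.0, 49.0], [13.0, 51.0], [12.0, 50.0]]
reduction = relative_noise_reduction(noise_map(fbp_repeats),
                                     noise_map(ir_repeats))
```

With fifty repeats per algorithm, the same computation yields full 3D noise and noise-reduction maps.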
Photon Counting Imaging
icon_mobile_dropdown
Ultra-high resolution photon-counting detector CT reconstruction using spectral prior image constrained compressed-sensing (UHR-SPICCS)
Kishore Rajendran, Shengzhen Tao, Dilbar Abdurakhimova, et al.
Photon-counting detector based CT (PCD-CT) enables dose-efficient high resolution imaging, in addition to providing multi-energy information. This allows better delineation of anatomical structures crucial for several clinical applications, ranging from temporal bone imaging to pulmonary nodule visualization. Due to the smaller detector pixel sizes required for high resolution imaging, PCD-CT images suffer from higher noise levels. The image quality is further degraded in narrow energy bins as a consequence of low photon counts. This limits the potential benefits that high-resolution PCD-CT could offer. Conventional reconstruction techniques such as filtered back projection (FBP) perform poorly when reconstructing noisy CT projection data. To enable low noise multi-energy reconstructions, we employed a spectral prior image constrained compressed sensing (SPICCS) framework that exploits the spatio-spectral redundancy in the multi-energy acquisitions. We demonstrated noise reduction in narrow energy bins without losing energy-specific attenuation information or spatial resolution. We scanned an anthropomorphic head phantom and a euthanized pig using our whole-body prototype PCD-CT system in the ultra-high resolution mode at 120 kV. Image reconstructions were performed using SPICCS and compared with conventional FBP. Noise reduction of 18 to 46% was observed in narrow energy bins corresponding to 25-65 keV and 65-120 keV, while the mean CT number was preserved. Spatial resolution measurements showed similar modulation transfer function (MTF) values between FBP and SPICCS, demonstrating preservation of spatial resolution.
Photon counting dual energy x-ray imaging at CT count rates: measurements and implications of in-pixel charge sharing correction
Christer Ullberg, Mattias Urech, Charlotte Eriksson, et al.
In photon counting detectors with small pixels, charge sharing has a significant effect on the spectral response and on the image quality. A charge can be shared between pixels for two main reasons: a photon can be physically converted at the edges or corners in between pixels, and K-edge x-ray fluorescence in the converter can redistribute radiative energy to nearby pixels. A potential drawback of photon counting detectors is the limited count rate due to pulse pileup. Pulse pileup also distorts the recorded energy of the x-rays, as pileup shifts the spectra towards higher energy. With active charge sharing correction, the signals from neighboring pixels contribute to the dead time of the pixel, and therefore the pulse pileup at the same input flux is increased. We compare the measured performance of an XCounter dual energy photon counting CdTe detector with 100 μm pixels, with and without charge sharing correction, up to an input flux of 3.5×10⁸ photons/mm²/s, and show that there is a benefit to using the charge sharing correction up to 2.5×10⁸ photons/mm²/s. Spectra are recorded from Am-241 and Cd-109 at low flux to show the energy response without pulse pileup, and from a 90 kVp beam at high flux. Material separation between PMMA and aluminum is evaluated in terms of SDNR at different flux levels, with and without charge sharing correction, up to an input flux of 3.5×10⁸ photons/mm²/s.
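The count-rate loss from pulse pileup is often approximated with a paralyzable dead-time model; the sketch below uses that generic model with assumed dead-time and rate values. The factor-of-two dead-time increase under charge-sharing correction is purely illustrative, not a measured property of the XCounter detector.

```python
import math

def recorded_rate(true_rate, dead_time):
    """Recorded count rate of a paralyzable detector: m = n * exp(-n * tau).
    Generic pileup model; real detector behavior is more complex."""
    return true_rate * math.exp(-true_rate * dead_time)

tau = 20e-9          # 20 ns effective dead time per pixel (assumed)
n = 5.0e6            # true count rate per pixel, counts/s (assumed)
m_plain = recorded_rate(n, tau)
# With charge-sharing correction active, neighboring-pixel signals add to
# the effective dead time (hypothetical factor of 2 for illustration):
m_csc = recorded_rate(n, 2.0 * tau)
```

The model reproduces the qualitative trade-off in the abstract: the correction improves spectral response but lowers the recorded rate at the same input flux.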
Generalized linear-systems framework for performance assessment of energy-resolving photon-counting detectors
The development of energy-resolving photon-counting detectors for medical x-ray imaging is attracting considerable attention. Since the image quality can be degraded by different nonidealities such as charge sharing, Compton scatter and fluorescence, there is a need for developing performance metrics in order to compare and optimize detector designs. For conventional, non-energy-resolving detectors, this is commonly done using the linear-systems-theory framework, in which the detector performance is described by noise-equivalent quanta (NEQ) and detective quantum efficiency (DQE) as functions of spatial frequency. However, these metrics do not take the energy-resolving capabilities of multibin photon-counting detectors into account. In this work, we present a unified mathematical framework for quantifying the performance of energy-resolving detectors. We show that the NEQ and DQE can be generalized into matrix-valued quantities, which describe the detector performance for detection tasks with both spatial and energy dependence. With this framework, a small number of simple measurements or simulations are sufficient to compute the dose efficiency of a detector design for any imaging task, taking the effects of detector nonidealities on spatial and energy resolution into account. We further demonstrate that the same framework can also be used for assessing material quantification performance, thereby extending the commonly used performance metrics based on the Cramér-Rao lower bound to spatial-frequency-dependent tasks. The usefulness of the proposed framework is demonstrated using simulations of charge sharing and fluorescence in a CdTe detector.
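The Cramér-Rao lower bound mentioned above is computed from a Fisher information matrix for the energy-bin counts. The two-bin, two-material closed form below is a generic sketch with assumed counts and sensitivity values, not the paper's matrix-valued NEQ/DQE formalism.

```python
def crlb_2x2(counts, jac):
    """Cramér-Rao lower bound on two basis-material line integrals from
    Poisson counts in two energy bins. counts[k] is the expected count in
    bin k; jac[k][i] = d(counts[k]) / d(material i). Two-bin, two-material
    case solved in closed form (illustrative)."""
    # Fisher information: F_ij = sum_k jac[k][i] * jac[k][j] / counts[k]
    F = [[sum(jac[k][i] * jac[k][j] / counts[k] for k in range(2))
          for j in range(2)] for i in range(2)]
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    # CRLB variances are the diagonal of F^-1
    return F[1][1] / det, F[0][0] / det

counts = [1.0e4, 2.0e4]                   # expected counts per bin (assumed)
jac = [[-50.0, -20.0], [-30.0, -40.0]]    # sensitivity values (assumed)
var_a, var_b = crlb_2x2(counts, jac)
```

Extending the bins to spatial-frequency-dependent tasks is exactly the step the paper's matrix-valued NEQ/DQE framework formalizes.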
Experimental evaluation of the influence of scattered radiation on quantitative spectral CT imaging
Artur Sossin, Michal Rokni, Bernhard Brendel, et al.
With the emergence of energy-resolved x-ray photon counting detectors, multi-material spectral x-ray imaging has become possible. This form of imaging allows the discrimination and quantification of the individual materials comprising an inspected anatomical area. However, the acquisition of quantitative material data puts strong requirements on the performance capabilities of a given x-ray system. Scattered radiation is one of the key factors degrading material quantification accuracy. The aim of the present investigation was to assess the impact of x-ray scatter on quantitative spectral CT imaging using a pre-clinical photon counting scanner prototype. Acquisitions of a cylindrical phantom with and without scatter were performed. The phantom contained iodine and gadolinium inserts placed at various locations. The projection data were then decomposed onto a water-iodine-gadolinium material basis and reconstructed. An analysis of the resulting iodine and gadolinium material images with and without scatter was conducted. It was concluded that, at a scatter-to-primary ratio (SPR) of up to 3.5%, scatter does not compromise material quantification for any of the investigated gadolinium concentrations, but for iodine a substantial underestimation was observed. The findings of this study suggest that scatter has a lower impact on K-edge material imaging than on material imaging not featuring a K-edge.
Impact of photon counting detector technology on kV selection and diagnostic workflow in CT
Wei Zhou, Dilbar Abdurakhimova, Michael Bruesewitz, et al.
The purpose of this study is to determine the optimal iodine contrast-to-noise ratio (CNR) achievable for different patient sizes using virtual monoenergetic images (VMIs) and a universal acquisition protocol on photon-counting-detector CT (PCD-CT), and to compare results to those from single-energy (SE) and dual-source dual-energy (DSDE) CT. Vials containing 3 concentrations of iodine were placed in torso-shaped water phantoms of 5 sizes and scanned on a 2nd generation DSDE scanner in both SE and DE modes. Tube current was automatically adjusted based on phantom size, with CTDIvol ranging from 5.1 to 22.3 mGy. PCD-CT scans were performed at 140 kV with 25 and 75 keV thresholds, with CTDIvol matched to the SE scans. DE VMIs were created and CNR was calculated for SE images and DE VMIs. The optimal kV (SE) or keV (DE VMI) was chosen at the point of highest CNR with no noticeable artifacts. For 10 mgI/cc vials in the 35 cm phantom, the optimal CNR of VMIs on PCD (22.6 @ 50 keV) was comparable to that of the best DSDE protocol (23.9 @ 50 keV) and was higher than that of the best SE protocol (19.7 @ 80 kV). In general, the difference in optimal CNR between PCD and SE increased with phantom size, with PCD 50 keV VMIs having an equivalent CNR (0.6% difference) to that of SE for the 25 cm phantom and a 57% higher CNR for the 45 cm phantom. PCD-CT demonstrated iodine CNR of VMIs comparable to that of DSDE across patient sizes. Whereas SE and DSDE CT exams require use of patient-size-specific acquisition settings, our findings point to the ability of PCD-CT to simplify protocol selection, using a single VMI keV setting (50 keV), acquisition kV (140 kV), and energy thresholds (25 and 75 keV) for all patient sizes, while achieving optimal or near-optimal iodine CNR values.
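The iodine CNR used throughout this comparison can be computed from two regions of interest, as in the generic sketch below; the HU values are invented for illustration, not the study's measurements.

```python
from statistics import mean, pstdev

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio between an iodine ROI and a water
    background ROI: |mean difference| / background noise."""
    contrast = mean(roi_signal) - mean(roi_background)
    return abs(contrast) / pstdev(roi_background)

iodine_roi = [210.0, 214.0, 206.0, 210.0]   # HU values (illustrative)
water_roi = [2.0, -2.0, 4.0, -4.0]
value = cnr(iodine_roi, water_roi)
```

Sweeping this calculation over the candidate kV (SE) or VMI keV (DE) settings and taking the maximum gives the "optimal CNR" operating point described above.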
Development of a spectral photon-counting micro-CT system with a translate-rotate geometry
M. Holbrook, D. P. Clark, W. Barber, et al.
Spectral CT using photon counting x-ray detectors (PCXDs) can provide accurate tissue composition measurements by utilizing the energy dependence of x-ray attenuation in different materials. PCXDs are especially suited for imaging K-edge contrast agents, revealing the spatial distribution of select imaging probes through quantitative material decomposition. To further advance the field, there is a clear and continuing need to develop PCXD hardware and software as part of a new generation of spectral CT imaging systems. Our group specializes in the development of preclinical micro-CT systems and of novel imaging probes based on K-edge materials. Toward this goal, we have now developed a prototype spectral micro-CT system with a PCXD produced by DxRay. This CZT-based PCXD has 16×16 pixels, each with a size of 0.5 × 0.5 mm, a thickness of 3 mm, and 4 configurable energy thresholds. The detector is thus only 8 mm × 8 mm in size. Due to the limited size of this detector tile, we have implemented a translate-rotate micro-CT system (i.e. a 2nd generation scanner). In this paper we summarize the considerable efforts which went into compensating for dead pixels and for pixels with non-linear responses to prevent artifacts in the CT reconstruction results. We also present spectral response measurements for the detector and the results of both phantom and animal experiments with iodine- and gold-based contrast agents. The results confirm our ability to sample and reconstruct tomographic images, but also show that the PCXD prototype has limitations in imaging iodine.
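A simple form of the dead-pixel compensation mentioned above is neighbor interpolation on each detector frame; the paper's actual compensation is more involved, and the frame below is a toy example.

```python
def fill_dead_pixels(img, dead):
    """Replace dead detector pixels with the mean of their live
    4-neighbors (simple illustrative correction). `img` is a list of
    rows; `dead` is a set of (row, col) indices."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for (r, c) in dead:
        vals = [img[rr][cc]
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < rows and 0 <= cc < cols and (rr, cc) not in dead]
        if vals:
            out[r][c] = sum(vals) / len(vals)
    return out

frame = [[10.0, 10.0, 10.0],
         [10.0, 0.0, 10.0],    # centre pixel is dead
         [10.0, 10.0, 10.0]]
fixed = fill_dead_pixels(frame, {(1, 1)})
```

Without such compensation, each dead pixel traces a ring artifact through the translate-rotate reconstruction.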
Multi-Energy CT
icon_mobile_dropdown
A general CT reconstruction algorithm for model-based material decomposition
Steven Tilley, Wojciech Zbijewski, Jeffrey H. Siewerdsen, et al.
Material decomposition in CT has the potential to reduce artifacts and improve quantitative accuracy by utilizing spectral models and multi-energy scans. In this work we present a novel Model-Based Material Decomposition (MBMD) method based on an existing iterative reconstruction algorithm derived from a general non-linear forward model. A digital water phantom with inserts containing different concentrations of calcium was scanned on a kV switching system. We used the presented method to simultaneously reconstruct water and calcium material density images, and compared the results to an image domain and a projection domain decomposition method. When switching voltage every other frame, MBMD resulted in more accurate water and calcium concentration values than the image domain decomposition method, and was just as accurate as the projection domain decomposition method. In a second, slower, kV switching scheme (changing voltage every ten frames) which precluded the use of traditional projection domain based methods, MBMD continued to produce quantitatively accurate reconstructions. Finally, we present a preliminary study applying MBMD to a water phantom containing vials of different concentrations of K2HPO4 which was scanned on a cone-beam CT test bench. Both the fast and slow (emulated) kV switching schemes resulted in similar reconstructions, indicating MBMD's robustness to challenging acquisition schemes. Additionally, the K2HPO4 concentration ratios between the vials were accurately represented in the reconstructed K2HPO4 density image.
Energy dependence of SNR and DQE for effective monoenergetic imaging in spectral CT
Synthesized monoenergetic images, generated using a linear weighted combination of basis material images, portray the anatomy at a selected effective energy. Images at both high and low effective energies have been proposed as clinically useful. This paper studies the dependence of signal-to-noise ratio (SNR) and detective quantum efficiency (DQE) on the selected energy for CdTe PCDs, and for other spectral CT systems that use scintillator detectors. DQE is estimated as the squared SNR of the system being evaluated divided by that of an ideal PCD. Signal is the unbiased line integral of a material of interest, and noise is estimated by propagating the Cramér-Rao lower bound through the weighted sum. SNR and DQE are unimodal, with the optimal energy dependent on the mean and width of the measured spectrum, on the spectral response of the system, and weakly on the material of interest. For the CdTe detectors simulated, DQE(0) at the optimal energy is relatively tolerant of spectral degradation (85-92% depending on pixel size), but is highly dependent on effective energy, with maximum variation (in 250 μm pixels) of 22-85% for effective energies between 30 and 120 keV. A study of the effect of the spectral distribution on DQE shows that a wider spectrum shifts the optimum to lower energy and weakens the energy dependence. In comparison to dual kV and dual layer spectral CT, PCDs have a lower optimal effective energy and show higher DQE at low effective energies than energy integrating detectors with dual kV spectra.
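The linear weighted combination that synthesizes a monoenergetic image can be sketched per pixel as follows; the basis coefficients and attenuation values are assumed numbers for illustration, not measured data.

```python
def monoenergetic(a_basis, mu_basis_at_E):
    """Synthesized monoenergetic pixel value: a linear weighted
    combination of basis-material coefficient images,
    mu(E) = sum_i a_i * mu_i(E)."""
    return sum(a * mu for a, mu in zip(a_basis, mu_basis_at_E))

# Illustrative numbers only (assumed attenuation values, 1/cm):
pixel = (1.0, 0.002)          # (water, iodine) basis coefficients
mu_50keV = (0.2269, 12.32)    # basis attenuations at 50 keV (assumed)
mu_120keV = (0.1614, 1.94)    # basis attenuations at 120 keV (assumed)
low = monoenergetic(pixel, mu_50keV)
high = monoenergetic(pixel, mu_120keV)
```

Because the weights vary with the selected energy, the noise propagated through this sum (via the Cramér-Rao bound on the basis coefficients) also varies with energy, which is what makes SNR and DQE unimodal in E.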
Three-material decomposition in multi-energy CT: impact of prior information on noise and bias
In order to perform material decomposition for a three-material mixture, dual-energy CT (DECT) has to incorporate an additional condition, typically prior information related to certain physical constraints such as volume or mass conservation. With the introduction of photon-counting CT and other multi-energy CT (MECT) platforms, more than 2 energy bins can be acquired simultaneously, which in principle can solve a three-material problem without the need for additional prior information. The purpose of this work was to investigate the impact of prior information on the noise and bias properties of three-material decomposition in both DECT and MECT, and to evaluate whether prior information is still needed in MECT. Computer simulation studies were performed to compare basis image noise and quantification accuracy among DECT with prior information and MECT with/without prior information. For given spectral configurations, the simulation results showed that significant noise reductions can be achieved in all the basis material images when prior information was included in the material decomposition process. Compared to DECT with prior information, MECT (N=3) with prior information had slightly better noise performance due to the additional beam measurement and well-preserved spectral separation. In addition, when wrong prior information ([-2.0%, 2.0%]) was intentionally introduced, the quantification error evaluated by root-mean-square error (RMSE) using MECT with prior information was less than 1.5 mg/cc for gadolinium quantification and 1.2 mg/cc for iodine quantification.
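The role of the conservation prior in DECT can be illustrated concretely: two energy measurements plus the volume-conservation constraint give three equations for three unknowns. The sketch below solves that system by Cramer's rule; the materials and attenuation values are assumed, not the paper's simulation settings.

```python
def three_material(mu_low, mu_high, mus_low, mus_high):
    """Volume fractions (f1, f2, f3) of a three-material mixture from two
    energy measurements plus the volume-conservation prior f1+f2+f3 = 1.
    mus_low/high hold per-material attenuation values (illustrative)."""
    # Solve A f = b with rows: low-energy, high-energy, conservation.
    A = [list(mus_low), list(mus_high), [1.0, 1.0, 1.0]]
    b = [mu_low, mu_high, 1.0]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    fracs = []
    for i in range(3):  # Cramer's rule: column i replaced by b
        Ai = [[b[r] if c == i else A[r][c] for c in range(3)]
              for r in range(3)]
        fracs.append(det3(Ai) / d)
    return fracs

# water / iodine solution / fat mixture (attenuations assumed, 1/cm)
mus_low = [0.25, 1.50, 0.22]
mus_high = [0.20, 0.60, 0.18]
truth = [0.6, 0.1, 0.3]
mu_low = sum(f * m for f, m in zip(truth, mus_low))
mu_high = sum(f * m for f, m in zip(truth, mus_high))
fracs = three_material(mu_low, mu_high, mus_low, mus_high)
```

With a third energy bin, the conservation row could be replaced by a third measurement row, which is the MECT case the abstract compares against.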
Calcium decomposition and phantomless bone mineral density measurements using dual-layer-based spectral computed tomography
Kai Mei, Benedikt J. Schwaiger, Felix K. Kopp, et al.
Dual-layer spectral computed tomography (CT) provides a novel, clinically available means of material decomposition (e.g., into calcium hydroxyapatite, HA) and thus of estimating bone mineral density (BMD) from non-dedicated clinical examinations. In this study, we assessed whether HA-specific BMD measurements with dual-layer spectral CT are accurate in phantoms and vertebral specimens.

Dual-layer spectral CT was performed at different tube current settings (500, 250, 125 and 50 mAs) with a tube voltage of 120 kVp. Ex-vivo human vertebrae (n = 13) and a phantom containing different known HA concentrations were placed in a semi-anthropomorphic abdomen phantom. BMD was derived with an in-house developed algorithm from spectral-based virtual monoenergetic images at 50 keV and 200 keV. Values were compared to the known HA concentrations of the phantom and to conventional quantitative CT (QCT) measurements using a reference phantom, respectively.

Above 125 mAs, which is the radiation exposure level of clinical examinations, errors for phantom measurements based on spectral information were less than 5%, compared to known concentrations. In vertebral specimens, high correlations were found between BMD values assessed with spectral CT and conventional QCT (correlation coefficients > 0.96; p < 0.001 for all).

These results suggest a high accuracy of quantitative HA-specific BMD measurements based on dual-layer spectral CT examinations without the need for a reference phantom, thus demonstrating their feasibility in routine clinical practice.
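A minimal sketch of how an HA density could be solved from two virtual monoenergetic values, assuming each voxel is an HA/water mixture; the attenuation numbers are assumptions for illustration, not the in-house algorithm or its calibration.

```python
def ha_density(mu_50, mu_200, mu_ha, mu_wat):
    """Hydroxyapatite (HA) density from virtual monoenergetic values at
    50 and 200 keV, assuming a two-component HA/water voxel model:
    mu(E) = rho_ha * mu_ha(E) + rho_wat * mu_wat(E)
    (illustrative; coefficients are attenuation per unit density)."""
    # 2x2 linear solve (Cramer's rule) for rho_ha
    det = mu_ha[0] * mu_wat[1] - mu_ha[1] * mu_wat[0]
    return (mu_50 * mu_wat[1] - mu_200 * mu_wat[0]) / det

mu_ha = (0.40, 0.12)    # assumed attenuation per g/cc at (50, 200) keV
mu_wat = (0.22, 0.14)
# voxel containing 0.3 g/cc HA in water of density 1.0 (synthetic input):
mu_50 = 0.3 * mu_ha[0] + 1.0 * mu_wat[0]
mu_200 = 0.3 * mu_ha[1] + 1.0 * mu_wat[1]
rho = ha_density(mu_50, mu_200, mu_ha, mu_wat)
```

The widely separated energies (50 vs. 200 keV) keep the 2x2 system well conditioned, which is why that pair is a sensible choice for the decomposition.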
Spectral imaging of iodine and gadolinium nanoparticles using dual-energy CT
C. T. Badea, M. Holbrook, D. P. Clark, et al.
Advances in CT hardware have propelled the development of novel CT contrast agents. Combined with the spectral capabilities of X-ray CT, molecular imaging is possible using multiple heavy-metal contrast agents. Nanoparticle platforms make particularly attractive agents because of (1) their ability to carry a large payload of imaging moieties, and (2) their ease of surface modification to facilitate molecular targeting. While several novel imaging moieties based on high atomic number elements are being explored, iodine (I) and gadolinium (Gd) are particularly attractive because they are already in clinical use. In this work, we investigate the feasibility for in vivo discrimination of iodine and gadolinium nanoparticles using dual energy micro-CT. Phantom experiments were performed to measure the CT enhancement for I and Gd over a range of voltages from 40 to 80 kVp using a dual-source micro-CT system with energy integrating detectors having cesium iodide scintillators. The two voltages that provide maximum discrimination between I and Gd were determined to be 50 kVp with Cu filtration and 40 kVp without any filtration. Serial dilutions of I and Gd agents were imaged to determine detection sensitivity using the optimal acquisition parameters. Next, an in vivo longitudinal small animal study was performed using Liposomal I (Lip-I) and Liposomal Gd (Lip-Gd) nanoparticles. The mouse was intravenously administered Lip-Gd and imaged within 1 h post-contrast to visualize Gd in the vascular compartment. The animal was reimaged at 72 h post-contrast with dual-energy micro-CT at 40 kVp and 50 kVp to visualize the accumulation of Lip-Gd in the liver and spleen. Immediately thereafter, the animal was intravenously administered Lip-I and re-imaged. The dual energy sets were used to estimate the concentrations of Gd and I via a two-material decomposition with a non-negativity constraint. 
The phantom results indicated that the relative contrast enhancement per mg/ml of I to Gd was 0.85 at 40 kVp and 1.79 at 50 kVp. According to the Rose criterion (CNR ≥ 5), the detectability limits were 2.67 mg/ml for I and 2.46 mg/ml for Gd. The concentration maps confirmed the expected biodistribution, with Gd concentrated in the spleen and with I in the vasculature of the kidney, liver, and spleen. Iterative reconstruction provided higher sensitivity to detect relatively low concentrations of gadolinium. In conclusion, dual energy micro-CT can be used to discriminate and simultaneously image probes containing I and Gd.
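The two-material decomposition with a non-negativity constraint described above can be sketched as a 2x2 solve with a simple active-set clamping step; the sensitivity values (HU per mg/ml at each kVp) are assumed, not the study's calibration.

```python
def decompose_nonneg(mu40, mu50, s_i, s_gd):
    """Two-material (I, Gd) decomposition from enhancements at 40 and
    50 kVp with a non-negativity constraint: solve the 2x2 system and,
    if a concentration goes negative, clamp it to zero and refit the
    other by least squares (simple active-set step; illustrative)."""
    det = s_i[0] * s_gd[1] - s_i[1] * s_gd[0]
    ci = (mu40 * s_gd[1] - mu50 * s_gd[0]) / det
    cgd = (s_i[0] * mu50 - s_i[1] * mu40) / det
    if ci < 0.0:
        ci = 0.0
        cgd = (s_gd[0] * mu40 + s_gd[1] * mu50) / (s_gd[0] ** 2 + s_gd[1] ** 2)
    elif cgd < 0.0:
        cgd = 0.0
        ci = (s_i[0] * mu40 + s_i[1] * mu50) / (s_i[0] ** 2 + s_i[1] ** 2)
    return ci, cgd

s_i = (30.0, 35.0)    # HU per mg/ml at 40 and 50 kVp (assumed)
s_gd = (35.0, 20.0)
# noiseless voxel with 2 mg/ml iodine and no gadolinium:
ci, cgd = decompose_nonneg(2.0 * s_i[0], 2.0 * s_i[1], s_i, s_gd)
# slightly perturbed voxel where the unconstrained Gd estimate is negative:
ci2, cgd2 = decompose_nonneg(59.0, 71.0, s_i, s_gd)
```

Without the clamping step, noise would routinely push one concentration negative wherever only the other agent is present.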
Deep Learning for CT
icon_mobile_dropdown
Deep learning based cone beam CT reconstruction framework using a cascaded neural network architecture (Conference Presentation)
In this work, a novel cascaded neural network architecture was developed to perform cone beam CT image reconstruction using deep learning. The proposed architecture consists of four individual stages: a manifold learning stage to perform projection data pre-processing, a convolutional neural network (CNN) stage to perform data filtration, a fully connected layer with sparse regularization to perform single-view backprojection, and a final fully connected layer with linear activation to generate the target image volume. In the manifold learning stage, a novel feature-combining technique was proposed to improve the noise properties of the final reconstructed images. This 13-layer deep neural network was pretrained stage by stage using extensive numerical phantom data with noise-contaminated projections and ground-truth images. After pretraining with numerical phantom data, the cascaded neural network model was fine-tuned using physical phantom data from a diagnostic MDCT scanner. After training, the trained neural network model was used to reconstruct low dose CT images of human subjects from a prospective low dose CT protocol. In these studies, it was found that the proposed cascaded neural network based deep learning method can (1) enable low dose CT reconstruction without noise streaks and with reduced noise amplitude; (2) maintain reconstruction accuracy at reduced dose levels; and (3) unlike currently available statistical model-based image reconstruction (MBIR) methods, maintain a dose-normalized noise power spectrum (NPS) similar to that of the FBP-reconstructed images.
Improve angular resolution for sparse-view CT with residual convolutional neural network
Kaichao Liang, Hongkai Yang, Kejun Kang, et al.
Sparse-view CT imaging has been a hot topic in the medical imaging field. By decreasing the number of views, the dose delivered to patients can be significantly reduced. However, sparse-view CT reconstruction is an ill-posed problem: serious streaking artifacts occur if images are reconstructed with analytical reconstruction methods. To solve this problem, much research has been carried out on optimization in the Bayesian framework based on compressed sensing, such as applying a total variation (TV) constraint. However, TV and other regularized iterative reconstruction methods are time consuming due to the iterative process required. In this work, we proposed a method of angular resolution recovery in the projection domain based on a deep residual convolutional neural network (CNN), so that projections at unmeasured views can be estimated accurately. We validated our method on a disjoint data set not seen by the trained networks. With recovered projections, the reconstructed images have little streaking artifact, and details corrupted by sparse-view sampling are recovered. This deep learning based sinogram recovery can be generalized to other data-insufficient situations.
Deep scatter estimation (DSE): feasibility of using a deep convolutional neural network for real-time x-ray scatter prediction in cone-beam CT
Joscha Maier, Yannick Berker, Stefan Sawall, et al.
The contribution of scattered x-rays to the acquired projection data is a severe issue in cone-beam CT (CBCT). Due to the large cone angle, scatter-to-primary ratios may easily be on the order of 1. The corresponding artifacts, which appear as cupping or dark streaks in the CT reconstruction, may impair the diagnostic value of the CT examination. Therefore, appropriate scatter correction is essential. The gold standard is to use a Monte Carlo photon transport code to predict the distribution of scattered x-rays, which can then be subtracted from the measurement. However, the long processing times of Monte Carlo simulations prohibit their routine use. To enable fast and accurate scatter estimation we propose the deep scatter estimation (DSE). It uses a deep convolutional neural network which is trained to reproduce the output of Monte Carlo simulations using only the acquired projection data as input. Once the network is trained, DSE performs in real time. In the present study we demonstrate the feasibility of DSE using simulations of CBCT head scans at different tube voltages. The performance is tested on data sets that differ significantly from the training data; the scatter estimates deviate less than 2% from the Monte Carlo ground truth. A comparison to kernel-based scatter estimation techniques, as they are used today, clearly shows the superior performance of DSE while being similar in terms of processing time.
Assessment of diagnostic image quality of computed tomography (CT) images of the lung using deep learning
John H. Lee, Byron R. Grant, Jonathan H. Chung, et al.
For computed tomography (CT) imaging, it is important that imaging protocols be optimized so that the scan is performed at the lowest dose that yields diagnostic images, in order to minimize patients’ exposure to ionizing radiation. To accomplish this, it is important to verify that the image quality of the acquired scan is sufficient for the diagnostic task at hand. Since the image quality strongly depends on the characteristics of both the patient and the imager, each of which is highly variable, using simplistic parameters like noise to determine the quality threshold is challenging. In this work, we apply deep learning using a convolutional neural network (CNN) to predict whether CT scans meet the minimal image quality threshold for diagnosis. The dataset consists of 74 cases of high resolution axial CT scans acquired for the diagnosis of interstitial lung disease. The quality of the images was rated by a radiologist. While the number of cases is relatively small for deep learning tasks, each case consists of more than 200 slices, comprising a total of 21,257 images. The deep learning approach involves fine-tuning a pre-trained VGG19 network, which results in an accuracy of 0.76 (95% CI: 0.748-0.773) and an AUC of 0.78 (SE: 0.01). While the total number of images is relatively large, the result is still significantly limited by the small number of cases. Despite this limitation, this work demonstrates the potential for using deep learning to characterize the diagnostic quality of CT scans.
Deep learning angiography (DLA): three-dimensional C-arm cone beam CT angiography generated from deep learning method using a convolutional neural network
Juan C. Montoya, Yinsheng Li, Charles Strother, et al.
Current clinical 3D-DSA requires the acquisition of two image volumes, before and after the injection of contrast media (i.e. mask and fill scans). Deep learning angiography (DLA) is a recently developed technique that enables the generation of mask-free 3D angiography using convolutional neural networks (CNN). In this work, the quantitative performance of DLA as a function of the number of layers in the deep neural network and the DLA inference computation time are investigated. Clinically indicated rotational angiography exams of 105 patients scanned with a C-arm cone-beam CT system using a standard 3D-DSA imaging protocol for the assessment of cerebrovascular abnormalities were retrospectively collected. More than 185 million labeled voxels from contrast-enhanced images of 43 subjects were used as the training and testing dataset. Multiple deep CNNs were trained to perform DLA. The trained DLA models were then applied to a validation cohort consisting of the remaining image volumes from 62 subjects, and accuracy, sensitivity, precision and F1-scores were calculated for vasculature classification in the relevant anatomy. The implementation of the best performing model was optimized for accelerated DLA inference and the computation time was measured under multiple hardware configurations.

Vasculature classification accuracy and 95% CI in the validation dataset were 98.7% ([98.3, 99.1] %) for the best performing model. DLA inference user time was 17 seconds, for a throughput of 23 images/s. In conclusion, a 30-layer DLA model outperformed shallower networks, and DLA inference computation time was demonstrated not to be a limiting factor for current clinical practice.
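The accuracy, sensitivity, precision and F1-score reported above all follow from a binary (vessel vs. background) confusion matrix; a minimal sketch with hypothetical voxel counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), precision and F1-score from a
    binary confusion matrix, as used for voxel-wise vasculature
    classification."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

# Hypothetical voxel counts for illustration only
acc, sens, prec, f1 = classification_metrics(tp=900, fp=50, tn=9000, fn=50)
```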
Multi-energy CT decomposition using convolutional neural networks
D. P. Clark, M. Holbrook, C. T. Badea
Spectral CT can provide accurate tissue composition measurements by utilizing the energy dependence of x-ray attenuation in different materials. We have introduced image reconstruction and material decomposition algorithms for multi-energy CT data acquired either with energy integrating detectors (EID) or photon counting detectors (PCD); however, material decomposition is an ill-posed problem due to the potential overlap of spectral measurements and to noise. Recently, convolutional neural networks (CNN) have generated excitement in the field of machine learning and computer vision. The goal of this work is to develop CNN-based methods for material decomposition in spectral CT. The CNN for decomposition had a U-net structure and was trained with either five-energy PCD-CT or dual-energy (DE) CT data. As targets for training, we used simulated phantoms constructed from random combinations of water and contrast agents (iodine, barium, and calcium for five-energy PCD-CT; iodine and gold for DE EID-based CT). The experimentally measured sensitivity matrix values for iodine, barium, and calcium or iodine and gold were used to recreate the CT images corresponding to both the PCD and DE-CT cases. These CT images were used to train CNNs to generate material maps at each pixel location. After training, we tested the CNNs by applying them to experimentally acquired DE-EID and PCD-based micro-CT data in mice. The predicted material maps were compared to the absolute truth in simulations and to sensitivity-based decompositions for the in vivo mouse data. The CNN-based decomposition provided higher accuracy and lower noise. In conclusion, our U-net performed a more robust spectral micro-CT decomposition because it inherently better exploits spatial and spectral correlations.
Neuroimaging
icon_mobile_dropdown
Task-driven optimization of an experimental photon counting detector CT system for intracranial hemorrhage detection
Xu Ji, Ran Zhang, Guang-Hong Chen, et al.
In this work, the potential application of photon counting detector CT (PCD-CT) to intracranial hemorrhage (ICH) imaging was investigated. An experimental PCD-CT imaging system was constructed and optimized for the detection of low contrast intraparenchymal bleeding. The system uses a CdTe-based PCD that provides 51 cm axial coverage and excellent DQE performance. A customized anthropomorphic head phantom with a built-in ICH model was used to evaluate the performance of the PCD-CT system for the ICH detection task. The nominal contrast between the ICH model and brain parenchyma is 10 HU. The phantom was also scanned by a commercial multi-detector CT (MDCT) system to obtain gold standard images. For the sake of fair comparison, radiation dose level, tube potential, slice thickness, and reconstruction pixel size were matched between the two CT systems. The nonprewhitening observer detectability index d' was used as the figure-of-merit for optimizing the detector binning mode and reconstruction kernel for the PCD-CT system. Compared with the gold-standard MDCT images, the optimized PCD-CT images demonstrated a higher d' value for the detection of the ICH model in the head phantom.
Design and evaluation of a diffusion MRI fibre phantom using 3D printing
Serene O. Abu-Sardanah, Uzair Hussain, John Moore, et al.
Diffusion weighted magnetic resonance imaging (dMRI) has enabled the in vivo imaging of structures with a highly fibrous composition, such as brain white matter, due to its ability to detect the hindered and restricted diffusion of water along their defined tracts. To increase this non-invasive technique’s sensitivity to intricate fibrous structure, to better calibrate diffusion pulse sequences, and to validate fibre reconstruction modelling techniques, physical diffusion phantoms with known structure and diffusion behaviour have been developed. This work aims to simplify the process of creating complex fibre-based diffusion phantoms by using 3D printing to model and mimic brain white matter fibre architecture for dMRI. We make use of a printing material consisting of a mixture of polyvinyl alcohol (PVA) and a rubber-elastomeric polymer (Gel-Lay by Poro-Lay), printed using a fused deposition modelling (FDM) printer. The object is 3D printed rigid but, following immersion in room-temperature water, the PVA dissolves away, leaving behind the porous rubber-elastomeric polymer component that mimics the structure of brain white matter tracts. To test the validity of the methodology, two preliminary phantoms were created: a linear 10 mm × 10 mm × 30 mm block phantom and an orthogonal fibre-crossing phantom in which two blocks cross at a 90-degree angle. This was followed by creating three disk phantoms with fibres crossing at 30, 60 and 90 degrees. Results demonstrate reproducibly high diffusion anisotropy (FA = 0.56 and 0.60) along the fibre direction for the preliminary linear block phantoms. With multi-fibre ball & stick modelling in the orthogonal fibre-crossing phantom and the disk phantoms at 30, 60 and 90 degrees, image post-processing yielded crossing fibre populations that reflected the known physical architecture.
These preliminary results reveal the potential of 3D-printed phantoms to validate fibre-reconstruction models and optimize acquisition protocols, paving the way for more complex phantoms and the investigation of long-term stability and reproducibility.
Evaluation of the reconstruction-of-difference (RoD) algorithm for cone-beam CT neuro-angiography
P. Wu, J. W. Stayman, M. Mow, et al.
Purpose: Timely detection of neurovascular pathology such as ischemic stroke is essential to effective treatment, and cone-beam CT (CBCT) systems could provide CT angiography (CTA) assessment in a timely manner close to the point of care. However, CBCT systems suffer from slow rotation and readout speeds, which lead to inconsistent or sparse datasets. This work describes a new reconstruction method using a reconstruction-of-difference (RoD) approach that is robust against such factors.

Methods: Important aspects of CBCT angiography were investigated, weighing tradeoffs among the magnitude of iodine enhancement (peak contrast), the degree of data consistency, and the degree of data sparsity. Simulation studies were performed across a range of CBCT half-scan acquisition speeds (~3–17 s). Experiments were conducted using a CBCT prototype and an anthropomorphic neurovascular phantom incorporating a vessel with a contrast injection whose time-attenuation curve (TAC) gives low data consistency but high peak contrast. Images were reconstructed using filtered back-projection (FBP), penalized likelihood (PL), and the RoD algorithm. Data were evaluated in terms of root mean square error (RMSE) in image enhancement as well as overall image noise and artifact.

Results: Feasibility was demonstrated for 3D angiographic assessment in CBCT images acquired across a range of data consistency and sparsity. Compared to FBP, the RoD method reduced the RMSE in reconstructed images by 50.0% in simulation studies (fixed peak contrast; variable data consistency and sparsity). The improvement in RMSE compared to PL reconstruction was 28.8%. In phantom experiments under conditions of low data consistency, RoD provided a 15.6% reduction in RMSE compared to FBP and a 16.3% reduction compared to PL, demonstrating the feasibility of the RoD method for slowly rotating CBCT angiography systems.

Conclusions: Simulations and phantom experiments show the feasibility and improved performance of the RoD approach compared to FBP and PL reconstruction, enabling 3D neuro-angiography on a slowly rotating CBCT system (e.g., 17.1 s for a half-scan). The algorithm is relatively robust against data sparsity and is sensitive in detecting low levels of contrast enhancement from the baseline (mask) scan. Tradeoffs among peak contrast, data consistency, and data sparsity are demonstrated clearly in each experiment and help to guide the development of optimal contrast injection protocols for future preclinical and clinical studies.
Artifacts reduction in low-contrast neurological imaging with C-arm system
Dan Xia, Yu-Bing Chang, Adnan H. Siddiqui, et al.
C-arm cone-beam CT (CBCT) is being rapidly adopted for imaging guidance in interventional and surgical procedures. However, measured CBCT data are often truncated due to the limited detector size, especially in the presence of additional interventional devices outside the imaging field of view (FOV). In our previous work, it was demonstrated that a constrained optimization-based reconstruction with an additional data-derivative fidelity term can effectively suppress the truncation artifacts. In this work, in an attempt to evaluate the optimization-based reconstruction, two task-relevant metrics are proposed for characterizing the recovery of low-contrast objects and the reduction of streak artifacts. Results demonstrate that the optimization program and the associated CP algorithms can significantly reduce streak artifacts, leading to improved visualization of low-contrast structures in the reconstruction relative to the clinical FDK reconstruction.
Time-resolved C-arm cone beam CT angiography using SMART-RECON: quantification of temporal resolution and reconstruction accuracy
Time-resolved cone beam CT angiography (CBCTA) imaging in the interventional suite has the potential to identify occluded vessels and the collaterals of symptomatic ischemic stroke patients. However, traditional C-arm gantries offer limited rotational speed, and thus temporal resolution is limited when the conventional filtered backprojection (FBP) reconstruction is used. Recently, a model-based iterative image reconstruction algorithm, Synchronized Multi-Artifact Reduction with Tomographic reconstruction (SMART-RECON), was proposed to reconstruct multiple CBCT image volumes per short-scan CBCT acquisition to improve temporal resolution. However, it is not clear how much temporal resolution can be improved using the SMART-RECON algorithm, or what the corresponding reconstruction accuracy is. In this paper, a novel fractal-tree-based numerical time-resolved angiography phantom with ground-truth temporal information was introduced to quantify temporal resolution using a temporal blurring model analysis, along with two other metrics introduced to quantify reconstruction accuracy: the relative root mean square error (rRMSE) and the Kullback-Leibler divergence (DKL). The quantitative results show that the temporal resolution is 0.8 s for SMART-RECON and 3.6 s for the FBP reconstruction. The reconstruction fidelity with SMART-RECON was substantially improved, with the rRMSE improved by at least 70% and the DKL by at least 40%.
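The two accuracy metrics named above (rRMSE and DKL) are standard; a minimal sketch of both with toy inputs (not the phantom data):

```python
import math

def rrmse(recon, truth):
    """Relative root-mean-square error between a reconstruction and
    ground truth, normalized by the RMS of the truth."""
    num = sum((r - t) ** 2 for r, t in zip(recon, truth))
    den = sum(t ** 2 for t in truth)
    return math.sqrt(num / den)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) for two discrete
    distributions (both assumed normalized and strictly positive)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy example: one mismatched voxel out of three
truth = [1.0, 2.0, 2.0]
recon = [1.0, 2.0, 1.0]
err = rrmse(recon, truth)  # sqrt(1/9) = 1/3
```

For identical distributions, `kl_divergence` returns 0, its minimum; larger values indicate greater disagreement between the reconstructed and true temporal distributions.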
Cardiothoracic
icon_mobile_dropdown
Realistic lesion simulation: application of hyperelastic deformation to lesion-local environment in lung CT
Thomas J. Sauer, Ehsan Abadi, Justin Solomon, et al.
Lesion simulation programs can be used to insert realistic, generated lesions into anatomical images for further study. Most such programs insert a mask into an otherwise unchangeable, i.e., static, surrounding; in reality, lesions deform their immediate surroundings. The goal of the current study was to develop a lesion model based on realistic morphology, with an additional hyperelastic modification of the lesion-local environment in accordance with lesion morphology and location. Physical displacement of the existing tissue was modeled by finite element application of hyperelastic theory to a lung tissue segmentation, incorporating the material properties of both parenchymal and stromal tissue. An observer study was conducted with the data generated from this model to ascertain the realism of hyperelastic and static lesion insertions compared to real lesions. The comparisons were characterized in terms of the area under the ROC curve (AUC). The results indicate that observers are less able to distinguish hyperelastically inserted lesions from real ones (AUC = 0.62) than statically inserted lesions (AUC = 0.75). The findings indicate that hyperelastic deformation improves the realism of simulated lesions in CT imaging.
CT investigation of patient-specific phantoms with coronary artery disease
Lauren M. Shepard, Kelsey N. Sommer, Erin Angel, et al.
Purpose: To develop coronary phantoms that mimic patient geometry and coronary blood flow conditions for CT imaging optimization and software validation. Materials and Methods: Five patients with varying degrees of coronary artery disease underwent 320-detector row coronary CT angiography (Aquilion ONE, Canon Medical Systems). The aorta and coronary arteries were segmented using a Vitrea Workstation (Vital Images). Patient anatomy was manipulated in Autodesk Meshmixer and 3D printed in Tango+, a flexible polymer, using an Eden260V printer (Stratasys). Phantoms were connected to a pump that simulates physiologic pulsatile flow waveforms, correlated with a simulated ECG signal. Distal resistance was optimized for all three coronary vessels until physiologically accurate flow rates and pressures were observed. Phantoms underwent coronary CT angiography (CTA) using a standard acquisition protocol and contrast mixed into the flow loop. Image data from the phantoms were input to a CT-FFR research software and compared to results derived from the clinical data. Results: All five patient-specific phantoms were successfully imaged with CTA and the images were analyzed by the CT-FFR software. The phantom CT-FFR results had a mean difference of -5.4% compared to the patient CT-FFR results. Patient and phantom CT-FFR agreed for all three coronary vessels, with Pearson correlations r = 0.83, 0.68, 0.62 (LAD, LCX, RCA). Conclusions: 3D printed patient-specific phantoms can be manipulated through material properties, flow regulation, and a pulsatile waveform to create accurate flow conditions for CT-based experimentation.
Evaluation of radiation dose reduction via myocardial frame reduction in dynamic cardiac CT for perfusion quantitation
Michael Bindschadler, Kelley R. Branch, Adam M. Alessio
Dynamic contrast enhanced cardiac CT acquisitions can quantify myocardial blood flow (MBF) in absolute units (ml/min/g), but repeated scans increase the X-ray radiation dose to the patient. We propose a novel approach using high temporal sampling of the input function with reduced temporal sampling of the myocardial tissue response. This type of data could be acquired with current bolus tracking acquisitions or with new acquisition sequences offering reduced radiation dose and potentially easier data processing and flow estimation. To evaluate this type of data, we prospectively acquired a full dynamic series [12–18 frames (mean 14.5 ± 1.4) over 23–44 s (mean 31.3 ± 5.0 s)] on 28 patients at rest and stress (N=56 studies) and examined the relative performance of myocardial perfusion estimation when the myocardial data are subsampled down to 8, 4, 2 or 1 frame(s). Unlike previous studies, for all frame rates, we consider a well-sampled input function. As expected, subsampling linearly reduces radiation dose while progressively decreasing estimation accuracy, with the typical absolute error in MBF (compared to the full-length series) increasing from 0.22 to 0.30 to 0.35 to 1.12 ml/min/g as the number of frames used for estimation decreases from 8 to 4 to 2 to 1, respectively. These results suggest that high temporal sampling of the input function with low temporal sampling of the myocardial response can provide much of the benefit of dynamic CT for MBF quantification with dramatic reductions in the required number of myocardial acquisitions and the associated radiation dose (e.g. 77% dose reduction for the 2-frame case).
Stack transition artifact removal for cardiac CT using patch-based similarities
Sergej Lebedev, Eric Fournie, Karl Stierstorfer, et al.
Cardiac CT can be achieved by performing short scans with prospective gating. As the collimation of multi-slice CT scanners generally does not allow coverage of the entire heart, sequence scans, also known as step-and-shoot, can be used, where irradiation is performed multiple times at varying positions. Each of these short scans yields data, generally with a longitudinal overlap, that can be reconstructed into a sub-volume, or stack. Ideally, each stack corresponds to the same cardiac phase. The issue addressed in this work is irregular motion, such as irregular heart motion. It leads to stacks that do not represent exactly the same volume, resulting in discontinuities at stack transitions when assembling the complete CT volume. We propose a stack transition artifact removal method based on a simple symmetric registration approach. Originating from a set of control points in the overlap regions between adjacent stacks, the algorithm symmetrically searches for matching sub-volumes in the two neighboring stacks. The offsets of the matching sub-volumes relative to their control points are used to compute two deformation vector fields that match the two stacks to each other. The deformation vector fields are extended beyond the overlapping regions in order to maintain smooth and anatomically meaningful images. We validated the method using clinical data sets. By applying a straightforward symmetric registration method to cardiac data, we show that stack transition artifacts can be addressed in this fashion. The artifact removal considerably improved image quality and constitutes a framework that can be enhanced and expanded on in the future.
Serial assessment of CT coronary calcifications for regression/progression analysis (Conference Presentation)
We are creating specialized software to enable serial assessment of individual coronary calcifications. Calcifications give reliable, cost-effective, non-invasive, contrast-agent-free evidence of coronary artery disease. From pathobiology and clinical observations, it is likely that progression of small, spotty, low-density calcifications will provide better evidence of disease progression than the typically applied whole-heart Agatston score, which is numerically dominated by large, potentially stable calcifications. In our new analysis paradigm, we analyze individual calcification progression/regression/formation measures. Processing consisted of segmentation; initial rigid-body registration using gray-scale normalized cross-correlation with the last scan as reference; and a modified iterative closest point (ICP) algorithm. Rather than using two 3D point clouds, we assigned morphological characteristics (e.g. center, divergence, and maximum HU) to each calcification and performed ICP in a high-dimensional space so that calcifications of similar shape and size matched. New measurements were enabled. For one example patient over 96 weeks, three new calcifications formed and one was lost. Eleven of twelve calcifications increased in volume. We applied a new partial volume correction method to normal 3-mm-thick scans and compared results to separate high-resolution (0.67-mm) scans of a cadaveric heart. Mass score differences were improved from ≈12% to <2% with partial volume correction. Reproducibility of measurements from repeated scans was improved. Preliminary results suggest that we can use advanced computational methods to make detailed serial assessments of coronary artery calcifications, thereby enabling new studies of other biomarkers, response to drug therapy, effects of environmental factors and genes, and improvements to risk estimates.
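The modified ICP step described above matches calcifications in a high-dimensional feature space rather than as bare 3D point clouds. A minimal, illustrative sketch of one such matching step is shown below (greedy nearest neighbour on hypothetical, pre-normalized feature vectors; the actual algorithm iterates this with a transform update):

```python
def match_calcifications(feats_a, feats_b):
    """Greedy nearest-neighbour matching of calcifications between two
    scans in a feature space (e.g. centroid coordinates plus shape and
    intensity descriptors).  Illustrative single step only; feature
    vectors are assumed pre-normalized and the names are hypothetical."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    matches = {}
    used = set()
    for i, fa in enumerate(feats_a):
        best, best_d = None, float("inf")
        for j, fb in enumerate(feats_b):
            if j in used:
                continue
            d = dist2(fa, fb)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

# Two calcifications per scan: [x, y, z, volume, max_HU], normalized (made up)
scan1 = [[0.10, 0.20, 0.30, 0.50, 0.90], [0.80, 0.80, 0.10, 0.20, 0.40]]
scan2 = [[0.82, 0.79, 0.12, 0.25, 0.42], [0.11, 0.21, 0.29, 0.55, 0.88]]
m = match_calcifications(scan1, scan2)  # {0: 1, 1: 0}
```

Matching in the joint space of position and morphology is what lets calcifications of similar shape and size pair up even when rigid registration alone is ambiguous.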
Bias and variability in morphology features of lung lesions across CT imaging conditions
Jocelyn Hoye, Justin B. Solomon, Thomas Sauer, et al.
The CT imaging method can influence radiomic features. The purpose of this study was to characterize the intra-protocol and inter-protocol variability and bias of quantitative morphology features of lung lesions across a range of CT imaging conditions. A total of 15 lung lesions were simulated (five in each of three spiculation classes: low, medium, and high). For each lesion, a series of simulated CT images representing different imaging conditions was synthesized by applying 3D blur and adding correlated noise based on the measured noise and resolution properties of five commercial multi-slice CT systems, representing three dose levels (CTDIvol of 1.90, 3.75, 7.50 mGy), three slice thicknesses (0.625, 1.25, 2.5 mm), and 33 clinical reconstruction kernels. Five repeated image volumes were synthesized for each lesion and imaging condition. A series of 21 shape-based morphology features was extracted from both “ground truth” (i.e., pre-blur, without noise) and “image rendered” lesions (i.e., post-blur, with noise). For each morphology feature, the intra-protocol and inter-protocol variability was characterized by calculating the average coefficient of variation (COV) across repeats and imaging conditions, respectively (averaged across all lesions). The bias was quantified by the percent relative error in the morphology metric between the imaged lesions and ground truth lesions. The average intra-protocol COV ranged from 0.2% to 3%. The average inter-protocol COV ranged from 3% to 106%, with most features around 30%. Percent relative error was most biased at 73% for Ellipsoid Volume and least biased at -0.27% for Flatness. The results indicate that different reconstructions can lead to significant bias and variability in measurements of morphological features.
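The COV and percent-relative-error metrics used above can be sketched as follows (with hypothetical repeat measurements, not the study's data):

```python
import statistics

def coefficient_of_variation(values):
    """COV (%) across repeated measurements of a feature:
    sample standard deviation over the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def percent_relative_error(measured, truth):
    """Bias of an imaged-lesion feature relative to its ground-truth value."""
    return 100.0 * (measured - truth) / truth

# Hypothetical lesion-volume measurements over five repeats, plus a
# hypothetical ground-truth volume of 95.0
vols = [102.0, 98.0, 100.0, 101.0, 99.0]
cov = coefficient_of_variation(vols)
bias = percent_relative_error(statistics.mean(vols), 95.0)
```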
Phase Contrast Imaging
icon_mobile_dropdown
Monochromatic breast CT: absorption and phase-retrieved images
Luca Brombal, Bruno Golosio, Fulvia Arfelli, et al.
A program devoted to performing the first in-vivo monochromatic breast computed tomography (BCT) is ongoing at the Elettra Synchrotron Facility. Since synchrotron radiation provides high energy resolution and spatial coherence, phase-contrast (PhC) imaging techniques can be used. The latest high-resolution BCT acquisitions of breast specimens, obtained with the propagation-based PhC approach, are presented here as part of a wider framework devoted to the optimization of acquisition and reconstruction parameters towards the clinical exam. Images are acquired with a state-of-the-art dead-time-free single-photon-counting CdTe detector with a 60 µm pixel size. The samples are imaged at 32 and 38 keV in continuous rotating mode, delivering 5–20 mGy of mean glandular dose (MGD). Contrast-to-noise ratio (CNR) and spatial resolution performance are evaluated for both absorption and phase-retrieved images, considering tumor/adipose tissue interfaces. We discuss two different phase-retrieval approaches, showing that a remarkable CNR increase (from 0.5 to 3.6) can be obtained without a significant loss in spatial resolution. It is shown that, even though the non-phase-retrieved image has a poorer CNR, it is useful for evaluating the spiculation of a microcalcification: in this context, absorption and phase-retrieved images should be regarded as complementary. Furthermore, the first full-volume acquisition of a mastectomy specimen, 9 cm in diameter and 3 cm in height, is reported. This investigation on surgical specimens indicates that monochromatic BCT with synchrotron radiation is feasible in terms of CNR, spatial resolution, scan duration and scan volume.
Towards a dual phase grating interferometer on clinical hardware
Johannes Bopp, Veronika Ludwig, Michael Gallersdörfer, et al.
In recent decades, several interferometric phase-sensitive X-ray imaging setups using highly incoherent sources have been developed. One of the clinically most promising is the Talbot-Lau interferometer. However, these systems still face challenges that prevent their clinical use. One challenge is the post-patient attenuation of the analyzer grating, which doubles the effective dose. To address this issue, new setup designs were proposed that use a second phase grating instead of the absorbing analyzer grating. Together, the two phase gratings can create a beat pattern that the detector can resolve directly. In this paper the simulation tool CXI is validated for dual phase grating setups. Using the simulation, we found an optimal setup using existing gratings. A first feasibility study is shown with two phase gratings of 4.12 and 4.37 μm period. The computed visibility of 4.6% in simulation is in good accordance with the experimental visibility of 4%. The final visibility is a trade-off among the inter-grating distance, the grating-detector distance, the beat period, and the point spread function of the detector.
Joint-reconstruction-enabled data acquisition design for single-shot edge-illumination x-ray phase-contrast tomography
Edge illumination X-ray phase-contrast tomography (EIXPCT) is an emerging imaging technology capable of estimating the complex-valued refractive index distribution with laboratory-based X-ray sources. Conventional image reconstruction approaches for EIXPCT require multiple images to be acquired at each tomographic view angle. This contributes to prolonged data-acquisition times and potentially elevated radiation doses, which can hinder in-vivo applications. A new “single-shot” method has been proposed for joint reconstruction (JR) of the real- and imaginary-valued components of the refractive index distribution from a tomographic data set that contains only a single image acquired at each view angle. The JR method does not place restrictions on the types of measurement data to which it can be applied, and therefore enables the exploration of innovative single-shot data-acquisition designs. However, there remains an important need to identify data-acquisition designs that permit accurate JR. In this study, innovative, JR-enabled, single-shot data-acquisition designs for EIXPCT are proposed and characterized quantitatively in simulation studies.
Two-dimensional single grating phase contrast system
Phase-contrast X-ray imaging provides not only tissue attenuation but two additional contrast channels (phase and scatter) in the same scan. Scatter (dark-field) images provided by the technology are far more sensitive to structural and density changes of tissue such as the lungs and can identify lung disease where conventional X-ray fails. Other areas poised to benefit greatly are mammography and bone joint imaging (e.g., imaging arthritis). Of the various interferometer techniques, the two at the forefront are far-field interferometry (FFI) (Miao et al., Nat. Phys. 2015) and Talbot-Lau interferometry (TLI) (Momose, JJAP 2005; Pfeiffer, Nature 2006). While TLI has already made clinical strides, the newer FFI has the advantage of not requiring an absorption grating (“analyzer”) and provides a few-fold higher scatter sensitivity. In this work, a novel 2D single-phase-grating (not requiring the analyzer), near-field phase contrast system was simulated using Sommerfeld-Rayleigh diffraction integrals. We observed 2D fringe patterns at 50 mm distance from a grating of 800 nm pitch. The resulting pattern period of 0.05 mm can be imaged by the LSU interferometers with CT detector resolution (0.015 mm) or Philips mammography detector resolution (0.05 mm), making this a practical system. Our design has a few advantages over the Miao et al. FFI system. We accomplish with one X-ray grating the functionality that requires 2-3 phase gratings in their design, and our design can also provide a compact system (source-to-detector distance < 1 m) with control over the fringe pattern by fine-tuning the grating structure. We retain the benefits of far-field systems: no analyzer grating and higher scatter sensitivity than Talbot-Lau interferometers.
Examining phase contrast sensitivity to signal location and tissue thickness in breast imaging
X-ray phase contrast imaging (PCI) is being developed as an alternative to overcome the poor contrast sensitivity of existing attenuation imaging techniques. Phase sensitivity can be achieved using a number of phase-enhancing geometries, such as free-space propagation (FSP), grating interferometry, and the edge-illumination (also known as coded-aperture) technique. The enhanced contrast in the projected intensities (which combine absorption and phase effects) can vary with object shape, size and material properties, as well as with the particular PCI method used. We show a comparison of this signal enhancement for both FSP and coded-aperture (CA) PCI. Our data show that the phase enhancement is significantly higher for CA than for FSP. Our preliminary results indicate that the enhanced phase effect decreases in all PCI techniques with increasing background thickness. Investigations of how signal enhancement (and/or its loss) depends on signal location and background tissue thickness are very important in determining the true benefit of PCI methods in practical applications involving thick organs, such as breast imaging.
Image Reconstruction
icon_mobile_dropdown
Fast low-dose compressed-sensing (CS) image reconstruction in four-dimensional digital tomosynthesis using on-board imager (OBI)
Sunghoon Choi, Scott S. Hsieh, Chang-Woo Seo, et al.
Patient respiration induces motion artifacts during cone-beam CT (CBCT) imaging in LINAC-based radiotherapy guidance. This can be relieved by retrospectively sorting the projection images, known as respiratory-correlated or four-dimensional (4D) CBCT imaging. However, the slow rotation of the LINAC head gantry precludes a rapid scan, so 4D-CBCT usually involves a large radiation dose. Digital tomosynthesis (DTS), which utilizes a limited angular range, offers faster 4D imaging with much lower radiation dose. One drawback of 4D-DTS is the strong streak artifacts in the tomosynthesis-reconstructed images caused by the sparsely sampled projections in each phase bin. The authors suggest a fast low-dose 4D-DTS image reconstruction method to effectively reduce these artifacts under sparsely sampled tomosynthesis conditions. We used a flat-panel detector to acquire tomosynthesis projections of a respiratory moving phantom in anterior-posterior (AP) and lateral views. A sinusoidal periodic respiratory motion was applied as the input signal to the phantom. An external monitor of the Varian real-time position management (RPM) system was used to estimate the input respiratory motion, and four respiratory gating phases were determined to retrospectively sort the projections. For streak reduction, we introduced a simple iterative scheme suggested by McKinnon and Bates (MKB) and used its result as the prior image for the proposed low-dose compressed-sensing (CS) method. Three different 4D-DTS image reconstruction schemes, conventional Feldkamp (FDK), MKB, and MKB-CS, were applied to the phase-wise projections of both AP and lateral views. All reconstructions were accelerated using a GPU to reduce computation times. To assess algorithmic performance, we compared the streak reduction ratio (SRR) and the contrast-to-noise ratio (CNR) among the resulting images.
The results showed that the SRRs for the MKB and MKB-CS schemes were 0.24 and 0.69, respectively, indicating that the proposed MKB-CS method reduced streak artifacts relative to the conventional method by a factor of 2.88. The CNRs of the coronal tomosynthesis images at the peak-inhale phase were 3.24, 6.36, and 10.56 for FDK, MKB, and MKB-CS, respectively, showing that the proposed method provides better image quality than the others. The reconstruction time for MKB-CS was 196.07 s, demonstrating that GPU-accelerated programming brings the algorithm within clinically feasible times (~3 min). In conclusion, the proposed low-dose 4D-DTS reconstruction scheme may provide better outcomes than the conventional methods at high speed, and could thus be applied to practical 4D imaging for radiotherapy.
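For reference, the contrast-to-noise ratio quoted above is conventionally computed from region-of-interest statistics. The streak reduction ratio is not defined in the abstract, so the `srr` function below is one plausible definition (fractional reduction of streak variability relative to a reference reconstruction such as FDK), not necessarily the authors':

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: absolute mean difference between a signal ROI
    and a background ROI, divided by the background standard deviation."""
    return abs(statistics.mean(signal_roi) - statistics.mean(background_roi)) \
        / statistics.stdev(background_roi)

def srr(streak_std_reference, streak_std_test):
    """One plausible streak reduction ratio: fractional reduction of streak
    variability relative to a reference reconstruction (assumed definition)."""
    return 1.0 - streak_std_test / streak_std_reference
```

With these definitions, an SRR of 0.69 would mean streak variability 69% lower than in the reference reconstruction.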
Organ-specific context-sensitive CT image reconstruction and display
Sabrina Dorn, Shuqing Chen, Stefan Sawall, et al.
In this work, we present a novel method to combine mutually exclusive CT image properties that emerge from different reconstruction kernels and display settings into a single organ-specific image reconstruction and display. We propose a context-sensitive reconstruction that locally emphasizes desired image properties by exploiting prior anatomical knowledge. Furthermore, we introduce an organ-specific windowing and display method that aims at superior image visualization. Using a coarse-to-fine hierarchical 3D fully convolutional network (3D U-Net), the CT data set is segmented and classified into different organs, e.g. the heart, vasculature, liver, kidney, spleen, and lung, as well as into the tissue types bone, fat, soft tissue, and vessels. Reconstruction and display parameters most suitable for the organ, tissue type, and clinical indication are chosen automatically from a predefined set of reconstruction parameters on a per-voxel basis. The approach is evaluated using patient data acquired with a dual-source CT system. The final context-sensitive images combine the indication-specific advantages of different parameter settings into a single volume that exhibits the desired tissue-specific image properties. A comparison with conventionally reconstructed and displayed images reveals improved spatial resolution in highly attenuating objects and in air while maintaining a low noise level in soft tissue in the compound image. The images present significantly more information to the reader simultaneously, and dealing with multiple volumes may no longer be necessary. The presented method is useful for the clinical workflow and has the potential to increase the rate of incidental findings.
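The per-voxel parameter selection can be pictured as compositing several full reconstructions according to a segmentation label map. The kernel names and the label-to-kernel lookup table below are hypothetical placeholders, not the paper's actual parameter sets:

```python
def blend_reconstructions(recons, labels, kernel_for_label):
    """Per-voxel compositing of reconstructions made with different kernels.

    recons: dict mapping kernel name -> flat list of voxel values
    labels: flat list of organ/tissue labels, one per voxel
    kernel_for_label: dict mapping label -> preferred kernel name
    Returns the composite image as a flat list.
    """
    return [recons[kernel_for_label[lab]][i] for i, lab in enumerate(labels)]

# Hypothetical example: a sharp (bone) kernel and a smooth (soft-tissue) kernel.
sharp = [5.0, 5.1, 9.0, 9.2]
smooth = [4.0, 4.1, 8.0, 8.1]
labels = ["soft", "soft", "bone", "bone"]
composite = blend_reconstructions(
    {"sharp": sharp, "smooth": smooth}, labels,
    {"bone": "sharp", "soft": "smooth"})
```

Each voxel thus inherits the value from whichever reconstruction best suits its tissue class, which is the essence of the compound image described above.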
Sensitivity and specificity of a sparse reconstruction algorithm for superparamagnetic relaxometry
Ovarian cancer survival rates could be greatly improved through effective early detection. However, several clinical studies have shown that proposed screening methodologies have no impact on overall survival. Our lab is participating in the development of a novel nanoparticle imaging device that can be incorporated as a third-line test to improve the specificity and sensitivity of the overall screening program. The device’s highly sensitive detectors can detect the residual magnetic field of only those nanoparticles that have become bound to cancer cells via specific antibody interactions. However, the reconstruction of the bound particle distribution from this residual field map is challenging due to the highly ill-posed nature of the inverse problem. Our lab has developed a sparse reconstruction algorithm to overcome this challenge. Here, we present the results of a blinded phantom study simulating the pre-clinical scenario of detecting a tumor signal in the presence of a large signal from bound particles in the liver. Overall, our algorithm identified the correct location of bound particle sources with 84% accuracy. We were able to detect as little as 1.6 µg of bound particles with 100% accuracy when the source was isolated, and as little as 3.13 µg when a stronger source was also present. We also show the effect of manual and automatic parameter selection on the performance of the algorithm. These results provide valuable information about the expected performance of the algorithm that we can use to optimize the design of future small animal studies as we work to bring this novel technology to the clinic.
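The abstract does not specify the sparse reconstruction algorithm, so the sketch below uses a generic iterative soft-thresholding (ISTA) solver for the L1-regularized least-squares problem, assuming a linear forward model `A` mapping candidate source locations to field measurements `b`. It illustrates the class of method, not the authors' implementation:

```python
def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def ista(A, b, lam=0.1, step=0.1, iters=500):
    """Iterative soft-thresholding for min ||Ax - b||^2 + lam * ||x||_1.
    A: forward model as a list of rows; b: measured field values.
    The L1 penalty drives most source amplitudes to exactly zero,
    which is how sparsity regularizes the ill-posed inverse problem."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual of the current estimate, then gradient of the data term.
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(b))]
        g = [2.0 * sum(A[i][j] * r[i] for i in range(len(b))) for j in range(n)]
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

For an identity-like forward model the solver recovers the known LASSO solution (the measurement shrunk by half the regularization weight), with unobserved sources held at zero.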
Content-oriented sparse representation (COSR) denoising in CT images
Denoising has been a challenging research subject in medical imaging in general and in CT imaging in particular, because the suppression of noise conflicts with the preservation of texture and edges. The purpose of this paper is to develop and evaluate a content-oriented sparse representation (COSR) denoising method in CT to effectively address this challenge. A CT image is first segmented by thresholding into several content-areas of similar materials, such as air, soft tissue, and bone. After being extrapolated smoothly beyond its boundary, each content-area is sparsely coded using a dictionary learned from image patches extracted from the corresponding content-area. The regenerated content-areas are finally aggregated to form the denoised CT image. The efficiency of the denoising and its ability to preserve texture and edges are demonstrated with a simulated cylindrical water phantom. The denoising performance of the proposed method is further tested with images of a pediatric head phantom and an anonymized pediatric patient scanned by a state-of-the-art CT scanner, which shows that the proposed COSR denoising method can effectively preserve texture and edges while reducing noise. It is believed that this method will find utility in extensive clinical and pre-clinical applications, such as dedicated and low-dose CT, image segmentation and registration, and computer-aided diagnosis (CAD).
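The first COSR step, thresholding the image into content-areas, might look like the following sketch. The Hounsfield-unit cut points are illustrative placeholders, not values from the paper:

```python
def segment_content_areas(hu_values, air_max=-300.0, bone_min=300.0):
    """Threshold a CT image (flat list of Hounsfield units) into coarse
    content-areas of similar material. Cut points are illustrative only;
    a real implementation would tune them per protocol."""
    labels = []
    for hu in hu_values:
        if hu <= air_max:
            labels.append("air")
        elif hu >= bone_min:
            labels.append("bone")
        else:
            labels.append("soft")
    return labels
```

Each resulting label set then selects which learned dictionary codes that region in the subsequent sparse-coding stage.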
Prospective image quality analysis and control for prior-image-based reconstruction of low-dose CT
Purpose: Prior-image-based reconstruction (PIBR) is a powerful tool for low-dose CT; however, the nonlinear behavior of such approaches is generally difficult to predict and control. Similarly, traditional image quality metrics do not capture potential biases exhibited in PIBR images. In this work, we identify a new bias metric and construct an analytical framework for prospectively predicting and controlling the relationship between prior-image regularization strength and this bias in a reliable and quantitative fashion. Methods: Bias associated with prior-image regularization in PIBR can be described as the fraction of the actual contrast change (between the prior image and current anatomy) that appears in the reconstruction. Using a local approximation of the nonlinear PIBR objective, we develop an analytical relationship between local regularization, fractional contrast reconstructed, and true contrast change. This analytic tool allows prediction of bias properties in a reconstructed PIBR image and captures their dependence on the data acquisition, patient anatomy and anatomical change, and reconstruction parameters. Predictions are leveraged to provide reliable and repeatable image properties for varying data fidelity in simulation and physical cadaver experiments. Results: The proposed analytical approach permits accurate prediction of reconstructed contrast relative to a gold standard based on an exhaustive search over numerous iterative reconstructions. The framework is used to control regularization parameters to enforce consistent reconstruction of change over varying fluence levels and varying numbers of projection angles, enabling bias properties that are less location- and acquisition-dependent. Conclusions: While PIBR methods have demonstrated a substantial capacity for dose reduction, the image properties associated with those images have been difficult to express and quantify using traditional metrics.
The novel framework presented in this work not only quantifies this bias in an intuitive fashion but also provides a way to predict and control it. Reliable and predictable reconstruction methods are a requirement for clinical imaging systems, and the proposed framework is an important step in translating PIBR methods to clinical application.
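One plausible reading of the bias metric, the fraction of the true contrast change that actually appears in the reconstruction, can be written as a small function. The use of simple ROI means here is an assumption; the paper's exact estimator may differ:

```python
import statistics

def fractional_contrast_reconstructed(prior_roi, recon_roi, truth_roi):
    """Sketch of the PIBR bias metric: the fraction of the true contrast
    change (current anatomy minus prior) that appears in the reconstruction.
    1.0 means the change is fully reconstructed; 0.0 means the image is
    stuck at the prior. ROI-mean estimation is an assumption."""
    true_change = statistics.mean(truth_roi) - statistics.mean(prior_roi)
    recon_change = statistics.mean(recon_roi) - statistics.mean(prior_roi)
    return recon_change / true_change
```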
Data-efficient methods for multi-channel x-ray CT reconstruction
D. P. Clark, C. T. Badea
The superiority of iterative reconstruction techniques over analytical ones is documented in a variety of CT imaging applications. However, iterative reconstruction techniques generally require a substantial increase in data processing time and computational resources, slowing adoption of the state of the art. This problem is exacerbated in multi-channel CT reconstruction problems (e.g. dynamic and spectral CT), where the gap between the amount of data acquired and the amount to be reconstructed is often exaggerated. To facilitate adoption of iterative reconstruction techniques, we propose methods that seek to improve data efficiency. Specifically, we define data-efficient methods as those that produce reliable results with respect to task-specific metrics while managing the total x-ray exposure, sampling time, computation time, and computational resources required. The development of such methods unifies several themes in CT research, including dose management, task-based optimization, clinically relevant timelines for data processing, and reconstruction from undersampled data. In this paper, we present complementary, data-efficient methods for cardiac CT reconstruction. We present a reconstruction algorithm that requires minimal parameter tuning to solve temporal reconstruction problems. The algorithm exploits spatially localized, voxel-centric, distance-driven projection and backprojection operators to promote computational efficiency. We validate the algorithm with numerical simulations, using the MOBY mouse phantom, and with in vivo mouse data. For the in vivo data, localized reconstruction reduces computation time by 50% and system RAM requirements by 60% relative to non-localized reconstruction. Following validation of our algorithm, we present preliminary in vivo temporal denoising results using a convolutional neural network, which promise to further improve the fidelity and speed of our reconstructions in future work.
Poster Session: New Imaging Systems
icon_mobile_dropdown
Ex-vivo mice mammary glands characterization using energy-dispersive x-ray diffraction and spatially resolved CdZnTe detectors
Vera Feldman, Joachim Tabary, Caroline Paulus, et al.
False-positive mammography results in breast cancer diagnosis can lead to unnecessary biopsies, which are invasive and time-consuming. X-ray diffraction (XRD) has the potential to provide diagnosis-relevant information and can thus be used after mammography to verify its results and possibly avoid a needless biopsy. We present an energy-dispersive X-Ray Diffraction (EDXRD) system and data analysis method that allowed us to characterize healthy and cancerous mouse mammary glands ex vivo. Our technique showed good gland localization along the z-axis as well as scatter signatures consistent with those previously described in the literature. We used an in-house spatially resolved CdZnTe detector and a subpixelation technique that enhances spatial resolution. Acquisition time and delivered dose remain to be optimized; however, our results demonstrate the potential of EDXRD systems for depth-resolved breast imaging. Different geometries and processing algorithms are currently being investigated in the development of a future EDXRD clinical system for breast cancer diagnosis.
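EDXRD scatter signatures are conventionally indexed by the momentum-transfer coordinate x = (E/hc)·sin(θ/2) rather than raw photon energy. A minimal sketch of that conversion, with standard constants (not values taken from the paper):

```python
import math

HC_KEV_NM = 1.2398  # Planck constant times c, in keV*nm

def momentum_transfer(energy_kev, scatter_angle_rad):
    """Momentum-transfer coordinate x = (E/hc) * sin(theta/2) in inverse nm.
    In an energy-dispersive system the angle is fixed and each detected
    energy bin maps to one value of x, giving the diffraction signature."""
    return energy_kev / HC_KEV_NM * math.sin(scatter_angle_rad / 2.0)
```

Tissue signatures (e.g. the adipose and fibroglandular diffraction peaks) are then compared along this x axis.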
Compton scatter tomography with photon-counting detector: a preliminary study
Kai Wang, Haitao Cheng, Xi Chen, et al.
Compton scatter tomography can reconstruct the electron density distribution using first-order Compton-scattered photons, and has the potential to identify different materials. However, in Compton scatter tomography the detected photons may come from any voxel illuminated by the x-ray beam, so the information is blended. Although the mixing could be reduced with mechanical collimation, the number of detected photons would decrease severely, which greatly degrades the reconstructed image quality. This paper proposes a Compton scatter tomography scheme based on scatter physics and a photon-counting detector. The Compton-scattered photons can be detected without mechanical collimation during fan-beam CT scanning, and the scattered signal can be separated into signals emitted from subsets of the entire volume owing to the geometry constraints associated with energy selection in the photon-counting detector. An analytical model of the first-order Compton scatter projection process is constructed, and a compressed-sensing-based method is used to reconstruct the electron density distribution. Experimental results demonstrate the accuracy of the signal acquisition model, and the proposed imaging scheme can represent the anatomical structure of the object in terms of electron density.
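The energy selection described above rests on the standard Compton energy-angle relation: for a known incident energy, each detected energy corresponds to one scattering angle, and hence to a geometric subset of the illuminated volume. A sketch of that relation:

```python
import math

def scattered_energy_kev(e0_kev, theta_rad, mec2_kev=511.0):
    """Photon energy after a single Compton scatter at angle theta:
    E' = E0 / (1 + (E0 / m_e c^2) * (1 - cos(theta))).
    An energy-resolving (photon-counting) detector can invert this to
    associate each detected energy with a scattering angle."""
    return e0_kev / (1.0 + (e0_kev / mec2_kev) * (1.0 - math.cos(theta_rad)))
```

For a 100 keV beam, photons scattered at 90 degrees arrive at about 83.6 keV, so an energy window on the detector selects a narrow band of scattering angles.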
Contrast-enhanced x-ray microscopy of bovine articular cartilage
Ying Zhu, Dragana Ponjevic, John R. Matyas, et al.
Osteoarthritis (OA) is a common chronic disease of joints typically characterized by degenerative changes of articular cartilage, which is composed of chondrocytes embedded in a composite of water-imbibing proteoglycans restrained by a fibrillar collagen network. Early diagnosis of OA requires sensitive imaging, ideally at the cellular-molecular level. Whereas cartilage histopathology is destructive, time-consuming, and limited to 2D views, contrast-enhanced x-ray microscopy (XRM) offers the possibility of non-destructively imaging the cartilage collagen network in 3D at high resolution. This study establishes a correlation between contrast-enhanced XRM and gold-standard histology for the evaluation of the cartilage collagen network.

Cartilage with subchondral bone was excised with a 3 × 3 mm² cross-sectional area from healthy bovine knees and stained in phosphotungstic acid (PTA) for 0, 4, 8, 12, 16, 20, 24, 28, and 32 hours. XRM imaging was performed after each staining interval; analysis of the images determined an optimal staining time of 16 h and a saturated staining time of 24 h for this sample. Polarized light microscopy and second-harmonic-generation two-photon microscopy of a histology section from the same sample were analyzed and compared with the matching XRM slice.

The cartilage collagen network imaged by PTA-enhanced XRM correlated well with histology, validating PTA-enhanced XRM for non-destructive evaluation of the cartilage collagen network. The 3D cartilage volume from this technique will provide a non-destructive approach to investigating OA pathology.
Towards bimodal intravascular OCT MPI volumetric imaging
Sarah Latus, Florian Griese, Matthias Gräser, et al.
Magnetic Particle Imaging (MPI) is a tracer-based tomographic non-ionizing imaging method providing fully three-dimensional spatial information at high temporal resolution without any limitation in penetration depth. One challenge for current preclinical MPI systems is their modest spatial resolution, in the range of 1-5 mm. Intravascular Optical Coherence Tomography (IVOCT), on the other hand, has very high spatial and temporal resolution but does not provide accurate 3D positioning of the IVOCT images. In this work, we show that MPI and OCT can be combined to reconstruct an accurate IVOCT volume. A center-of-mass trajectory is estimated from the MPI data as a basis for reconstructing the poses of the IVOCT images. The feasibility of bimodal IVOCT and MPI imaging is demonstrated with a series of 3D-printed vessel phantoms.
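The center-of-mass estimate underpinning the trajectory is simply the signal-weighted mean position of the MPI tracer distribution. A one-dimensional sketch (the real estimate is computed per axis over the 3D MPI volume):

```python
def center_of_mass(signal, positions):
    """Signal-weighted mean position of a tracer distribution along one
    axis: sum(w_i * x_i) / sum(w_i). Repeating this per axis and per time
    frame yields the catheter trajectory used to pose the IVOCT frames."""
    total = sum(signal)
    return sum(w * p for w, p in zip(signal, positions)) / total
```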
Projection tomography in the NIR-IIa window: challenges, advantages, and comparison with classical optical approach
A. Marcos-Vidal, D. Ancora, G. Zacharakis, et al.
Optical Projection Tomography (OPT) techniques are limited by the nature of light interaction with tissues. Some of these limitations arise from scattering phenomena or the weak transparency of the samples, restricting the depth of penetration and the differentiation of structures due to image blurriness. Recent advances in detection technologies enable the use of promising new imaging windows of the light spectrum such as the near-infrared (NIR) IIa window, which comprises wavelengths between 1300 and 1400 nm. Light in this band has interesting properties that could increase the depth of penetration and reduce autofluorescence and scattering.

In this work we aim to explore the possibilities of these wavelengths and compare them with imaging at shorter wavelengths. First, we examined their optical properties in detail, determining which band of the spectrum works best for tissue imaging. We then built an OPT setup using two lasers with wavelengths from different windows to analyze the benefits of the NIR-IIa window experimentally. Finally, we discuss the results and propose ways to exploit the advantages that these wavelengths can bring to state-of-the-art optical imaging techniques.
A fully vacuum-sealed miniature x-ray tube with carbon nanotube field emitters for compact portable dental x-ray system
Sora Park, Jin-Woo Jeong, Jae-Woo Kim, et al.
There have been many efforts to develop x-ray sources using field electron emitters instead of conventional thermionic cathodes for digital control of x-rays in medical imaging. In particular, portable x-ray systems need a miniature x-ray tube with low power consumption, easy high-voltage insulation, and lightweight x-ray shielding. The carbon nanotube (CNT) has attracted much attention as the most promising field emitter due to its high geometric aspect ratio and its physical and chemical inertness. To date, however, CNT field emitters have not been satisfactorily incorporated into a fully vacuum-sealed x-ray tube because of their instability and/or unreliability. We successfully developed a fully vacuum-sealed miniature x-ray tube with CNT emitters for portable dental x-ray systems. The x-ray tube was designed in a triode configuration with a self-electron-focusing gate and reliable CNT emitters, and was fully vacuum-sealed within a miniaturized volume of 15 mm in diagonal and 65 mm in length, far smaller than a conventional thermionic tube. The nominal focal spot size of the x-ray tube is 0.4 mm at an operational tube voltage of 65 kV and a current of 3 mA, which yields good-quality x-ray images of a human tooth phantom. Because the miniature x-ray tube requires no heating for electron emission, high-voltage insulation and x-ray shielding are simplified, giving a compact and light portable x-ray system. Furthermore, digital operation of the x-ray tube through active current control could provide a commercial lifetime along with good stability.
Proton radiography for relativistic proton beam therapy
Matthew S. Freeman, Michelle A. Espy, Per E. Magnelind, et al.
Proton beam radiation therapy is at the forefront of modern techniques for cancer treatment, due to its high level of radiation-dose deposition accuracy. This treatment accuracy, however, is limited by the ability to position the treatment beam within the patient's anatomy. Typically, the patient's position is registered orthogonally using x-ray imaging. However, if beam's-eye-view imaging were instead enabled by proton radiography, dose deposition measurements could be improved with intrinsically registered patient positioning. At a typical treatment facility, with a maximum proton energy on the order of 250 MeV, imaging capabilities are limited by the high degree of multiple-Coulomb scatter that protons accumulate before exiting the patient. Increasing the proton energy from 250 MeV to 800 MeV, however, reduces the accumulated scatter by a factor of five and, coupled with a magnetic-lens collimated imaging system, enables high-resolution, high-contrast imaging. Further, based on results from Geant4, the dose profile of this higher-energy beam is tightly constrained, a fact that may be exploited in the future to further increase treatment accuracy. The full-width half-maxima of the dose-deposition kernels at 33.4 cm depth (the 250-MeV Bragg peak) are 1.56 cm for 250-MeV protons and 0.52 cm for 800-MeV protons. At 1 cm downstream of this point, the 250-MeV kernel has dropped to 32%, while the 800-MeV dose is still at 95%. An 800-MeV proton treatment plan exploiting the constrained lateral profile would use techniques developed for photon therapy to deliver the dose from 360° and constrain it tightly, stereotactically.
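The energy dependence of multiple-Coulomb scattering can be sketched with the standard Highland parameterization, θ0 = (13.6 MeV / βcp)·√(x/X0)·[1 + 0.038 ln(x/X0)]. Note this single formula alone gives roughly a factor of ~2.8 reduction in the characteristic scattering angle from 250 to 800 MeV; the factor of five quoted above presumably also reflects path-length and geometry effects not modeled here:

```python
import math

PROTON_MASS_MEV = 938.272  # proton rest mass, MeV/c^2

def highland_theta0(kinetic_mev, thickness_over_x0):
    """Highland estimate (radians) of the multiple-Coulomb-scattering angle
    for a proton of given kinetic energy traversing thickness_over_x0
    radiation lengths of material. Relativistic kinematics give beta and pc."""
    e_total = kinetic_mev + PROTON_MASS_MEV
    pc = math.sqrt(e_total ** 2 - PROTON_MASS_MEV ** 2)
    beta = pc / e_total
    x = thickness_over_x0
    return (13.6 / (beta * pc)) * math.sqrt(x) * (1.0 + 0.038 * math.log(x))

# Characteristic scatter reduction for one radiation length, 250 vs 800 MeV.
ratio = highland_theta0(250.0, 1.0) / highland_theta0(800.0, 1.0)
```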
Transrectal ultrasound-waveform tomography using plane-wave ultrasound reflection data for prostate cancer imaging
Ultrasound tomography reconstructs tissue mechanical properties from ultrasound signals for cancer characterization. We study the capability of plane-wave ultrasound-waveform inversion to reconstruct the sound-speed values of prostate tumors. Our ultrasound-waveform inversion algorithm iteratively fits synthetic ultrasound waveforms to recorded ultrasound waveforms starting from an initial model. We verify the algorithm using synthetic ultrasound data for numerical prostate phantoms consisting of multiple tumors in homogeneous and heterogeneous background prostate tissues. Our reconstruction results demonstrate that the new plane-wave transrectal ultrasound-waveform tomography has the potential to accurately reconstruct the sound-speed values of prostate tumors for cancer characterization. In addition, we built a new transrectal ultrasound tomography prototype using a 256-channel Verasonics Vantage system and a GE intracavitary curved linear array to acquire plane-wave ultrasound reflection data for transrectal ultrasound tomography.
Real-time in-situ monitoring of electrotherapy process using electric pulse-induced acoustic tomography (EpAT)
Ali Zarafshani, Nicklas Dang, Pratik Samant, et al.
The use of electrical energy to apply reversible or irreversible electropermeabilization for biomedical therapies is growing rapidly. This technique uses ultra-short, high-voltage electric pulses (μs-to-ns EP) to increase the permeability of cell membranes, allowing drug delivery to the cytosol via nanopores in the membrane. Since the treatment subject varies in size, location, shape, and tissue environment, it is necessary to visualize this mechanism by monitoring electric field distributions in real time. Previous studies have suggested various techniques for monitoring electroporation; however, none of them is so far capable of real-time monitoring of the electric field. In this study, we propose an innovative real-time monitoring technique for electric field distributions based on electric field-induced acoustic emissions. For the first time, we demonstrate that the electric fields used in electrotherapy can induce acoustic waves, which can be exploited for real-time monitoring. We tested this technique by generating a variety of electric field distributions (μs-to-ns EP with intensities up to 120 V/cm) to energize two electrodes in a bipolar configuration (d1 = 100 μm and d2 = 200 μm). The electric field transmits a short burst of ultrasonic energy, and we used ultrasonic receivers to collect acoustic signals around the subject under test. Acoustic signals were collected for different intensities of the electric field distribution and for different positions of the electric field relative to the receiver in 3D. The resulting high-resolution images of the electric field used in electrotherapy can directly improve the efficiency of electrotherapy treatments in real time.
Sensitivity of diffuse correlation spectroscopy to flow rates: a study with tissue simulating optical phantoms
Sara Zanfardino, Karthik Vishwanath
Diffuse Correlation Spectroscopy (DCS) is a non-invasive, easy-to-operate technique for determining tissue perfusion in clinical applications. DCS detects temporal fluctuations in the intensity diffusely reflected from an incident coherent laser source and relates these fluctuations theoretically to the mean-square displacement of moving light-scattering particles. The objective of these studies was to experimentally investigate DCS signals from a turbid optical phantom containing a flow channel. By changing the depth of the flow channel below the surface of the phantom, we investigated the depth sensitivity of DCS under changes in the optical properties of the phantom media surrounding the flow channel. Two sets of experiments were conducted: the first investigated the depth dependence of DCS measurements. The second task was to determine how varying the optical properties, within ranges measured in real tissue, altered the DCS measurements in regions of zero, low, and high relative flow rates. Concentrations of scattering and absorbing particles in the phantom surrounding the flowing solution were varied and the resulting changes in the autocorrelation curves were monitored. We report that varying the concentrations of the absorbing and scattering particles in the phantom affected the DCS autocorrelation decay measurements. Thus, robust estimates of the surrounding tissue optical properties will be important for extracting absolute flow rates with DCS.
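The autocorrelation curves referred to above are typically modeled through the Siegert relation, g2(τ) = 1 + β·|g1(τ)|². The single-exponential field correlation used below is a simplified stand-in for the full correlation-diffusion solution used in DCS analysis:

```python
import math

def g2_model(tau_s, decay_rate_per_s, beta=0.5):
    """Normalized intensity autocorrelation via the Siegert relation,
    g2(tau) = 1 + beta * |g1(tau)|^2, with a single-exponential field
    correlation g1(tau) = exp(-Gamma * tau). Faster particle motion
    (higher flow) means a larger Gamma, so g2 decays toward 1 sooner.
    beta is the coherence factor of the detection optics."""
    g1 = math.exp(-decay_rate_per_s * tau_s)
    return 1.0 + beta * g1 * g1
```

Fitting the measured decay rate against this kind of model, given the tissue optical properties, is what yields the flow index, hence the paper's point that the surrounding optical properties must be known robustly.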
Assessment of image quality parameters of a novel micro-CT system compared to a conventional CT geometry
K. Kumar, N. Saeid Nezhad, B H. Mueller, et al.
A novel fourth-generation micro-CT (WATCH-CT) with a unique scanning geometry, which collects parallel projections from a standard x-ray source without the need to interpolate or rebin the data, is studied and evaluated for its image quality and performance characteristics. For a comparative analysis of the WATCH micro-CT system and a conventional CT geometry, the local noise power spectrum (NPS) and the modulation transfer function (MTF) are derived from the same initial parameters. Spatial resolution, characterizing the response of the system, is quantified by the MTF derived with the oversampling method. The calculations vary parameters such as the region-of-evaluation (ROE) position, FOV magnification, angular sampling, pixel size, filtration, and reconstruction algorithm to provide an extensive comparison between the systems. The spatial resolution of the scanning geometries is evaluated and compared. The MTF curves show a higher relative resolving capacity for the WATCH micro-CT than for the conventional geometries, owing to the characteristics of its unique geometry; the WATCH system exhibits higher resolution especially in regions away from the center. The NPS curves of the WATCH geometry show higher noise content than those of the conventional geometry.
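The final step of the oversampling method, turning an oversampled line spread function (LSF) into an MTF, is the normalized magnitude of its Fourier transform. A minimal sketch of that step (building the oversampled LSF from an angled edge or slit is omitted):

```python
import cmath

def mtf_from_lsf(lsf):
    """MTF as the normalized magnitude of the discrete Fourier transform of
    an (oversampled) line spread function, returned for the non-negative
    frequency bins. A direct DFT is used for clarity, not speed."""
    n = len(lsf)
    out = []
    for k in range(n // 2 + 1):
        s = sum(lsf[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
        out.append(abs(s))
    return [v / out[0] for v in out]  # normalize so MTF(0) = 1
```

An idealized delta-like LSF yields an MTF of 1 at all frequencies, while a broad LSF rolls off with frequency, which is how the resolving capacities of the two geometries are compared.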
Use of silicon photomultipliers for detection of Cherenkov light from Compton scattered electrons for medical imaging
Matthias Mielke, Reimund Bayerlein, Christian Gibas, et al.
Radioactive isotopes with energies up to 0.5 MeV are used in nuclear medicine for imaging. However, several isotopes with energies up to 10 MeV exist that have interesting properties for medical applications, yet conventional detectors are inefficient at these energies. A Compton camera, consisting of a radiator and an absorption layer, can be used to detect such high-energy gamma radiation. In a Compton camera an incident gamma ray undergoes Compton scattering in the radiator, creating a high-energy Compton electron e⁻. By determining the point of interaction and measuring the energy and the direction of the scattered gamma ray, it is possible to confine the origin of the incident gamma ray to the surface of a cone. The greatest challenge lies in the coincident detection of the electron and the scattered gamma ray. Previous research proposed the use of silicon photomultiplier (SiPM) arrays to detect the Cherenkov light (CL) produced by e⁻, exploiting the directional properties of CL to determine the electron's properties. Since only a few CL photons are produced, the high noise floor of the SiPM affects the detection negatively. In this contribution an estimate of the SiPM noise floor is presented, based on a behavioural simulation of the noise processes in the SiPM. The simulation makes it possible to determine properties of the SiPM, to assess the effectiveness of filters, and to build stimuli for other simulations.
High-concentration gadolinium nanoparticles for pre-clinical vascular imaging
Charmainne Cruje, David W. Holdsworth, Elizabeth R. Gillies, et al.
Gadolinium-based contrast agents with long circulation times in small animals have long been of interest in preclinical imaging. Although gadolinium-based contrast media are used clinically in MRI, these agents are composed of small molecules; through renal clearance, these molecules exit the blood pool of small animals before imaging can be completed. Circulation times appropriate for microimaging, on the order of tens of minutes, can be achieved by using nanoparticles that are large enough to evade rapid renal clearance (i.e., over 10 nm in diameter). Encapsulation of the nanoparticles within polymers is also required to minimize their detection by the immune system, thus delaying hepatic clearance. Hence, the objective of our work was to develop a gadolinium-based contrast agent that circulates long enough for pre-clinical computed tomography (CT) while maintaining blood-pool detectability in the image (i.e., an initial gadolinium content of around 100 mg/mL). We synthesized a contrast agent in the form of polymer-encapsulated gadolinium nanoparticles following a method our group has reported. The nanoparticles in the contrast agent have an average diameter of 110 ± 3 nm, and the agent contains 94 ± 7 mg/mL of gadolinium. Our in vivo results in two mice show a blood-pool contrast enhancement of 220 ± 22 HU, with the agent circulating for up to an hour after tail-vein injection. Given that the contrast agent stays in the blood pool of mice for up to an hour, it has promising utility in pre-clinical vascular research.
Towards non-invasive electrocardiographic imaging using regularized neural networks
Abhejit Rajagopal, Vincent Radzicki, Hua Lee, et al.
We present a new data-driven technique for non-invasive electrocardiographic imaging of cardiovascular tissues using routinely measured body-surface electrocardiogram (ECG) signals. While traditional ECG imaging and 3D reconstruction algorithms typically rely on a combination of linear Fourier theory, geometric and parametric modeling, and invasive measurements via catheters, we show in this work that it is possible to learn the complicated inverse map, from body-surface potentials to epicardial or endocardial potentials, by exploiting the powerful approximation properties of neural networks. The key contribution here is a formulation of the inverse problem that allows historical data to be leveraged as ground truth for training the inverse operator. We provide some initial experiments, and outline a path for extending this technique to real-time diagnostic applications.
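The idea of using historical data as ground truth for the inverse operator can be illustrated with a deliberately simplified one-parameter linear "inverse map" fit by least squares. This scalar stand-in is not the authors' neural-network model; it only shows the training formulation (paired surface and cardiac recordings in, fitted operator out):

```python
def fit_inverse_gain(surface, cardiac, ridge=0.0):
    """One-parameter inverse operator fit by (ridge-regularized) least
    squares: find the gain w minimizing sum((c - w * s)^2) over paired
    historical recordings of body-surface potentials (s) and cardiac
    potentials (c). A scalar toy version of training an inverse map
    on ground-truth pairs."""
    num = sum(s * c for s, c in zip(surface, cardiac))
    den = sum(s * s for s in surface) + ridge
    return num / den
```

A neural network plays the same role as `w` here, but can represent the nonlinear, spatially varying map from many surface electrodes to many cardiac sites.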
Hyperspectral imaging: comparison of acousto-optic and liquid crystal tunable filters
Ramy Abdlaty, Samir Sahli, Joseph Hayward, et al.
In this work, we report a performance comparison of an acousto-optic tunable filter (AOTF) and a liquid crystal tunable filter (LCTF) based on a novel dual-arm hyperspectral imaging (HSI) configuration. The main purpose of this work is to highlight the strengths of each tunable filter in order to facilitate filter choice in HSI design. Three main parameters are experimentally examined: spectral resolution, out-of-band suppression, and image quality in the sense of spatial resolution. The experimental results, using wideband illumination, laser lines, and a spatial test target (USAF-1951), demonstrated the superiority of the AOTF in spectral resolution, out-of-band suppression, and random switching speed between wavelengths. The same experiments demonstrated that the LCTF performs better in terms of spatial image resolution, both horizontal and vertical, and high-definition quality. In conclusion, the efficient design of an HSI system is application-dependent. For medical applications, for instance, if the tissue of interest has undefined optical properties or contains closely spaced spectral features, the AOTF may be the better option. Otherwise, the LCTF is more convenient and simpler to use, especially if spatial mapping of tissue chromophores is needed.
Poster Session: Quantitative Imaging
icon_mobile_dropdown
An investigation into how the radiotherapy dose response of normal appearing brain tissue in glioma patients influences ADC measurements
Haris Shuaib, Lucy Brazil, Thomas C. Booth
Functional diffusion map (fDM) analysis has been shown to be potentially useful in categorizing treatment response in glioma patients [1, 2]. Such fDMs are created from subtracted longitudinal ADC images, which are segmented to quantify regions of significant increase and decrease in ADC values. Thresholds demarcating significant change have been defined in the literature by sampling longitudinal changes in contralateral normal-appearing brain tissue (NABT) in the patient population being studied [1, 2]. A recent publication has shown that, owing to radiation-induced changes in dynamic susceptibility contrast MRI (DSC) parameters, the use of contralateral NABT to aid absolute quantification of CBV and CBF values is inappropriate [3]. We have investigated how the radiotherapy (RT) dose response of contralateral NABT influences ADC measurements.
Quantitative image-based phosphorus-31 MR spectroscopy for evaluating age-based differences in skeletal muscle metabolites
Shenweng Deng, Erika M. Ripley, Juan A. Vasquez, et al.
Purpose: To develop and validate a novel in-vivo phosphorus-31 magnetic resonance spectroscopy (31P-MRS) method to determine the absolute concentrations of phosphocreatine [PCr], inorganic phosphate [Pi], and adenosine triphosphate [ATP] in the vastus lateralis muscle. Materials and Methods: An external 6 mL plastic vial with 850 mM of methylenediphosphonic acid (MDP), fixed to the center of a commercial dual-tuned transmit/receive surface coil, was used to calibrate metabolite concentrations from spectral areas. A 15 cm diameter, 4 L cylindrical phantom (35 mM H3PO4) was scanned on a custom coil holder using the same parameters. Reproducibility of the 31P-MRS measurements was determined in volunteers (2M/3F, age = 39.2±21.9 years) while accuracy was determined using phantoms of known concentrations. Eight young subjects (24.5±4.2 years; 4M/4F) and eight older subjects (59.6±4.5 years; 4M/4F) were scanned. Student’s t-test was used to compare older versus younger subjects. Results: The percent error between the calculated and known molarity of phantoms was 3.3±1.9%. The mean coefficient of variation for the measurements of [PCr] was 5.2±3.7%. Phosphorus metabolite concentrations, including [PCr] (25.2±3.4 mM vs. 28.5±3.4 mM, p < 0.005), [ATP] (6.68±0.84 mM vs. 7.71±0.61 mM, p < 0.05) and [Pi] (3.18±0.46 mM vs. 2.56±0.55 mM, p < 0.05) were significantly lower in older versus younger subjects. A significant, negative correlation was found between [PCr] and BMI (r = -0.50, p < 0.05). Conclusion: Quantitative 31P-MRS measurements reveal previously unappreciated differences in skeletal muscle phosphorus metabolite concentrations between young and older subjects and may provide unique insights when combined with other metabolic tests.
A rapid, robust multi-echo phase unwrapping method for quantitative susceptibility mapping (QSM) using strategically acquired gradient echo (STAGE) data acquisition
Yongsheng Chen, Saifeng Liu, Yan Kang, et al.
Purpose: To unwrap multi-echo phase images for quantitative susceptibility mapping (QSM) in a rapid and robust manner without using complicated search algorithms. Background: Since QSM requires unaliased phase images as input, a reliable 3D phase unwrapping step is essential to reconstruct susceptibility maps. However, this is usually one of the most time-consuming steps in QSM, especially for multi-echo data acquisition. Methods: Strategically acquired gradient echo (STAGE) data are used to provide six flow-compensated images with echo times of 2.5 ms, 7.5 ms, 8.75 ms, 12.5 ms, 17.5 ms and 18.75 ms. An unaliased phase image with an effective echo time of 1.25 ms can be created by a complex division between the 8.75 ms and 7.5 ms echoes. Using this short pseudo-echo data along with the acquired 2.5 ms data, all other echoes can be unwrapped using a bootstrapping approach. Results: The six echoes (384 × 288 × 64 × 6 voxels) acquired using STAGE data acquisition were unwrapped successfully. This yielded reliable, self-consistent QSM images in only 1 second, compared with 137 seconds for the quality-guided 3DSRNCP algorithm and 23 seconds for the Laplacian-based algorithm on the same computer. Conclusions: The proposed bootstrapping multi-echo unwrapping method provides a rapid, robust phase unwrapping method on a voxel-by-voxel basis for online QSM reconstruction.
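The pseudo-echo and bootstrapping idea can be sketched voxel-wise as follows. This is a minimal numpy sketch under a single-compartment, linear-phase-evolution assumption; the function and variable names are ours, not the authors' code:

```python
import numpy as np

def bootstrap_unwrap(echoes, tes):
    """Unwrap multi-echo phase via a short pseudo-echo (sketch).

    echoes: complex images (or voxels) at echo times tes.
    Assumes tes[1] and tes[2] are close enough that the phase of their
    complex ratio is alias-free, as in the STAGE scheme described above.
    """
    # Alias-free pseudo-echo: phase of a complex division,
    # effective echo time = tes[2] - tes[1]
    pseudo_phase = np.angle(echoes[2] * np.conj(echoes[1]))
    te_pseudo = tes[2] - tes[1]
    unwrapped = []
    for e, te in zip(echoes, tes):
        # Predict the phase at this echo from the linear model,
        # then pick the 2*pi multiple that brings the measured
        # (wrapped) phase closest to the prediction.
        predicted = pseudo_phase * (te / te_pseudo)
        wrapped = np.angle(e)
        k = np.round((predicted - wrapped) / (2 * np.pi))
        unwrapped.append(wrapped + 2 * np.pi * k)
    return unwrapped
```

Because every voxel is handled independently with a closed-form rounding step, there is no region-growing or quality-map search, which is consistent with the speed advantage reported above.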
Poster Session: Phantoms
icon_mobile_dropdown
Anatomical DCE-MRI phantoms generated from glioma patient data
Several digital reference objects (DROs) for DCE-MRI have been created to test the accuracy of pharmacokinetic modeling software under a variety of noise conditions. However, there are few DROs that mimic the anatomical distribution of voxels found in real data, and similarly few that are based on both malignant and normal tissue. We propose a series of DROs for modeling Ktrans and Ve derived from a publicly available RIDER DCE-MRI dataset of 19 patients with gliomas. For each patient’s DCE-MRI data, we generate Ktrans and Ve parameter maps using an algorithm validated on the QIBA Tofts model phantoms. These parameter maps are denoised and then used to generate noiseless time-intensity curves for each of the original voxels. This is accomplished by reversing the Tofts model to generate concentration-time curves from Ktrans and Ve inputs, and subsequently converting those curves into intensity values by normalizing to each patient’s average pre-bolus image intensity. The result is a noiseless DRO in the shape of the original patient data with known ground-truth Ktrans and Ve values. We make this dataset publicly available for download for all 19 patients of the original RIDER dataset.
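The forward ("reversed") direction of the standard Tofts model, used above to synthesize concentration-time curves from Ktrans and Ve, is a convolution of the arterial input function with an exponential impulse response. A minimal sketch, assuming a discretized plasma curve is available (function names are ours; the conversion to image intensity is not shown):

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts tissue concentration via discrete convolution.

    t: uniformly spaced time points (min); cp: plasma concentration
    (AIF) sampled at t; ktrans in 1/min; ve dimensionless.
    C_t(t) = Ktrans * integral of Cp(tau) * exp(-(Ktrans/ve)(t-tau)) dtau
    """
    dt = t[1] - t[0]
    irf = ktrans * np.exp(-(ktrans / ve) * t)  # impulse response
    return np.convolve(cp, irf)[: len(t)] * dt
```

With a constant plasma input, the curve plateaus at ve, a useful sanity check when validating a ground-truth DRO.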
Development of solid water phantom using wax
Tomoe Hagio, Qin Li, Bahaa Ghammraoui, et al.
In this work, we develop wax-based solid water phantoms for CT systems that have radiation characteristics close to those of water for clinically appropriate CT tube voltages. Most commercial solid water phantoms are made with non-water-based materials for durability. However, manufacturer-grade solid water is difficult to replicate in a common lab setting, and its ingredients can be toxic. We have developed a wax-based solid water phantom for a coronary calcified plaque phantom in our ongoing research project, in which calcium-based material and fatty material are mixed into the water-equivalent material. We chose soy wax as the main ingredient because it is non-toxic and can easily be used to develop more realistic coronary plaques with a variety of compositions (e.g., more wax for fattier plaque). We determined the additional ingredients and concentrations needed to make solid water phantoms via a least squares method, in which the mass fraction of each material was estimated by minimizing the difference between the linear attenuation coefficients of water and the mixture. Based on the analytical calculation, physical solid water phantoms were developed and scanned using micro-CT and clinical CT systems to experimentally verify the optimal concentration of the mixed material and phantom homogeneity. The CT values of our solid water phantom at the optimal concentration were found comparable to those of water, with similar variance. In addition to supporting our plaque phantom development, this solid water phantom could serve as a basis for developing a variety of other complex tissue-mimicking phantoms for use in clinical CT scanners.
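The least-squares mass-fraction step can be sketched as a small linear problem: match the mixture's attenuation to water across several energies, with the sum-to-one constraint enforced softly via a heavily weighted extra equation. This is a generic illustration (the paper does not specify its solver, and the attenuation values in the test are synthetic, not real material data):

```python
import numpy as np

def fit_mass_fractions(mu_ingredients, mu_water, weight_sum=1e3):
    """Least-squares mass fractions so the mixture matches water.

    mu_ingredients: (n_energies, n_ingredients) linear attenuation
    coefficients of candidate materials; mu_water: (n_energies,) target.
    The mixture attenuation is modeled as a mass-fraction-weighted sum.
    """
    n_ing = mu_ingredients.shape[1]
    # Append one heavily weighted row encoding sum(fractions) == 1
    A = np.vstack([mu_ingredients, weight_sum * np.ones(n_ing)])
    b = np.append(mu_water, weight_sum)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions
```

A production version would additionally constrain the fractions to be non-negative (e.g. with a non-negative least-squares solver), which the plain `lstsq` call does not guarantee.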
Development of a novel respiring phantom for motion correction studies in PET imaging
W. Scott-Jackson, S. McQuaid, K. Wells, et al.
Several commercial phantoms exist for evaluating the effects of respiratory motion on diagnostic image quality and therapeutic effect. However, the range of motion these phantoms can reproduce is limited to simple uniform linear correlations of external versus internal motion; they are not intended for realistic studies where there is intrinsic variability in respiratory motion. This paper presents a novel respiratory phantom, intended for use in PET-CT studies, which is fully programmable for internal and external motion. It is demonstrated by actuating a moving target using a correlation model for a rigid organ in the abdominal-thoracic cavity. Moreover, the phantom replicates the inter-cycle variation seen in real human behaviour by basing its actuation on real data from an existing volunteer study. A study is presented wherein the phantom's performance is validated by comparing the motion it exhibits against the original volunteer driving signal from which it was derived. We also demonstrate its performance in reproducing inter-cycle variation and displacement.
Scatter profiles of the MIRD phantom with Geant4
M. Buğrahan Oktay, Frederic Noo
Scatter is an important source of image artifacts in cone-beam computed tomography. In this study, we investigate the complexity of the scatter signal using Monte Carlo simulations with the anthropomorphic MIRD phantom. We assess the differences in scatter intensities for two angular positions of the source at a given bed position. We identified the contribution of single scatter (coherent and incoherent) to the total scatter signal as well as the relative contribution of multiple-scatter events, and we also computed the scatter-to-primary ratio (SPR). Simulations were performed at monochromatic beam energies ranging from 40 keV to 120 keV.
Investigating the depth-sensitivity of laser speckle contrast imaging in flow phantoms
Laser speckle contrast imaging (LSCI) is a wide-field, optical technique capable of assessing changes in flow rates of scattering fluids. In biomedical applications, LSCI has been used to quantify changes in blood perfusion in various tissues. One limitation of LSCI is its limited depth sensitivity: it can only sense blood flow in superficial layers of tissue. The goal of this study was to experimentally investigate the depth sensitivity of LSCI for detecting fluid flow embedded in a turbid optical phantom. LSCI was used to image a flow channel buried by the scattering medium at incremental depths ranging from 0 mm to 2.4 mm. The flow measurements were successively repeated using two illumination wavelengths, 633 nm and 785 nm. Images were captured with and without flow present through the phantom for each wavelength and analyzed to develop a flow-sensitivity parameter. This provided a metric of LSCI’s ability to detect flow as a function of channel depth. At a depth of 1.5 mm, the flow sensitivity decreased by 80% for the 633 nm illumination and 65% for the 785 nm illumination relative to a depth of 0 mm. The results demonstrate that the flow sensitivity of the 785 nm source diminished at a slower rate with increasing burial depth than that of the 633 nm source. This study suggests that flow depth and illumination wavelength should be considered when using LSCI.
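The quantity underlying LSCI is the local speckle contrast, conventionally defined as the ratio of standard deviation to mean intensity over a small sliding window (lower contrast indicates more motion blur from flow). A minimal numpy sketch of this standard statistic, not the authors' specific flow-sensitivity parameter:

```python
import numpy as np

def speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma/mean over a sliding window.

    Returns K for the valid region only (no edge padding).
    For a static, fully developed speckle pattern K is near 1;
    motion during the exposure drives K toward 0.
    """
    w = np.lib.stride_tricks.sliding_window_view(
        image.astype(float), (window, window))
    mean = w.mean(axis=(-2, -1))
    std = w.std(axis=(-2, -1))
    return std / np.maximum(mean, 1e-12)  # guard against zero mean
```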
Poster Session: Image Reconstruction
icon_mobile_dropdown
Noise and spatial resolution characteristics of unregularized statistical iterative reconstruction: an experimental phantom study
John Hayes, Daniel Gomez-Cardona, John Garrett, et al.
One of the main challenges in low-dose x-ray computed tomography (CT) is the presence of highly structured noise. Model-based iterative reconstruction (MBIR) methods have shown great potential to overcome this problem; however, they have also introduced an additional challenge: highly nonlinear behavior. One example is the noise variance vs. dose power law, σ² ∝ (dose)^(−β), for which quasilinear FBP-based systems have a β value equal to 1, while MBIR methods have values in the range 0.4-0.6 [1]. This nonlinearity is attributed mainly to the regularization term of the objective function rather than the data fidelity term. Therefore, if statistical iterative reconstruction were performed in the absence of the regularization term, it could be possible to minimize the nonlinear imaging behavior of these methods while still benefiting from the data fidelity term. Once the image is reconstructed, an additional shift-invariant filter can be applied to reduce the overall noise magnitude. In this work, the potential benefits of performing (I) unregularized statistical iterative reconstruction with additional image-domain denoising are explored and compared against (II) regularized statistical iterative reconstruction using a total variation (TV) regularizer. Rigorous repeated phantom studies were performed at 5 exposure levels to assess the imaging performance in terms of noise and spatial resolution. Results regarding the power law showed that for FBP reconstruction and for paradigm I, β = 1, while for paradigm II, β = 0.6. Additionally, noise was independent of contrast in paradigm I, but was contrast-dependent in paradigm II.
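The power-law exponent β reported above is obtained by fitting measured noise variance against dose; since σ² = c·dose^(−β) is linear in log-log space, a one-line regression suffices. A sketch of that characterization step (our code, not the authors'):

```python
import numpy as np

def noise_power_law_beta(dose, variance):
    """Fit sigma^2 = c * dose^(-beta) in log-log space; return beta.

    beta near 1 indicates quasi-linear (FBP-like) behavior; values
    around 0.4-0.6 are typical of regularized MBIR, per the abstract.
    """
    slope, _intercept = np.polyfit(np.log(dose), np.log(variance), 1)
    return -slope
```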
Motion compensated reconstruction of the aortic valve for computed tomography
T. Elss, R. Bippus, H. Schmitt, et al.
Motion-compensated cardiac reconstruction in computed tomography (CT) has traditionally focused on the coronary arteries. However, with the increasing number of cardiac CT scans performed for the diagnosis and treatment planning of valvular diseases, there is a clear need for motion correction of the aortic valve region to improve the reproducibility of aortic annulus measurements. A second-pass approach for aortic valve motion compensation on retrospectively ECG-gated CT scans is introduced here. The processing chain comprises four steps. A gated multi-phase cardiac reconstruction is first performed, followed by a gradient-based filter to enhance the edges in the resulting time series of volume images. These normalized, filtered results then undergo elastic registration, followed finally by a motion-compensated reconstruction that incorporates the estimated motion vector fields. The method was applied to twelve clinical cases and tested for systolic (30% R-R interval) and diastolic (70% R-R interval) imaging of the aortic valve. This second-pass approach leads to a significant reduction of motion artifacts, especially in late systole.
Real-time image reconstruction for low-dose CT using deep convolutional generative adversarial networks (GANs)
Kihwan Choi, Sung Won Kim, Joon Seok Lim
This paper introduces a deep learning network that reconstructs low-dose CT images into images of a quality comparable to adaptive statistical iterative reconstruction (ASIR), as fast as filtered backprojection (FBP). Fully convolutional networks (FCNs) are adopted to denoise the low-dose CT images reconstructed with FBP. In contrast to patch-based convolutional neural networks (CNNs), we train the FCN-based denoising network with full-size images, which is computationally efficient due to the reuse of feature maps from the lower layers. To guarantee that the resulting high-quality images are consistent with the input images, a CNN-based classifier is added to the denoising network during the training phase. The classifier incorporates the CT noise model and evaluates the consistency between the images reconstructed with FBP and those produced by the denoising network. This supplementary structure makes the entire network a class of generative adversarial networks (GANs). For training and testing the network, we use a dataset of 18 patients who have undergone abdominal low-dose CT with both FBP and ASIR, split into a training set of 12 patients and a validation set of the remaining 6 patients. After being trained with FBP and ASIR image pairs, the GAN successfully recovers high-quality images from the noisy CT images reconstructed with FBP. Using a moderate GPU, the network recovers each image within 0.1 second. It is also remarkable that the GAN successfully preserves image details, whereas ASIR is known for its occasional failure to recover small low-contrast features.
Motion-compensated reconstruction for limited-angle multiphase cardiac CT
Mingwu Jin, Cong Zhao, Xun Jia, et al.
ECG-gated multiphase cardiac CT (MP-CCT) is free of motion artifacts and can provide additional information on cardiac function for the diagnosis of coronary artery disease. Because MP-CCT is used mainly as a screening tool, its excessive radiation dose is a concern. In this work, we propose a motion-compensated reconstruction method for MP-CCT with limited-angle acquisition for each phase in order to substantially reduce the radiation dose. MP-CCT of the XCAT phantom with eight cardiac phases was simulated with 90° coverage for each phase. The motion was estimated from images reconstructed by combining projections from two phases (for a full 180° coverage). To reconstruct the image at phase k, the motion fields were used to warp the image at phase k to phase j (j = 1~8 and j ≠ k) so that all projection data could be used for a total variation (TV) constrained reconstruction. The results show that TV reconstruction for each cardiac phase independently (TV-Single Phase) suffered severe limited-angle artifacts. TV reconstruction with two-phase projections (TV-Two Phases) improved the image quality and was suitable for good motion estimation. Motion-compensated TV reconstruction (TV-MC) achieved almost artifact-free images, although some motion errors could be seen. The root mean square error values are 254 HU, 64 HU, and 47 HU for TV-Single Phase, TV-Two Phases, and TV-MC, respectively.
Low-dose CT reconstruction with MRF prior predicted from patch samples of normal-dose CT database
Markov random field (MRF) model-based penalties are widely used in statistical iterative reconstruction (SIR) for low-dose CT (LDCT) to suppress noise while preserving edges. In this strategy, normal-dose CT (NDCT) scans are often used as a priori information to further improve LDCT quality. However, repeated CT scans are needed, and registration or segmentation is usually applied first when misalignment between the low-dose and normal-dose scans exists. This study proposes a new MRF prior model for SIR based on an NDCT database without registration. In the proposed model, MRF weights are predicted using optimal similar patch samples from the NDCT database. The patch samples are selected by evaluating the Euclidean-distance similarity between patches from the NDCT database and the target patch of the LDCT image. The proposed prior term is incorporated into the SIR cost function, which is minimized for LDCT reconstruction. The proposed method is tested on artificial LDCT data generated from high-dose patient data. Preliminary results demonstrate its potential for edge and structural detail preservation.
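The database-lookup step described above — finding the normal-dose patch closest in Euclidean distance to a target low-dose patch — can be sketched in a few lines (the subsequent MRF-weight prediction is not shown; names are ours):

```python
import numpy as np

def best_matching_patch(target, patches):
    """Return the database patch nearest the target, plus its distance.

    target: (h, w) low-dose patch; patches: (n, h, w) normal-dose
    database patches. Distance is the Euclidean norm of the flattened
    difference, as described in the abstract.
    """
    flat = patches.reshape(len(patches), -1)
    d = np.linalg.norm(flat - target.ravel(), axis=1)
    i = np.argmin(d)
    return patches[i], d[i]
```

For a realistic database size, a brute-force scan like this would typically be replaced by an approximate nearest-neighbor index, but the matching criterion is the same.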
A deep learning-enabled iterative reconstruction of ultra-low-dose CT: use of synthetic sinogram-based noise simulation technique
Effective elimination of the unique CT noise pattern while preserving adequate image quality is crucial for reducing radiation dose to ultra-low-dose levels in CT imaging practice. In this study, we present a novel Deep Learning-enabled Iterative Reconstruction (Deep IR) approach for CT denoising that incorporates a synthetic sinogram-based noise simulation technique for training a convolutional neural network (CNN). Regular-dose CT images from 25 patients at Seoul National University Hospital were used. The CT scans were performed at 140 kVp, 100 mAs, and reconstructed with the standard FBP technique using the B60f kernel. Among them, 20 patients were randomly selected as a training set and the remaining 5 patients were used as a test set. We applied a re-projection technique to create a synthetic sinogram from the DICOM CT image, and then generated a simulated noise sinogram to match the noise level of 10 mAs according to Poisson statistics and the system noise model of the given scanner (Somatom Sensation 16, Siemens). We added the simulated noise sinogram to the re-projected synthetic sinogram to generate a simulated sinogram of an ultra-low-dose scan. We also created the simulated ultra-low-dose CT image by applying FBP reconstruction to this simulated sinogram with the B60f kernel. A CNN model was created in the TensorFlow framework with 10 consecutive convolution and activation layers. The CNN was trained to learn the noise in the sinogram domain: the simulated noisy sinogram of the ultra-low-dose scan was fed into its input nodes, with the output node fed by the simulated noise sinogram. At the test phase, the noise sinogram from the CNN output was reconstructed using the B60f kernel to create a noise CT image, which in turn was subtracted from the simulated ultra-low-dose CT image to produce a Deep IR CT image.
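The dose-reduction simulation — injecting Poisson noise into a re-projected sinogram so it matches a lower-mAs scan — can be sketched as follows. This is a generic illustration under a Beer-Lambert count model; the scanner-specific system noise model mentioned above is omitted, and all names are ours:

```python
import numpy as np

def simulate_low_dose_sinogram(sinogram, i0_full, mas_full=100.0,
                               mas_low=10.0, rng=None):
    """Emulate a lower-mAs acquisition for a noiseless sinogram.

    sinogram: line integrals (attenuation); i0_full: unattenuated
    photon count per detector element at the full dose. Photon flux
    is assumed proportional to tube current-time product (mAs).
    """
    if rng is None:
        rng = np.random.default_rng()
    i0_low = i0_full * (mas_low / mas_full)   # flux scales with mAs
    expected = i0_low * np.exp(-sinogram)     # Beer-Lambert counts
    counts = rng.poisson(expected)
    counts = np.maximum(counts, 1)            # avoid log(0)
    return -np.log(counts / i0_low)           # back to line integrals
```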
The performance was evaluated quantitatively in terms of the structural similarity (SSIM) index, peak signal-to-noise ratio (PSNR), and measured noise level, and qualitatively by comparing the noise pattern and image quality of the CT images. Compared to the low-dose image, the SSIM and PSNR of the denoised image improved from 0.75 to 0.80 and from 28.61 dB to 32.16 dB, respectively. The noise level of the denoised image was reduced to an average of 56% of that of the low-dose image. The noise pattern in the reconstructed noise CT was indistinguishable from that of real CT images, and the image quality of the Deep IR CT image was overall much higher than that of the simulated ultra-low-dose CT image.
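For reference, the PSNR figure of merit quoted above follows the standard definition over the image dynamic range; a minimal sketch (not the authors' evaluation script):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range defaults to the reference image's max - min.
    """
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```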
Cone-beam x-ray luminescence computed tomography reconstruction from single-view based on total variance
As an emerging hybrid imaging modality, cone-beam X-ray luminescence computed tomography (CB-XLCT) has been proposed based on the development of X-ray-excitable nanoparticles. Fast three-dimensional (3-D) CB-XLCT imaging has attracted significant attention for the application of XLCT to fast dynamic imaging studies. Owing to its short data collection time, single-view CB-XLCT imaging can rapidly resolve the 3-D distribution of X-ray-excitable nanoparticles. However, because only one angular projection is used in the reconstruction, the single-view CB-XLCT inverse problem is inherently ill-conditioned, which makes image reconstruction highly susceptible to noise and numerical errors. To solve this ill-posed inverse problem, a new reconstruction approach based on total variance is proposed in this study, using the sparseness of the nanoparticle distribution as a prior. To evaluate the performance of the proposed approach, a phantom experiment was performed on a CB-XLCT imaging system. The experiments indicate that single-view XLCT reconstruction can provide satisfactory results with the proposed approach. In conclusion, with the proposed total-variance-based reconstruction approach, we achieve fast, high-quality XLCT reconstruction using only one angular projection, which should be helpful for fast dynamic imaging studies. In the future, we will focus on applying the proposed TV-based reconstruction method and CB-XLCT imaging system to image fast biological distributions of X-ray-excitable nanophosphors in vivo.
CPCT-LRTDTV: cerebral perfusion CT image restoration via a low rank tensor decomposition with total variation regularization
Jiangjun Peng, Dong Zeng, Jianhua Ma, et al.
Ischemic stroke is a leading cause of serious, long-term disability worldwide. Cerebral perfusion computed tomography (CPCT) is an important imaging modality for diagnosis of ischemic stroke events, providing cerebral hemodynamic information. However, because of the dynamic sequential scans in CPCT, the associated radiation dose unavoidably increases compared with conventional CT. In this work, we present a robust CPCT image restoration algorithm with a spatial total variation (TV) regularization and a low-rank tensor decomposition (LRTD), termed “CPCT-LRTDTV”, to estimate high-quality CPCT images and the corresponding hemodynamic maps in the low-dose case. Specifically, in the LRTDTV regularization, the spatial TV describes the local smoothness of the CPCT images, and the LRTD fully characterizes the spatial and temporal correlations present in the CPCT images. An alternating optimization algorithm is then adopted to minimize the associated objective function. To evaluate the presented CPCT-LRTDTV algorithm, both qualitative and quantitative experiments were conducted on a digital brain perfusion phantom and clinical patient data. Experimental results demonstrate that the presented CPCT-LRTDTV algorithm is superior to existing algorithms, with better reduction of noise-induced artifacts, resolution preservation, and accurate hemodynamic map estimation.
Model-based image reconstruction with a hybrid regularizer
Jingyan Xu, Frédéric Noo
Model-based image reconstruction often includes regularizers to encode a priori image information and stabilize the ill-posed inverse problem. Popular edge-preserving regularizers penalize the first-order differences of image intensity values. In this work, we propose a hybrid regularizer that additionally penalizes the gradient of an auxiliary variable embedded in the half-quadratic reformulation of some popular edge-preserving functions. As the auxiliary variable contains the gradient information, the hybrid regularizer penalizes both the first-order and second-order image intensity differences, hence encouraging both piecewise-constant and piecewise-linear image intensity values. Our experiments, combining physical data acquisition and computer simulations, demonstrate the effectiveness of the hybrid regularizer in reducing the stair-casing artifact of the TV penalty and producing smooth intensity variations.
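One plausible reading of this construction, in our notation (the paper's exact formulation may differ): in the additive half-quadratic reformulation, the auxiliary variable $v$ tracks the image gradient $Dx$, so an extra penalty on the differences of $v$ acts as a second-order penalty on $x$:

```latex
% Additive half-quadratic form of an edge-preserving penalty, with an
% added smoothness term on the auxiliary variable v (sketch, our notation):
R(x) \;=\; \min_{v}\; \sum_i \left[ \frac{\big((Dx)_i - v_i\big)^2}{2\theta}
          + \psi(v_i) \right] \;+\; \mu \,\lVert D v \rVert_2^2 .
```

Since $v \approx Dx$ at the minimizer, the term $\mu\lVert Dv\rVert_2^2$ behaves like a penalty on second-order differences of $x$, which is consistent with the piecewise-linear behavior described above.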
Reconstruction of micro CT-like images from clinical CT images using machine learning: a preliminary study
Kyohei Takeda, Yutaro Iwamoto, Keisuke Uemura, et al.
High-resolution medical images are crucial for medical diagnosis and for planning and assisting surgery. Micro computed tomography (micro CT) can generate high-resolution 3D images and analyze internal micro-structures. However, micro CT scanners can only scan small objects and cannot be used for in-vivo clinical imaging and diagnosis. In this paper, we propose a super-resolution method to reconstruct micro CT-like images from clinical CT images based on learning a mapping function, or relationship, between micro CT and clinical CT. The proposed method consists of the following three steps: (1) Pre-processing: collection of pairs of clinical CT and micro CT images for training, with registration and normalization of each pair. (2) Training: learning a non-linear mapping function between micro CT and clinical CT using the training pairs. (3) Processing (testing): enhancing a new CT image, not included in the training data set, using the learned mapping function.
Information theory optimization of acquisition parameters for improved synthetic MRI reconstruction
Drew Mitchell, Ken-Pin Hwang, Tao Zhang, et al.
Synthetic magnetic resonance imaging (MRI) is a method for obtaining parametric maps of tissue properties from one scan and using these to reconstruct multiple contrast-weighted images. This reduces the scan time necessary to produce multiple series of different contrast weightings and potentially provides additional diagnostic utility. For synthetic MRI, current acquisition parameter selection and subsampling approaches (such as variable-density Poisson disc sampling) are heuristic in nature. We develop a mutual information-based mathematical framework to quantify the information content of a parameter space composed of k-space and several pulse sequence acquisition parameters of interest for model-based image reconstruction. We apply this framework to the signal model for a multi-contrast inversion- and T2-prepared gradient echo sequence. This pulse sequence is modeled for in silico data and used for the acquisition of phantom data. Mutual information between parametric map uncertainty and measured data is determined for variable acquisition parameters to characterize the performance of each acquisition. Mutual information is calculated by Gauss-Hermite quadrature and a global search over acquisition parameter space. We demonstrate the possibility of mutual information-guided subsampling schemes on phantom image data. Fully-sampled images of a silicone gel phantom and a water phantom are acquired on a 3T imager. Subsampling methods are applied to this data before it is reconstructed using the Berkeley Advanced Reconstruction Toolbox (BART). This framework allows for the strategic selection of synthetic MR acquisition parameters and subsampling schemes for specific applications and also provides a quantitative understanding of parameter space information content in an acquisition for multi-parameter mapping.
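The Gauss-Hermite quadrature named above evaluates Gaussian expectations (the building block of the mutual-information integrals) as a weighted sum of function values at precomputed nodes. A minimal sketch of that quadrature step, using the standard change of variables (names are ours; the full MI calculation is not shown):

```python
import numpy as np

def gauss_hermite_expectation(f, mu, sigma, order=32):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature.

    Uses the physicists' convention: integral of exp(-n^2) f(n) dn
    ~ sum(w_i f(n_i)), with the substitution x = mu + sqrt(2)*sigma*n.
    Exact for polynomials f up to degree 2*order - 1.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    x = mu + np.sqrt(2.0) * sigma * nodes
    return np.sum(weights * f(x)) / np.sqrt(np.pi)
```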
A novel reconstruction algorithm for arm-artifact reduction in computed tomography
Hiroki Kawashima, Katsuhiro Ichikawa, Tadanori Takata, et al.
Arm artifacts, a type of streak artifact frequently observed in computed tomography (CT) images of polytrauma patients scanned in the arms-down position, are known to degrade image quality. Existing streak artifact reduction algorithms are not effective against arm artifacts, as they were not designed for this purpose. The latest iterative reconstruction techniques (IRs), which are effective for noise and streak artifact reduction, have not been evaluated for arm-artifact reduction. In this study, we developed a novel reconstruction algorithm for arm-artifact reduction using arm-induced noise filtering of the projection data. A phantom resembling a standard adult abdomen with two arms was scanned using a 16-row CT scanner, and the projection data were exported. The proposed algorithm consists of an arm recognition step after the conventional reconstruction, followed by arm-induced noise filtering (frequency splitting and attenuation-dependent filtering) of the projection data. The artifact reduction capability and the image blurring incurred as a side effect of the filtering were compared with those of three of the latest IRs (IR1, IR2, and IR3). The proposed algorithm and IR1 significantly reduced the artifacts, by 89.4% and 83.5%, respectively. The other two IRs were not effective for arm-artifact reduction. In contrast to IR1, which yielded apparent image blurring combined with a different noise texture, the proposed algorithm largely suppressed image blurring. The proposed algorithm, designed specifically for arm-artifact reduction, was effective and is expected to improve the image quality of abdominal CT examinations performed in the arms-down position.
A feasibility study of extracting tissue textures from a previous normal-dose CT database as prior for Bayesian reconstruction of current ultra-low-dose CT images
Yongfeng Gao, Zhengrong Liang, Yuxiang Xing, et al.
Tremendous research efforts have been devoted to lowering the X-ray radiation exposure to the patient in order to expand the utility of computed tomography (CT), particularly in pediatric imaging and population-based screening. When the exposure dose goes down, both the X-ray quanta fluctuation and the system electronic background noise become significant factors affecting image quality. Conventional edge-preserving noise smoothing would sacrifice tissue textures and compromise clinical tasks. To address these challenges, this work models the noise by pre-log shifted Poisson statistics and extracts tissue textures from previous normal-dose CT scans as prior knowledge for texture-preserving Bayesian reconstruction of current ultralow-dose CT images. The pre-log shifted Poisson model accurately accounts for both the X-ray quanta fluctuation and the system electronic noise, while the prior knowledge of tissue textures removes the limitation of conventional edge-preserving noise smoothing. The Bayesian reconstruction was tested by experimental studies. One patient chest scan was selected from a database of 133 patients’ scans at the 100 mAs/120 kVp normal-dose level. From the selected patient scan, ultralow-dose data were simulated at the 5 mAs/120 kVp level. The other 132 normal-dose scans were grouped according to how close their lung tissue texture patterns are to that of the selected patient scan. The tissue textures of each group were used to reconstruct the ultralow-dose scan with the Bayesian algorithm. The group closest to the selected patient produced results almost identical to the reconstruction obtained using the tissue textures of the selected patient’s own normal-dose scan, indicating the feasibility of extracting tissue textures from a previous normal-dose database to reconstruct any current ultralow-dose CT image.
Since the Bayesian reconstruction can be time consuming, this work further investigates a strategy of efficiently storing the projection matrix rather than computing the line integrals on the fly. This strategy accelerated the computation by more than 18 times.
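For readers unfamiliar with the shifted Poisson model, its negative log-likelihood can be written down in a few lines. The snippet below is our own minimal numerical illustration (the function, variable names, and the noiseless scalar example are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def shifted_poisson_nll(y, ybar, sigma2):
    """Negative log-likelihood of the pre-log shifted Poisson model.

    The pre-log measurement y (detected counts plus Gaussian electronic
    noise of variance sigma2) is modeled as
        y + sigma2 ~ Poisson(ybar + sigma2),
    where ybar = b * exp(-l) is the expected count for blank-scan counts b
    and line integral l.  Terms constant in ybar are dropped.
    """
    t = ybar + sigma2
    return np.sum(t - (y + sigma2) * np.log(t))

# The NLL is minimized when the model matches the measurement (ybar = y):
b, sigma2 = 1.0e4, 10.0              # blank-scan counts, electronic noise variance
l_true = 2.0                         # true line integral
y = b * np.exp(-l_true)              # noiseless measurement, for illustration only
nll_true = shifted_poisson_nll(y, b * np.exp(-l_true), sigma2)
nll_off = shifted_poisson_nll(y, b * np.exp(-1.8), sigma2)
```

In a full reconstruction, this data term would be summed over all detector bins and combined with the texture prior before optimization.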
Low-dose computed tomography image reconstruction via structure tensor total variation regularization
The X-ray computed tomography (CT) scanner has been extensively used in medical diagnosis. How to reduce radiation dose while maintaining high image reconstruction quality has become a major concern in the CT field. In this paper, we propose a statistical iterative reconstruction framework based on structure tensor total variation regularization for low-dose CT imaging. An accelerated proximal forward-backward splitting (APFBS) algorithm is developed to optimize the associated cost function. Experiments on two physical phantoms demonstrate that our proposed algorithm outperforms existing algorithms such as statistical iterative reconstruction with a total variation regularizer and filtered back projection (FBP).
Motion-compensated reconstruction based on filtered backprojection for helical head CT
Seokhwan Jang, Seungeon Kim, Yongjin Chang, et al.
In head CT imaging, it is assumed that the patient’s head does not move during the CT acquisition. In clinical practice, however, the head sometimes moves and thereby causes considerable motion artifacts in the reconstructed image. To solve this motion artifact problem, motion estimation (ME) and motion-compensated (MC) reconstruction are needed. Reliable MC reconstruction is especially critical, because it is usually used for ME in addition to motion compensation. In this work, we propose a novel MC reconstruction algorithm for helical head CT, under the assumption that the head motion is rigid. CT acquisition of a rigidly moving object in a helical scan geometry can be considered as the acquisition of a static object in a scan geometry virtually transformed according to the motion. Based on this consideration, we propose an MC reconstruction algorithm assuming that the head motion has already been estimated. The algorithm consists of three consecutive steps, namely, MC rebinning, tangential filtering, and weighted backprojection. In the rebinning step, the helical geometry virtually transformed according to the motion is carefully taken into account, and a new weighting function is introduced in the backprojection step to minimize unwanted artifacts. To evaluate the proposed algorithm, we perform simulations using a numerical phantom with pre-defined motion in a helical scan geometry. The proposed MC algorithm effectively restores a reconstructed image corrupted by motion, achieving image quality comparable to that of the phantom with no motion.
High-resolution CT image retrieval using sparse convolutional neural network
Yang Lei, Dong Xu, Zhengyang Zhou, et al.
We propose a high-resolution CT image retrieval method based on a sparse convolutional neural network. The proposed framework is used to train the end-to-end mapping from low-resolution to high-resolution images. The patch-wise features of low-resolution CT are extracted and sparsely represented by a convolutional layer and a learned iterative shrinkage threshold framework, respectively. A rectified linear unit is utilized to non-linearly map the low-resolution sparse coefficients to the high-resolution ones. An adaptive high-resolution dictionary is applied to construct the informative signature that is highly connected to a high-resolution patch. Finally, we feed the signature to a convolutional layer to reconstruct the predicted high-resolution patches and average these overlapping patches to generate the high-resolution CT. The loss function between reconstructed images and the corresponding ground truth high-resolution images is applied to optimize the parameters of the end-to-end neural network. The well-trained mapping is used to generate high-resolution CT from a new low-resolution input. This technique was tested with brain and lung CT images, and the image quality was assessed using the corresponding ground truth CT images. Peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and mean absolute error (MAE) indexes were used to quantify the differences between the generated high-resolution and corresponding ground truth CT images. The experimental results showed that the proposed method can enhance the resolution of low-resolution images. The proposed method has great potential for improving radiation dose calculation and delivery accuracy and decreasing the CT radiation exposure of patients.
LdCT-Net: low-dose CT image reconstruction strategy driven by a deep dual network
Ji He, Yongbo Wang, Yan Yang, et al.
High radiation dose in CT imaging is a major concern, as it could result in an increased lifetime risk of cancer. Therefore, reducing the radiation dose while maintaining clinically acceptable CT image quality is desirable in CT applications. One of the most successful strategies is to apply statistical iterative reconstruction (SIR) to obtain promising CT images at low dose. Although SIR algorithms are effective, they usually have three disadvantages: 1) the need to design a desired-image prior; 2) the selection of optimal parameters; and 3) a high computational burden. To address these three issues, inspired by deep learning networks for inverse problems, we present a low-dose CT image reconstruction strategy driven by a deep dual network (LdCT-Net) that yields high-quality CT images by incorporating projection information and image information simultaneously. Specifically, LdCT-Net effectively reconstructs CT images by adequately taking into account the information learned in the dual domains, i.e., the projection domain and the image domain, simultaneously. The experimental results on patient data demonstrated that LdCT-Net can achieve promising gains over existing algorithms in terms of noise-induced artifact suppression and edge detail preservation.
Poster Session: Algorithms
Motion compensation for non-gated helical CT: application to lung imaging
M. Grass, R. Bippus, A. Thran, et al.
Computed tomography (CT) imaging of the thorax is a common application of CT in radiology. Most of these scans are performed with a helical scan protocol. A significant number of images suffer from motion artefacts due to the inability of patients to hold their breath or due to hiccups or coughing. Some images become non-diagnostic while others are simply degraded in quality. In order to correct for these artefacts, a motion-compensated reconstruction for non-periodic motion is required.

For helical CT scans with a pitch smaller than or equal to one, the redundancy in the helical projection data can be used to generate images at the identical spatial position for multiple time points. As the scanner moves across the thorax during the scan, these images do not share a fixed time point, but have a well-defined temporal distance between them. Using image-based registration, a motion vector field can be estimated from these images. The motion artefacts are corrected in a subsequent motion-compensated reconstruction. The method is tested on mathematical phantom data (reconstruction) and clinical lung scans (motion estimation and reconstruction).
Use of a CMOS-based micro-CT system to validate a ring artifact correction algorithm on low-dose image data
The imaging of objects using high-resolution detectors coupled to CT systems can be challenging due to the presence of ring artifacts in the reconstructed data. Not only are the artifacts qualitatively distracting, they reduce the SNR of the reconstructed data and may reduce the clinical utility of the image data. To address these challenges, we introduce a multistep algorithm that greatly reduces the impact of ring artifacts on the reconstructed data through image processing in sinogram space. First, for a single row of detectors corresponding to one slice, we compute the mean of every detector element in the row across all projection view angles and place the reciprocal values in a vector whose length equals the number of detector elements in a row. This vector is then multiplied with each detector element value for each projection view angle, yielding a normalized, or corrected, sinogram. This sinogram is subtracted from the original uncorrected sinogram of the slice to obtain a difference map, which is then blurred with a median filter along the row direction. The blurred difference map is added back to the corrected sinogram to obtain the final sinogram, which can be backprojected to obtain an axial slice of the scanned object with a greatly reduced presence of ring artifacts. This process is repeated for each detector row corresponding to each slice. The performance of this algorithm was assessed using images of a mouse femur acquired with a micro-CT system coupled to a high-resolution CMOS detector. We found that the use of this algorithm led to an increase in SNR and a more uniform line profile, as a result of the reduction in the presence of the ring artifacts.
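The multistep correction translates almost directly into array operations. The sketch below is our own minimal rendering of the described steps, assuming a flat-field-normalized (views x detectors) sinogram and a hypothetical 9-element median window; it is illustrative, not the authors' code:

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_rings(sino, window=9):
    """Ring-artifact reduction for one sinogram slice (views x detectors).

    1. Mean of each detector element over all view angles; reciprocals
       kept as a per-detector gain vector.
    2. Multiply each view by the gain vector -> normalized sinogram.
    3. Difference map = original - normalized, median-blurred along the
       detector-row direction.
    4. Final sinogram = normalized + blurred difference map.
    """
    gain = 1.0 / sino.mean(axis=0)                   # step 1
    normalized = sino * gain                         # step 2
    diff = sino - normalized                         # step 3a
    blurred = median_filter(diff, size=(1, window))  # step 3b (row direction)
    return normalized + blurred                      # step 4
```

The same function would be applied independently to each detector row (slice) before backprojection.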
Windmill artifact reduction based on the combined reconstructed image
Yongyi Shi, Yanbo Zhang, Xiaogang Chen, et al.
Thin-slice reconstructions from helical multi-detector-row CT (MDCT) scanning may suffer from windmill artifacts because of under-sampling of the data in the z- (detector-row) direction, which is essentially a Nyquist sampling issue. There are two strategies for windmill artifact reduction: one focuses on CT system hardware design, such as the flying focal spot (FFS); the other relies on correction algorithms. Recently, numerous algorithms have been proposed to address this issue. One method aims to recover high-resolution images from thick-slice low-resolution images, which are free of windmill artifacts. Another is an image-domain post-processing method that suppresses windmill artifacts by using prior information such as total variation (TV). However, both methods blur sharp edges and are unable to recover fine details. In this work, a super-resolution (SR) reconstruction method is developed by combining low-rank and TV regularization (LRTV) to improve the z-axis resolution of MDCT in a post-processing step. The SR reconstruction is formulated as an optimization problem, which is solved effectively via the alternating direction method of multipliers (ADMM). Thereafter, combining the high-resolution image with the original, windmill-affected reconstruction yields a more accurate image. We evaluated our algorithm on an Anke 16-slice helical CT scanner. The results demonstrate that the proposed method achieves better windmill artifact removal than the competing methods while preserving fine details.
Scatter correction for multi-slice CT system
Yang Wang, Karl Stierstorfer, Martin Petersilka, et al.
Scatter radiation plays an increasingly important role in multi-slice CT systems as the number of slices grows, e.g., from 16 to 64 and beyond. Scatter radiation degrades image quality, e.g., by creating inhomogeneity and decreasing image contrast. Although anti-scatter collimation is widely used in current multi-slice CT to reduce the received scatter radiation, its efficiency decreases as the detector gets wider. Beam hardening correction can to some extent guarantee image homogeneity, but since beam hardening and scatter radiation have different physical origins, a dedicated scatter correction algorithm is desired. In this paper, we propose a scatter correction algorithm that operates during data pre-processing, before image reconstruction. After implementing this algorithm in the recently released Siemens Somatom Go. CT system, we obtained good image quality, especially in certain clinical cases.
Optimal cardiac phase in prospectively gated axial cardiac CT scans
Alexander Zamyatin, Basak Ulker Karbeyaz, Charles Shaughnessy, et al.
To improve temporal resolution in prospectively gated axial cardiac CT scans, short-scan (half-scan, partial-scan) reconstruction is used. While some vendors offer scanners with 16 cm collimation, capable of collecting data for the entire heart in a single rotation, the majority of routine cardiac scans are still done with 4 cm collimation. In the case of prospective axial cardiac scanning, four or more axial acquisitions are performed at staggered patient table positions to cover the entire heart. At each acquisition, raw data are collected at the prescribed phase of the cardiac R-R interval, with the x-ray source angles covering one rotation or less. If this angular range is greater than what is required for a short-scan reconstruction, there is some room for optimizing the reconstruction phase. Often, such optimization is done by manually reviewing images at slightly different reconstruction phase angles and selecting the images with the least pronounced motion artifacts. Considering that there are at least four acquisitions for each prospective cardiac scan, this can become a tedious, time-consuming process. This paper proposes an automated process to select, within each full-rotation acquisition, the best short-scan view range that minimizes motion artifacts at each table position. The proposed method was tested with a motion phantom connected to an ECG simulator, as well as with clinical cardiac data. Results show that the proposed method reliably reduces motion artifacts in reconstructed images.
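One plausible way to automate the window selection, sketched under our own assumptions (a precomputed per-view motion metric; names are ours, not the authors'), is a sliding-window minimization over candidate short-scan start views:

```python
import numpy as np

def best_short_scan_start(motion_score, short_scan_views):
    """Pick the start view of the short-scan window with the least total motion.

    motion_score: per-view motion metric over one full-rotation acquisition
    short_scan_views: number of contiguous views needed for a short scan
    Returns the start index whose window has the minimum summed motion.
    """
    kernel = np.ones(short_scan_views)
    # 'valid' convolution with a ones kernel gives the running window sums
    sums = np.convolve(motion_score, kernel, mode="valid")
    return int(np.argmin(sums))
```

Repeating this per table position replaces the manual review of candidate reconstruction phases.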
A denoising auto-encoder based on projection domain for low dose CT
There are growing concerns about the effects of radiation, which can be decreased by reducing the X-ray tube current. However, doing so degrades the image due to quantum noise. To alleviate this problem, multiple methods have been explored, both during reconstruction and in post-processing. Recently, the denoising auto-encoder (DAE), which can generate clean images from corrupted input, has drawn much attention. Inspired by the idea of the DAE, the noisy projections acquired at low dose can be regarded as corrupted images. In this paper, we propose a denoising method based on the projection domain. First, the DAE is trained from simulated noisy data paired with the original data. Then the DAE is used to correct the noisy projections, and the denoised image is obtained from statistical iterative reconstruction. With the DAE applied in the projection domain, the reconstructions show clearer details in soft tissue and a higher SSIM (structural similarity index) than other denoising methods applied in the image domain.
Phantom-based field maps for gradient nonlinearity correction in diffusion imaging
Baxter P. Rogers, Justin Blaber, Allen T. Newton, et al.
Gradient coils in magnetic resonance imaging do not produce perfectly linear gradient fields. For diffusion imaging, the field nonlinearities cause the amplitude and direction of the applied diffusion gradients to vary over the field of view. This leads to site- and scan-specific systematic errors in estimated diffusion parameters such as diffusivity and anisotropy, reducing reliability, especially in studies that take place over multiple sites. These errors can be substantially reduced if the actual scanner-specific gradient coil magnetic fields are known. The nonlinearity of the coil fields is measured by scanner manufacturers and used internally for geometric corrections, but obtaining and using the information for a specific scanner may be impractical for many sites that operate without special-purpose local engineering and research support. We have implemented an empirical field-mapping procedure, simple to perform and apply, that uses a large phantom combined with a solid harmonic approximation to the coil fields. Here we describe the accuracy and precision of the approach in reproducing the manufacturer's gold standard field maps and in reducing spatially varying errors in quantitative diffusion imaging for a specific scanner. Before correction, the median b-value error ranged from 33 to 41 relative to the manufacturer specification at 100 mm from isocenter; correction reduced this to 0 to 4. On-axis spatial variation in the estimated mean diffusivity of an isotropic phantom was 2.2%-4.1% within 60 mm of isocenter before correction, and 0.5%-1.6% after. The expected fractional anisotropy in the phantom was 0; the highest estimated fractional anisotropy within 60 mm of isocenter was reduced from 0.024 to 0.012 in the phase encoding direction (48% reduction) and from 0.020 to 0.006 in the frequency encoding direction (72% reduction).
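Low-degree real solid harmonics are simple polynomials in x, y, z, so the field approximation reduces to a linear least-squares fit. A minimal sketch, assuming a degree-2 basis and our own naming (not the authors' implementation):

```python
import numpy as np

def fit_solid_harmonics(points, field):
    """Least-squares fit of field samples to a degree-2 solid-harmonic basis.

    points: (N, 3) phantom sample coordinates; field: (N,) measured field
    values.  Real solid harmonics up to degree 2 are polynomials in x, y, z;
    the fitted coefficients give a smooth analytic approximation of the coil
    field usable for gradient-nonlinearity correction.
    """
    x, y, z = points.T
    basis = np.column_stack([
        np.ones_like(x),                       # degree 0
        x, y, z,                               # degree 1
        x * y, y * z, x * z,                   # degree 2
        x**2 - y**2, 2 * z**2 - x**2 - y**2,
    ])
    coef, *_ = np.linalg.lstsq(basis, field, rcond=None)
    return coef, basis @ coef                  # coefficients, fitted field at samples
```

Evaluating the fitted basis over the imaging volume then yields the per-voxel gradient deviations used to correct the applied b-values and directions.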
Deep residual learning enabled metal artifact reduction in CT
Many clinical scenarios involve the presence of metal objects in the CT scan field of view. Metal objects tend to cause severe artifacts in CT images, such as shading, streaks, and a loss of tissue visibility adjacent to metal components, which is often the region of interest in imaging. Many existing methods depend on synthesized projections and classification of in-vivo materials, whose results can be subject to error and miss details, while other methods require additional information, such as an accurate model of the metal component, prior to reconstruction. Deep learning approaches have advanced rapidly in recent years and achieved tremendous success in many fields. In this work, we develop a deep residual learning framework that trains a deep convolutional neural network to detect and correct metal artifacts from image content. Training sets are generated from simulations that incorporate modeling of the physical processes related to metal artifacts. Testing scenarios included the presence of a surgical screw within the transaxial plane and two rod implants in the craniocaudal direction. The proposed network, trained with polychromatic simulation data, demonstrates the capability to largely reduce or, in some cases, almost entirely remove metal artifacts caused by beam hardening effects. The proposed method also showed largely reduced metal artifacts on data collected from a multi-slice CT system. These findings suggest that deep-residual-learning-enabled methods are a promising new type of approach for reducing metal artifacts and support further development of the method in more clinically realistic scenarios.
A deeper convolutional neural network for denoising low-dose CT images
In recent years, CNNs have been gaining attention as powerful denoising tools since the pioneering work [7], which developed a 3-layer convolutional neural network (CNN). However, the 3-layer CNN may lose details or contrast after denoising due to its shallow depth. In this study, we propose a deeper, 7-layer CNN for denoising low-dose CT images. We introduce dimension shrinkage and expansion steps to control the explosion of the number of parameters, and also apply batch normalization to alleviate the difficulty of optimization. The network was trained and tested with Shepp-Logan phantom images reconstructed by the FBP algorithm from projection data generated in a fan-beam geometry. For the training and test sets, independently generated uniform noise of different levels was added to the projection data. The image quality improvement was evaluated both qualitatively and quantitatively, and the results show that the proposed CNN effectively reduces the noise without resolution loss compared to BM3D and the 3-layer CNN.
Automated algorithms for improved pre-processing of magnetic relaxometry data
W. Stefan, K. Mathieu, S. L. Thrower, et al.
We present a novel method to pre-process magnetic relaxometry (MRX) data. The method estimates the initial magnetic field generated by superparamagnetic iron oxide nanoparticles (SPIONs) from decay curves measured by superconducting quantum interference devices (SQUIDs). The curves are measured using a MagSense MRX instrument (PrecisionMRX, Imagion Biosystems, Albuquerque, NM). We compare the initial field estimates to those from the standard method used by Imagion Biosystems. Compared to the standard method, our new method yields more stable estimates in the presence of noise and allows monitoring of the long-term stability of the MagSense MRX instrument. We demonstrate these findings with phantom scans conducted over a period of about one year.
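The extraction of an initial-field estimate from a decay curve can be illustrated with a generic log-linear fit. This is a hedged stand-in assuming a mono-exponential decay; it is neither the standard Imagion method nor the proposed one, and all names are ours:

```python
import numpy as np

def initial_field(t, b, t0=0.0):
    """Estimate the field at time t0 from a decaying SQUID signal.

    Assumes a mono-exponential decay b(t) = B0 * exp(-t / tau); fitting
    log(b) against t with a straight line yields log(B0) as the intercept
    and -1/tau as the slope, from which the field is extrapolated to t0.
    """
    slope, intercept = np.polyfit(t, np.log(b), 1)
    b0 = np.exp(intercept + slope * t0)
    tau = -1.0 / slope
    return b0, tau
```

A log-linear fit of this kind is only valid where the signal stays well above the noise floor; the paper's contribution concerns exactly the stability of such estimates under noise.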
Algorithm enabled TOF-PET imaging with reduced scan time
Zheng Zhang, Sean Rose, Jinghan Ye, et al.
Time-of-flight (TOF) positron emission tomography (PET) has advanced remarkably in recent years due to advances in scintillators, silicon photomultipliers (SiPMs), and fast electronics. However, current clinical reconstruction algorithms in TOF-PET are still based on ordered-subset expectation-maximization (OSEM) and its variants, which may face challenges in non-conventional imaging applications, such as fast imaging within a short scan time. In this work, we propose an image-TV-constrained optimization problem and tailor a primal-dual algorithm for solving the problem and reconstructing images. We collected list-mode data of a Jaszczak phantom with a prototype digital TOF-PET scanner. We focus on investigating image reconstruction from data collected within a reduced scan time, and thus at lower count levels. Results of the study indicate that our proposed algorithm can 1) yield image reconstruction with suppressed noise, extended axial volume coverage, and improved spatial resolution over that obtained with conventional reconstructions, and 2) yield reconstructions with potential clinical utility from data collected within a shorter scan time.
Poster Session: CT Image Quality and Dose
Development of a fast, voxel-based, and scanner-specific CT simulator for image-quality-based virtual clinical trials
Ehsan Abadi, Brian Harrawood, Anuj Kapadia, et al.
This study aimed to develop a simulation framework to synthesize accurate and scanner-specific computed tomography (CT) images of voxel-based computational phantoms. Two phantoms were used in the simulations, a geometry-based Mercury phantom and a “textured” anthropomorphic XCAT phantom, both with an isotropic voxel size of 0.25 mm. The simulator geometry and physics were based on a clinical scanner. The projection images were calculated by computing each detector’s signal using the Beer-Lambert law. To avoid aliasing artifacts, the focal spot and detectors were subsampled four and nine times, respectively. The simulator was designed to operate both axially and helically, and to account for “Z” and in-plane flying focal spots and various bowtie filters. Quantum and electronic noise were added to the detector signals as a function of the tube current using experimental measurements. The resulting projection images were calibrated to suppress the beam hardening artifact using a 4th-order polynomial water correction. The simulation procedure was accelerated using multi-threading and graphics processing unit (GPU) computing. The projection images were reconstructed using clinical reconstruction software. To evaluate the accuracy of the simulator, the reconstructed images of the computational Mercury phantom were compared against experimental CT scans of its physical counterpart in terms of resolution, noise, and HU values. Results showed that our proposed simulator can generate CT images with image quality attributes close to real clinical data. The new CT simulator, combined with anthropomorphic “textured” phantoms, provides a new way to generate clinically realistic CT data and has the potential to enable virtual clinical studies in advance of, or in lieu of, costly clinical trials.
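The core projection step, averaging Beer-Lambert attenuated subsample rays into a single detector reading, can be sketched as follows (the function name and interface are our assumptions; the 4 x 9 subsampling layout follows the description above, and spectral effects, noise, and bowtie filtration are omitted):

```python
import numpy as np

def detector_signal(i0, line_integrals):
    """One detector reading via the Beer-Lambert law with ray subsampling.

    i0: unattenuated intensity reaching the detector channel
    line_integrals: attenuation line integrals, one per subsample ray
        (e.g. 4 focal-spot x 9 detector subsamples = 36 rays per channel).
    The transmitted intensities of the subsample rays are averaged,
    which suppresses aliasing as in the simulator described above.
    """
    return i0 * np.exp(-np.asarray(line_integrals)).mean()
```

With zero attenuation along every subsample ray the channel simply reads i0; a uniform line integral of ln 2 halves the signal.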
A rapid GPU-based Monte Carlo simulation tool for individualized dose estimations in CT
The rising awareness of the risks associated with CT radiation has pushed forward the case for patient-specific dose estimation, one of the prerequisites for individualized monitoring and management of radiation exposure. The established technique of using Monte Carlo simulations to provide such dose estimates is computationally intensive, limiting its utility for timely assessment of clinically relevant questions. To overcome this impediment, we have developed a rapid Monte Carlo simulation tool based on the MC-GPU framework for individualized dose estimation in CT. This tool utilizes the multi-threaded x-ray transport capability of MC-GPU, scanner-specific geometry, and voxelized patient-specific models to produce realistic estimates of radiation dose. To demonstrate its utility, we used this tool to provide scanner-specific (LightSpeed VCT, GE Healthcare) organ dose estimates in abdominopelvic CT for a virtual population of 58 adult XCAT patient models. To gauge the accuracy of these estimates, the organ dose values from the new tool were compared against those from a previously published tool based on the PENELOPE framework. The comparisons demonstrated the capability of our new simulation tool to produce dose estimates that agree with the published data within 5% for organs within the primary field while simultaneously providing speedups as high as 70x over a CPU-cluster-based execution model. This high accuracy of dose estimates coupled with the demonstrated speedup provides a viable model for rapid and personalized dose estimation.
How reliable are texture measurements?
The purpose of this study was to assess the bias (objectivity) and variability (robustness) of computed tomography (CT) texture features (internal heterogeneities) across a series of image acquisition settings and reconstruction algorithms. We simulated a series of CT images using a computational phantom with anatomically-informed texture. 288 clinically relevant simulation conditions were generated, representing three slice thicknesses (0.625, 1.25, 2.5 mm), four in-plane pixel sizes (0.4, 0.5, 0.7, 0.9 mm), three dose levels (CTDIvol = 1.90, 3.75, 7.50 mGy), and eight reconstruction kernels. Each texture feature was sampled with four unique volumes of interest (VOIs) (244, 1953, 15625, 125000 mm3). Twenty-one statistical texture features were calculated and compared between the ground truth phantom (i.e., pre-imaging) and its corresponding post-imaging simulations. Metrics of comparison included (1) the percent relative difference (PRD) between the post-imaging simulation and the ground truth, and (2) the coefficient of variation (%COV) across simulated instances of texture features. The PRD and %COV ranged from -100% to 4500%, and 0.8% to 49%, respectively. The PRD decreased with increased slice thickness, in-plane pixel size, and dose. The dynamic range of the results indicates that image acquisition and reconstruction conditions (i.e., slice thicknesses, in-plane pixel sizes, dose levels, and reconstruction kernels) can lead to significant bias and variability in texture feature measurements.
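The two comparison metrics are straightforward to compute; a minimal sketch with our own function names (not the authors' code):

```python
import numpy as np

def percent_relative_difference(measured, truth):
    """PRD between a post-imaging feature value and the ground truth (%)."""
    return 100.0 * (measured - truth) / truth

def percent_cov(values):
    """Coefficient of variation (%) across simulated feature instances."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()
```

PRD quantifies bias of a single simulated condition against the pre-imaging phantom; %COV quantifies variability across repeated simulated instances of the same feature.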
Focal spot rotation for improving CT resolution homogeneity
Focal spot characteristics are key determinants of system resolution. Focal spot size drives source blurring, the focal spot aspect ratio via the line focus principle drives resolution loss away from isocenter, and focal spot deflection improves sampling. The purpose of this work is to introduce focal spot rotation as a possible new mechanism to fine-tune resolution tradeoffs. A conventional design orients a rectangular focal spot towards isocenter, with resolution decreasing with distance from isocenter. We propose rotating the focal spot so that it is pointed a small distance from isocenter (for example, to a point 10 cm to the right of isocenter). This improves resolution to the right of isocenter, decreases resolution slightly at isocenter, and decreases resolution significantly to the left of isocenter. In a full scan, each ray passing through a point far from isocenter will be sampled twice, once with a larger and once with a smaller effective focal spot. These data can be appropriately combined during reconstruction to boost the limiting radial resolution of the scan, improving the resolution homogeneity of the scanner. Dynamic rotation of the focal spot, similar to dynamic deflection, can be implemented using electromagnetic steering and has other advantages.
Technical considerations for automated low-pitch spiral 4D CT scanning protocol selection
René Werner, Thilo Sothmann, Frederic Madesta, et al.
Respiration-correlated CT (4D CT) represents the basis of radiotherapy treatment planning for thoracic and abdominal tumor patients. A common approach is low-pitch spiral 4D CT. As in standard spiral 3D CT, CT projection data are continuously acquired while the patient couch moves through the gantry. To ensure sufficient projection data coverage for 4D CT reconstruction, the so-called 4D CT data sufficiency condition (DSC) must be fulfilled: for a fixed pitch factor and gantry rotation time, the patient's breathing rate must be above a certain threshold; otherwise, artifacts impair image quality. For the current Siemens 4D CT scanner generation, three 4D CT protocols can be selected manually, associated with DSC thresholds of 6, 9 and 12 breaths per minute (BPM). Due to, e.g., the limited achievable z-range when scanning with lower-BPM protocols, these options are, however, often not selected in practice. As a result, a high fraction of artifact-affected 4D CT data is reported. Aiming to optimize the respective 4D CT workflows and improve image quality, this study systematically investigates the influence of the parameters to be considered for automated scanning protocol selection and their interrelation (e.g., severity of artifacts due to DSC violation vs. clinically required z-scan range).
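Given the three DSC thresholds, one plausible automated selection rule, our illustrative assumption rather than the study's algorithm, prefers the highest threshold that the patient's breathing rate satisfies (since lower-BPM protocols limit the achievable z-range):

```python
def select_4dct_protocol(breathing_rate_bpm, thresholds=(12, 9, 6)):
    """Pick the 4D CT protocol with the highest DSC threshold still
    satisfied by the patient's breathing rate (breaths per minute).

    Returns the chosen threshold, or None if even the lowest-threshold
    protocol would violate the data sufficiency condition.
    """
    for t in sorted(thresholds, reverse=True):
        if breathing_rate_bpm >= t:
            return t
    return None
```

In practice the choice would additionally weigh the required z-scan range and the expected artifact severity, which is exactly the trade-off the study investigates.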
Variability of stenosis characterization: impact of coronary vessel motion in cardiac CT
Taylor Richards, Paul Segars, Ehsan Samei
Despite much advancement, quantitative optimization of cardiac CT has remained an elusive challenge. The purpose of this study was to quantify the stenosis measurement variability introduced by the relative motion of coronary vessels in cardiac CT. Even with the general motion vectors of normal coronary vasculature known, the relative in-plane motion direction with respect to the source angle during acquisition can be random. The random motion direction results in varying degrees of image degradation and visualized vessel deformation. We simulated CT scans of coronary vessels moving both parallel and orthogonal to the x-ray source direction at the central projection angle. We measured the diameter of the visualized vessel from the reconstructed images using an automated adaptive threshold operator. On average, all measured vessel attributes (diameter, circularity, and contrast) were less variable in dual-source acquisition modes than in any single-source acquisition mode. This difference was most pronounced at the fastest simulated vessel velocity (16 mm/s). The measurement ranges for vessel diameter, circularity, and contrast were all positively correlated with vessel velocity for the single-source half-scan mode. Motion-induced vessel deformation was most extreme when the relative motion was directed parallel to the central projection angle of the scan range. Dual-source acquisition remedied this directional asymmetry by simultaneously acquiring orthogonal projections. The relative direction of vessel motion during cardiac CT remains a significant source of uncertainty in vessel characterization. The methodology enables optimization of CT acquisition and reconstruction for targeted cardiac quantification.
Diagnostic value of sparse sampling computed tomography for radiation dose reduction: initial results
Felix K. Kopp, Rolf Bippus, Andreas P. Sauter, et al.
Computed tomography (CT) is one of the most important imaging modalities in the medical domain. The ongoing demand for reduction of the X-ray radiation dose and advanced reconstruction algorithms are making ultra-low-dose CT acquisitions increasingly common. However, although advanced reconstructions improve image quality, the ratio between the incoming signal and the electronic detector noise decreases in ultra-low-dose scans, degrading image quality and thereby imposing a limit on radiation dose reduction. Future generations of CT scanners may allow sparsely sampled data acquisitions, where the source can be switched on and off at any source position. Sparsely sampled CT acquisitions could reduce photon starvation in ultra-low-dose scans by redistributing the energy of the skipped projections to the remaining ones. In this work, we simulated sparsely sampled CT acquisitions from clinical projection raw data and evaluated the diagnostic value of the reconstructions compared to conventional CT. To this end, we simulated radiation dose reduction with different degrees of sparse sampling and with a tube current simulator. Up to four experienced radiologists rated the diagnostic quality of each dataset. At a dose reduced to 25% of the clinical level, images generated with 4-times sparse sampling, i.e., with a gap of three projections between two sampling positions, were consistently rated as diagnostic, while about 20% of the ratings for conventional CT were non-diagnostic. Our data therefore give an initial indication that, with sparse sampling, a reduction to 25% of the clinical dose is feasible without loss of diagnostic value.
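Sparse sampling with dose redistribution can be sketched on noiseless expected counts as follows (the function name and interface are our assumptions; a realistic simulation, as in the study, would additionally redraw quantum and electronic noise for the scaled counts):

```python
import numpy as np

def sparse_sample(counts, k, dose_fraction):
    """Simulate k-times sparse sampling at a reduced total dose.

    From full-dose expected projection counts (views x channels), keep
    every k-th view and scale its expected counts by k * dose_fraction:
    the energy of the skipped views is redistributed to the kept ones.
    E.g. k = 4 with dose_fraction = 0.25 keeps full per-view counts in a
    quarter of the views, avoiding photon starvation at 25% total dose.
    """
    return counts[::k] * (k * dose_fraction)
```

Compare this with conventional dose reduction, which keeps all views but scales every one by dose_fraction, pushing each view closer to the electronic noise floor.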
The effects of tube current modulation on the noise power spectra of patients with different size: consequences for quality monitoring
Alexandro Fulco, Lesley Cockmartin, Hilde Bosmans, et al.
Aim: Since dose reduction techniques such as tube current modulation may affect noise, and consequently the performance that can be achieved with CT images, it is important to establish quality monitoring. We studied whether it would be possible and relevant to implement an (automated) procedure to retrieve, and possibly alert for, patients with relatively high noise levels in CT in comparison to similar cases. Proper alerting would make clinical quality supervision more efficient. Materials and methods: Two homogeneous phantoms of different diameters were scanned following a routine CT thorax protocol on a Siemens SOMATOM Force scanner, and noise power spectra were calculated for the different phantom sizes. Next, forty-four patients scanned with the same CT thorax protocol and reconstructed with a hard kernel (lung) and a soft kernel (liver) were retrieved from PACS. Noise power spectra (NPS) were calculated for regions in the lung and liver, and evaluated over different frequency ranges. We hypothesized that the high-frequency part correlates better with dose than the low-frequency part, which is determined by anatomical noise. We therefore focused on the correlation of high-frequency noise and dose versus patient size. Water equivalent diameters (WED) were calculated as a metric of patient size. Additionally, all patients were rated subjectively by an experienced thorax radiologist for their overall image quality and the presence of diagnostically acceptable noise. Statistical correlations and outliers were investigated. Results: While the correlations between NPS, dose, and patient size were not significant for the lung, a positive correlation of the NPS measured in the liver with CTDIvol and WED was found (e.g. R2 = 0.31 for NPS (high frequencies) versus WED). The combined visualization of NPS at high frequencies, WED, and CTDIvol showed some interesting outliers; however, they did not receive lower image quality ratings.
Conclusions: This work described how the Siemens SOMATOM Force scanner balances patient size, dose, and image noise for a routine CT thorax protocol. However, since the outliers in both dose and (high-frequency) noise levels still result in adequate to very good image quality scores, it is suggested that (straightforward) dose-outlier-based alerting should be the first task in dose-quality surveys on this particular scanner.
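The NPS estimate underlying this kind of monitoring can be illustrated with a minimal numpy sketch. The ROI size, pixel size, and the white-noise sanity check below are illustrative assumptions, not values from the study:

```python
import numpy as np

def nps_2d(rois, pixel_size_mm=0.5):
    """Estimate the 2D noise power spectrum from a stack of uniform ROIs.

    NPS(u, v) = (dx*dy / (Nx*Ny)) * <|FFT2(ROI - mean(ROI))|^2>,
    ensemble-averaged over the ROI stack.
    """
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    spectra = np.abs(np.fft.fft2(detrended)) ** 2             # FFT over last 2 axes
    return spectra.mean(axis=0) * (pixel_size_mm ** 2) / (nx * ny)

# sanity check with white noise of sigma=10: the NPS integral recovers
# the variance (Parseval), i.e. sum(NPS) * df_x * df_y ~ 100
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, size=(200, 64, 64))
nps = nps_2d(rois, pixel_size_mm=1.0)
```

Splitting such a spectrum into low- and high-frequency bands, as done in the study, is then a matter of masking the frequency grid.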
Evaluation of SparseCT on patient data using realistic undersampling models
Baiyu Chen, Matthew Muckley, Aaron Sodickson, et al.
Compressed sensing (CS) requires undersampled projection data, but CT x-ray tubes cannot be pulsed quickly enough to achieve reduced-view undersampling. We propose an alternative within-view undersampling strategy, named SparseCT, as a practical CS technique to reduce CT radiation dose. SparseCT uses a multi-slit collimator (MSC) to interrupt the x-ray beam, thus acquiring undersampled projection data directly. This study evaluated the feasibility of SparseCT via simulations using a standardized patient dataset. Because the projection data in the dataset are fully sampled, we retrospectively undersampled the projection data to simulate SparseCT acquisitions in three steps. First, two photon distributions were simulated, representing the cases with and without the MSC. Second, by comparing the two distributions, detector regions with more than 80% of the x-rays blocked by the MSC were identified, and the corresponding projection data were discarded. Third, noise was inserted into the remaining projection data to account for the increase in quantum noise due to reduced flux (partial MSC blockage). The undersampled projection data were then reconstructed iteratively using a penalized weighted least squares cost function with the conjugate gradient algorithm; the reconstruction promotes sparsity in the solution and incorporates the undersampling model. Weighting factors were applied to the projection data during reconstruction to account for the noise variation in the undersampled projections. Compared to images acquired with reduced tube current (provided in the standardized patient dataset), SparseCT undersampling presented less image noise while preserving pathologies and fine structures such as vessels in the reconstructed images.
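The noise-insertion step (step three) is commonly done with a Gaussian approximation of quantum noise in the log (line-integral) domain. The sketch below is a generic version of that idea, not the authors' code; the incident count level `n0` and the flux fraction are invented values:

```python
import numpy as np

def insert_noise(line_integrals, n0=1e5, flux_fraction=0.25, rng=None):
    """Inject extra quantum noise to emulate reduced flux.

    In the log domain, the noise variance at full flux is approximately
    1/(n0*exp(-p)); at fractional flux a*n0 it is 1/(a*n0*exp(-p)).
    Only the *difference* is injected, so the noise already present in
    the measured data is preserved.
    """
    rng = rng or np.random.default_rng(0)
    counts = n0 * np.exp(-line_integrals)              # detected photons at full flux
    extra_var = (1.0 / flux_fraction - 1.0) / counts   # variance to add
    noise = rng.normal(size=line_integrals.shape) * np.sqrt(extra_var)
    return line_integrals + noise

p = np.full((256, 256), 2.0)                 # uniform line integrals
noisy = insert_noise(p, n0=1e5, flux_fraction=0.25)
```

The same variance map (`extra_var` plus the original variance) is what the statistical weights in a penalized weighted least squares reconstruction would be built from.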
Can a 3D task transfer function accurately represent the signal transfer properties of low-contrast lesions in non-linear CT systems?
The purpose of this study was to investigate how accurately the task-transfer function (TTF) models the signal transfer properties of low-contrast features in a non-linear CT system. A cylindrical phantom containing 24 anthropomorphic liver lesions (modeled from patient lesions) was designed using computer-aided design software (Rhinoceros 3D). Lesions had irregular shapes, 2 sizes (523, 2145 mm3), and 2 contrast levels (80, 100 HU). The phantom was printed with a state-of-the-art multi-material 3D printer (Stratasys J750). CT images were acquired on a clinical CT scanner (Siemens Flash) at 4 dose levels (CTDIvol, 32 cm phantom: 1.5, 3, 6, 22 mGy) and reconstructed using 2 FBP kernels (B31f, B45f) and 2 iterative kernels (SAFIRE, strength 2: I31f and I44f). 3D TTFs were estimated by combining TTFs measured using low-contrast rod inserts (in-plane) and a slanted edge (z-direction) printed in-phantom. CAD versions of the lesions were blurred by the 3D TTFs and virtually superimposed into corresponding phantom images using a previously validated technique. We compared lesion morphometry (i.e., size and shape) measurements between 3D-printed "physical" and TTF-blurred "simulated" lesions across multiple acquisitions. Lesion size was quantified using a commercial segmentation software (Syngo.via). Lesion shape was quantified by measuring the Jaccard index between the segmented masks of paired physical and simulated lesions. The relative volume difference D between physical and simulated lesions was mostly less than the natural variability (COV) of the physical lesions: for large and small lesions, the COV was greater than or similar to D in 12 and 13 out of 16 imaging scenarios, respectively. Simulated and physical lesion shapes were similar, with an average simulated-physical Jaccard index of 0.70 (out of a maximum of unity). These results suggest that the 3D TTF closely models the signal transfer properties of low-contrast objects under both linear and non-linear CT conditions.
Accurate centroid determination for evaluating the modulation transfer function with a circular edge in CT images
The in-plane modulation transfer function (MTF) of multi-slice computed tomography (CT) can be found by scanning a phantom with cylindrical contrast inserts and making use of the circular edges present in reconstructed axial images. Pixel data across the edge are used to establish an edge spread function (ESF), which is then used to obtain the line spread function and finally the MTF. A crucial step in this approach is to accurately locate the centroid of the circular region: since the ESF is usually established at sub-pixel scale, a slight deviation of the centroid may result in large errors. It has been common practice to apply a preset threshold and calculate the center of mass of the binary output on each individual slice. It has also been suggested to locate the centroid on each slice by maximizing the sum of pixel values lying under a predefined template. In this paper, we propose a new algorithm based on registering the entire cylindrical object in 3D space. In a test on a high-noise, low-contrast edge, both the threshold and the maximization algorithms showed a scattered distribution of centroids across consecutive slices, resulting in underestimation of the MTF by up to 10% at intermediate frequencies. In comparison, the method based on 3D registration was found to be more robust to noise, and the centroid locations are more consistent in the longitudinal direction. It is therefore recommended to use the proposed algorithm for centroid determination when evaluating the MTF with a circular edge in CT images.
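The circular-edge ESF construction that the centroid feeds into can be sketched as follows. The synthetic disk, its radius, and the bin width are illustrative; the point of the sketch is that every pixel's radius is measured from the centroid, so a centroid error directly smears the oversampled ESF:

```python
import numpy as np

def circular_edge_esf(img, cx, cy, bin_width=0.25, r_max=None):
    """Oversampled edge spread function around a circular edge.

    Every pixel contributes a (radius, value) sample relative to the
    centroid (cx, cy); binning the radii at sub-pixel width yields the
    oversampled ESF.
    """
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - cx, y - cy).ravel()
    v = img.ravel()
    if r_max is not None:
        keep = r <= r_max
        r, v = r[keep], v[keep]
    bins = (r / bin_width).astype(int)
    counts = np.bincount(bins)
    sums = np.bincount(bins, weights=v)
    valid = counts > 0                           # skip empty radial bins
    radii = (np.arange(counts.size)[valid] + 0.5) * bin_width
    return radii, sums[valid] / counts[valid]

# synthetic high-contrast disk of radius 20 px centred at (32.3, 31.7)
y, x = np.mgrid[0:64, 0:64]
disk = 100.0 * (np.hypot(x - 32.3, y - 31.7) <= 20.0)
radii, esf = circular_edge_esf(disk, 32.3, 31.7, r_max=30.0)
```

Differentiating this ESF and taking its Fourier transform gives the LSF and MTF; repeating the call with a deliberately offset (cx, cy) shows the smearing that motivates accurate centroid determination.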
Poster Session: Cone-Beam CT
Improving image quality of cone-beam CT using alternating regression forest
We propose a CBCT image quality improvement method based on an anatomic signature and an auto-context alternating regression forest. Patient-specific anatomical features are extracted from the aligned training images and serve as signatures for each voxel. The most relevant and informative features are identified to train the regression forest, and the well-trained regression forest is then used to correct the CBCT of a new patient. The proposed algorithm was evaluated using data from 10 patients with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) indexes were used to quantify the correction accuracy of the proposed algorithm. The mean MAE, PSNR, and NCC between corrected CBCT and ground-truth CT were 16.66 HU, 37.28 dB, and 0.98, which demonstrates the CBCT correction accuracy of the proposed learning-based method. We have developed a learning-based method and demonstrated that it can significantly improve CBCT image quality. The proposed method has great potential to improve CBCT image quality to a level close to that of the planning CT, thereby allowing its quantitative use in CBCT-guided adaptive radiotherapy.
Spatial resolution and noise prediction in flat-panel cone-beam CT penalized-likelihood reconstruction
Purpose: Model based iterative reconstruction (MBIR) algorithms such as penalized-likelihood (PL) methods have data-dependent and shift-variant image properties. Predictors of local reconstructed noise and resolution have found application in a number of methods that seek to understand, control, and optimize CT data acquisition and reconstruction parameters in a prospective fashion (as opposed to studies based on exhaustive evaluation). However, previous MBIR prediction methods have relied on idealized system models. In this work, we develop and validate new predictors using accurate physical models specific to flat-panel CT systems.

Methods: Novel predictors for the estimation of local spatial resolution and noise properties are developed for PL reconstruction, including a physical model for blur and correlated noise in flat-panel cone-beam CT (CBCT) acquisitions. Prospective predictions (i.e., made without performing reconstruction) of the local point spread function and local noise power spectrum (NPS) are applied, compared, and validated using a flat-panel CBCT test bench. The imaging conditions investigated include two acquisition strategies (an unmodulated x-ray technique and automatic exposure control) as well as varying regularization strength.

Results: Comparisons between prediction and physical measurements show excellent agreement for both spatial resolution and noise properties. In comparison, traditional prediction methods (that ignore blur/correlation found in flat-panel data) fail to capture important data characteristics and show significant mismatch.

Conclusion: Novel image property predictors permit prospective assessment of flat-panel CBCT using MBIR. Such predictors enable standard and task-based performance assessments, and are well-suited to evaluation, control, and optimization of the CT imaging chain (e.g., x-ray technique, reconstruction parameters, novel data acquisition methods, etc.) for improved imaging performance and/or dose utilization.
Simulating low-dose cone-beam CT: a phantom study
Andrea Ferrero, Ken Fetterly, Lifeng Yu, et al.
Our institution routinely uses limited-angle cone-beam CT (CBCT) from a C-arm with 3D capabilities to diagnose and treat cardiovascular and orthopedic diseases in both adult and pediatric patients. While CBCT contributes to qualitative and quantitative assessment of both normal and abnormal patient anatomy, it also contributes substantially to patient radiation dose. Reducing the dose associated with CBCT exams while maintaining clinical utility would benefit patients for whom CBCT is routinely used and may extend its adoption to clinical tasks and patient populations where the dose is currently considered prohibitive. In this work we developed and validated a method to simulate low-dose CBCT images from standard-dose projection images, based on adding random noise to real projection images. The method was validated using an anthropomorphic thorax phantom of variable size with a custom-made insert containing iodine contrast rods of variable concentration. Images reconstructed from the low-dose simulations were compared to actually acquired lower-dose images. Subtraction images of the simulated and acquired lower-dose images demonstrated a lack of residual structure, indicating that differences between the image sets were consistent with random noise only. The noise power spectrum (NPS) and iodine signal-difference-to-noise ratio (SDNR) showed good agreement between simulated and acquired lower-dose images for dose levels between 70% and 30% of the routine dose. The average difference in iodine SDNR between simulated and acquired low-dose images was below 5% for all dose levels and phantom sizes. This work demonstrates the feasibility of accurately simulating low-dose CBCT from real images acquired at standard dose by degrading them with added noise.
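The SDNR figure of merit used in this validation can be written compactly. The toy rod-plus-background image below is an invented example (contrast, noise level, and mask geometry are not from the study):

```python
import numpy as np

def sdnr(img, signal_mask, background_mask):
    """Signal-difference-to-noise ratio between a contrast rod and background."""
    diff = img[signal_mask].mean() - img[background_mask].mean()
    return abs(diff) / img[background_mask].std()

# toy image: noisy background (sigma = 5) plus a 10x10 rod of +25
rng = np.random.default_rng(0)
img = rng.normal(0.0, 5.0, size=(64, 64))
rod = np.zeros((64, 64), dtype=bool)
rod[27:37, 27:37] = True
img[rod] += 25.0
value = sdnr(img, rod, ~rod)     # expected near 25/5 = 5
```

Comparing this value between simulated and acquired lower-dose reconstructions is the kind of check reported in the abstract.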
Motion artifacts reduction in 4DCBCT based on motion-compensated robust principal component analysis
Conventional cone-beam computed tomography (CBCT) acquisition suffers from motion blurring of moving organs, especially respiratory motion in the thorax region, which may result in inaccuracy in localizing the treatment target and verifying the delivered dose in radiation therapy. Although 4D-CBCT reconstruction can alleviate motion-blurring artifacts by sorting projections into respiratory bins, it introduces undersampling problems. Aiming to precisely estimate the motion information of individual 4D-CBCT reconstructions, the proposed method combines the motion variable matrices extracted from independent 4D-CBCT reconstructions using robust principal component analysis (RPCA) with the prior image reconstructed from the fully sampled projections, and incorporates them into an iterative reconstruction framework, defining the motion-compensated RPCA (MC-RPCA) method. Both simulated and real data were tested to verify the improvement in image quality at the individual reconstructed phases achieved by MC-RPCA. The image quality of the MC-RPCA method is clearly improved, with distinct features, especially in two regions of interest (ROIs) containing moving tissue. Quantitative evaluations indicate that large improvements in the structural similarity index (SSIM) and contrast-to-noise ratio (CNR) are achieved at the diaphragm slice by our method when compared with the MKB and prior image constrained compressed sensing (PICCS) algorithms, respectively.
Influence of data completion on scatter artifact correction for truncated cone-beam CT data
Nadine Waltrich, Stefan Sawall, Joscha Maier, et al.
X-ray scatter causes one of the major artifacts limiting image quality in cone-beam CT (CBCT); hence there is great interest in performing an accurate scatter correction. A particularly large amount of scatter is created in CBCT due to the large cone angle and the small distance between the rotation axis and the detector, and even if an anti-scatter grid is used, a scatter correction is necessary. Performing an accurate scatter correction is difficult, especially when the data are additionally truncated due to a small field of measurement (FOM) (e.g. dental CT systems or C-arm CT systems). In addition to the image degradation due to scatter artifacts, numerous CBCT artifacts, such as beam-hardening artifacts and cone-beam artifacts, contribute to a further reduction in image quality. In this paper, different detruncation methods are compared with respect to scatter in order to find a quantitative scatter correction approach for truncated CBCT data. The evaluation shows that a precise detruncation is crucial for an appropriate scatter correction. Additionally, the overall image quality is further improved by performing additional artifact correction methods to reconstruct a nearly artifact-free CBCT volume.
Optimization-based design for artifact reduction in advanced diagnostic CT
Dan Xia, Yan Liu, Zhou Yu, et al.
Cone-beam artifacts may be observed in images reconstructed from circular-trajectory data by use of the FDK algorithm or its variants when the imaged subject has strong longitudinal contrast variation, particularly in advanced diagnostic CT with a large number of detector rows. Existing algorithms have had limited success in correcting for cone-beam artifacts, especially in the reconstruction of low-contrast soft tissue. In this work, we investigate and develop optimization-based reconstruction algorithms to compensate for cone-beam artifacts in the reconstruction of low-contrast anatomy. Specifically, we investigate the impact of optimization-based reconstruction designs built on different data-fidelity terms, using the Chambolle-Pock (CP) algorithm tailored to each of the specific data-fidelity terms considered. We performed numerical studies with real data collected on a 320-slice Canon Medical Systems CT scanner, demonstrated the effectiveness of the optimization-based reconstruction design, and identified the optimization-based reconstruction that corrects most effectively for the cone-beam artifacts.
Gantry rotational motion-induced blur in cone-beam computed tomography
As neuro-endovascular image-guided interventions (EIGIs) make use of higher-resolution detectors, gantry rotational motion-induced blur becomes more noticeable in acquired projections as well as in reconstructed images, reducing the visibility of vascular and device features whose visualization can be critical in the treatment of vascular pathology. Motion-induced blur in projection views is a function of an object's position in the field of view (FOV), the gantry rotational speed, and the frame capture or exposure time. In this work, different frame rates were used to investigate the effect of blurring in individual projections on the reconstructed image. To test the effects of these parameters, a regular pattern phantom of small objects was simulated, and projection views were generated at various frame rates for a given simulated rotational velocity. Reconstruction used filtered backprojection with linear interpolation. Images reconstructed from lower frame rates showed significant blurring in the azimuthal direction, increasingly worse toward the periphery of the image, whereas those reconstructed from higher frame rates showed significantly less blur throughout the entire FOV. While lower frame rates could be used with slower gantry speeds, this would increase the risk of voluntary or involuntary patient motion contributing to blur over the entire FOV. A high frame rate used with a high gantry speed could reliably provide images without gantry-motion blur while reducing the risk of patient-motion blur. Frame rates exceeding 2000 fps are available with photon-counting detectors such as the X-counter Actaeon.
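The dependence of gantry-motion blur on off-axis position, rotation speed, and frame time can be made explicit with a small worked example. The 60 deg/s speed and the frame rates below are illustrative numbers, not the simulated values from the study; the model assumes the exposure spans the full frame period:

```python
import numpy as np

def azimuthal_blur_mm(radius_mm, gantry_deg_per_s, frame_rate_fps):
    """Arc length swept by a point at `radius_mm` from the rotation axis
    during one frame: b = r * omega * (1 / fps)."""
    omega = np.deg2rad(gantry_deg_per_s)      # angular speed in rad/s
    return radius_mm * omega / frame_rate_fps

# at 60 deg/s, a point 100 mm off-axis smears ~3.5 mm at 30 fps ...
b_slow = azimuthal_blur_mm(100.0, 60.0, 30.0)
# ... but only ~0.1 mm at 1000 fps
b_fast = azimuthal_blur_mm(100.0, 60.0, 1000.0)
```

The linear dependence on radius is why the blur worsens toward the periphery of the FOV, and the inverse dependence on frame rate is why high-frame-rate detectors suppress it.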
Intensity-modulated dental CBCT: noise and absorbed dose study
Dental computed tomography (CT) typically uses a cone-beam geometry with a flat-panel detector. Although the flat-panel detector normally covers the maxillary and mandibular jaws, the cone-beam scan can deliver dose to organs that are located outside the direct beam path but are sensitive to radiation damage, such as the eye lens. For typical dentoalveolar cone-beam CT (CBCT) scans, this study investigates the absorbed dose distributions in the head and neck using the Monte Carlo technique, and quantifies specific organ doses. We then design an intensity-modulated CBCT scan protocol that can provide higher tomographic image quality at a lower patient dose. The beam-intensity modulation includes changes of tube current (mA) and/or voltage (kVp) during circular scanning, and the modulation scenarios are designed to account for the cervical spine, through which the x-ray beam is strongly attenuated. We assess the noise-to-dose performance for various intensity-modulation scenarios and compare the results with those obtained for the conventional scan.
Investigation of organ dose variation with adult head size and pediatric age for neuro-interventional projections
The purpose of this study was to evaluate the effect of patient head size on the radiation dose to radiosensitive organs, such as the eye lens, brain, and spinal cord, in fluoroscopically guided neuro-interventional procedures and CBCT scans of the head. The Toshiba Infinix C-arm system was modeled in the BEAMnrc/EGSnrc Monte Carlo code, and patient organ and effective doses were calculated in DOSXYZnrc/EGSnrc for CBCT and interventional procedures. X-ray projections from different angles, CBCT scans, and neuro-interventional procedures were simulated on a computational head phantom for the range of head sizes in the adult population and for different pediatric ages. The difference in left-eye-lens dose between the mean head size and the mean ± 1 standard deviation (SD) ranges from 20% to 300% for projection angles of 0° to 90° RAO. The differences for other organs do not vary as much and are only about 10% for the brain. For an LCI-High CBCT protocol, the difference between the mean and mean ± 1 SD head sizes is about 100% for lens dose and only 10% for mean and peak brain dose; going from the 20-year-old to the 3-year-old mean head size increases the eye-lens dose by about 200% and the mean and peak brain dose by only 30%. Dose to all organs increases with decreasing head size for the same reference-point air kerma. These results will allow size-specific dose estimates to be made using software such as our dose tracking system (DTS).
Image quality, scatter, and dose in compact CBCT systems with flat and curved detectors
A. Sisniega, W. Zbijewski, P. Wu, et al.
Purpose: A number of cone-beam CT (CBCT) applications demand increasingly compact system designs for smaller footprint and improved portability. Such compact geometries can be achieved via reduction of the air gap and integration of novel, curved detectors; however, the increased x-ray scatter presents a major challenge to soft-tissue image quality in such compact arrangements. This work investigates pre-patient modulation (bowtie filters) and antiscatter grids to mitigate such effects in compact geometries with curved detectors. Methods: The effects of bowtie filters on dose and x-ray scatter were investigated in a compact geometry (180 mm air gap) for three detector curvatures: flat, focused at the source, and compact-focused at the isocenter. Experiments used bowtie filters of varying curvature combined with antiscatter grids (8:1 ratio, 80 lpmm). Scatter was estimated via GPU-accelerated Monte Carlo simulation in an anthropomorphic head phantom. Primary fluence was estimated with a polychromatic Siddon projector. Realistic Poisson noise (total dose: 20 mGy) was added to the total signal. Scatter magnitude and distribution were evaluated in projection data, and CT image quality was assessed in PWLS reconstructions. Results were validated in physical experiments on an x-ray test bench for CBCT. Results: Moderate bowties combined with grids reduced the average scatter magnitude and SPR, reduced cupping from 90 to 5 HU, and yielded a net benefit to CNR despite attenuation of the primary fluence. Dose to sensitive organs (eye lens) was reduced by 27%. More aggressive bowties showed further potential for dose reduction (35%) but increased peripheral SPR and increased non-uniformity and artifacts at the periphery of the image. The curved detector geometry exhibited slightly improved uniformity but a slight reduction in CNR compared to the conventional flat detector geometry.
Conclusion: Highly portable CBCT systems with very compact geometry and curved detectors, capable of soft-tissue imaging, appear feasible despite elevated x-ray scatter, through the combination of moderate pre-patient modulation and antiscatter grids.
Poster Session: Phase Contrast Imaging
Hairline fracture detection using Talbot-Lau x-ray imaging
C. Hauke, G. Anton, S. Auweter, et al.
Talbot-Lau X-ray imaging (TLXI) provides information about scattering and refractive features of objects – in addition to the well-known conventional X-ray attenuation image. We investigated the potential of TLXI for the detection of hairline fractures in bones, which are often initially occult in conventional 2D X-ray images. For this purpose, hairline fractures were extrinsically provoked in a porcine trotter (post-mortem) and scanned with a TLXI system. In the examined case, hairline fractures caused dark-field and differential-phase signals, whereas they were not evident in the conventional X-ray image. These findings motivate a comprehensive and systematic investigation of the applicability of TLXI for diagnosing hairline fractures.
Application of a test object free method for determination of the modulation transfer function in grating-based phase-contrast imaging
Assessing the image resolution of the differential phase (dP) and dark field (DF) images in grating-based phase-contrast imaging (GB-PCI) requires a precision-machined wedge and edge, respectively, in order to generate an accurate edge spread function (ESF) from which the modulation transfer function (MTF) can be calculated. Imperfect test objects, i.e. truncated wedges (dP) or unsharp or too-thick edges (DF), can lead to either an over- or underestimation of image sharpness, making the MTF potentially test-object dependent. Here the object-free method of Kuhls-Gilcrist, which resolves the MTF from noise power spectra (the NPS method), is applied to the transmission, dark field, and differential phase images in GB-PCI. In parallel to the NPS method, each MTF was also determined using a test-object-based approach. Good agreement was found between the two approaches. Moreover, it was possible to identify unsuitable test objects, and this work illustrated that selecting DF test objects can be difficult. This sharpness measurement method offers a robust alternative to test-object approaches and can be used to verify the sharpness of test objects, or to generate an accurate MTF if no such object is available.
Minimizing the scatter contribution and spatial spread due to the absorption grating G2 in grating-based phase-contrast imaging
J. Vignero, S. Rodríguez-Pérez, N. W. Marshall, et al.
In previous research [1] it was shown that in grating-based phase-contrast imaging (GB-PCI) of low-scatter objects, G2 is the dominant scattering source. This scatter manifests differently from object scatter: scattered photons that remain local to the interaction site may even increase object contrast, but they reduce system visibility. In this work, the magnitude and spatial distribution of photons scattered from G2 are studied for different conditions using Monte Carlo simulations: (1) the effect of G2 orientation on the scatter-to-primary ratio (SPR), (2) the impact of reducing the G2-to-detector distance (D) from 1.21 cm (the current setting) to 0.5 cm on the spatial scatter distribution, and (3) the possibility of applying the G2 scatter probability to predict the scatter image from any primary object image. It was shown that flipping the G2 grating with its substrate away from the detector reduces the scatter-to-primary ratio by a factor of 1.15. Furthermore, for D = 1.21 cm, 50% of the scattered photons fell within the first 18 pixels, while for D = 0.5 cm, 50% fell within the first 9 pixels, with, however, a slightly increased SPR. Convolution of these spatial distributions with the primary images of low-scattering objects allowed prediction of scatter images with a mean percentage deviation of 21% and 16% for D = 0.5 cm and D = 1.21 cm, respectively. This work therefore illustrates that small optimization steps can have a notable impact on the magnitude and spatial distribution of scattered radiation at the level of the detector in GB-PCI. An approach to estimate scattered-radiation images for objects that produce low levels of scatter was presented.
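The convolution-based scatter prediction described above can be sketched generically. The Gaussian kernel shape and the SPR value below are illustrative stand-ins for the simulated G2 scatter distributions, and the FFT-based (circular) convolution is chosen for brevity:

```python
import numpy as np

def scatter_kernel(shape, sigma_px, spr):
    """Isotropic Gaussian scatter point-spread kernel, normalized so it
    integrates to the scatter-to-primary ratio; the origin is shifted to
    index [0, 0] for FFT-based convolution."""
    ny, nx = shape
    y = np.arange(ny) - ny // 2
    x = np.arange(nx) - nx // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2.0 * sigma_px ** 2))
    return np.fft.ifftshift(spr * g / g.sum())

def predict_scatter(primary, kernel):
    """Scatter image as the circular convolution of primary and kernel."""
    return np.real(np.fft.ifft2(np.fft.fft2(primary) * np.fft.fft2(kernel)))

kernel = scatter_kernel((64, 64), sigma_px=9.0, spr=0.3)
scatter = predict_scatter(np.ones((64, 64)), kernel)   # flat field -> flat SPR
```

For a flat primary image the predicted scatter is simply the SPR everywhere; for a structured primary image the kernel width controls how local the scatter contribution remains, which is the quantity the G2-to-detector distance changes.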
3D imaging of theranostic nanoparticles in mice organs by means of x-ray phase contrast tomography
E. Longo, A. Bravin, F. Brun, et al.
Theranostics is an innovative research field that aims to develop cancer treatments with high target specificity by administering small metal-based nanoparticles (NPs). This new generation of compounds exhibits diagnostic and therapeutic properties owing to the high atomic number of their metal component. In the framework of a combined research program on low-dose X-ray imaging and theranostic NPs, X-ray phase contrast tomography (XPCT) was performed at the ESRF using a 3 μm pixel optical system on two samples: a mouse brain bearing melanoma metastases injected with gadolinium NPs, and a mouse liver injected with gold NPs. XPCT is a non-destructive technique suitable for the 3D reconstruction of a specimen, widely used at the micro-scale to detect vessel abnormalities associated with tumor growth or the development of neurodegenerative diseases. Moreover, XPCT represents a promising and complementary tool to study the biodistribution of theranostic NPs in biological materials, thanks to the strong contrast that metal-based NPs provide with respect to soft tissues in radiological images. This work relies on an original imaging approach based on evaluating the contrast differences between images acquired below and above the K-edge energy, as proof of the localization of the NPs. We will present different methods aimed at enhancing the localization of NPs, and a 3D map of their distribution in large tissue volumes.
Poster Session: Multi-Energy X-Ray and CT
A multi-energy material decomposition method for spectral CT using neural network
Chuqing Feng, Kejun Kang, Yuxiang Xing
Spectral computed tomography (CT) has the advantage of providing energy spectrum information, which is valuable for multi-energy material decomposition, material discrimination, and accurate image reconstruction. However, due to the non-ideal physical effects of photon counting detectors (PCDs), such as charge sharing, pulse pileup, and K-escape, serious spectral distortion is unavoidable in practical systems. The degraded spectrum introduces error into the decomposition model and affects the accuracy of material decomposition. Recently, artificial neural networks have demonstrated great potential in tasks such as image segmentation, object detection, and natural language processing. By adjusting the interconnections among a large number of internal nodes, a neural network provides a way to mine information from large datasets, depending on the complexity of the network. Considering the difficulty of modeling the spectral CT system spectrum, including the response function of a PCD, and the data-driven character of neural networks, we propose a novel multi-energy material decomposition method using a neural network that requires no knowledge of the spectral information. On one hand, specific linear attenuation coefficients can be obtained directly with our method, aiding further material recognition and spectral CT reconstruction. On the other hand, the network outputs show outstanding performance in image denoising and artifact suppression. Our method accommodates different selections of training materials and different imaging-system settings, such as the number of energy bins and the energy bin thresholds. According to our test results, the trained neural network has good generalization ability.
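The data-driven flavor of such a decomposition can be illustrated with a deliberately simplified stand-in for the neural network: a map from per-bin line integrals back to material thicknesses fitted purely from calibration measurements, with no explicit spectrum model. Everything below (the per-bin attenuation matrix, bin count, thickness range, and noise level) is invented for the toy example; the actual network additionally absorbs nonlinear detector distortions such as pulse pileup, which a linear fit cannot:

```python
import numpy as np

rng = np.random.default_rng(0)

# true per-bin attenuation (1/cm) of two materials, unknown to the "learner"
A = np.array([[0.30, 0.25, 0.21, 0.18],   # water-like
              [0.90, 0.55, 0.38, 0.29]])  # iodine-like

# calibration scans: known thickness pairs, measured line integrals per bin
t_cal = rng.uniform(0, 5, size=(200, 2))
p_cal = t_cal @ A + rng.normal(0, 0.01, size=(200, 4))

# fit a linear map from bin measurements back to thicknesses (the
# data-driven step; a neural network generalizes this to nonlinear maps)
W, *_ = np.linalg.lstsq(p_cal, t_cal, rcond=None)

# decompose an unseen measurement
t_true = np.array([2.0, 1.0])
t_est = (t_true @ A) @ W
```

The key property shared with the learned approach is that no spectrum or detector-response model enters the decomposition; only calibration data of known materials do.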
A sensitivity analysis on parameters that affect a multi-step material decomposition for spectral CT
When using a photon counting detector (PCD) for material decomposition problems, a major issue is the low count rate per energy bin, which may lead to high image noise with compromised contrast and accuracy. We recently proposed a multi-step algorithmic method of material decomposition for spectral CT, in which the problem is formulated as a series of simpler and dose-efficient decompositions rather than solved simultaneously. While the method offers higher flexibility in the choice of energy bins for each material type, several aspects should be optimized for these methods to be effective. A simple domain of four materials (water, calcium, iodine, and gold) was explored for this purpose. The results showed an improvement in accuracy, with low noise, over the single-step method in which the materials were decomposed simultaneously. This paper presents a comparison of contrast-to-noise ratio (CNR) and retrieval accuracy for the single-step and multi-step methods under varying acquisition and reconstruction parameters such as Wiener filter kernel size, pixel binning, signal size, and energy bin overlap.
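The CNR figure of merit used in comparisons like the one above can be sketched as follows; this is a generic definition (absolute mean difference over background noise), not the authors' exact implementation, and the ROI values are illustrative:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean signal - mean background| divided
    by the background standard deviation (sample estimate, ddof=1)."""
    s = np.asarray(signal_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return abs(s.mean() - b.mean()) / b.std(ddof=1)
```

Comparing this quantity for single-step versus multi-step decompositions, under matched acquisition parameters, is what the abstract's CNR comparison amounts to.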
Intrinsic limitations on quantification accuracy of dual energy CT at low dose levels
Juan P. Cruz-Bastida, Ran Zhang, Ke Li, et al.
Radiation dose is still a topic of concern in current computed tomography (CT) clinical practice. While dose reduction strategies have been developed and proven to provide acceptable imaging performance, a recent work1 demonstrated that filtered-backprojection-based CT leads to inaccurate CT numbers at low dose levels. This conclusion suggests that dual energy CT (DECT) material decomposition at low dose levels may also be strongly biased. The purpose of this work was to systematically investigate the relationship between image-based DECT decomposition accuracy and the mA level. To achieve this goal, a Catphan phantom with different material inserts of known composition was scanned in a benchtop CT system with two different x-ray spectra (60 and 100 kV). Different tube current levels, ranging from 0.5 to 10 mAs, were used, and 50 realizations per data set were acquired. Image-domain material decomposition was performed on the acquired data, using acrylic and Teflon as the material basis. The resulting decompositions were compared to reference values obtained from the decomposition of averaged scans at a reference dose level. It was observed that both phantom composition and mA level strongly impact the decomposition accuracy. In particular, when either the low- or high-energy scan was acquired with a high mA level, a low mA level for the conjugate measurement led to biased decomposition estimates. Fundamentally different trends were identified when comparing decomposition accuracy and precision as functions of the mA level. Our results also suggest that certain tube current combinations may be optimal in terms of accuracy.
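Image-domain two-material decomposition of the kind described above reduces, per pixel, to inverting a 2×2 system relating measured attenuation at the two spectra to the basis-material fractions. A minimal sketch, with entirely hypothetical effective attenuation coefficients (real values would be calibrated from the phantom scans):

```python
import numpy as np

# Hypothetical effective linear attenuation coefficients (1/cm) of the
# basis materials at the two spectra; in practice these are calibrated.
MU = np.array([[0.25, 0.45],   # low-kV spectrum:  [acrylic, Teflon]
               [0.20, 0.30]])  # high-kV spectrum: [acrylic, Teflon]

def decompose(img_low, img_high):
    """Per-pixel two-material decomposition: solve mu = MU @ f for the
    basis fractions f = (acrylic, Teflon) at every pixel."""
    mu_inv = np.linalg.inv(MU)
    stacked = np.stack([img_low, img_high], axis=-1)  # (..., 2)
    fractions = stacked @ mu_inv.T                    # (..., 2)
    return fractions[..., 0], fractions[..., 1]
```

Because the inversion is applied to noisy attenuation values passed through a nonlinear log transform upstream, bias in the inputs propagates directly into the fractions, which is the effect the study quantifies versus mA level.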
Multiscale dual energy micro-CT for imaging using iodinated and gold nanoparticles
M. Holbrook, D. P. Clark, C. T. Badea
Dual energy (DE) micro-CT shows great potential to provide accurate tissue composition by utilizing the energy dependence of x-ray attenuation in different materials. This is especially well-suited for pre-clinical imaging using nanoparticle-based contrast agents in situations where quantitative material decomposition helps probe processes that are otherwise limited by poor soft tissue contrast. We have previously proposed optimal in vivo DE micro-CT methods for imaging using iodinated and gold nanoparticles. However, in vivo studies are limited in spatial resolution due to constraints on sampling time and radiation dose. Ex vivo dual energy imaging can provide much higher resolution and can serve as a validation of in vivo studies. Our study proposes multiscale in vivo and ex vivo DE micro-CT of the same subjects using two in-house developed micro-CT systems. We use a dual-source micro-CT system to scan a mouse injected with both iodinated and gold nanoparticles for in vivo DE scanning at 63 micron resolution. The same mouse is then scanned ex vivo with DE on a separate single-source micro-CT system at a spatial resolution of 22 microns. We perform reconstructions using filtered back projection followed by noise reduction via joint bilateral filtration. A dynamic flat field correction method has been applied to the ex vivo micro-CT data to correct for image artifacts. A DE post-reconstruction decomposition is used to create iodine and gold material maps, which are used to measure the accumulation of contrast agent within the body. We evaluate the challenges associated with each imaging methodology, and our results compare image quality and material maps. Overall, our methods represent a substantial tool for multiscale DE micro-CT imaging using well-characterized contrast agents, serving various applications in biological research.
Pseudo dual energy CT imaging using deep learning-based framework: basic material estimation
Dual energy computed tomography (DECT) usually scans the object twice using different energy spectra, and can then obtain two material decompositions by directly performing signal decomposition; in general, one is the water-equivalent fraction and the other is the bone-equivalent fraction. Note that material decomposition normally depends on two or more different energy spectra. In this study, we present a deep learning-based framework to obtain basic material images directly from single-energy CT images via cascaded deep convolutional neural networks (CD-ConvNet). We denote this imaging procedure pseudo DECT imaging. The CD-ConvNet is designed to learn the non-linear mapping from the measured energy-specific CT images to the desired basic material decomposition images. Specifically, the output of each convolutional neural network (ConvNet) in the CD-ConvNet is used as part of the input of the following ConvNet to produce high-quality material decomposition images. Clinical patient data were used to validate and evaluate the performance of the presented CD-ConvNet. Experimental results demonstrate that the presented CD-ConvNet yields qualitatively and quantitatively accurate results when compared against the gold standard. We conclude that the presented CD-ConvNet can help to improve the research utility of CT in quantitative imaging, especially single-energy CT.
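The cascade's data flow, in which each stage consumes the original image together with the previous stage's output, can be sketched abstractly. This is only an illustration of the wiring, with a linear map plus ReLU standing in for each ConvNet; the actual networks, weights, and dimensions are not described in the abstract:

```python
import numpy as np

def stage(x, w):
    """One hypothetical cascade stage: a linear map plus ReLU stands in
    for a full ConvNet, purely to illustrate the data flow."""
    return np.maximum(w @ x, 0.0)

def cascade(image_vec, w1, w2):
    """Cascaded mapping: the first stage's output is concatenated with
    the original input and fed to the second stage, as in CD-ConvNet."""
    h = stage(image_vec, w1)                       # stage-1 estimate
    return stage(np.concatenate([image_vec, h]), w2)
```

The key design point is that later stages can correct earlier estimates because they see both the raw measurement and the intermediate material maps.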
Algorithmic scatter correction based on physical model and statistical iterative reconstruction for dual energy cone beam CT
Dual energy cone beam computed tomography (DE-CBCT) can provide more accurate material characterization than conventional CT by taking advantage of two sets of projections acquired at high and low energies. X-ray scatter leads to erroneous values in the DE-CBCT reconstructed images; moreover, the reconstructed DECT image is extremely sensitive to noise. Iterative reconstruction methods using regularization are capable of suppressing noise and hence improving image quality. In this paper, we develop an algorithmic scatter correction based on a physical model and statistical iterative reconstruction for DE-CBCT. Under the assumptions that the attenuation coefficients of soft tissues are relatively stable and uniform and that the scatter component is dominated by low-frequency signal, scatter components were calculated while updating the reconstructed images in each iteration. Finally, the CBCT image was reconstructed from the scatter-corrected projections using a statistical iterative reconstruction algorithm. Experiments show that the proposed method effectively removes the artifacts caused by x-ray scatter and improves the CT number accuracy of the reconstructed images.
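The low-frequency assumption above can be illustrated with a toy scatter estimator: treat scatter as a heavily smoothed, scaled copy of the measured projection. Both the box-blur kernel and the 0.2 scatter-to-primary scale are illustrative assumptions, not the paper's physical model:

```python
import numpy as np

def estimate_scatter(projection, kernel_size=31):
    """Toy low-frequency scatter estimate: uniform box blur of the
    measured 1D projection, scaled by an assumed scatter fraction."""
    p = np.asarray(projection, dtype=float)
    k = np.ones(kernel_size) / kernel_size
    pad = kernel_size // 2
    padded = np.pad(p, pad, mode="edge")          # avoid edge falloff
    smooth = np.convolve(padded, k, mode="valid")  # same length as p
    return 0.2 * smooth  # assumed scatter-to-primary scale factor
```

Subtracting such an estimate from the projections before (or inside) each iteration is the general shape of model-based scatter correction; the paper additionally re-estimates scatter from the evolving reconstruction at every iteration.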
Determination of the limit of detection for iodinated contrast agents with multi-energy computed tomography
Megan C. Jacobsen, Xinhui Duan, Dianna D. Cody, et al.
Multiple studies in the literature have proposed diagnostic thresholds based on Multi-Energy Computed Tomography (MECT) iodine maps. However, it is critical to determine the minimum detectable iodine concentration of MECT systems to assure the clinical accuracy of measured concentrations in these image types. In this study, seven serial dilutions of iohexol were made, with concentrations from 0.03 to 2.0 mg Iodine/mL, in 50 mL centrifuge tubes. The dilutions and one blank vial were scanned five times each in two scatter conditions: within a 20.0 cm diameter (Head) phantom and within a 30.0 cm x 40.0 cm elliptical (Body) phantom. This was repeated on a total of six scanners from three vendors: fast-kVp switching, dual-source dual-energy CT, dual-layer detector CT, and split-filter CT. Scan parameters and dose were matched as closely as possible across systems, and iodine maps were reconstructed. Regions-of-Interest (ROIs) were placed on 5 consecutive images within each vial, for a total of 25 measurements per sample, and the mean and standard deviation were calculated for each sample. The Limit of Detection (LOD) was defined as the concentration that had a 95% chance of producing a signal above the 95% confidence interval of the measured blank samples. The range of LODs was 0.021 – 0.484 mg I/mL in the head phantom and 0.125 – 0.547 mg I/mL in the body phantom. The LOD for iodinated contrast on MECT systems changed with the scatter and attenuation conditions. The limit of detection for all conditions was under 0.5 mg Iodine/mL.
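Under a normal-noise assumption with equal noise at the blank and at the LOD, the criterion stated above (95% chance of a signal exceeding the blank's one-sided 95% bound) reduces to the classic 3.29-sigma rule. A sketch, where the calibration slope (signal per mg I/mL) is assumed measured from the dilution series:

```python
import numpy as np

def limit_of_detection(blank_measurements, slope):
    """LOD in mg I/mL: concentration with a 95% chance of producing a
    signal above the blank's one-sided 95% bound.  Assumes Gaussian
    noise of equal width at blank and LOD (2 * 1.645 * sigma rule).
    `slope` is the assumed calibration slope (signal per mg I/mL)."""
    z95 = 1.645  # one-sided 95% standard-normal quantile
    sigma = np.std(blank_measurements, ddof=1)
    return 2 * z95 * sigma / slope
```

This is a generic detection-limit construction, not necessarily the exact statistical procedure the authors used; it shows why the LOD grows with blank noise, and hence with scatter and attenuation.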
Feasibility of material decomposition using non-radioactive Xe for pulmonary function test in spectral x-ray system: a Monte Carlo simulation study
Due to various factors, the number of chronic obstructive pulmonary disease (COPD) patients continues to increase. In addition, mortality from COPD is increasing because of the difficulty of detecting COPD early. Radiologic and respiratory examinations should be performed simultaneously to improve the diagnostic accuracy for COPD. However, a conventional respiratory examination can be diagnostically inaccurate and poorly reproducible because of air leakage between the spirometer and the mouth; it is also difficult to apply at all ages. In this study, we confirmed the possibility of material decomposition for pulmonary function testing by combining dual-energy X-ray images obtained from a photon counting detector. Non-radioactive Xe, which is visible in X-ray images, was also used. The RMSE of each material in the decomposed images was analyzed to quantitatively evaluate the possibility of material decomposition for pulmonary function testing. Results showed that the average RMSE values of PMMA, lung, and non-radioactive Xe were 0.005, 0.0199, and 0.0217, respectively, indicating high material decomposition accuracy. Therefore, the diagnosis of COPD can be simplified through material decomposition imaging using non-radioactive Xe, and lung function can be evaluated by decomposing the total lung and the actual gas-exchange areas.
Poster Session: Photon-Counting Imaging
icon_mobile_dropdown
Effect of electronic noise and lowest energy threshold selection in photon counting detectors
Photon counting detectors (PCD) are widely credited with minimal degradation from electronic noise compared to energy integrating detectors. However, they are not immune. We characterized the effect of electronic noise in simulated CdTe PCDs (0.25-1 mm pixels) for spectral and effective monoenergetic tasks. Electronic noise was modeled as two separable effects: spectral blurring, modeled as convolution with a Gaussian kernel with a standard deviation of 7 keV, and false triggering of the lowest energy bin (depending on the threshold). To model false triggering, noise was created by filtering white Gaussian noise with a Gaussian pulse-shaping kernel of 40 ns peaking time, scaled to have a standard deviation of 7 keV, and analyzed numerically to obtain the mean and variance of false triggers at thresholds from 3 to 45 keV with ±3.5 keV hysteresis. PCDs had 5 energy bins and were operated at a maximum of 20% of the characteristic count rate unless otherwise specified; pulse pileup was not modeled. We assume the expected number of false triggers can be predicted and subtracted but that the noise from those events remains. Quantum and false-triggering noise were propagated into basis material images using the Cramer-Rao Lower Bound. In basis material images, at the optimal threshold (balancing false triggers and lost true events) there was an 18-24% variance penalty compared to a detector with no electronic noise. For effective monoenergetic imaging, capturing low-energy pulses performs asymptotically as well as a detector without electronic noise, with the penalty increasing with increasing energy threshold.
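The spectral-blurring half of that noise model can be sketched as a convolution of the recorded spectrum with a 7 keV Gaussian; the dense-matrix form below is a dependency-free illustration (the column renormalization, an assumption here, conserves counts at the spectrum edges):

```python
import numpy as np

def blur_spectrum(counts, energies_keV, sigma_keV=7.0):
    """Electronic-noise spectral blur: convolve the counts spectrum
    with a Gaussian of standard deviation sigma_keV.  Each column of
    the kernel is renormalized so total counts are conserved."""
    e = np.asarray(energies_keV, dtype=float)
    diff = e[:, None] - e[None, :]
    kernel = np.exp(-0.5 * (diff / sigma_keV) ** 2)
    kernel /= kernel.sum(axis=0, keepdims=True)  # conserve counts
    return kernel @ np.asarray(counts, dtype=float)
```

False triggering, the other half of the model, is not captured here; it depends on the shaping filter and threshold hysteresis described in the abstract.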
Impact of radiation dose level on CT number accuracy in photon counting CT
Ran Zhang, Juan P. Cruz-Bastida, Daniel Gomez-Cardona, et al.
It is well known that when radiation dose is reduced in x-ray computed tomography, image noise increases and noise induced streaks may appear. However, to quantify the accuracy of the image reconstruction, another source of error, known as the statistical bias, also needs to be considered. In the projection domain, signal bias originates in the quantum nature of the x-ray photon fluctuations as well as the nonlinear nature of the logarithmic transform used to generate line integral data. The bias in the projection domain is then propagated into the image domain through the image reconstruction process to generate CT number biases. The purpose of this work is to experimentally study the dependence of CT number bias on the radiation dose.
Determination of optimal image type and lowest detectable concentration for iodine detection on a photon counting detector-based multi-energy CT system
Wei Zhou, Rachel Schornak, Gregory Michalak, et al.
Photon counting detector (PCD) based multi-energy CT is able to generate different image types, such as virtual monoenergetic images (VMIs) and material-specific images (e.g., iodine maps), in addition to conventional single-energy images. The purpose of this study is to determine the image type with optimal iodine detection and to determine the lowest detectable iodine concentration using a PCD-CT system. A 35 cm body phantom with iodine inserts of 4 concentrations and 2 sizes was scanned on a research PCD-CT system. For each iodine concentration, 80 repeated scans were performed and images were reconstructed for each energy threshold. In addition, VMIs at different keVs and iodine maps were also generated. CNR was measured for each image type. A channelized Hotelling observer, validated against human observer studies, was used to assess iodine detectability, with the area under the ROC curve (AUC) as the figure of merit. The agreement between model and human observer performance indicated that the model observer could serve as an effective approach to determine the optimal image type for clinical practice and the lowest detectable iodine concentration. Results demonstrated that for all size and concentration combinations, the VMI at 70 keV performed similarly to the threshold-low images, both of which outperformed the iodine map images. At an AUC value of 0.8, iodine concentrations as low as 0.2 mgI/cc could be detected for an 8 mm object, and 0.5 mgI/cc for a 4 mm object, with a 5 mm slice thickness.
Cascaded systems analysis of photon-counting field-shaping multi-well avalanche detectors (SWADs)
Single-photon-counting (SPC) x-ray detectors are expected to play a key role in the next generation of medical x-ray imaging. The spatial resolution of SPC x-ray detectors is an important design criterion, in particular for mammography, in which one of the primary aims is to detect and differentiate micro-calcifications. The purpose of this work is to extend the cascaded systems approach to investigate the influence of reabsorption of characteristic x rays on SPC spatial resolution. A parallel-cascaded model is used to describe reabsorption of characteristic x rays following photoelectric interactions. We use our model to calculate the large-area gain and modulation transfer function (MTF) of amorphous selenium (a-Se) SPC detectors that use a field-Shaping multi-Well Avalanche Detector (SWAD) structure to overcome the low conversion gain of a-Se. Our model accounts for emission and reabsorption of characteristic x rays, x-ray conversion to electron-hole pairs, avalanche gain and gain variance, integration of secondary quanta in detector elements, electronic noise, and energy threshold. Theoretical predictions are compared with the results of Monte Carlo simulations. Our analysis shows that under mammographic imaging conditions, the a-Se/SWAD structure with an avalanche gain of 10 or greater results in minimal loss of photon counts below the electronic noise floor for electronic noise levels of ~500 - 700 e-h pairs. Double counting of characteristic x rays inflates the large-area gain by ~20% relative to the quantum efficiency and results in modest MTF degradation relative to energy-integrating systems. Excellent agreement between theoretical and Monte Carlo analyses was observed. This approach provides a theoretical framework for understanding SPC detector performance and for system optimization.
Increasing the dose efficiency in silicon photon-counting detectors utilizing dual shapers
Silicon photon-counting spectral detectors are promising candidates for the next generation of medical CT detectors. For silicon detectors, a low noise floor is necessary to obtain good detection efficiency. A low noise floor can be obtained with a slow shaping filter in the ASIC, but this leads to a long dead-time, decreasing the count-rate performance. In this work, we evaluate the benefit of utilizing two sub-channels with different shaping times. Simulations show that a dual shaper can increase the dose efficiency at equal count-rate capability by up to 17%.
Spatio-energetic cross-talk in photon counting detectors: N×N binning and sub-pixel masking
Katsuyuki Taguchi, Karl Stierstorfer, Christoph Polster, et al.
Smaller pixel sizes of x-ray photon counting detectors (PCDs) have two conflicting effects. On one hand, smaller pixels improve the ability to handle high x-ray count rates (i.e., pulse pileup), because the incident rate onto each PCD pixel decreases with pixel size. On the other hand, smaller pixels increase the chance of crosstalk and double-counting (or n-tuple-counting in general) between neighboring pixels: while the size of the electron charge cloud generated by a photon is independent of the pixel size, the cloud size relative to the pixel increases as the pixel shrinks. In addition, two configurations are practical in actual PCD computed tomography systems: N×N-pixel binning and anti-scatter grids. When n-tuple-counting occurs and the data are binned/added during post-acquisition processing, the variance of the data will be larger than its mean. Anti-scatter grids may eliminate or decrease crosstalk and n-tuple-counting by blocking primary x-rays near pixel boundaries or over the width of an entire pixel. In this study, we studied the effects of PCD pixel size, N×N-pixel binning, and pixel masking on soft-tissue contrast visibility using the newly developed Photon Counting Toolkit (PcTK version 3.2; https://pctk.jhu.edu).
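Why binning double-counted events makes variance exceed the mean can be seen with a tiny Monte Carlo sketch (a toy model, not PcTK): each Poisson-distributed photon is counted once, or twice in a neighboring pixel with some probability, and doubles add to the same binned sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_counts(n_photons_mean, p_double, n_trials=20000):
    """Toy model of N x N binning with double-counting: returns the
    (mean, variance) of the binned counts over n_trials realizations.
    With p_double > 0 the counts are compound-Poisson, so the variance
    exceeds the mean (variance/mean = (1 + 3p)/(1 + p))."""
    n = rng.poisson(n_photons_mean, size=n_trials)  # true photons
    doubles = rng.binomial(n, p_double)             # double-counted
    total = n + doubles                             # binned sum
    return total.mean(), total.var()
```

For a pure Poisson process the ratio is 1; any multiplicity pushes it above 1, which is the super-Poissonian behavior the abstract refers to.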
Evaluation of a new photon-counting imaging detector (PCD) with various acquisition modes
The prospect of improved low-noise, high-speed, and dual-energy imaging associated with photon-counting imaging detectors (PCD) motivated this evaluation of a newly upgraded version of a prototype PCD. The XCounter Actaeon was evaluated in its four acquisition modes, each based on different signal-processing firmware, including a mode with charge sharing correction that allows neighboring pixels sharing the energy from one incident x-ray photon to be counted only once, at the proper summed energy, in the pixel with the largest charge deposition. Since this PCD is a CdTe-based direct detector with 100 μm pixels, such charge sharing may occur frequently for typical medical x-ray energies and must be corrected to achieve more accurate counts. The charge sharing correction is achieved with an Anti-Coincidence Circuit (ACC), which prevents double pixel counting from one event and also prevents counting either event if it is below a preset threshold. Various physical parameters of the PCD were evaluated, including linearity, sensitivity, pulse pile-up effects, dark noise, spatial resolution, noise power spectrum, and detective quantum efficiency.
Feasibility study of contrast enhanced digital mammography based on photon-counting detector by projection-based weighting technique: a simulation study
Contrast enhanced digital mammography (CEDM) using a dual energy technique has been studied for its ability to emphasize breast cancer. However, when using CEDM, the patient dose and the toxicity of iodine should be considered. A photon counting detector (PCD), which has the ability to discriminate energy, has been regarded as an alternative technique to resolve the problem of excessive patient dose. The purpose of this study was to confirm the feasibility of CEDM based on a PCD using a projection-based energy weighting technique. We used Geant4 Application for Tomographic Emission (GATE) version 6.0 and simulated two different types of PCD, constructed with silicon (Si) and cadmium zinc telluride (CZT). The inner cylinders of a cylindrical breast phantom were filled with iodine at four different low concentrations and thicknesses. For comparison, we acquired a conventional integrating-mode image and five bin images from the PCD system using the projection-based weighting technique. The results demonstrated that CEDM based on the PCD significantly improved the contrast-to-noise ratio (CNR) compared to the conventional integrating mode. When the dual energy technique was applied to the projection-based weighting image, the CNR of low-concentration iodine was further improved. In conclusion, CEDM based on a PCD with the projection-based weighting technique provides better detection of low-concentration iodine than integrating mode.
Spectrally varying spatial frequency properties of a small pixel photon counting detector
Photon counting spectral detectors (PCD) are being investigated for multiple applications such as material decomposition and X-ray phase contrast imaging. Many available detectors have fairly large pixel sizes of about 150 µm or more. The imaging performance is ultimately influenced by the choice of sensor material, pixel pitch, contact type (Ohmic or Schottky), and spectral distortions due to charge sharing and pulse pileup. Several performance aspects, including energy and spatial resolution, frequency response, and temporal stability, must be optimal to fully utilize the advantages of a PCD. For any given design, understanding the interplay of these competing factors is very important to maximize the spectral capability of these detectors. In this work, we examine the spatial frequency performance of a small-pixel PCD, the Medipix3RX with a CdTe sensor. Measurements were conducted in single pixel mode (SPM), with no charge sharing correction, as well as in charge summing mode (CSM), with built-in hardware-based charge-sharing correction, for both fine pitch (55 µm) and spectroscopic (110 µm) modes. While most past simulations and measurements used monochromatic x-rays to investigate these spatio-energetic correlations, our work shows preliminary results on these complex correlations when a polychromatic beam is used.
Multi-energy spectral photon-counting CT in crystal-related arthropathies: initial experience and diagnostic performance in vitro
Anais Viry, Aamir Y. Raja, Tracy E. Kirkbride, et al.
Purpose: We aimed to determine the in-vitro diagnostic performance of multi-energy spectral photon-counting CT (SPCCT) in crystal-related arthropathies. Methods: Four crystal types (monosodium urate, MSU; calcium pyrophosphate, CPP; octacalcium phosphate, OCP; and calcium hydroxyapatite, CHA) were synthesized and blended with agar at the following concentrations: 240, 88, 46, and 72 mg/mL, respectively. Crystal suspensions were scanned on a pre-clinical SPCCT system at 80 kVp using the following four energy thresholds: 20, 30, 40, and 50 keV. Differences in linear attenuation coefficients between the various crystal suspensions were compared using the receiver operating characteristic (ROC) paradigm. Areas under the ROC curves (AUC), sensitivities, specificities, and diagnostic accuracies were calculated. Crystal differentiation was considered successful if AUC>0.95. Results: For each paired comparison of crystal suspensions, AUCs were significantly higher in the first energy range (20-30 keV). In the first energy range, MSU was confidently differentiated from CPP (sensitivity, 0.978; specificity, 0.990; accuracy, 0.984) and CHA (sensitivity, 0.957; specificity, 0.970; accuracy, 0.964), while only moderately distinguished from OCP (sensitivity, 0.663; specificity, 0.714; accuracy, 0.688). CPP was confidently differentiated from OCP (sensitivity, 0.950; specificity, 0.954; accuracy, 0.952), while only moderately distinguished from CHA (sensitivity, 0.564; specificity, 0.885; accuracy, 0.727). OCP was accurately differentiated from CHA (sensitivity, 0.898; specificity, 0.917; accuracy, 0.907). Conclusions: Multi-energy SPCCT can accurately differentiate MSU from CPP and CHA, CPP from OCP, and OCP from CHA in vitro. The distinction between MSU and OCP, and between CPP and CHA, is more challenging.
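The AUC used for each paired comparison above can be computed directly from the two sets of measured attenuation values via the Mann-Whitney statistic; a generic sketch (not the authors' specific software):

```python
import numpy as np

def auc(values_pos, values_neg):
    """Empirical ROC AUC for separating two groups of measurements:
    the fraction of (positive, negative) pairs correctly ordered,
    counting ties as half (the Mann-Whitney U statistic, normalized)."""
    pos = np.asarray(values_pos, dtype=float)[:, None]
    neg = np.asarray(values_neg, dtype=float)[None, :]
    greater = (pos > neg).sum()
    ties = (pos == neg).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

An AUC of 1.0 corresponds to perfectly separable attenuation distributions (the AUC>0.95 success criterion above); 0.5 corresponds to chance.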
Spectroscopy with a CdTe-based photon-counting imaging detector (PCD) having charge sharing correction capability
The spectroscopic capabilities of a newly upgraded version of a prototype imaging photon counting detector (PCD) were investigated. The XCounter Actaeon has four acquisition modes with different signal processing, including one mode with a charge sharing correction so that neighboring pixels sharing a detected event are not erroneously counted twice; hence it is designated the Anti-Coincidence Circuit On, or ACC On, mode. Since this CdTe-based direct conversion PCD has 100 μm pixels, such charge sharing may occur frequently at typical medical x-ray energies. Each pixel of this PCD has two scalers and two energy discriminators that enable counting events above each threshold level without instrumentation noise; hence, a spectrum can be obtained by sequentially moving the thresholds of both discriminators. It became evident from the spectra for the various acquisition modes that only those obtained with the charge sharing correction enabled compared favorably with theoretically predicted spectra. After verifying the energy calibration using the mono-energetic emissions from an Am-241 source, spectra at various kVps from a standard medical x-ray generator were obtained. The spectra generated in ACC On mode at 70 kVp and 110 kVp were the closest match to the theoretical spectra generated by SpekCalc. For dual energy applications, ACC On mode, with its charge sharing correction circuitry, would be the best choice among the acquisition modes. Also investigated was the dual energy imaging capability of the Actaeon PCD in ACC On mode to separate aluminum and iodine while imaging an artery stenosis phantom.
Spectral CT reconstruction with an explicit photon-counting detector model: a one-step approach
Pierre-Antoine Rodesch, V. Rebuffel, C. Fournier, et al.
Recent developments in energy-discriminating Photon-Counting Detectors (PCDs) open new horizons for spectral CT. With PCDs, new reconstruction methods take advantage of the spectral information measured through energy measurement bins. However, PCDs have serious spectral distortion issues due to charge sharing, fluorescence escape, and pulse pileup. Spectral CT with PCDs can be decomposed into two problems: a noisy geometric inversion problem (as in standard CT) and an additional PCD spectral degradation problem. The aim of the present study is to introduce a reconstruction method that solves both problems simultaneously: a "one-step" approach. An explicit linear detector model is used, characterized by a Detector Response Matrix. The algorithm reconstructs two basis material maps from energy-window transmission data. The results show that the simultaneous inversion of both problems performs well on simulated data. For comparison, we also perform a standard "two-step" approach: an advanced polynomial decomposition of the measured sinograms combined with filtered-backprojection reconstruction. The results demonstrate the potential of this method for medical imaging and for non-destructive testing in industry.
Development of virtual monochromatic imaging technique with spectral CT based on a photon-counting detector
Seungwan Lee, Sooncheol Kang, Jisoo Eom, et al.
Simulation study of scatter correction in photon counting CT
Shinichi Kojima, Kazuma Yokoi, Isao Takahashi
To understand how signals are affected by radiation scattered by the subject in a photon counting CT system, the characteristics of the scattered photons were evaluated using the Monte Carlo simulation code GEANT (GEometry ANd Tracking). Cylindrical water phantoms with diameters of 165 – 380 mm were examined, and the X-ray energy from 20 to 120 keV was divided into 5 ranges of 20 keV width in the detector. For the 380 mm diameter phantom, the ratio of signals scattered in the phantom to the total X-rays incident on the detector was more than 50% in the 20 - 40 keV range, while it remained at 2% in the 100 - 120 keV range. The profiles of this ratio were approximated by a quadratic function αx² + β in each energy range, where x is the longitudinal detector position. It was found that α and β can be described as functions of the energy range and phantom size.
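Fitting the αx² + β model to a measured scatter-ratio profile is a two-parameter linear least-squares problem (there is no linear term in x); a sketch with illustrative data:

```python
import numpy as np

def fit_scatter_profile(x, ratio):
    """Least-squares fit of ratio(x) = alpha * x**2 + beta, the
    two-parameter quadratic model used for the scatter-ratio profiles.
    Returns (alpha, beta)."""
    x = np.asarray(x, dtype=float)
    design = np.stack([x ** 2, np.ones_like(x)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(design, ratio, rcond=None)
    return alpha, beta
```

Tabulating (α, β) per energy range and phantom size, as the abstract describes, then gives a compact parametric scatter model.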
Poster Session: Breast Imaging
icon_mobile_dropdown
Characterization of adipose compartments in mastectomy CT images
Anthropomorphic software breast phantoms are generated by simulating breast anatomy, and Virtual Clinical Trial (VCT) tools built on such phantoms are developed for evaluating novel imaging modalities. Simulating breast anatomical structures requires informed selection of parameters, which is crucial for simulation realism. Our goal is to optimize parameter selection based on analysis of clinical images.

Adipose compartments defined by Cooper’s ligaments significantly contribute to breast image texture (parenchymal pattern) which affects image interpretation and lesion detection. We have investigated the distribution and orientation of compartments segmented from CT images of a mastectomy specimen. Ellipsoidal fitting was applied to 205 segmented compartments, by matching the moments of inertia. The goodness-of-fit was measured by calculating Dice coefficients. Compartment size, shape, and orientation were characterized by estimating the volume, axis ratio, and Euler’s angles of fitted ellipsoids. Potential correlations between estimated parameters were tested.

We found that the adipose compartments are well approximated by ellipsoids (average Dice coefficient of 0.79). The compartment size is correlated with the barycenter-chest wall distance (r=0.235, p-value<0.001). The goodness-of-fit to ellipsoids is correlated with the compartment shape (r=0.344, p-value<0.001). The shape is also correlated with the barycenter coordinates. The compartment orientation is correlated with compartment size (Euler angle α: r=0.188, p-value=0.007; angle β: r=0.156, p-value=0.025) and with the barycenter-chest wall distance (r=0.159, p-value=0.023). This characterization of adipose compartments and the observed correlations could help improve the realism of simulated breast anatomy.
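The goodness-of-fit measure used above can be sketched as a Dice coefficient between a segmented compartment mask and its fitted-ellipsoid mask; the masks below are synthetic toys, not derived from the CT data:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary volumes."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: an analytic ellipsoid vs. a slightly shifted copy,
# standing in for a segmented compartment and its fitted ellipsoid.
z, y, x = np.mgrid[-16:17, -16:17, -16:17]
ellipsoid = (x / 12.0)**2 + (y / 8.0)**2 + (z / 6.0)**2 <= 1.0
segmented = np.roll(ellipsoid, 2, axis=2)
print(round(dice(ellipsoid, segmented), 3))
```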
Binary implementation of fractal Perlin noise to simulate fibroglandular breast tissue
Magnus Dustler, Hannie Förnvik, Kristina Lång
Software breast phantoms are important in many applications within the field of breast imaging and mammography. This paper describes an improved method of using a previously employed in-house fractal Perlin noise algorithm to create binary software breast phantoms. The Perlin noise algorithm creates smoothly varying structures of a frequency with a set band limit. By combining a range of frequencies (octaves) of noise, more complex structures are generated. Previously, visually realistic appearances were achieved with continuous noise values, but these do not adequately represent the breast as radiologically consisting of two types of tissue – fibroglandular and adipose. A binary implementation with a similarly realistic appearance would therefore be preferable. A library of noise volumes with continuous values between 0 and 1 was generated. A range of threshold values, also between 0 and 1, was applied to these noise volumes, creating binary volumes of different appearance: high threshold values result in a fine network of strands, and low values in nebulous clusters of tissue. These building blocks were then combined into composite volumes and a new threshold applied to make them binary. This created complex binary volumes with a more realistic appearance than earlier implementations of the algorithm. By using different combinations of threshold values, a library of pre-generated building blocks can be used to create an arbitrary number of software breast tissue volumes with the desired appearance and density.
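The octave-combination and thresholding steps can be sketched as below. Band-limited Gaussian noise stands in here for true Perlin noise, and the frequencies, octave weights, and threshold value are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def octave(shape, freq):
    """Smooth band-limited noise of a given characteristic frequency
    (a stand-in for one octave of Perlin noise)."""
    white = np.fft.fft2(rng.standard_normal(shape))
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    r = np.hypot(fy, fx)
    band = np.exp(-((r - freq) ** 2) / (2 * (freq / 4.0) ** 2))
    return np.real(np.fft.ifft2(white * band))

shape = (128, 128)
# Combine octaves with halving amplitude and doubling frequency:
field = sum(0.5**i * octave(shape, 0.02 * 2**i) for i in range(4))
field = (field - field.min()) / (field.max() - field.min())   # map to [0, 1]

# High thresholds yield a fine network of strands; low thresholds yield
# nebulous clusters, as described in the abstract.
binary = field > 0.6
```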
OpenVCT: a GPU-accelerated virtual clinical trial pipeline for mammography and digital breast tomosynthesis
Bruno Barufaldi, David Higginbotham, Predrag R. Bakic, et al.
Virtual clinical trials (VCTs) have a critical role in preclinical testing of imaging systems. A VCT pipeline has been developed to model human body anatomy, image acquisition systems, display and processing, and image analysis and interpretation. VCTs require the execution of multiple computer simulations in a reasonable time. This study presents the OpenVCT Framework, consisting of graphical software to design a sequence of processing steps for the VCT pipeline; management software that coordinates the pipeline execution and manipulates and retrieves phantoms and images using a relational database; and a server that executes the individual steps of the virtual patient accrual process using GPU-optimized software. The framework is modular and supports various data types, algorithms, and modalities. It can be used to conduct massive simulations, and several hundred imaging studies can be simulated per day on a single workstation. On average, we can simulate a Tomo Combo (DM + DBT) study using anthropomorphic breast phantoms in less than 9 minutes (voxel size = (100 μm)³, volume = 700 mL). Tomo Combo images for an entire virtual population can be simulated in less than a week. System performance can be further accelerated using phantoms with larger voxels. The VCT pipeline can also be accelerated by using multiple GPUs (e.g., using SLI mode or GPU clusters).
Analysis of volume overestimation artifacts in the breast outline segmentation in tomosynthesis
Raymond J. Acciavatti, Alejandro Rodríguez-Ruiz, Trevor L. Vent, et al.
In digital breast tomosynthesis (DBT), the reconstruction is calculated from x-ray projection images acquired over a small range of angles. One step in the reconstruction process is to identify the pixels that fall outside the shadow of the breast, to segment the breast from the background (air). In each projection, rays are back-projected from these pixels to the focal spot. All voxels along these rays are identified as air. By combining these results over all projections, a breast outline can be determined for the reconstruction. This paper quantifies the accuracy of this breast segmentation strategy in DBT. In this study, a physical phantom modeling a breast under compression was analyzed with a prototype next-generation tomosynthesis (NGT) system described in previous work. Multiple wires were wrapped around the phantom. Since the wires are thin and high contrast, their exact location can be determined from the reconstruction. Breast parenchyma was portrayed outside the outline defined by the wires. Specifically, the size of the phantom was overestimated along the posteroanterior (PA) direction; i.e., perpendicular to the plane of conventional source motion. To analyze how the acquisition geometry affects the accuracy of the breast outline segmentation, a computational phantom was also simulated. The simulation identified two ways to improve the segmentation accuracy; either by increasing the angular range of source motion laterally or by increasing the range in the PA direction. The latter approach is a unique feature of the NGT design; the advantage of this approach was validated with our prototype system.
Simulation of breast compression using a new biomechanical model
Anna Mîra, Yohan Payan, Ann-Katherine Carton, et al.
Mammography is currently the primary imaging modality for breast cancer screening and plays an important role in cancer diagnostics. A standard mammographic image acquisition always includes compression of the breast prior to x-ray exposure. The breast is compressed between two plates (the image receptor and the compression paddle) until a nearly uniform breast thickness is obtained. The breast flattening improves diagnostic image quality [1] and reduces the absorbed dose [2]. However, this technique can also be a source of discomfort and might deter some women from attending breast screening by mammography [3,4]. Therefore, characterization of the pain perceived during breast compression is of potential interest for comparing different compression approaches. The aim of this work is to develop simulation tools enabling the characterization of existing breast compression techniques in terms of patient comfort, dose delivered to the patient, and resulting image quality. A 3D biomechanical model of the breast was developed, providing physics-based predictions of tissue motion and internal stress and strain intensity. The internal stress and strain intensity are assumed to be directly correlated with patient discomfort. The resulting compressed breast model is integrated in an image simulation framework to assess both image quality and average glandular dose. We present the results of compression simulations on two breast geometries, under different compression paddles (flex and rigid).
Multi-layer dual-energy detectors with fiber optic faceplate
Sandwich-like multilayer detectors can measure dual-energy images in a single x-ray exposure, and the resulting images are free from motion artifacts. In phosphor-coupled, photodiode-based multilayer detectors, direct x-ray interaction within the front photodiode layer can be a significant noise source. In this study, we propose to use a fiber-optic faceplate (FOP) between the front phosphor and photodiode layers, instead of the intermediate metal filter between the front and rear detector layers. This design exploits the fact that the FOP can reduce the probability of direct interaction of x-ray photons with the front photodiode, as well as prevent x-ray photons with lower energies from reaching the rear detector layer. We develop a cascaded-systems model to describe the signal and noise characteristics of multilayer detector designs with the FOP. With the developed model, we investigate the imaging performance of the proposed detector designs for various FOP thicknesses in comparison with experimental measurements. The cascaded-systems analysis and demonstration dual-energy images of a postmortem mouse show that the proposed design is feasible for dual-energy imaging.
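A cascaded-systems model of this general kind propagates the mean and variance of the quantum signal through serial gain stages (the Burgess variance theorem). The stage gains and input fluence below are assumed values for illustration, not the detector's actual parameters:

```python
# Burgess variance theorem for one stochastic gain stage:
#   m_out   = g * m_in
#   var_out = g**2 * var_in + var_g * m_in
def gain_stage(m_in, var_in, g, var_g):
    return g * m_in, g**2 * var_in + var_g * m_in

# Example chain: Poisson x-ray input (var = mean), a Poisson quantum
# gain stage (var_g = g, e.g. optical photons per x-ray), then a
# binomial selection stage (coupling efficiency p, var_g = p*(1-p)).
m, v = 1000.0, 1000.0                            # incident quanta (assumed)
m, v = gain_stage(m, v, g=50.0, var_g=50.0)      # phosphor conversion gain
m, v = gain_stage(m, v, g=0.6, var_g=0.6 * 0.4)  # optical coupling
print(m, v)
```

Chaining such stages for each layer, with the FOP modeled as an extra selection stage, is the essence of the signal and noise analysis described above.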
Improvement of image quality and density accuracy of breast peripheral area in mammography
During mammographic image acquisition, the inner regions of the breast are relatively thicker and denser than the peripheral areas, which can lead to overexposure of the periphery. Some images show low visibility of tissue structures in the peripheral areas due to this intensity change, which negatively affects breast cancer detection. To improve image quality, we have proposed a pre-processing technique based on the distance transform to enhance the visibility of peripheral areas. The distance transform calculates the distance from each zero pixel to the nearest nonzero pixel in a binary image. For each pixel, according to its distance to the skin line, the intensity is iteratively corrected by multiplying by a propagation ratio. To evaluate the quality of the processed images, texture features were extracted using gray-level co-occurrence matrices (GLCM), and breast density was quantitatively calculated. According to the results, the structure of breast tissues in the overexposed peripheral areas was well observed. The processed images showed more complexity and improved contrast, while homogeneity remained similar to that of the original images. The pixel values of the peripheral areas were normalized without losing information and weighted to reduce the intensity variation. In this study, the distance-transform-based pre-processing technique was used to overcome the problem of overexposed peripheral areas in breast images. The results demonstrate that appropriate pre-processing techniques are useful for improving image quality and the accuracy of density measurement.
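The distance-transform step can be sketched as below. The breast mask, intensities, and the form and direction of the weighting are assumptions for illustration; the paper's iterative propagation-ratio scheme is more elaborate:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy "breast" image: uniform interior intensity inside a circular mask.
img = np.full((64, 64), 0.2)
yy, xx = np.mgrid[:64, :64]
breast = (yy - 32) ** 2 + (xx - 32) ** 2 < 28 ** 2   # toy breast mask
img[~breast] = 0.0

# Distance of each breast pixel to the skin line (background), in pixels.
dist = distance_transform_edt(breast)

# Assumed correction: a weight that decays with distance from the skin
# line, so only the thin periphery is rescaled (hypothetical form).
ramp = 1.0 + 0.5 * np.exp(-dist / 5.0)
corrected = img * ramp * breast
```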
Validation of noise estimation for a clinical contrast-to-noise ratio for digital mammographic imaging
Mammographic image quality is important to monitor, to maximize diagnostic performance while minimizing patient exposure to ionizing radiation. Phantom imaging for quality control permits practical monitoring of signal and noise and optimization of dose via the contrast-to-noise ratio (CNR). However, it remains a challenge to directly and objectively evaluate CNR in clinical images due to subject variability. A novel clinical image CNR metric has been developed that derives an estimate of system-dependent image noise and references contrast to tissue composition. The present work uses phantom images to validate the noise estimates and to demonstrate sensitivity to imaging conditions. Images of 1 cm adipose-equivalent blocks with 2, 3, and 5 cm 50/50 swirl phantoms and uniform 50/50 blocks were acquired using AEC-selected parameters, and at 0.33 and 0.5 of the AEC-selected mAs at 6 cm. Digital mammograms (DM) were acquired on a GE Essential with and without FineView processing, and in conventional and digital breast tomosynthesis (DBT) views on a Hologic Selenia Dimensions. The CNR was computed using contrast between a 0.4 mm CaCO3 speck in a target slab and adjacent background signal, and noise derived from paired raw and subtracted swirl phantom images. Swirl phantom CNR was estimated to within ±10% of uniform image CNR for GE and Hologic DM, and ±3% for Hologic DBT, and showed good sensitivity to acquisition technique. These results demonstrate promise for objective and efficient image quality evaluation from patient images, using noise estimates that effectively avoid signal related to tissue structure.
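The paired-image noise estimate underlying this kind of CNR metric can be sketched as follows; the image size, noise level, and ROI means are assumed toy values. Subtracting two acquisitions of the same phantom cancels the fixed structure, so the standard deviation of the difference divided by √2 estimates the per-image noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed phantom/tissue structure plus independent noise in each exposure.
structure = np.tile(np.linspace(100, 200, 64), (64, 1))
sigma = 5.0                                   # assumed noise level
img1 = structure + rng.normal(0, sigma, structure.shape)
img2 = structure + rng.normal(0, sigma, structure.shape)

# Structure cancels in the difference; sqrt(2) rescales to one image.
noise = np.std(img1 - img2) / np.sqrt(2)

signal, background = 180.0, 150.0             # assumed ROI means
cnr = (signal - background) / noise
print(round(noise, 2), round(cnr, 2))
```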
Validation and application of a new image reconstruction software toolbox (TIGRE) for breast cone-beam computed tomography
Shada Kazemi, Oliver Diaz, Premkumar Elangovan, et al.
A new image reconstruction software toolbox, TIGRE (Tomographic Iterative GPU-based Reconstruction), has been evaluated for use in breast cone-beam computed tomography (CBCT) studies. TIGRE has been compared to a standard Matlab-based implementation previously validated for X-ray mammography imaging. In particular, the image projection generator algorithm in the TIGRE toolbox, which is based on the Siddon ray-tracing algorithm, has been studied. The quantitative evaluation, in terms of histogram and profile analyses, illustrates that TIGRE's image projections show good agreement with our in-house validated X-ray ray-tracing tool. In addition, because TIGRE uses GPU-based calculations, it produces projections approximately 90 times faster than CPU-based algorithms, depending on the choice of GPU. The breast CT images have also been reconstructed and evaluated using the two projection tools. The analyses show that the projections produced by TIGRE and by our in-house Siddon algorithm yield systematically similar results. To further investigate the differences between the two algorithms, the reconstructed images have been compared to each other. For entire 3D reconstructed breast volumes, the correlation coefficient between the two methods is 0.99 ± 3.64×10⁻¹² (mean ± standard deviation), the peak signal-to-noise ratio is 117.17, the mean square error is 1.92×10⁻¹², and the similarity index is 1.00.
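The volume-comparison metrics quoted above (correlation, MSE, PSNR) can be sketched in a few lines; the data here are synthetic, not the reconstructed volumes:

```python
import numpy as np

def compare(a, b, data_range=None):
    """Pearson correlation, MSE, and PSNR between two volumes."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    mse = np.mean((a - b) ** 2)
    r = np.corrcoef(a, b)[0, 1]
    span = data_range if data_range is not None else b.max() - b.min()
    psnr = 10 * np.log10(span**2 / mse) if mse > 0 else np.inf
    return r, mse, psnr

# Toy example: reference volume vs. a copy with a small constant offset.
ref = np.linspace(0, 1, 1000)
recon = ref + 1e-3
r, mse, psnr = compare(ref, recon, data_range=1.0)
print(r, mse, psnr)
```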
Breast density assessment: image feature extraction and density classification with machine intelligence
Biao Chen, Chris Ruth, Yiheng Zhang, et al.
Breast composition density has been identified as a risk factor for developing breast cancer and an indicator of lesion diagnostic obstruction due to the masking effect in x-ray mammography images. Volumetric density measurement evaluates fibro-glandular volume, breast volume, and breast volume density, measures that have potential advantages over area density measurement in risk assessment. Compared to traditional x-ray-absorption-based areal and volumetric tissue density assessments, image-feature-based density classification approaches emulate the clinical density evaluation process performed by radiologists instead of using indirect information (e.g., percentage density values). We have modeled breast density assessment as a machine intelligence task which automatically extracts image features and dynamically improves density classification performance in a clinical environment: (1) a bank of deep learning networks is explored to automatically extract image features, emulating the radiologists' image review process; (2) the pretrained networks are retrained with clinical 2D digital mammography images ("for processing" and "for presentation" DICOM images) using transfer learning; (3) a deep reinforcement network is incorporated through a human-machine gaming process. The data preprocessing and trained models/processes are described, and the classification inference has been evaluated against the predicted breast density category values of the clinical validation 2D digital mammographic images in terms of statistical measures. The experimental results show that the method is promising for breast density assessment.
Methodology for the objective assessment of lesion detection performance with breast tomosynthesis and digital mammography using a physical anthropomorphic phantom
Lynda C. Ikejimba, Telon Yan, Katherine Kemp, et al.
Realistic breast phantoms serve as important tools when evaluating full field digital mammography (FFDM) and digital breast tomosynthesis (DBT) system modifications. Current breast phantoms contain either unrealistic features or uniform backgrounds. The purpose of this work was to introduce a novel, task-based methodology for evaluating FFDM and DBT systems using an anthropomorphic inkjet-printed 3D phantom with clinically relevant signals. The methodology consists of multiple physical components: an anthropomorphic breast phantom, microcalcifications made of two types of material, and masses. A 4 cm compressed-thickness breast phantom was first modeled analytically, then realized in a slice-by-slice fashion using inkjet printing with iohexol-doped ink. The microcalcifications (MCs) were made by arranging individual specks of varying sizes into regular patterns. Two types of MCs were used, ranging in diameter from 150 μm to 260 μm: one made from calcium hydroxyapatite (HA) and another from soda lime glass microspheres. Lastly, realistically shaped masses were created using ink doped with potassium iodide. The phantom was imaged on two commercially available FFDM/DBT systems, the Hologic Selenia Dimensions and the GE Senographe Essential. A typical mammographic beam was used (according to the automatic exposure control for each commercial system), and a similar average glandular dose was maintained across the systems. A pilot study consisting of a four-alternative forced-choice (4AFC) analysis with human observers was performed on the FFDM and DBT acquisitions. The linear attenuation coefficients of the microcalcification models were measured to be similar to reference values. A custom Matlab program was created to extract ROI images from images of the phantom, each containing a signal, in preparation for use with 4AFC software. A pilot 4AFC study showed that the visibility of the microcalcifications ranged from easy to difficult, and the results informed the final reader study.
An anthropomorphic breast phantom was created using inexpensive, easily available materials. The task-based assessment was performed on clinical FFDM and DBT systems. This promising phantom generation methodology can be used to objectively evaluate task performance with FFDM and DBT breast imaging systems.
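A 4AFC reader study of this kind is typically summarized as a proportion correct against the 25% chance level for four alternatives; a minimal sketch with assumed trial counts (not the study's actual data):

```python
import math

def four_afc_summary(correct, trials):
    """Proportion correct with a normal-approximation 95% CI.
    Chance performance for 4AFC is 0.25."""
    p = correct / trials
    se = math.sqrt(p * (1 - p) / trials)
    return p, (p - 1.96 * se, p + 1.96 * se)

# Hypothetical reader result: 78 correct out of 100 trials.
p, (lo, hi) = four_afc_summary(correct=78, trials=100)
print(round(p, 2), round(lo, 3), round(hi, 3))
```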
Laboratory designs and validations of a glandularity-adjustable dual-purpose breast tissue phantom
The objective of this study was to develop and evaluate a breast-tissue-equivalent phantom that can be used for dual purposes: conventional x-ray imaging and ultrasonography. The phantom design was based on the prototype of an intralipid-gel soft-tissue-mimicking phantom used for laser photothermal therapy. The glandularities and densities of the phantom can be adjusted by modifying the ratio of intralipid to other ingredients and by adding fiber powders. An adipose tissue phantom and a glandular tissue phantom were first developed, and phantoms of different glandularities were then produced by mixing different weight proportions of the adipose and glandular materials. To validate the properties of the phantom for x-ray imaging applications, three methods were employed: (1) the elemental compositions of the phantoms were estimated through calculation; (2) the x-ray mass attenuation coefficients of the phantom were calculated from the elemental compositions; (3) the x-ray energy deposited in phantoms of different glandularities was simulated using the Geant4 Simulation Toolkit. The results showed close agreement with real breast tissues at the corresponding glandularities. For the ultrasonography application, the elasticity of the phantom was determined by measuring Young's modulus; the value of 39 ± 10 kPa satisfies the requirement for use as an ultrasound imaging phantom. Therefore, the phantoms developed in this study potentially provide a dual-purpose breast-tissue-mimicking solution covering different levels of glandularity.
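Step (2) above relies on the standard mixture rule: the mass attenuation coefficient of a compound or blend is the mass-weighted sum of its components' coefficients. A minimal sketch with placeholder coefficient values (not tabulated data):

```python
# Mixture rule:  (mu/rho)_mix = sum_i w_i * (mu/rho)_i,
# where w_i are mass fractions summing to 1.
def mixture_mu_rho(fractions, mu_rho):
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(w, )  if False else sum(w * m for w, m in zip(fractions, mu_rho))

# Illustrative 50/50 adipose/glandular blend at a single energy; the
# coefficient values below are assumed placeholders, in cm^2/g.
mu = mixture_mu_rho([0.5, 0.5], [0.20, 0.25])
print(mu)
```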
Comparison of three breast imaging techniques using 4-AFC human observation study
Shada Kazemi, Oliver Diaz, Premkumar Elangovan, et al.
X-ray mammography is the gold standard for detecting malignancies in a breast cancer screening context. Limited-angle tomosynthesis has now started to be used in screening due to its ability to reduce overlying image clutter, while breast CT can potentially remove all overlying clutter through the use of tomographic image reconstruction.

The aim of this work is to investigate whether breast cone-beam computed tomography (CBCT) can provide better lesion detectability compared to 2D mammography or digital breast tomosynthesis (DBT).

Lesions with diameters of 4 mm, 5 mm, and 6 mm have been inserted in a simulated breast phantom. In total, 180 images are analysed, of which 90 contain lesions (equally divided between the 4 mm, 5 mm, and 6 mm diameters) and the rest represent normal breast tissue. The TIGRE (Tomographic Iterative GPU-based Reconstruction) toolbox has been used to simulate 360 projections and to reconstruct the images using the Feldkamp, Davis, and Kress (FDK) algorithm. Scattered radiation and Poisson noise have also been added to the projections prior to image reconstruction.

In total, 10 observers, some with and some without experience of mammography images, participated in this preliminary 4AFC study. The analysis shows that the mean minimum detectable lesion size for breast CBCT is 2.96±0.23 mm, with a 95% confidence interval of [2.73, 3.19].
Breast imaging using micro-resolution field emission x-ray system with carbon nanotube emitter
We report the design and fabrication of a carbon nanotube (CNT) based micro-resolution field emission mobile open-type x-ray system for breast imaging. It can be used during breast cancer removal surgery to obtain accurate resection margins in partially resected breast specimens. X-ray images of breast specimens obtained with the proposed system clearly show micro-calcifications.
A proposed new image display method with high contrast-to-noise ratio using energy resolved photon-counting mammography with a CdTe series detector
R. Suzuki, A. Nakajima, M. Sasaki, et al.
In this study, we propose a new image display method to obtain high contrast-to-noise ratio (CNR) using energy resolved photon-counting mammography (ERPCM) with a cadmium telluride (CdTe) series detector manufactured by JOB Corporation. The CdTe series detector can detect high-energy photons with high sensitivity, enabling imaging with high-energy X-rays. Using this detector, it is possible to reduce the dose given to a patient while increasing the CNR. First, the spectrum was divided into three bins, and the corresponding linear attenuation coefficients were calculated from the input and output photon numbers. The absorption vector length (AVL) and average absorption length (AAL) were then calculated from the linear attenuation coefficients and the object thicknesses after beam-hardening correction. We further compared the CNR between ERPCM and general mammography images at constant average glandular dose (AGD). We imaged an acrylic plate (1 mm thick) on an RMI-156 phantom, defined regions of interest (ROIs) on the acrylic plate and background, and calculated the CNR. Our ERPCM generated two types of images: an AVL image and an AAL image. The AMULET Innovality manufactured by FUJIFILM generated an integrated image. The MicroDose SI manufactured by Philips generated a count image with electrical noise removed by the photon-counting technique. The four images, in order of decreasing CNR, were the AAL image, AVL image, MicroDose image, and AMULET image. The proposed method using ERPCM thus generated images with higher CNR than general mammography at constant AGD.
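The per-bin attenuation step can be sketched from Beer-Lambert: for each energy bin, the linear attenuation coefficient follows from the input and output photon numbers as μ = ln(N₀/N)/t. The counts and thickness below are assumed values, not measurements from the paper:

```python
import math

def mu_per_bin(n_in, n_out, thickness_cm):
    """Linear attenuation coefficient per energy bin, mu = ln(N0/N)/t."""
    return [math.log(n0 / n) / thickness_cm for n0, n in zip(n_in, n_out)]

n_in = [1.0e5, 2.0e5, 1.5e5]     # incident counts per bin (assumed)
n_out = [3.0e4, 9.0e4, 9.0e4]    # transmitted counts per bin (assumed)
mus = mu_per_bin(n_in, n_out, thickness_cm=2.0)
print([round(m, 3) for m in mus])
```

The resulting per-bin coefficients are the quantities from which the AVL and AAL images are subsequently derived.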
Developing a unique portable device to non-invasively detect bio-electrochemical characteristics of human tissues
The objective of this study is to develop and test a unique portable device that aims to non-invasively detect bio-electrochemical characteristics of human tissues. For this purpose, we designed and developed a new portable bio-impedance spectroscopy (BIS) system utilizing an active-probe technique to measure bioelectrical features. The BIS system includes an integrated current source and output-voltage detection sensors. Active probes are placed on the skin surface over the targeted organ tissues to directly detect bio-impedance signals. The bio-impedance spectrum was measured by applying electrical currents over a range of frequencies (10 kHz to 3 MHz). The spectrum was then quantitatively analyzed to produce new biomarkers based on the bio-electrochemical characteristics of human tissues. These new bioelectrical markers aim to accurately and reproducibly predict and/or detect human diseases (including cancer). To address the feasibility of this new research technique, we conducted a comprehensive evaluation of the new BIS device, including its calibration techniques and a phantom study. Results showed that the computed bioelectrical marker values change monotonically with tissue composition. We also demonstrated how to compute independent and dependent bioelectrical features for use in machine learning (ML) models that can improve our understanding of disease or cancer risk state. The study suggests that this new device has potential for different applications, including the noninvasive assessment of breast density and the detection of asymmetrical focal areas between two bilateral breasts, which may eventually help more accurately predict breast cancer risk.
Poster Session: Tomosynthesis
icon_mobile_dropdown
The first freely available open source software package for performing 3D image reconstruction for digital breast tomosynthesis
Digital Breast Tomosynthesis (DBT) improves the visibility of cancerous lesions compared to 2D full-field digital mammography (FFDM) by removing the overlap of breast tissues. An integral and computationally demanding part of the DBT image acquisition process is the reconstruction of the volume from projections. To facilitate further research towards improving DBT technology, it is essential to have access to image reconstruction software that generates volumes within a reasonable amount of time. We have developed an open source version of the filtered back-projection (FBP) reconstruction algorithm for DBT using single-threaded C. It extends the C code developed by Leeser et al. for cone-beam computed tomography (CBCT) reconstruction. For each projection angle, the DBT projection view was interpolated to create an estimate of the corresponding CT projection view for that angle. The estimated CT projection views were then filtered and backprojected to generate the DBT volume. We tested our implementation using mathematical and anatomical phantom data and compared the results with a previously verified MATLAB implementation. We observed negligible relative differences between the DBT reconstructions produced by the two methods, with a considerable speed increase (up to 9 times faster) over the MATLAB code.
In-plane MTF measurement using sphere phantoms for step-and-shoot mode and continuous mode digital tomosynthesis systems
The in-plane modulation transfer function (MTF) has been widely used as a quantitative metric describing the spatial resolution of an in-plane image from a digital tomosynthesis system. Although the in-plane MTF can be measured using fine wire and edge objects, precise phantom alignment along the measurement direction is a challenging issue. To overcome this limitation, a sphere object has been considered as an alternative phantom because of its spherical symmetry. However, due to the anisotropic resolution of tomosynthesis images, the sphere phantom has not previously been used to measure the in-plane MTF. In our previous work, we proposed an inverse filtering approach to measure the in-plane MTF using sphere phantoms. In this work, we use this inverse filtering approach to measure the in-plane MTF of step-and-shoot mode and continuous mode digital tomosynthesis systems. We generated projection data of point and sphere objects in step-and-shoot mode and continuous mode tomosynthesis systems, and reconstructed them using the FDK algorithm. An in-plane image of the reconstructed point volume was regarded as an ideal in-plane point spread function (PSF). The ideal in-plane MTF was calculated by taking the Fourier transform of the ideal in-plane PSF, and the fx-directional in-plane MTF was used as a reference. To measure the fx-directional in-plane MTF, we divided the Fourier transform of the reconstructed sphere phantom by that of the ideal sphere object, and performed a plane integral along the fz-direction. Estimation errors caused by inverse filtering were corrected by pseudo-inverse filtering and a Laplacian operator. Our results show that the in-plane MTFs of step-and-shoot mode and continuous mode are reliably estimated by the inverse filtering approach.
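The PSF-to-MTF step used above can be sketched in one dimension: the MTF is the magnitude of the Fourier transform of the PSF, normalized to unity at zero frequency. The Gaussian PSF and its width below are assumed for illustration:

```python
import numpy as np

# Toy 1D Gaussian PSF (assumed width), sampled on a pixel grid.
x = np.arange(-32, 32)
sigma = 2.0                          # assumed PSF width in pixels
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                     # unit-area PSF

# MTF: normalized magnitude of the Fourier transform of the PSF.
mtf = np.abs(np.fft.fft(psf))
mtf /= mtf[0]                        # unity at zero frequency
freqs = np.fft.fftfreq(x.size)       # spatial frequency, cycles/pixel
```

For a Gaussian PSF the MTF is itself Gaussian, exp(−2π²σ²f²), which makes this toy case easy to verify.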
Quantitative lung nodule detectability and dose reduction in low-dose chest tomosynthesis
Quantitative imaging analysis has become a focus of medical imaging in recent years. In this study, Fourier-based imaging metrics for task-based quantitative assessment of lung nodules were applied in low-dose chest tomosynthesis. As an alternative to conventional filtered back-projection (FBP), compressed-sensing (CS) image reconstruction has been proposed for dose and artifact reduction. We applied the CS-based low-dose reconstruction scheme to a sparsely sampled projection dataset and compared the lung nodule detectability index (d') between the FBP and CS methods. We used the non-prewhitening (NPW) model observer to estimate in-plane slice detectability in tomosynthesis and theoretically calculated d' from the weighted amounts of local noise, spatial resolution, and the task function in the Fourier domain. We considered spatially varying noise and spatial resolution properties because the iterative reconstruction showed non-stationary characteristics. For the task function, we adopted a simple binary hypothesis-testing model which discriminates the outer and inner regions of the encapsulated lung nodule shape. The results indicated that the local noise power spectrum showed smaller intensities with an increasing number of projections, whereas the local transfer function appeared similar between the FBP and CS schemes. The task functions for the same nodule size showed the same pattern with different intensity, whereas the task functions for different nodule sizes presented different shapes due to the different object functions. The theoretically calculated d' values showed that the CS scheme provided higher values than the FBP method, by factors of 2.64-3.47 and 2.50-3.10 for the two lung nodules across all projection views. This demonstrates that the low-dose CS algorithm provides lung nodule images comparable to FBP at doses reduced by 28.8% up to 37.9% for the same projection views.
Moreover, we observed that the CS method implemented with small number of projections provided similar or somewhat higher d’ values compared to the FBP method with large number of projections. In conclusion, the CS scheme may present a potential dose reduction for lung nodule detection in the chest tomosynthesis by showing higher d’ in comparison to the conventional FBP method.
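The Fourier-domain NPW detectability calculation described in this abstract can be sketched as follows. This is a minimal illustration assuming discretized 2D arrays for the task function W, the local transfer function, and the local NPS on a common frequency grid; the function name and discrete quadrature are our own assumptions, not the authors' implementation.

```python
import numpy as np

def npw_detectability(task, ttf, nps, df):
    """NPW model-observer detectability index d' in the Fourier domain.

    task : |W(u,v)|, the task function (Fourier transform of the object contrast)
    ttf  : local transfer function on the same frequency grid
    nps  : local noise power spectrum on the same grid
    df   : frequency-bin area du*dv for the discrete quadrature
    """
    signal2 = (np.asarray(task) * np.asarray(ttf)) ** 2
    num = (np.sum(signal2) * df) ** 2           # [integral of W^2 * TTF^2]^2
    den = np.sum(signal2 * np.asarray(nps)) * df  # integral of W^2 * TTF^2 * NPS
    return float(np.sqrt(num / den))
```

Non-stationary behavior, as observed for the iterative reconstruction, can be handled by evaluating d' with the NPS and transfer function measured locally around the nodule.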
Development of respiratory-correlated 4D digital tomosynthesis imaging technique for image-guided radiation therapy
Recently, image-guided radiation therapy (IGRT) with cone-beam computed tomography (CBCT) has been used to precisely identify the location of the target lesion. However, treatment accuracy for respiration-sensitive regions is still low, and the imaging dose is relatively high. These issues can be addressed by respiratory-correlated 4D IGRT with digital tomosynthesis (DT). The purpose of this study was to develop a 4D DT imaging technique for IGRT and to compare image quality between 3D DT and 4D DT. A DT model was based on a linear accelerator (LINAC) system. To simulate lesion motion, a sphere defined in a 3D phantom was moved with an irregular pattern. To simulate 4D DT imaging, projections were obtained separately for 3 phases, sorted according to the position of the sphere. We measured profiles, normalized root-mean-square error (NRMSE), noise, contrast-to-noise ratio (CNR), and figure-of-merit (FOM). The noise of the 4D DT images was on average 0.99 times that of the 3D DT images, and the NRMSEs, CNRs, and FOMs of the 4D DT images were on average 1.03, 1.22, and 4.48 times those of the 3D DT images, respectively. The results showed that the 4D DT imaging technique accurately determined the position of a moving target and improved image quality compared to the 3D DT imaging technique. These benefits will enable high-precision IGRT for respiration-sensitive regions.
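The NRMSE and CNR metrics reported in this abstract can be computed along the following lines; the exact ROI definitions and normalization conventions are not given in the abstract, so the choices below (dynamic-range normalization for NRMSE, background standard deviation for CNR) are illustrative assumptions.

```python
import numpy as np

def nrmse(img, ref):
    """RMS error normalized by the reference dynamic range (one common convention)."""
    img, ref = np.asarray(img, float), np.asarray(ref, float)
    return np.sqrt(np.mean((img - ref) ** 2)) / (ref.max() - ref.min())

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a target ROI and a background ROI."""
    s, b = np.asarray(signal_roi, float), np.asarray(background_roi, float)
    return abs(s.mean() - b.mean()) / b.std()
```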
Development of a next generation tomosynthesis system
Jeffrey E. Eben, Trevor L. Vent, Chloe J. Choi, et al.
A next generation tomosynthesis (NGT) system has been proposed to obtain higher spatial resolution than traditional digital breast tomosynthesis (DBT) by achieving consistent sub-pixel resolution. Resolution and linear acquisition artifacts can be further improved by creating multi-axis x-ray tube acquisition paths. This requires synchronization of the x-ray generator, x-ray detector, and motion controller for an x-ray tube motion path composed of arbitrarily spaced x-ray projection points. We have implemented a state machine, run on an Arduino microcontroller, that synchronizes the system processes through hardware interrupts. The desired x-ray projection points are converted into two-dimensional motion segments that are compiled to the motion controller's memory. The state machine then signals the x-ray tube to move from one acquisition point to another, triggering an x-ray exposure at each point, until every acquisition is made. The effectiveness of this design was tested in terms of procedure speed and image quality metrics. The results show that the average procedure time, over 15 test runs of three different paths, was under 20 seconds, far faster than previous acquisition methods on the NGT system. In conclusion, this study shows that a state machine implementation is viable for fast and accurate acquisition on NGT systems.
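The move/expose cycle described above can be modeled as a simple state machine. The sketch below is a Python analogue of the controller logic under stated assumptions: the state names and point list are hypothetical, and the real Arduino firmware advances states on hardware interrupts rather than a polling loop.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVE = auto()
    EXPOSE = auto()
    DONE = auto()

class AcquisitionController:
    """Toy analogue of the move/expose state machine described above."""

    def __init__(self, points):
        self.points = list(points)   # arbitrarily spaced projection points
        self.index = 0
        self.state = State.IDLE
        self.log = []

    def step(self):
        if self.state is State.IDLE:
            self.state = State.MOVE
        elif self.state is State.MOVE:
            self.log.append(("move", self.points[self.index]))
            self.state = State.EXPOSE
        elif self.state is State.EXPOSE:
            self.log.append(("expose", self.points[self.index]))
            self.index += 1
            self.state = State.DONE if self.index == len(self.points) else State.MOVE
        return self.state

    def run(self):
        while self.state is not State.DONE:
            self.step()
        return self.log
```

Each projection point thus triggers exactly one move and one exposure before the controller advances, which is the synchronization property the hardware design requires.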
Optimization of shutter scan parameters in digital tomosynthesis system
Various dose reduction techniques have been studied in the medical imaging field. We propose shutter scan acquisition for region-of-interest (ROI) imaging to reduce patient exposure dose in a digital tomosynthesis system. The projections obtained by shutter scan acquisition are a combination of truncated and non-truncated projections. In this study, we define the number of truncated projections divided by the number of non-truncated projections as the shutter weighting factor. The shutter scan acquisition parameters were optimized using 5 acquisition sets with different shutter weighting factors (0.16, 0.35, 1.03, 3.05, and 7.1). A prototype CDT system (LISTEM, Korea) and the LUNGMAN phantom (Kyoto Kagaku, Japan) with an 8 mm lung nodule were used. A total of 81 projections were obtained with shutter scan acquisition in each of the 5 sets, with the split varied according to the shutter weighting factor. Image quality was assessed using the contrast-to-noise ratio (CNR). We also calculated a figure of merit (FOM) to determine the optimal acquisition parameters for shutter scan acquisition. The ROI of the reconstructed image with shutter scan acquisition showed enhanced contrast. The highest CNR and FOM values were obtained at a shutter weighting factor of 7.1, corresponding to the acquisition set of 71 truncated and 10 non-truncated projections. In this study, we investigated the effect of the ratio of truncated to non-truncated projections on images reconstructed from shutter scan acquisitions. In addition, the optimal acquisition conditions for shutter scan acquisition were determined by deriving the FOM values. In conclusion, we can suggest optimal shutter scan acquisition parameters for the lesion within the ROI to be diagnosed.
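A minimal sketch of the shutter weighting factor as defined above; the 71/10 split reproducing the reported factor of 7.1 comes directly from the abstract (71 + 10 = 81 projections in total).

```python
def shutter_weighting_factor(n_truncated, n_non_truncated):
    """Number of truncated projections divided by the number of non-truncated ones."""
    return n_truncated / n_non_truncated

# the best-performing set from the abstract: 71 truncated + 10 non-truncated = 81
best = shutter_weighting_factor(71, 10)   # 7.1
```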
A framework for optimization of digital tomosynthesis imaging configurations
Frank Smith, Ying Chen
Digital tomosynthesis is a novel technology that volumetrically reconstructs three-dimensional information from a finite number of low-dose two-dimensional projection images. Most current digital breast tomosynthesis systems use a design in which a single x-ray tube moves along an arc above the object over a certain angular range. Parallel imaging configurations also exist, enabled by new nanotechnology-based multi-beam x-ray sources. In this paper, a framework is described for comparing and optimizing imaging configurations for digital tomosynthesis. The framework is designed to allow flexible comparison and optimization of various imaging configurations, such as parallel imaging, partial iso-centric imaging, and rectangular imaging, with uniform and non-uniform beam distributions, where the imaging parameters of the x-ray tube and detector can be assigned. The proposed framework may assist in the study of digital tomosynthesis and expand its applications to various diagnostic and interventional procedures.
Comparison study of task-based detectability index according to angular distribution in a prototype breast tomosynthesis
Quantitative imaging performance analysis has recently become a focus in medical imaging. It not only provides objective information but can also aid patient diagnosis by supplying optimized system parameters for various imaging tasks. However, previous studies of task-based metrics in breast tomosynthesis usually rely on cascaded system modeling for the generalized noise equivalent quanta. In this study, the authors focused on an experimental calculation of the task-based detectability index (d') in a prototype breast tomosynthesis system for different angular ranges. From the summarized d' values, the authors observed that the highest d' occurred for the angular range of ±10.5° (1.5° angle step) among the cases considered, for detection of a 4.7 mm mass in our prototype breast tomosynthesis system. Our approach can readily be applied to practical breast tomosynthesis wherever quantitative performance analysis of imaging parameters is needed. More imaging tasks with different parameter combinations will be examined in the future toward a generalized optimization of breast tomosynthesis.
Poster Session: Detectors
Dual screen sandwich configurations for digital radiography
Motivated by recent advances in TFT array technology for displays, this study develops a theoretical treatment of dual granular scintillating screens sandwiched around a light detector and applies it to investigate possible improvements in the imaging performance of indirect active-matrix flat-panel imagers (AMFPIs) for x-ray applications when dual intensifying screen configurations are used. Theoretical methods, based on previous studies of granular intensifying screens, are developed and applied to calculate the modulation transfer function (MTF), normalized noise power spectrum (NNPS), Swank factor (As), Lubberts function L(f), and spatial-frequency-dependent detective quantum efficiency (DQE(f)) for a variety of detector configurations in which a pair of screens is sandwiched around a light-sensing array. Single-screen front-illuminated (FI) and back-illuminated (BI) configurations are also included in the analysis. DQE(f) is used as the performance metric to optimize and compare the various configurations. Large improvements in MTF and DQE(f) are found to be possible when the substrate layer between the light-sensing array and the intensifying screen is optically thin. The ratio of the thicknesses of the two screens that optimizes DQE performance is generally asymmetric, with the thinner screen facing the incident flux, and the ratio depends on the x-ray attenuation length in the phosphor material.
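The DQE(f) metric used as the figure of merit above follows the standard frequency-dependent definition; a minimal sketch, assuming MTF and NNPS are sampled on a common spatial-frequency axis and q is the incident x-ray photon fluence.

```python
import numpy as np

def dqe(mtf, nnps, q):
    """Frequency-dependent detective quantum efficiency.

    DQE(f) = MTF(f)^2 / (q * NNPS(f)), with q the incident photon fluence
    (photons per unit area). mtf and nnps share a spatial-frequency grid.
    """
    return np.asarray(mtf, float) ** 2 / (q * np.asarray(nnps, float))
```

For an ideal detector, NNPS(f) = 1/q and MTF(f) = 1, giving DQE(f) = 1 at all frequencies.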
Latest advancements in state-of-the-art aSi-based x-ray flat panel detectors
Thierry Ducourant, Thibaut Wirth, Guillaume Bacher, et al.
Since aSi-based flat panel detectors (FPDs) were introduced in the early 2000s with Trixell's first generation of products, considerable improvements have been made incrementally to the product architectures and core technologies. Today, the 3rd generation of detectors achieves performance that seemed unreachable a decade ago. By combining advanced amorphous silicon (aSi) sensor plate processes, high-absorption and low-ghosting indirectly deposited CsI, and the latest generation of very fast, low-noise readout ICs, detectors can run at over 300 frames per second (fps) in binned mode and 60 fps at full resolution, while keeping electronic noise below 1000 e- and maintaining a detective quantum efficiency (DQE) above 45% at 5 nGy and 1 lp/mm. This paper shows that a consistent product platform has been built around these optimized building blocks and that this platform is now ready for a complete portfolio serving the most demanding applications such as radiography, ultra-low-dose fluoroscopy, and even CT-like 3D imaging.
Increased temporal resolution of amorphous selenium detector using preferential charge sensing approach
Amorphous selenium (a-Se) is a direct-conversion photoconductor capable of very high spatial resolution that can enable early detection of small and subtle lesions. A-Se also offers cost-effective and reliable coupling to large-area readout circuitry. Currently, the highest-performance commercial flat panel detectors used for mammography are based on a-Se technology. However, this inherent spatial resolution has not been leveraged for real-time imaging applications, e.g., micro-angiography for imaging fine brain vessels, which requires spatial resolution approaching 20 lp/mm, achievable with selenium technology. The challenge is that a-Se detectors suffer from memory artifacts such as lag, which limits the frame rate of the x-ray imager. The frame rate reduction is attributed primarily to lag, which manifests itself as an increased dark conductivity after an x-ray exposure. Increased lag degrades the temporal response of the detector and makes the a-Se photoconductor impractical for real-time imaging. Furthermore, the high ionization energy required for electron-hole pair creation in a-Se limits the sensitivity of the detector for a given x-ray dose, making a quantum-noise-limited system a challenge to achieve. In this study, we investigate preferential sensing of the charge carriers with the higher mobility, i.e., holes for a-Se, to improve the temporal response of a-Se detectors for real-time imaging. A new preferential charge sensing detector with a field-shaping internal grid, called the Multi Pixel Proportional Counter, was fabricated and tested under typical clinical usage conditions similar to those of fluoroscopy. The fabricated detector offers high-frame-rate and low-noise imaging through avalanche gain. Conventional a-Se detectors were also fabricated for comparison. Experimental results show that image lag as low as 1% can be achieved with the new internal-grid structure, while the conventional detector exhibits higher lag, around 5%.
A novel radiation imaging detector with proportional charge gain
Using an electric field to partition the selenium layer into a low-field charge drift region and a high-field avalanche gain region was first proposed in 2005(1). Engineering and fabricating such a grid structure on a TFT array has been a challenge. High-dielectric-strength material (up to several hundred volts/μm) is required. Furthermore, it is very difficult to achieve or control a stable and uniform avalanche gain for imaging without excessive noise from the grid structure elevated above the pixel plane: the image charge gain is non-uniform, depending on the distance from the center of the avalanche well. A novel coplanar detector structure is now being tested. All image charges collected on a dielectric pixel surface transfer to the central pixel readout electrode along a converging field, so uniform gain via a stable avalanche process can be achieved. This new structure does not require a conventional TFT platform, and a higher-temperature fabrication process can be used. Imaging charges generated by the x-rays are first directed to a dielectric charge-collection interface surface. During the sequential rolling image readout, the imaging charges in each line are re-directed to orthogonal lines of central readout electrodes by a convergent field with high electric field strength at the rim of each pixel's central electrode. All accumulated image charges must pass through the end point of this converging field and therefore undergo a uniform impact-ionization charge gain. This gain mechanism is similar to that of a proportional counter in radiation detection.
Photon counting performance of amorphous selenium and its dependence on detector structure
Photon counting detectors (PCDs) have the potential to improve x-ray imaging; however, they are still hindered by high production costs and performance limitations. By using amorphous selenium (a-Se), the cost of PCDs can be significantly reduced compared to the crystalline semiconductors currently used, while enabling large-area deposition. To overcome the limitations of low carrier mobility and low charge conversion gain in a-Se, we are developing a novel direct-conversion a-Se field-Shaping multi-Well Avalanche Detector (SWAD). SWAD's multi-well, dual-grid design creates separate non-avalanche interaction (bulk) and avalanche sensing (well) regions, achieving depth-independent avalanche gain. Unipolar time-differential (UTD) charge sensing, combined with tunable avalanche gain in the well region, allows for fast timing and a charge conversion gain comparable to crystalline semiconductors. In the present work we developed a probability-based numerical simulation to model the charge generation, transport, and signal collection of three different a-Se detector configurations, and we systematically show the improvements in energy resolution attributed to UTD charge sensing and avalanche gain. Pulse height spectra (PHS) for each detector structure, exposed to a filtered 241Am source, were simulated and compared against previously published PHS measurements of a conventional a-Se detector. We observed excellent agreement between our simulation of planar a-Se and the measured results. The energy resolution of each generated PHS was estimated by the full-width-at-half-maximum (FWHM) of the primary photopeak. The energy resolution improved significantly, from ~33 keV for the planar a-Se detector to ~7 keV for a SWAD utilizing UTD charge sensing and avalanche gain.
Detector output prediction for CT detector array manufacturing
H. Zuo, Y. Lu, D. Xiang, et al.
The detector panel on a typical CT machine today is made of more than 500 detector boards, nicknamed chiclets. Each chiclet contains a number of detectors (i.e., pixels). In the manufacturing process, the chiclets on the panel go through an iterative test, swap, and test (TST) process until some image quality level is achieved. Currently, this process is largely manual and can take hours to several days to complete. This is inefficient, and the results can also be inconsistent. In this work, we investigate techniques to automate the iterative TST process. Specifically, we develop novel prediction techniques that can be used to simulate the iterative TST process. Our results indicate that deep neural networks produce significantly better results than linear regression in the more difficult prediction scenarios.
Poster Session: Radiography and Fluoroscopy
Developing a database of 3D scattered radiation distributions for a C-arm fluoroscope as a function of exposure parameters and phantom
The purpose of this work is to develop a database of 3D scattered-radiation dose-rate distributions to estimate staff dose by location around a C-arm fluoroscopic system in an interventional procedure room. The primary x-ray beam of a Toshiba Infinix fluoroscopy machine was modeled using the EGSnrc Monte Carlo code, and the scattered radiation distributions were calculated using 5 x 10^9 photons per simulation. These 3D distributions were determined over the volume of the room as a function of various parameters such as the beam kVp and beam filter, the size and shape of the field, the angulation of the C-arm, and the phantom size and shape. Two phantom shapes were used in this study: cylinders and super-ellipses. The results show that the shape of the phantom affects the dose-rate distribution at distances less than 100 cm, with a higher intensity for the super-ellipse. The scatter intensity per entrance air kerma is approximately proportional to the field area and increases with increasing kVp. The scatter changes proportionally with increases in primary entrance air kerma for factors such as pulse rate, mA, and pulse width. This database will allow estimation of the scatter distribution in the procedure room and, when displayed to the staff during a procedure, may facilitate a reduction of occupational dose.
Precision analysis of the noise power spectrum estimate in radiography imaging
Eunae Lee, Dong Sik Kim
The noise power spectrum (NPS) is usually measured to quantitatively evaluate detector noise performance. In this paper, Bartlett's method is employed to estimate the NPS, in which the NPS estimate is the sample mean of periodograms. The precision of the NPS estimate is then derived and shown to equal the inverse of the square root of the sample size; notably, the precision is independent of the NPS values and the spectral resolution. We conducted a Monte Carlo simulation of the precision and compared the precisions for a general radiography detector. We observed the sample-size dependency of the NPS estimate precision through extensive experiments and derived a system uncertainty, which depends on the x-ray radiation, readout circuits, etc. The theoretical and numerical analyses of the NPS estimate precision will be a good guideline for designing high-precision detectors.
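A minimal sketch of Bartlett's method as described above: the NPS estimate is the sample mean of M ROI periodograms, and its relative precision then scales as 1/sqrt(M). The normalization convention (pixel pitch squared over ROI area) is a common choice and is assumed here rather than taken from the paper.

```python
import numpy as np

def nps_bartlett(rois, pitch=1.0):
    """Bartlett estimate of the 2D NPS: the sample mean of ROI periodograms.

    rois  : stack of M noise-only ROIs, shape (M, n0, n1)
    pitch : pixel pitch (mm); NPS units are then signal^2 * mm^2
    """
    rois = np.asarray(rois, dtype=float)
    m, n0, n1 = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove the DC term
    pgrams = np.abs(np.fft.fft2(rois)) ** 2 * pitch ** 2 / (n0 * n1)
    # averaging M periodograms gives a relative precision of ~1/sqrt(M) per bin
    return pgrams.mean(axis=0)
```

For uncorrelated (white) noise of variance sigma^2, the estimate is flat at sigma^2 * pitch^2, which makes the 1/sqrt(M) precision easy to check numerically.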
Oversampling digital radiography imaging based on the 2x2 moving average filter for mammography detectors
The conventional binning process can be described in terms of oversampling radiography imaging and is equivalent to applying a moving average filter followed by downsampling. To improve the detective quantum efficiency (DQE) of the 2 × 2 binning process over conventional binning, a method that applies the moving average filter multiple times is proposed in this paper. For a theoretical analysis of the multiple filtering, an image formation model with noise power spectrum and DQE measurements is first constructed. Monte Carlo simulations are then conducted to observe the filtering performance based on the theoretical analysis. For direct mammography detectors based on a-Se, the DQE values increased by more than 20% at 2 lp/mm compared to the conventional 2 × 2 binning process. Since the multiple application of the moving average filter requires only shift registers instead of complicated floating-point multiplications, the filter can easily be implemented at the hardware controller level and can provide fast data transmission.
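The binning scheme discussed above can be sketched as follows, under the assumption of a valid-region 2 × 2 moving average: `passes=1` corresponds to conventional 2 × 2 binning (moving average then 2× downsampling), and `passes > 1` is the repeated-filtering variant. The division by 4 is realizable as a bit shift, consistent with the shift-register implementation noted in the abstract.

```python
import numpy as np

def moving_average_2x2(img):
    """One pass of a 2x2 moving-average filter (valid region only)."""
    img = np.asarray(img, dtype=float)
    return (img[:-1, :-1] + img[:-1, 1:] + img[1:, :-1] + img[1:, 1:]) / 4.0

def binned(img, passes=1):
    """Apply the 2x2 moving average `passes` times, then downsample by 2.

    passes=1 reproduces conventional 2x2 binning; passes > 1 is the
    repeated-filtering scheme discussed above.
    """
    out = np.asarray(img, dtype=float)
    for _ in range(passes):
        out = moving_average_2x2(out)
    return out[::2, ::2]
```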
Calculation of forward scatter dose distribution at the skin entrance from the patient table for fluoroscopically guided interventions using a pencil beam convolution kernel
The forward-scatter dose distribution generated by the patient table during fluoroscopic interventions and its contribution to the skin dose are studied. The forward-scatter dose distributions to skin generated by a water table-equivalent phantom and by the patient table are calculated using EGSnrc Monte Carlo simulation and Gafchromic film as a function of x-ray field size and beam penetrability. Forward-scatter point spread functions (PSFn) were generated with EGSnrc from a 1×1 mm simulated primary pencil beam incident on the water model and the patient table. The forward-scatter point spread function, normalized to the primary, is convolved over the primary-dose distribution to generate scatter-dose distributions. The utility of the PSFn for calculating the entrance skin dose distribution with DTS (dose tracking system) software is investigated. The forward-scatter distribution calculations were performed for 2.32 mm, 3.10 mm, 3.84 mm, and 4.24 mm Al HVL x-ray beams with 5×5 cm, 9×9 cm, and 13.5×13.5 cm fields for water, and for a 3.1 mm Al HVL beam with a 16.5×16.5 cm field for the patient table. The skin dose is determined with DTS by convolution of the scatter-dose PSFn's and with Gafchromic film under PMMA "patient-simulating" blocks for uniform and shaped x-ray fields. The normalized forward-scatter distribution determined using the convolution method for the water table-equivalent phantom agreed within ±6% with that calculated for the full field using EGSnrc. The normalized forward-scatter dose distribution calculated for the patient table for a 16.5×16.5 cm FOV agreed within ±2.4% with that determined using film. For the homogeneous PMMA phantom, the skin dose calculated using DTS was within ±2% of that measured with film for both uniform and non-uniform x-ray fields. The convolution method provides improved accuracy over using a single forward-scatter value for the entire field and is a faster alternative to full-field Monte Carlo calculations.
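The central step above, convolving the primary-dose distribution with the normalized forward-scatter PSF, can be sketched with a zero-padded FFT convolution. The array layout and centred-PSF cropping below are our assumptions for illustration, not the DTS implementation.

```python
import numpy as np

def scatter_dose(primary, psf_n):
    """Forward-scatter dose: primary-dose map convolved with the normalized PSF.

    primary : 2D primary entrance-dose distribution
    psf_n   : forward-scatter point spread function per unit primary dose,
              assumed centred on its array
    """
    s0 = primary.shape[0] + psf_n.shape[0] - 1
    s1 = primary.shape[1] + psf_n.shape[1] - 1
    spec = np.fft.rfft2(primary, (s0, s1)) * np.fft.rfft2(psf_n, (s0, s1))
    full = np.fft.irfft2(spec, (s0, s1))          # zero-padded linear convolution
    r0, r1 = psf_n.shape[0] // 2, psf_n.shape[1] // 2
    return full[r0:r0 + primary.shape[0], r1:r1 + primary.shape[1]]
```

The total entrance skin dose is then the primary map plus `scatter_dose(primary, psf_n)`.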
Initial investigations of a special high-definition (Hi-Def) zoom capability in a new detector system for neuro-interventional procedures
Real-time visualization of fine details down to 100 μm or less is important in neuro-vascular image-guided interventions. A separate high-resolution detector mounted on a standard flat panel detector (FPD) was previously reported; that device had to be rotated mechanically into position over the FPD for high-resolution imaging. The new detector reported here has a high-definition (Hi-Def) zoom capability built together with the FPD into one unified housing, enabling rapid switching by the operator between Hi-Def and FPD modes. Standard physical metrics comparing the new Hi-Def modes with those of the FPD are reported, demonstrating improved resolution and noise performance at patient doses similar to those used for the FPD. Semi-quantitative subjective studies involving qualitative clinician feedback on images of interventional devices, such as a Pipeline Embolization Device (PED), acquired in both Hi-Def and FPD modes are presented. The PED was deployed in a patient-specific 3D-printed neuro-vascular phantom embedded inside realistic bone with tissue-attenuating material. Field-of-view (FOV), exposure, and magnification were kept constant for the FPD and Hi-Def modes. Static image comparisons of the same view of the PED within the phantom were rated by expert interventionalists, who chose from the following ratings: Similar, Better, or Superior. Generally, the Hi-Def zoomed images were much preferred over the FPD images, indicating the potential of such a Hi-Def feature to improve endovascular procedures and hence outcomes.
Evaluation of methods of displaying the real-time scattered radiation distribution during fluoroscopically guided interventions for staff dose reduction
J. Kilian-Meneghin, Z. Xiong, C. Guo, et al.
2D and 3D scatter-dose display options are evaluated for usefulness and ease of interpretation as real-time feedback to staff, to facilitate changes in individual positioning for dose reduction and to improve staff awareness of the radiation present. Room-sized 3D scatter-dose matrices are obtained using Monte Carlo simulations in EGSnrc. These distributions are superimposed on either a ceiling-view 2D graphic of the patient and table for reference or a 3D augmented reality (AR) display featuring a real-time video feed of the interventional room. A slice of the scatter-dose matrix, at a selectable distance above the floor, is color-coded and superimposed on the computer graphic or AR display. The 3D display obtains depth information from a ceiling-mounted Microsoft Kinect camera, which is equipped with a 1080p visual camera as well as an IR emitter/receiver to generate a depth map of the interventional suite and persons within it. The 3D depth information allows parts of objects above the 2D dose map to pass through the map without being colorized by it, so the height perspective of the dose map is maintained. The 2D and 3D displays incorporate network information from the imaging system to scale the scatter dose with exposure factors and to rotate the distribution to match the gantry. Demonstration images were displayed to neurosurgery interventional staff, and survey responses were collected. The survey results indicated that scatter distribution displays would be desirable and helpful in managing staff dose.
Anti-scatter grid artifact elimination for high-resolution x-ray imaging detectors without a prior scatter distribution profile
Using anti-scatter grids with high-resolution imaging detectors can result in grid-line artifacts, with increasing severity as detector resolution improves. Grid-line mask subtraction can leave residual artifacts caused by residual scatter that penetrates the grid and is not subtracted; by subtracting this residual scatter, the grid artifacts can be minimized. In previous work, an initial residual-scatter estimate was derived by placing lead markers on a test object; however, any change in the object geometry requires a new scatter estimate, making that method impractical during a clinical procedure. In this work, we present a new method to derive the initial scatter estimate and eliminate grid-line artifacts during a procedure. A standard stationary Smit-Roentgen x-ray grid (line density 70 lines/cm, grid ratio 13:1) was used with a high-resolution CMOS detector (Dexela Model 1207, pixel size 75 μm) to image an anthropomorphic head phantom. The initial scatter estimate was derived from the image itself, and the grid artifacts were eliminated using recursive correction estimation; this result was compared to that obtained with the estimate derived from placing lead markers on the phantom. In both cases, the contrast-to-noise ratio (CNR) was improved compared to the original image with grid artifacts. The percentage differences in CNR for three regions between the images corrected with the two estimates were less than 5%. With the new method, no a priori scatter distribution profiles are needed, eliminating the need for libraries of pre-calculated scatter profiles and making the implementation more clinically practical.
Use of high purity aluminum filter with different processing methods in the DQE measurement
Satoshi Yanagita, Masayuki Nishiki
The DQE should be evaluated under well-defined x-ray beam conditions to ensure intercomparison among different facilities. For this purpose, IEC 61267 requires the use of a high-purity (at least 99.9%, or 3N) Al attenuation filter, while IEC 62220-1-1 requires a lower purity of 99.0% (2N), since high-purity metals are prone to various non-uniformities, including unexpected NPS increases at spatial frequencies below 0.32 mm-1. The purpose of this study was to explore the possibility of adopting a high-purity Al filter without NPS degradation in the low-frequency region. To this end, we evaluated several types of high-purity Al filters produced by different processing methods: casting, forging, and rolling. Since the RQA5 beam quality requires a 21 mm thick Al filter, we compared the following 4 types of 5N-purity Al filters with a 21 mm thick 2N5-purity Al filter: A. casting (5N), one 21 mm sheet; B. forging (5N), one 21 mm sheet; C. rolling (5N), three 7 mm sheets; D. rolling (5N), twenty-one 1 mm sheets. The comparison was made in terms of the normalized noise power spectrum (NNPS)-exposure product in order to eliminate the effect of exposure variability. As a result, the thin rolled sheets (D) showed no meaningful difference from the 2N5 Al, whereas the cast and forged sheets showed an observable NPS increase throughout the whole frequency range, suggesting that high-purity thin rolled sheets could be used as a beam-attenuating material without suffering from non-uniformity problems.
Quantitative analysis of age-dependent backscatter factor (BSF) in x-ray examinations with a Monte Carlo method using XCAT phantoms
Su-Jin Park, Jaehyuk Kim, Yoonsuk Huh, et al.
International organizations, including the IAEA, have established diagnostic reference levels (DRLs) for diagnostic x-ray examinations in order to control doses. The DRLs are mostly expressed as an entrance surface dose (ESD) to quantify the dose to patients. However, there are significant uncertainties associated with the measurement of ESD, and most of them result from the backscatter factor (BSF). An accurate BSF can be determined only by Monte Carlo simulation. In addition, because the human body is much more complicated than the commonly used ANSI phantom, the computational anthropomorphic extended cardiac-torso (XCAT) phantom should be used for realistic simulation to obtain an accurate BSF. However, to our knowledge, few BSF values accounting for patient age have previously been reported for x-ray examinations using the XCAT phantom with Monte Carlo methods. The aim of this study was to quantitatively analyze BSFs in various x-ray examinations using the XCAT phantom with Monte Carlo simulation, considering the patient's age. For chest examinations, the BSFs were 1.21, 1.29, and 1.35 for the one-year-old, seven-year-old, and adult protocols, respectively. For the adult patient, the BSFs were 1.35, 1.26, and 1.10 for chest, abdomen, and extremity examinations, respectively. In addition, the ESDs in chest examinations were 36.30 μGy, 52.69 μGy, and 111.80 μGy for the one-year-old, seven-year-old, and adult protocols, respectively. These results demonstrate that an accurate BSF should be chosen for the specific diagnostic examination in order to estimate the ESD properly.
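The relation between ESD, incident air kerma, and BSF used above is simply multiplicative. In the example below, the BSF of 1.35 for the adult chest protocol is taken from the abstract, while the incident air kerma value is a hypothetical input chosen for illustration only.

```python
def entrance_surface_dose(incident_air_kerma, bsf):
    """ESD = incident air kerma at the beam entrance point x backscatter factor."""
    return incident_air_kerma * bsf

# adult chest protocol: BSF = 1.35 (from the abstract);
# a hypothetical incident air kerma of 82.8 uGy gives ESD = 111.78 uGy
esd = entrance_surface_dose(82.8, 1.35)
```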