Proceedings Volume 9033

Medical Imaging 2014: Physics of Medical Imaging

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 11 April 2014
Contents: 33 Sessions, 207 Papers, 0 Presentations
Conference: SPIE Medical Imaging 2014
Volume Number: 9033

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9033
  • Keynote and Cardiac CT
  • CT and Applications
  • Phase Contrast Imaging
  • Algorithms
  • CT Reconstructions
  • Reconstruction
  • Cone Beam CT and Novel Design
  • Tomosynthesis
  • Multi-energy CT
  • Multi-energy Imaging and Detectors
  • New Contrast Mechanisms
  • Dose
  • Phantoms
  • Metrology and System Characterization
  • Performance Evaluation
  • Poster Session: Algorithms and Applications
  • Poster Session: Cone Beam CT
  • Poster Session: Conventional CT
  • Poster Session: CT Reconstruction
  • Poster Session: Multi-energy CT
  • Poster Session: Detectors
  • Poster Session: Dose
  • Poster Session: Mammography
  • Poster Session: New Imaging Concepts
  • Poster Session: Nuclear Medical Imaging
  • Poster Session: Phantoms and Radiation Transport
  • Poster Session: Phase Contrast Imaging
  • Poster Session: Reconstruction
  • Poster Session: System Characterization
  • Poster Session: System Reports
  • Poster Session: Tomosynthesis and Multi-energy Imaging
  • Poster Session: X-ray Imaging
Front Matter: Volume 9033
Front Matter: Volume 9033
This PDF file contains the front matter associated with SPIE Proceedings Volume 9033, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Keynote and Cardiac CT
Simulation evaluation of quantitative myocardial perfusion assessment from cardiac CT
Michael Bindschadler, Dimple Modgil, Kelley R. Branch, et al.
Contrast enhancement on cardiac CT provides valuable information about myocardial perfusion, and methods have been proposed to assess perfusion with static and dynamic acquisitions. There is a lack of knowledge and consensus on the appropriate approach to ensure 1) sufficient diagnostic accuracy for clinical decisions and 2) low radiation doses for patient safety. This work developed a thorough dynamic CT simulation and several accepted blood flow estimation techniques to evaluate the performance of perfusion assessment across a range of acquisition and estimation scenarios. Cardiac CT acquisitions were simulated for a range of flow states (Flow = 0.5, 1, 2, 3 ml/g/min; cardiac output = 3, 5, 8 L/min). CT acquisitions were simulated with a validated CT simulator incorporating polyenergetic data acquisition and realistic x-ray flux levels for dynamic acquisitions with a range of scenarios including 1, 2, 3 sec sampling for 30 sec with 25, 70, 140 mAs. Images were generated using conventional image reconstruction with additional image-based beam hardening correction to account for iodine content. Time attenuation curves were extracted for multiple regions around the myocardium and used to estimate flow. In total, 2,700 independent realizations of dynamic sequences were generated and multiple MBF estimation methods were applied to each of these. Evaluation of quantitative kinetic modeling yielded blood flow estimates with a root mean square error (RMSE) of ~0.6 ml/g/min averaged across multiple scenarios. Semi-quantitative modeling and qualitative static imaging resulted in significantly more error (RMSE = ~1.2 ml/g/min for each). For quantitative methods, dose reduction through reduced temporal sampling or reduced tube current had comparable impact on the MBF estimate fidelity. On average, half dose acquisitions increased the RMSE of estimates by only 18%, suggesting that substantial dose reductions can be employed in the context of quantitative myocardial blood flow estimation. In conclusion, quantitative model-based dynamic cardiac CT perfusion assessment is capable of accurately estimating MBF across a range of cardiac outputs and tissue perfusion states, outperforms comparable static perfusion estimates, and is relatively robust to noise and temporal subsampling.
A combined local and global motion estimation and compensation method for cardiac CT
Qiulin Tang, Beshan Chiang, Akinola Akinyemi, et al.
A new motion estimation and compensation method for cardiac computed tomography (CT) was developed. By combining two motion estimation (ME) approaches, the proposed method estimates the local and global cardiac motion and then performs motion compensated reconstruction. The combined motion estimation method has two parts: one is local motion estimation, which estimates the coronary artery motion using coronary artery tree tracking and registration; the other is global motion estimation, which estimates the motion of the entire heart by image registration. The final cardiac motion is a linear combination of the coronary artery motion and the entire cardiac motion. We use the backproject-then-warp method proposed by Pack et al. to perform motion compensated reconstruction (MCR). The proposed method was evaluated with data from 5 patients, and improvements in the sharpness of both coronary arteries and heart chamber boundaries were obtained.
CT and Applications
Dose reduction assessment in dynamic CT myocardial perfusion imaging in a porcine balloon-induced-ischemia model
Rachid Fahmi, Brendan L. Eck, Mani Vembar, et al.
We investigated the use of an advanced hybrid iterative reconstruction (IR) technique (iDose4, Philips Healthcare) for low dose dynamic myocardial CT perfusion (CTP) imaging. A porcine model was created to mimic coronary stenosis through partial occlusion of the left anterior descending (LAD) artery with a balloon catheter. The severity of LAD occlusion was adjusted with fractional flow reserve (FFR) measurements. Dynamic CT images were acquired at end-systole (45% R-R) using a multi-detector CT (MDCT) scanner. Various corrections were applied to the acquired scans to reduce motion and imaging artifacts. Absolute myocardial blood flow (MBF) was computed with a deconvolution-based approach using singular value decomposition (SVD). We compared a high and a low dose radiation protocol corresponding to two different tube-voltage/tube-current combinations (80 kVp/100 mAs and 120 kVp/150 mAs). The corresponding radiation doses for these protocols are 7.8 mSv and 34.3 mSv, respectively. The images were reconstructed using conventional FBP and three noise-reduction strengths of the IR method, iDose. Flow contrast-to-noise ratio, CNRf, as obtained from MBF maps, was used to quantitatively evaluate the effect of reconstruction on contrast between normal and ischemic myocardial tissue. Preliminary results showed that the use of iDose to reconstruct low dose images provides better or comparable CNRf to that of high dose images reconstructed with FBP, suggesting significant dose savings. CNRf was improved with the three levels of iDose compared to FBP for both protocols. When using the entire 4D dynamic sequence for MBF computation, a 77% dose reduction was achieved, while considering only half the scans (i.e., every other heart cycle) allowed even further dose reduction while maintaining relatively higher CNRf.
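As a rough illustration of the deconvolution step described above, the sketch below recovers MBF as the peak of the impulse residue function obtained by truncated-SVD deconvolution of a tissue time-attenuation curve with an arterial input function. The curve shapes, sampling, truncation threshold, and units are illustrative assumptions, not the authors' data or implementation.

```python
# Minimal sketch of SVD-based deconvolution for myocardial blood flow (MBF).
# `aif` is an arterial input function and `tac` a tissue time-attenuation curve,
# both sampled on a uniform grid with spacing dt (synthetic data below).
import numpy as np

def mbf_svd(aif, tac, dt, sv_threshold=0.1):
    """Estimate flow as the peak of the impulse residue function obtained by
    deconvolving the tissue curve with the arterial input via truncated SVD."""
    n = len(aif)
    # Lower-triangular convolution matrix of the AIF (discrete convolution).
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    A *= dt
    # Truncated-SVD pseudo-inverse: drop small singular values to regularize
    # the ill-conditioned deconvolution.
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > sv_threshold * s.max(), 1.0 / s, 0.0)
    residue = Vt.T @ (s_inv * (U.T @ tac))   # flow-scaled impulse residue function
    return residue.max()                      # MBF estimate (units follow the inputs)

# Toy usage with synthetic, noiseless curves (illustration only).
t = np.arange(0, 30, 1.0)                     # 1 s sampling over 30 s
aif = 100.0 * np.exp(-((t - 8.0) ** 2) / 10.0)
residue_true = 0.02 * np.exp(-t / 10.0)       # toy flow-scaled residue
tac = np.convolve(aif, residue_true)[: len(t)] * 1.0
print(mbf_svd(aif, tac, dt=1.0))
```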
Estimating lesion volume in low-dose chest CT: How low can we go?
Purpose: To examine the potential for dose reduction in chest CT studies where lesion volume is the primary output (e.g. in therapy-monitoring applications). Methods: We added noise to the raw sinogram data from 15 chest exams with lung lesions to simulate a series of reduced-dose scans for each patient. We reconstructed the reduced-dose data on the clinical workstation and imported the resulting image series into our quantitative imaging database for lesion contouring. One reader contoured the lesions (one per patient) at the clinical reference dose (100%) and 8 simulated fractions of the clinical dose (50, 25, 15, 10, 7, 5, 4, and 3%). Dose fractions were hidden from the reader to reduce bias. We compared clinical and reduced-dose volumes in terms of bias error and variability (4x the standard deviation of the percent differences). Results: Averaging over all lesions, the bias error ranged from -0.6% to 10.6%. Variability ranged from 92% at 3% of clinical dose to 54% at 50% of clinical dose. Averaging over only the smaller lesions (<1cm equivalent diameter), bias error ranged from -9.2% to 14.1% and variability ranged from 125% at 3% dose to 33.9% at 50% dose. Conclusions: The reader’s variability decreased with dose, especially for smaller lesions. However, these preliminary results are limited by potential recall bias, a small patient cohort, and an overly-simplified task. Therapy monitoring often involves checking for new lesions, which may influence the reader’s clinical dose threshold for acceptable performance.
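The noise-insertion step described above can be approximated as in the sketch below; the authors' exact tool is not specified, so the incident flux value, the Poisson-only statistics, and the Gaussian approximation of the added noise are all assumptions of this illustration.

```python
# Sketch: simulate a reduced-dose scan by injecting noise into a full-dose sinogram.
import numpy as np

def simulate_low_dose(sinogram, dose_fraction, I0_full=1.0e5, rng=None):
    """sinogram: line integrals (-log attenuation) at the clinical reference dose.
    dose_fraction: e.g. 0.25 for a 25% dose scan.
    Adds Gaussian noise whose variance makes the total match the Poisson level
    expected at the reduced flux I0_full * dose_fraction (electronic noise ignored)."""
    rng = np.random.default_rng() if rng is None else rng
    counts_full = I0_full * np.exp(-sinogram)
    counts_low = dose_fraction * counts_full
    # Extra variance of the log-transformed data needed to reach the low-dose level.
    extra_var = 1.0 / counts_low - 1.0 / counts_full
    return sinogram + rng.normal(0.0, np.sqrt(extra_var))

# Example: degrade a toy sinogram to 10% of the clinical dose.
p = np.full((360, 512), 2.0)           # uniform line integrals of 2.0
p_low = simulate_low_dose(p, 0.10)
print(p_low.std())                      # ~ sqrt(1/(0.1*I0*e^-2) - 1/(I0*e^-2))
```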
A biological phantom for evaluation of CT image reconstruction algorithms
J. Cammin, G. S. K. Fung, E. K. Fishman, et al.
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce “plastic-like”, patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
Impact of norm selections on the performance of four-dimensional cone-beam computed tomography (4DCBCT) using PICCS
Iterative image reconstruction methods have been proposed in computed tomography to address two major challenges: one is to reduce radiation dose while maintaining image quality and the other is to reconstruct diagnostic quality images from angularly sparse projection datasets. A variety of regularization models have been introduced in these iterative image reconstruction methods to incorporate the desired image features. To address the sparse view angle image reconstruction problem in four-dimensional cone-beam CT (4DCBCT), Prior Image Constrained Compressed Sensing (PICCS) was proposed. In 4DCBCT, as well as in other applications of the PICCS algorithm, the PICCS regularization has been formulated using the ℓ1 norm as the means to promote image sparsity. The ℓ1 norm in the objective function is not differentiable and thus may pose challenges in numerical implementations. When the norm deviates from 1.0, the differentiability of the objective function improves; however, the imaging performance may degrade in image reconstruction from sparse datasets. In this paper, we study how the performance of PICCS-4DCBCT changes with norm selection and whether the introduction of a reweighted scheme in relaxed-norm PICCS reconstruction helps improve the imaging performance.
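For reference, a generic statement of the PICCS objective with a relaxed ℓp norm is sketched below; the notation (sparsifying transform Ψ, prior image x_p, system matrix A, data y) is my own, and the exact constrained form used by the authors may differ.

```latex
\min_{x}\;\; \alpha\,\bigl\|\Psi\,(x - x_{\mathrm{p}})\bigr\|_{p}^{p}
\;+\;(1-\alpha)\,\bigl\|\Psi\,x\bigr\|_{p}^{p}
\quad\text{subject to}\quad A x = y,
\qquad 0 \le \alpha \le 1,\;\; p \ge 1 .
```

Setting p = 1 recovers the original ℓ1-based PICCS formulation discussed in the abstract; the norm-selection study concerns the behavior as p moves away from 1.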
3D image-based scatter estimation and correction for multi-detector CT imaging
M. Petersilka, T. Allmendinger, K. Stierstorfer
The aim of this work is to implement and evaluate a 3D image-based approach for the estimation of scattered radiation in multi-detector CT. Based on a reconstructed CT image volume, the scattered radiation contribution is calculated in 3D fan-beam geometry in the framework of an extended point-scatter kernel (PSK) model of scattered radiation. The PSK model calculates elemental scatter contributions by propagating rays from the focal spot across the object to the detector for defined interaction points on a 3D fan-beam grid. Each interaction point in 3D leads to an individual elemental 2D scatter distribution on the detector. The sum of all elemental contributions represents the total scatter intensity distribution on the detector. Our proposed extended PSK depends on the scattering angle (defined by the interaction point and the considered detector channel) and the line integral between the interaction point on a 3D fan-beam ray and the intersection of the same ray with the detector. The PSK comprises single and multiple scattering as well as the angular selectivity characteristics of the anti-scatter grid on the detector. Our point-scatter kernels were obtained from a low-noise Monte-Carlo simulation of water-equivalent spheres with different radii for a particular CT scanner geometry. The model allows obtaining noise-free scatter intensity distribution estimates with a lower computational load compared to Monte-Carlo methods. In this work, we give a description of the algorithm and the proposed PSK. Furthermore, we compare resulting scatter intensity distributions (obtained for numerical phantoms) to Monte-Carlo results.
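A schematic rendering of the point-scatter-kernel summation is given below; the data structures, the placeholder kernel, and the geometry handling are illustrative assumptions rather than the authors' implementation (whose kernels are derived from Monte-Carlo simulations of water spheres).

```python
# Schematic PSK summation: each interaction point contributes an elemental 2D
# scatter distribution on the detector; the total scatter is their sum.
import numpy as np

def estimate_scatter(interaction_points, detector_uv, psk):
    """interaction_points: list of dicts with 'pos', 'ray_dir', 'line_integral'.
    detector_uv: (N, M, 3) array of detector pixel positions.
    psk: callable psk(scatter_angle, line_integral) -> elemental scatter intensity."""
    scatter = np.zeros(detector_uv.shape[:2])
    for pt in interaction_points:
        d = detector_uv - pt["pos"]                              # point -> pixel vectors
        d_norm = d / np.linalg.norm(d, axis=-1, keepdims=True)
        cos_theta = np.clip(d_norm @ pt["ray_dir"], -1.0, 1.0)   # angle w.r.t. primary ray
        scatter += psk(np.arccos(cos_theta), pt["line_integral"])
    return scatter

# Placeholder kernel: forward-peaked, attenuated by the traversed line integral.
toy_psk = lambda theta, L: np.exp(-L) * np.cos(theta) ** 2 * 1e-3

# Tiny usage with two interaction points and an 8x8 detector patch at z = 500 mm.
u, v = np.meshgrid(np.linspace(-100, 100, 8), np.linspace(-100, 100, 8))
det = np.stack([u, v, np.full_like(u, 500.0)], axis=-1)
pts = [
    {"pos": np.array([0.0, 0.0, 0.0]),
     "ray_dir": np.array([0.0, 0.0, 1.0]), "line_integral": 2.0},
    {"pos": np.array([20.0, 0.0, 50.0]),
     "ray_dir": np.array([0.04, 0.0, 1.0]) / np.linalg.norm([0.04, 0.0, 1.0]),
     "line_integral": 1.5},
]
print(estimate_scatter(pts, det, toy_psk).shape)
```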
In-line X-ray phase-contrast lung imaging in situ with a benchtop system
A. B. Garson, E. W. Izaguirre, S. G. Price, et al.
X-ray phase-contrast (XPC) imaging methods are well-suited for lung imaging applications due to the weakly absorbing nature of lung tissue and the strong refractive effects associated with tissue-air interfaces. Until recently, XPC lung imaging had only been accomplished at synchrotron facilities. In this work, we investigate the manifestation of speckle in propagation-based XPC images of mouse lungs acquired in situ by use of a benchtop imager. The key contributions of the work are: a) the demonstration that lung speckle can be observed by use of a benchtop XPC imaging system employing a polychromatic tube-source; and b) a systematic experimental investigation of how the texture of the speckle pattern depends on the parameters of the imaging system. Our analysis consists of image texture characterization based on the statistical properties of pixel intensity distributions. Results show how image texture measures of lung regions are strongly dependent on imaging system parameters associated with XPC sensitivity.
Phase Contrast Imaging
Fast data acquisition method in X-ray differential phase contrast imaging using a new grating design
Grating-based x-ray differential phase contrast imaging (DPCI) often uses a phase stepping procedure that involves sequential grating motion and multiple x-ray exposures to obtain x-ray phase information. Such a data acquisition process breaks the continuous data acquisition into several step-and-shoot data acquisition sequences. Between two neighboring x-ray pulses, the acquisition has to be stopped for the grating to translate into the next phase stepping position. This setup also requires that the grating not be fixed. If the gratings are to be mounted onto a fast-rotating gantry (such as those used in x-ray CT), this translation of the grating would add another potential source of mechanical instability. To accelerate the data acquisition and improve the mechanical stability of DPCI data acquisitions, a new grating design was developed. In this method, one of the gratings used in DPCI is divided into groups of four rows; within each group, the grating structures have a designed offset with respect to their neighboring rows. This design allows the acquired data from any four adjacent detector rows to be combined in order to retrieve the needed x-ray differential phase information from a single x-ray exposure. Both numerical simulations and initial phantom experiments have demonstrated that the new interferometer design can enable DPCI image acquisitions without this well-known overhead in data acquisition time.
Slit-scanning differential phase-contrast mammography: first experimental results
Ewald Roessl, Heiner Daerr, Thomas Koehler, et al.
The demands for a large field-of-view (FOV) and the stringent requirements for a stable acquisition geometry rank among the major obstacles for the translation of grating-based, differential phase-contrast techniques from the laboratory to clinical applications. While for state-of-the-art Full-Field Digital Mammography (FFDM) FOVs of 24 cm x 30 cm are common practice, the specifications for mechanical stability are naturally derived from the detector pixel size, which ranges between 50 and 100 μm. However, in grating-based, phase-contrast imaging, the relative placement of the gratings in the interferometer must be guaranteed to within micrometer precision. In this work we report on first experimental results on a phase-contrast x-ray imaging system based on the Philips MicroDose L30 mammography unit. With the proposed approach we achieve a FOV of about 65 mm x 175 mm by use of the slit-scanning technique. The demand for mechanical stability on a micrometer scale was relaxed by the specific interferometer design, i.e., a rigid, actuator-free mount of the phase grating G1 with respect to the analyzer grating G2 on a common steel frame. The image acquisition and formation processes are described and first phase-contrast images of a test object are presented. A brief discussion of the shortcomings of the current approach is given, including the level of remaining image artifacts and the relatively inefficient usage of the total available x-ray source output.
A multi-channel image reconstruction method for grating-based X-ray phase-contrast computed tomography
Qiaofeng Xu, Alex Sawatzky, Mark A. Anastasio
In this work, we report on the development of an advanced multi-channel (MC) image reconstruction algorithm for grating-based X-ray phase-contrast computed tomography (GB-XPCT). The MC reconstruction method we have developed operates by concurrently, rather than independently as is done conventionally, reconstructing tomographic images of the three object properties (absorption, small-angle scattering, refractive index). By jointly estimating the object properties by use of an appropriately defined penalized weighted least squares (PWLS) estimator, the 2nd order statistical properties of the object property sinograms, including correlations between them, can be fully exploited to improve the variance vs. resolution tradeoff of the reconstructed images as compared to existing methods. Channel-independent regularization strategies are proposed. To solve the MC reconstruction problem, we developed an advanced algorithm based on the proximal point algorithm and the augmented Lagrangian method. By use of experimental and computer-simulation data, we demonstrate that by exploiting inter-channel noise correlations, the MC reconstruction method can improve image quality in GB-XPCT.
Simultaneous implementation of low dose and high sensitivity capabilities in differential phase contrast and dark-field imaging with laboratory x-ray sources
A. Olivo, C. K. Hagen, T. P. Millard, et al.
We present a development of the laboratory-based implementation of edge-illumination (EI) x-ray phase contrast imaging (XPCI) that simultaneously enables low dose and high sensitivity. Lab-based EI-XPCI simplifies the set-up with respect to other methods, as it only requires two optical elements, the large pitch of which relaxes the alignment requirements. Although in the past it was erroneously assumed that this would reduce the sensitivity, we demonstrate quantitatively that this is not the case. We discuss a system where the pre-sample mask open fraction is smaller than 50%, and a large fraction of the created beamlets hits the apertures in the detector mask. This ensures that the majority of photons traversing the sample are detected, i.e., used for image formation, optimizing dose delivery. We show that the sensitivity depends on the dimension of the part of each beamlet hitting the detector apertures, which is optimized in the system design. We also show that the aperture pitch does not influence the sensitivity. Compared to previous implementations, we only reduced the beamlet fraction hitting the absorbing septa on the detector mask, not the fraction falling inside the apertures: the same number of x-rays per second is thus detected, i.e., the dose is reduced, but not at the expense of exposure time. We also present an extension of our phase-retrieval algorithm enabling the extraction of ultra-small-angle scattering by means of only one additional frame, with all three frames acquired within dose limits imposed by e.g. clinical mammography, and easy adaptation to lab-based phase-contrast x-ray microscopy implementations.
Cramér-Rao lower bound in differential phase contrast imaging and its application in the optimization of data acquisition systems
Unlike conventional x-ray absorption imaging, x-ray differential phase contrast imaging (DPCI) uses a phase retrieval algorithm to obtain x-ray phase information from a group of x-ray intensity measurements. As a result, the noise performance of DPCI is expected to differ from that of x-ray absorption imaging. Given the total number of x-ray photons used in imaging, lower noise variance in estimated phase contrast images suggests superior dose efficiency, which is one of the most desirable features in x-ray imaging. When an algorithm is used to retrieve the phase information, it is important to understand what the lowest possible noise variance would be and whether the algorithm used to retrieve the phase information yields the lowest possible noise variance. To address these questions for differential phase contrast imaging, we studied the noise performance of DPC imaging using the Cramér-Rao lower bound (CRLB) from statistical signal estimation theory. Results demonstrated that the noise variances in DPCI images obtained by algorithmic phase retrieval are always higher than the CRLB, which implies a possible sub-optimality of current phase estimation methods. The results also call for the application of statistical signal estimation theory to DPCI in order to further improve its noise performance and dose efficiency.
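For context, the Cramér-Rao lower bound invoked above states that, for any unbiased estimator of a scalar parameter θ with log-likelihood ln L(y; θ),

```latex
\operatorname{var}\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2}\ln L(y;\theta)}{\partial\theta^{2}}\right].
```

In the DPCI setting, θ would be the differential phase (estimated jointly with the absorption and fringe-visibility parameters), and the expectation is taken over the noisy phase-stepping intensity measurements.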
Statistical signal estimation methods in X-ray differential phase contrast imaging
In x-ray differential phase contrast imaging (DPCI), a simple sinusoidal model is used to describe the measured signal intensities. When there is no noise added to the measurements, one is able to estimate the three unknown parameters, absorption contrast, phase-contrast, and small angle scatter, directly from the signal model. However, for x-ray detections, noise is always present and the noise level increases with a decrease in x-ray flux. When noise is involved, an experimental ensemble average is needed to make the above parameter estimation method accurate. Unfortunately, an ensemble average requires a large number of measurements acquired under identical experimental conditions. Such repetition procedures would significantly prolong the data acquisition time and require additional radiation dose to the image object. Therefore, it is desirable to develop a reliable parameter estimation method from a group of single x-ray signal intensity measurements. This paper addresses this concern by proposing a statistical signal estimation method. This method maximizes the joint probability of measurements in a complete phase stepping period and is able to estimate the three unknowns in DPCI simultaneously. Our numerical simulations demonstrate that the statistical signal estimation method outperformed the current blind estimation method by minimizing the noise variance of each parameter. As a result, this method has the potential to further improve the dose efficiency of DPCI.
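The sketch below contrasts the direct (Fourier-based) retrieval of the three parameters from the phase-stepping model I_k = a + b·cos(2πk/K + φ), where a, b, and φ loosely correspond to the absorption, small-angle-scatter, and phase channels, with a maximum-likelihood fit under Poisson noise. It is a minimal illustration of the idea, not the authors' estimator.

```python
# Direct (Fourier) retrieval vs. Poisson maximum-likelihood fit of the
# sinusoidal phase-stepping signal model.
import numpy as np
from scipy.optimize import minimize

def direct_retrieval(I):
    """Closed-form retrieval from one phase-stepping period of K samples."""
    K = len(I)
    c1 = np.fft.rfft(I)[1]                 # first Fourier coefficient
    a = I.mean()                           # absorption channel
    b = 2.0 * np.abs(c1) / K               # fringe amplitude (scatter channel)
    phi = np.angle(c1)                     # differential phase channel
    return a, b, phi

def poisson_ml_retrieval(I, x0):
    """Maximize the Poisson likelihood of the same sinusoidal model."""
    K = len(I)
    k = np.arange(K)
    def negloglik(p):
        a, b, phi = p
        lam = np.clip(a + b * np.cos(2 * np.pi * k / K + phi), 1e-9, None)
        return np.sum(lam - I * np.log(lam))
    return minimize(negloglik, x0, method="Nelder-Mead").x

# Toy usage: one noisy stepping curve with 8 steps.
rng = np.random.default_rng(0)
k = np.arange(8)
truth = 1000 + 300 * np.cos(2 * np.pi * k / 8 + 0.7)
I = rng.poisson(truth).astype(float)
print(direct_retrieval(I))
print(poisson_ml_retrieval(I, x0=direct_retrieval(I)))
```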
Depth resolution properties of in-line X-ray phase-contrast tomosynthesis
In-line x-ray phase-contrast (XPC) tomosynthesis combines the concepts of tomosynthesis and in-line XPC imaging to utilize the advantages of both for biological imaging applications. Tomosynthesis permits reductions in acquisition times compared with conventional tomography scans, while in-line XPC imaging provides high contrast and resolution in images of weakly absorbing materials. In this work, we develop an advanced iterative algorithm as an approach for dealing with the incomplete (and often noisy) data inherent to XPC tomosynthesis. We also investigate the depth resolution properties of XPC tomosynthesis and demonstrate that the z-resolution properties of XPC tomosynthesis are superior to those of conventional absorption-based tomosynthesis. More specifically, we find that in-plane structures display strong boundary enhancement while out-of-plane structures do not. This effect can facilitate the identification of in-plane structures.
Algorithms
Removing blooming artifacts with binarized deconvolution in cardiac CT
With modern CT scanners, detection and classification of coronary artery disease has become a routine application in cardiac CT. It poses a desirable non-invasive alternative to invasive coronary angiography, which is the current clinical gold standard. However, the accuracy of cardiac CT depends on the spatial resolution of the imaging system. The limited spatial resolution leads to blooming artifacts, arising from hyper-dense calcification deposits in the arterial walls. This blooming leads to an overestimation of the degree of luminal narrowing and to loss of the morphology of the calcified region. We propose an image-based algorithm which aims at removing the blooming and estimating the correct CT value and morphology of the calcification. The method is based on the assumption that each calcification consists of a compact region which has an almost constant density and attenuation. This knowledge is incorporated into an iterative deconvolution algorithm in image space. We quantitatively assess the accuracy of the proposed algorithm on analytically simulated phantom data. Qualitative results on clinical patient data are presented as well. In both cases, the proposed method outperforms the compared algorithms. The initial patient data results are promising. However, an ex vivo study has to be done to confirm the quantitative results of the simulation study with real specimens.
Automatic cable artifact removal for cardiac C-arm CT imaging
C. Haase, D. Schäfer, M. Kim, et al.
Cardiac C-arm computed tomography (CT) imaging using interventional C-arm systems can be applied in various areas of interventional cardiology ranging from structural heart disease and electrophysiology interventions to valve procedures in hybrid operating rooms. In contrast to conventional CT systems, the reconstruction field of view (FOV) of C-arm systems is limited to a region of interest in cone-beam (along the patient axis) and fan-beam (in the transaxial plane) direction. Hence, highly X-ray opaque objects (e.g. cables from the interventional setup) outside the reconstruction field of view, yield streak artifacts in the reconstruction volume. To decrease the impact of these streaks a cable tracking approach on the 2D projection sequences with subsequent interpolation is applied. The proposed approach uses the fact that the projected position of objects outside the reconstruction volume depends strongly on the projection perspective. By tracking candidate points over multiple projections only objects outside the reconstruction volume are segmented in the projections. The method is quantitatively evaluated based on 30 simulated CT data sets. The 3D root mean square deviation to a reference image could be reduced for all cases by an average of 50 % (min 16 %, max 76 %). Image quality improvement is shown for clinical whole heart data sets acquired on an interventional C-arm system.
Ringing artifact reduction for metallic objects in direct digital radiography detectors with stationary antiscatter grids
Dong Sik Kim, Sanggyun Lee
In digital radiography imaging, an antiscatter grid is usually employed to obtain clear projected images by absorbing scattered x-ray beams. However, due to the shadow of the stationary antiscatter grid, a grid artifact appears in the obtained x-ray image as stripes if a linear grid is used. In order to alleviate the grid artifact, band-stop filters (BSFs) can be used, but they may cause an annoying ringing artifact, especially for metallic materials. In this paper, in order to reduce the ringing artifact while applying BSFs to alleviate the grid artifact, a spatial prefiltering technique, called the gradient-reduction filter, is proposed. The gradient-reduction filter can remove steep edges in the obtained x-ray images, especially for metallic materials, and can prevent the grid-artifact BSFs from yielding serious ringing artifacts. The structure of the proposed filter is similar to that of the Crawford noise-reduction filter. However, the nonlinear function in the proposed filter has decreasing slopes as its input increases, contrary to the Crawford filter case. Through extensive experiments on real digital x-ray images obtained from a direct digital radiography detector, we show superior ringing-artifact reduction performance compared to conventional filtering approaches, especially when metallic materials are present in or on the patient.
Algorithms for optimizing CT fluence control
The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence (ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax, respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
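As a worked example of the kind of closed-form solution mentioned for the mean-variance problem, consider a simplified model (my notation, not necessarily the authors' formulation) in which ray i receives fluence f_i, contributes variance c_i/f_i to the mean image variance, and deposits dose d_i·f_i, with a total dose budget D:

```latex
\min_{f_i > 0}\;\sum_i \frac{c_i}{f_i}
\quad\text{s.t.}\quad \sum_i d_i f_i = D
\;\;\Longrightarrow\;\;
f_i \;=\; \frac{D}{\sum_j \sqrt{c_j d_j}}\;\sqrt{\frac{c_i}{d_i}}\,,
```

which follows from the Lagrangian stationarity condition c_i/f_i^2 = λ d_i. The peak-variance problem does not admit such a direct closed form, hence the iterated, weighted mean-variance formulation described in the abstract.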
Towards in-vivo K-edge imaging using a new semi-analytical calibration method
Carsten Schirra, Axel Thran, Heiner Daerr, et al.
Flat field calibration methods are commonly used in computed tomography (CT) to correct for system imperfections. Unfortunately, they cannot be applied in energy-resolving CT when using bow-tie filters owing to spectral distortions imprinted by the filter. This work presents a novel semi-analytical calibration method for photon-counting spectral CT systems, which is applicable with a bow-tie filter in place and efficiently compensates pile-up effects at fourfold increased photon flux compared to a previously published method, without degradation of image quality. The achieved reduction of the scan time enabled the first K-edge imaging in-vivo. The method employs a calibration measurement with a set of flat sheets of only a single absorber material and utilizes an analytical model to predict the expected photon counts, taking into account factors such as x-ray spectrum and detector response. From the ratios of the measured x-ray intensities and the corresponding simulated photon counts, a look-up table is generated. By use of this look-up table, measured photon counts can be corrected, yielding data in line with the analytical model. The corrected data show low pixel-to-pixel variations and pile-up effects are mitigated. Consequently, operations like material decomposition based on the same analytical model yield accurate results. The method was validated on an experimental spectral CT system equipped with a bow-tie filter in a phantom experiment and an in-vivo animal study. The level of artifacts in the resulting images is considerably lower than in images generated with a previously published method. First in-vivo K-edge images of a rabbit selectively depict vessel occlusion by an ytterbium-based thermoresponsive polymer.
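A much-simplified sketch of the look-up-table idea is shown below: per detector pixel and energy bin, the correspondence between measured and model-predicted counts over the calibration sheets defines a mapping that corrects measured counts toward the analytical model. The pile-up model, the numbers, and the interpolation scheme are placeholders of my own.

```python
# Schematic look-up-table (LUT) correction for a single pixel/energy bin.
import numpy as np

def build_lut(measured, predicted):
    """measured, predicted: counts per calibration sheet for one pixel/bin.
    Returns interpolation nodes sorted by measured counts."""
    order = np.argsort(measured)
    return measured[order], predicted[order]

def apply_lut(counts, lut_measured, lut_predicted):
    """Map a measured photon count onto the value the analytical model expects,
    so that subsequent material decomposition can use the same model."""
    return np.interp(counts, lut_measured, lut_predicted)

# Toy usage: a pixel whose measured counts are depressed by pile-up.
predicted = np.array([9000., 6000., 3500., 1800., 800.])   # model prediction per sheet
measured = predicted * (1.0 - 2e-5 * predicted)             # crude pile-up loss
lm, lp = build_lut(measured, predicted)
print(apply_lut(5000.0, lm, lp))                             # corrected count
```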
CT Reconstructions
Regularization design and control of change admission in prior-image-based reconstruction
Nearly all reconstruction methods are controlled through various parameter selections. Traditionally, such parameters are used to specify a particular noise and resolution trade-off in the reconstructed image volumes. The introduction of reconstruction methods that incorporate prior image information has demonstrated dramatic improvements in dose utilization and image quality, but has complicated the selection of reconstruction parameters including those associated with balancing information used from prior images with that of the measurement data. While a noise-resolution tradeoff still exists, other potentially detrimental effects are possible with poor prior image parameter values including the possible introduction of false features and the failure to incorporate sufficient prior information to gain any improvements. Traditional parameter selection methods such as heuristics based on similar imaging scenarios are subject to error and suboptimal solutions while exhaustive searches can involve a large number of time-consuming iterative reconstructions. We propose a novel approach that prospectively determines optimal prior image regularization strength to accurately admit specific anatomical changes without performing full iterative reconstructions. This approach leverages analytical approximations to the implicitly defined prior image-based reconstruction solution and predictive metrics used to estimate imaging performance. The proposed method is investigated in phantom experiments and the shift-variance and data-dependence of optimal prior strength is explored. Optimal regularization based on the predictive approach is shown to agree well with traditional exhaustive reconstruction searches, while yielding substantial reductions in computation time. This suggests great potential of the proposed methodology in allowing for prospective patient-, data-, and change-specific customization of prior-image penalty strength to ensure accurate reconstruction of specific anatomical changes.
Novel iterative reconstruction method for optimal dose usage in redundant CT - acquisitions
H. Bruder, R. Raupach, T. Allmendinger, et al.
In CT imaging, a variety of applications exist where reconstructions are SNR and/or resolution limited. However, if the measured data provide redundant information, composite image data with high SNR can be computed. Generally, these composite image volumes will compromise spectral information and/or spatial resolution and/or temporal resolution. This brings us to the idea of transferring the high SNR of the composite image data to low SNR (but high resolution) ‘source’ image data. It was shown that the SNR of CT image data can be improved using iterative reconstruction [1]. We present a novel iterative reconstruction method enabling optimal dose usage of redundant CT measurements of the same body region. The generalized update equation is formulated in image space without further reference to raw data after initial reconstruction of source and composite image data. The update equation consists of a linear combination of the previous update, a correction term constrained by the source data, and a regularization prior initialized by the composite data. The efficiency of the method is demonstrated for different applications: (i) Spectral imaging: we analysed material decomposition data from dual energy data of our photon-counting prototype scanner; the material images can be significantly improved by transferring the good noise statistics of the 20 keV threshold image data to each of the material images. (ii) Multi-phase liver imaging: reconstructions of multi-phase liver data can be optimized by utilizing the noise statistics of combined data from all measured phases. (iii) Helical reconstruction with optimized temporal resolution: splitting the reconstruction of redundant helical acquisition data into a short-scan reconstruction with a Tam window optimizes the temporal resolution; the reconstruction of the full helical data is then used to optimize the SNR. (iv) Cardiac imaging: the optimal phase image (‘best phase’) can be improved by transferring all applied over-radiation into that image. In all these cases, we show that, at constant patient dose, SNR can efficiently be transferred from the composite data to the source data while maintaining the spatial, temporal and contrast resolution properties of the source data.
FINESSE: a Fast Iterative Non-linear Exact Sub-space SEarch based algorithm for CT imaging
K. Schmitt, H. Schöndube, Karl Stierstorfer, et al.
Statistical iterative image reconstruction has become the subject of strong, active research in X-ray computed tomography (CT), primarily because it may allow a significant reduction in dose imparted to the patient. However, developing an algorithm that converges fast while allowing parallelization so as to obtain a product that can be used routinely in the clinic is challenging. In this work, we present a novel algorithm that combines the strength of two popular methods. A preliminary investigation of this algorithm was performed, and strongly encouraging initial results are reported.
A new approach to regularized iterative CT image reconstruction
Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise, to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how best to achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example by adding additional constraints to the cost function which penalize strong variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off, because the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus of anatomical detail. We propose a method which tries to improve this trade-off. One starts with generating basis images which emphasize certain desired image properties, like high resolution or low noise. The proposed reconstruction algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off.
A practical statistical polychromatic image reconstruction for computed tomography using spectrum binning
Meng Wu, Qiao Yang, Andreas Maier, et al.
Polychromatic statistical reconstruction algorithms have very high computational demands due to the difficulty of the optimization problems and the large number of spectrum bins. We want to develop a more practical algorithm that has a simpler optimization problem, a faster numerical solver, and requires only a small amount of prior knowledge. In this paper, a modified optimization problem for polychromatic statistical reconstruction algorithms is proposed. The modified optimization problem utilizes the idea of determining the scanned materials from a first-pass FBP reconstruction in order to fix the ratios between the photoelectric and Compton scattering components of all image pixels. The reconstruction of a density image is then easy to solve by a separable quadratic surrogate algorithm that is also applicable to the multi-material case. In addition, a spectrum binning method is introduced so that the full spectrum information is not required. The energy bin sizes and attenuations are optimized based on the true spectrum and object. With these approximations, the expected line integral values using only a few energy bins are very close to the true polychromatic values. Thus both the problem size and the computational demand caused by the large number of energy bins that are typically used to model a full spectrum are reduced. Simulations showed that three energy bins using the generalized spectrum binning method could provide an accurate approximation of the polychromatic X-ray signals. The average absolute error of the logarithmic detector signal is less than 0.003 for a 120 kVp spectrum. The proposed modified optimization problem and spectrum binning approach can effectively suppress beam hardening artifacts while providing low noise images.
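The following toy example illustrates the spectrum-binning approximation: a three-bin model, with weights and effective attenuations fitted here by least squares (the paper uses its own generalized binning optimization), reproduces the polychromatic log-signal of a many-bin spectrum over a range of path lengths. The spectrum and attenuation curve are synthetic, not real tube or water data.

```python
# Few-bin approximation of a polychromatic log-signal (synthetic spectrum).
import numpy as np
from scipy.optimize import least_squares

def poly_signal(weights, mu, L):
    """-log of the detected fraction for path lengths L, given normalized bin
    weights and per-bin linear attenuation coefficients mu."""
    return -np.log(np.exp(-np.outer(L, mu)) @ weights)

# "Full" spectrum: many narrow bins with a toy monotone attenuation curve.
E = np.linspace(20, 120, 101)                       # keV
w = np.exp(-((E - 60.0) ** 2) / 800.0); w /= w.sum()
mu_E = 0.2 * (60.0 / E) ** 1.5                      # toy attenuation (1/cm), not real water
L = np.linspace(0.0, 30.0, 31)                      # path lengths in cm
target = poly_signal(w, mu_E, L)

def residual(p):
    # Three bin weights (normalized) and three effective attenuations.
    w3 = np.abs(p[:3]); w3 = w3 / w3.sum()
    mu3 = np.abs(p[3:])
    return poly_signal(w3, mu3, L) - target

fit = least_squares(residual, x0=[0.3, 0.4, 0.3, 0.35, 0.2, 0.15])
print(np.max(np.abs(residual(fit.x))))   # worst-case log-signal error with 3 bins
```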
Investigation of an efficient short-scan C-arm reconstruction method with radon-based redundancy handling
The short-scan Feldkamp-Davis-Kress (FDK) method for C-arm CT reconstruction involves a heuristic ray-based weighting scheme to handle data redundancies. This scheme is known to be approximate under general circumstances and it often creates low frequency image artifacts in regions away from the central axial plane. Alternative algorithms, such as the one proposed by Defrise and Clack (DC) [1], can handle data redundancy in a theoretically exact manner and thus notably improve image quality. The DC algorithm, however, is computationally more complex than FDK, as it requires a shift-variant 2D filtering of the data instead of an efficient 1D filtering. In this paper, a modification of the original DC algorithm is investigated which applies the efficient FDK filtering scheme wherever possible and the DC filtering scheme only where it is required. This modification leads to a more efficient implementation of the DC algorithm, in which the filtering effort can be reduced by up to about 70%, depending on the specific geometry set-up. This gain in computation speed makes the DC method even more attractive for use in an interventional environment, where fast and interactive X-ray imaging is a crucial requirement.
Reconstruction
Statistical image reconstruction via denoised ordered-subset statistically penalized algebraic reconstruction technique (DOS-SPART)
Statistical Image Reconstruction (SIR) often involves a balance of two requirements: the first enforces a minimal difference between the forward projection of the reconstructed image and the measured projection data, and the second enforces some kind of image smoothness, depending on the specific selection of regularizer, to reduce the noise in the reconstructed image. The delicate balance needed between these two requirements in numerical implementations often slows down the reconstruction, due either to a degradation in the convergence rate of the algorithm or to a degradation of the parallelizability of the numerical implementation. In this work, a general numerical implementation strategy is proposed that allows SIR algorithms to be implemented in two decoupled and alternating steps. The first step uses SIR without any regularizer, which allows the well-known ordered subset (OS) strategy to be used to accelerate the image reconstruction. The second step solves a denoising problem without involving the data fidelity term. The alternation of these two decoupled steps enables one to perform SIR with both a high convergence rate and high parallelizability. The total variation norm of the image is used as an example regularizer to illustrate the proposed numerical implementation strategy. Numerical simulations have been performed to validate the proposed algorithm. The noise-spatial resolution tradeoff curve and the convergence speed of the algorithm have been investigated and compared against a conventional gradient descent based implementation strategy.
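A schematic of the two decoupled, alternating steps described above is sketched here: an ordered-subset, regularizer-free update followed by a stand-alone denoising step (a few descent iterations on a smoothed total-variation penalty). The system model, step sizes, and the simple TV solver are illustrative assumptions, not the authors' implementation.

```python
# Alternating data-fidelity (ordered-subset) and denoising steps.
import numpy as np

def os_update(x, A_subsets, y_subsets, relax=0.9):
    """One pass of ordered-subset SART-like updates (A given as dense blocks)."""
    for A, y in zip(A_subsets, y_subsets):
        r = y - A @ x
        row_sum = np.maximum(A.sum(axis=1), 1e-12)
        col_sum = np.maximum(A.sum(axis=0), 1e-12)
        x = x + relax * (A.T @ (r / row_sum)) / col_sum
    return x

def tv_denoise(x, shape, weight=0.05, iters=20, eps=1e-6):
    """A few explicit descent steps on a smoothed total-variation penalty."""
    u = x.reshape(shape).copy()
    for _ in range(iters):
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        div = (np.diff(gx / mag, axis=0, prepend=0)
               + np.diff(gy / mag, axis=1, prepend=0))
        u += weight * div      # descent step: gradient of TV is -div(grad u / |grad u|)
    return u.ravel()

def two_step_sir(x0, A_subsets, y_subsets, shape, n_outer=10):
    x = x0
    for _ in range(n_outer):
        x = os_update(x, A_subsets, y_subsets)   # data-fidelity step (parallelizable)
        x = tv_denoise(x, shape)                 # decoupled denoising step
    return x

# Tiny toy problem: random nonnegative "system matrix", 8x8 image.
rng = np.random.default_rng(0)
x_true = rng.random(64)
A_full = rng.random((40, 64))
y_full = A_full @ x_true
x_rec = two_step_sir(np.zeros(64), [A_full[:20], A_full[20:]],
                     [y_full[:20], y_full[20:]], shape=(8, 8))
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```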
Toward a dose reduction strategy using model-based reconstruction with limited-angle tomosynthesis
Model-based iterative reconstruction (MBIR) is an emerging technique for several imaging modalities and applications including medical CT, security CT, PET, and microscopy. Its success derives from an ability to preserve image resolution and perceived diagnostic quality under impressively reduced signal level. MBIR typically uses a cost optimization framework that models system geometry, photon statistics, and prior knowledge of the reconstructed volume. The challenge of tomosynthetic geometries is that the inverse problem becomes more ill-posed due to the limited angles, meaning the volumetric image solution is not uniquely determined by the incompletely sampled projection data. Furthermore, low signal level conditions introduce additional challenges due to noise. A fundamental strength of MBIR for limited-views and limited-angle is that it provides a framework for constraining the solution consistent with prior knowledge of expected image characteristics. In this study, we analyze through simulation the capability of MBIR with respect to prior modeling components for limited-views, limited-angle digital breast tomosynthesis (DBT) under low dose conditions. A comparison to ground truth phantoms shows that MBIR with regularization achieves a higher level of fidelity and lower level of blurring and streaking artifacts compared to other state of the art iterative reconstructions, especially for high contrast objects. The benefit of contrast preservation along with fewer artifacts may lead to detectability improvement of microcalcification for more accurate cancer diagnosis.
Enhancing tissue structures with iterative image reconstruction for digital breast tomosynthesis
We design an iterative image reconstruction (IIR) algorithm for enhancing tissue structure contrast. The algorithm takes advantage of a data fidelity term which compares the derivative of the DBT projections with the derivative of the estimated projections. This derivative data fidelity is sensitive to the edges of tissue structure projections, and as a consequence minimizing the corresponding data-error term brings out structure information in the reconstructed volumes. The method has the practical advantages that few iterations are required and that direct region-of-interest (ROI) reconstruction is possible with the proposed derivative data fidelity term. Both of these advantages reduce the computational burden of the IIR algorithm and potentially make it feasible for clinical application. The algorithm is demonstrated on clinical DBT data.
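In symbols, the derivative data fidelity described above can be written (my notation, a plausible reading of the abstract rather than the authors' exact formulation) as comparing detector-domain derivatives of the measured DBT projections g with those of the forward-projected estimate:

```latex
\min_{f \ge 0}\;\; \bigl\|\, D_u\,(P f) \;-\; D_u\, g \,\bigr\|_2^2 \,,
```

where P is the DBT projection operator, f the reconstructed volume, and D_u a finite-difference derivative along the detector; the authors may use a different norm or additional constraints.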
Estimation of sparse null space functions for compressed sensing in SPECT
Joyeeta Mitra Mukherjee, Emil Sidky, Michael A. King
Compressed sensing (CS) [1] is a novel sensing (acquisition) paradigm that applies to discrete-to-discrete system models and asserts exact recovery of a sparse signal from far fewer measurements than the number of unknowns [1-2]. Successful applications of CS may be found in MRI [3, 4] and optical imaging [5]. Sparse reconstruction methods exploiting CS principles have been investigated for CT [6-8] to reduce radiation dose, and to gain imaging speed and image quality in optical imaging [9]. In this work the objective is to investigate the applicability of compressed sensing principles for a faster brain imaging protocol on a hybrid collimator SPECT system. As a proof-of-principle we study the null space of the fan-beam collimator component of our system with regard to a particular imaging object. We illustrate the impact of object sparsity on the null space using pixel and Haar wavelet basis functions to represent a piecewise smooth phantom chosen as our object of interest.
Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model
Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed position. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance, particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis of the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
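For reference, the standard linear Patlak model relates the tissue activity C_T to the plasma input C_p through the influx rate K_i and a distribution-volume term V; the generalized (non-linear) variant adds an efflux term, written below with a loss rate k_loss as a plausible form (the authors' exact parameterization may differ):

```latex
C_T(t) \;=\; K_i \int_0^{t} C_p(\tau)\,d\tau \;+\; V\,C_p(t),
\qquad
C_T(t) \;=\; K_i \int_0^{t} C_p(\tau)\,e^{-k_{\mathrm{loss}}\,(t-\tau)}\,d\tau \;+\; V\,C_p(t).
```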
Cone Beam CT and Novel Design
Rapid scatter estimation for CBCT using the Boltzmann transport equation
Mingshan Sun, Alex Maslowski, Ian Davis, et al.
Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.
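The iterative deconvolution at the heart of SKS can be caricatured as in the sketch below: a scatter estimate is formed by convolving an amplitude term derived from the current primary estimate with a broad kernel, subtracted from the measurement, and iterated. The Gaussian kernel, the amplitude model, and the scatter fraction are placeholders, not the calibrated kernels used in the paper.

```python
# Caricature of scatter kernel superposition (SKS) on a single projection.
import numpy as np
from scipy.signal import fftconvolve

def sks_correct(total, kernel, amplitude_fn, n_iter=5):
    """total: measured projection (primary + scatter). Returns (primary, scatter)."""
    primary = total.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(amplitude_fn(primary), kernel, mode="same")
        primary = np.clip(total - scatter, 0.0, None)
    return primary, scatter

# Toy broad Gaussian kernel and a linear amplitude model (illustrative only).
u = np.arange(-64, 65)
K = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / (2 * 30.0 ** 2))
K *= 0.15 / K.sum()                      # ~15% scatter fraction
proj = np.ones((256, 256))               # flat toy projection
primary_est, scatter_est = sks_correct(proj, K, lambda p: p)
print(primary_est.mean(), scatter_est.mean())
```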
A patient-specific scatter artifacts correction method
This paper provides a fast and patient-specific scatter artifact correction method for cone-beam computed tomography (CBCT) used in image-guided interventional procedures. Due to increased irradiated volume of interest in CBCT imaging, scatter radiation has increased dramatically compared to 2D imaging, leading to a degradation of image quality. In this study, we propose a scatter artifact correction strategy using an analytical convolution-based model whose free parameters are estimated using a rough estimation of scatter profiles from the acquired cone-beam projections. It was evaluated using Monte Carlo simulations with both monochromatic and polychromatic X-ray sources. The results demonstrated that the proposed method significantly reduced the scatter-induced shading artifacts and recovered CT numbers.
Development and evaluation of a novel designed breast CT system
The performance of a newly designed x-ray CT scanning geometry is investigated. Composed of a specially designed tungsten collimation mask and a high resolution flat panel detector, this scanning geometry provides highly efficient data acquisition, allowing dose reduction potentially up to 50%. In recent years a special type of scanning geometry has been proposed, and a first prototype of this geometry, called CTDOR (CT with Dual Optimal Reading), has already been built. Despite many drawbacks, the resulting images have shown the promising potential of dual reading. The approach of acquiring two subsets of data has been taken up again and adapted to a newly designed CT scanner for breast imaging. The main idea consists of collimating the X-ray beam through a specially designed shielding mask, thereby reducing radiation dose without compromising image quality. This is achieved by a hexagonally sampled Radon transform and image reconstruction with the especially suitable OPED (orthogonal polynomial expansion on disk) algorithm. This work presents the development and evaluation of the newly designed breast CT system. Simulated phantom data were obtained to test the performance of the scanning device and compared to a standard 3rd generation scanner. Retaining advantages such as scatter-correction potential and 3D capability, the proposed CT system yields high resolution images for breast diagnostics in low energy ranges. Assuming similar sample size, it is expected that the newly designed breast CT system in conjunction with OPED outperforms the standard 3rd generation CT system combined with FBP (filtered back projection).
Effective one step-iterative fiducial marker-based compensation for involuntary motion in weight-bearing C-arm cone-beam CT scanning of knees
Jang-Hwan Choi, Andreas Maier, Martin Berger, et al.
We previously introduced three different fiducial marker-based correction methods (2D projection shifting, 2D projection warping, and 3D image warping) for patients’ involuntary lower-body motion during weight-bearing C-arm CT scanning. The 3D warping method performed better than the 2D methods since it could more accurately take into account the lower body motion in 3D. However, as the 3D warping method applies different rotational and translational movements to the reconstructed image for each projection frame, distance-related weightings were slightly distorted, resulting in background noise overlaid over the entire image. In order to suppress background noise and artifacts (e.g. metallic marker-caused streaks), the 3D warping method has been improved by incorporating bilateral filtering and a Landweber-type iteration in one step. A series of projection images of five healthy volunteers standing at various flexion angles were acquired using a C-arm cone-beam CT system with a flat panel detector. A horizontal scanning trajectory of the C-arm was calibrated to generate projection matrices. Using the projection matrices, the static reference marker coordinates in 3D were estimated and used for the improved 3D warping method. The improved 3D warping method effectively reduced background noise below the noise level of the 2D methods and also eliminated metal-generated streaks. Thus, improved visibility of soft tissue structures (e.g. fat and muscle) was achieved while maintaining sharp edges at bone-tissue interfaces. Any high resolution weight-bearing cone-beam CT system can apply this method for motion compensation.
Tomosynthesis
Evaluation of low contrast detectability after scatter correction in digital breast tomosynthesis
Koen Michielsen, Andreas Fieselmann, Lesley Cockmartin, et al.
Projection images from digital breast tomosynthesis acquisitions can contain a large fraction of scattered x-rays due to the absence of an anti-scatter grid in front of the detector. In order to produce quantitative results, this should be accounted for in reconstruction algorithms. We examine the possible improvement in signal difference to noise ratio (SDNR) for low-contrast spherical densities when applying a scatter correction algorithm. Hybrid patient data were created by combining real patient data with attenuation profiles of spherical masses acquired with matching exposure settings. Scatter in these cases was estimated using Monte Carlo based scattering kernels. All cases were reconstructed using filtered backprojection (FBP) with and without beam hardening correction and with two maximum likelihood methods for transmission tomography, without and with a quadratic smoothing prior (MLTR and MAPTR). For all methods, images were reconstructed without scatter correction and with scatter precorrection, and for the iterative methods also with an adjusted update step obtained by including scatter in the physics model. The SDNR of the inserted spheres was calculated by subtracting the reconstructions with and without the inserted template to measure the signal difference, while noise was measured in the image containing the template. SDNR was significantly improved by 3.5% to 4.5% (p < 0.0001) at iteration 10 for both correction methods applied to the MLTR and MAPTR reconstructions. For MLTR these differences disappeared by iteration 100. For regular FBP the SDNR remained the same after correction (p = 0.60), while it dropped slightly for FBP with beam hardening correction (-1.4%, p = 0.028). These results indicate that for the iterative methods, application of a scatter correction algorithm has very little effect on the SDNR; it only causes a slight decrease in convergence speed, which is similar for precorrection and for correction incorporated in the update step. The FBP results were unchanged because the scatter being corrected is a low-frequency component in the projection images, and this information is mostly discarded in the reconstruction by the high-pass filter.
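As a minimal illustration of the SDNR figure of merit described above (the exact region-of-interest definitions are the authors'), the signal difference comes from the paired reconstructions and the noise from the lesion-present image:

```python
import numpy as np

def sdnr(recon_with_lesion, recon_without_lesion, lesion_roi, background_roi):
    """Signal-difference-to-noise ratio from paired reconstructions.

    lesion_roi, background_roi : boolean masks in reconstruction space
    """
    signal_diff = np.mean((recon_with_lesion - recon_without_lesion)[lesion_roi])
    noise = np.std(recon_with_lesion[background_roi])  # noise from the lesion-present image
    return signal_diff / noise
```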
An experimental study of practical computerized scatter correction methods for prototype digital breast tomosynthesis
Y. Kim, H. Kim, H. Park, et al.
Digital breast tomosynthesis (DBT) is a technique developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, x-ray scatter reduction remains a significant challenge due to the projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm, although this method raises concerns about the additional exposure required to acquire the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are approximately mirror images of each other. The purpose of this study was to apply the BSA algorithm while acquiring only two scans with the beam-stop array, thereby estimating the scatter distribution with minimal additional exposure. The results of scatter correction with angular interpolation were comparable to those of scatter correction using the scatter distributions measured at every angle, and the exposure increase was less than 13%. This study demonstrated the performance of BSA scatter correction with minimal additional exposure, indicating its practical applicability in clinical situations.
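A hypothetical sketch of the symmetry-plus-interpolation idea follows; the choice of measured angles, the mirroring axis, and the interpolation weights are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def scatter_estimates(s_center, s_plus, theta_max, sweep_angles):
    """Hypothetical sketch: per-angle scatter maps from two BSA scans.

    s_center     : scatter map measured with the beam-stop array at 0 degrees
    s_plus       : scatter map measured at the maximum positive angle +theta_max
    sweep_angles : all projection angles of the DBT sweep

    Assumes the compressed breast is approximately symmetric, so the scatter map
    at -theta is close to the left-right mirror of the map at +theta; remaining
    angles are filled in by linear angular interpolation.
    """
    s_minus = np.fliplr(s_plus)                    # mirror-image estimate for -theta_max
    estimates = {}
    for a in sweep_angles:
        w = abs(a) / theta_max                     # interpolation weight toward the extreme angle
        extreme = s_plus if a >= 0 else s_minus
        estimates[a] = (1.0 - w) * s_center + w * extreme
    return estimates
```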
Optimizing the acquisition geometry for digital breast tomosynthesis using the Defrise phantom
Raymond J. Acciavatti, Alice Chang, Laura Woodbridge, et al.
In cone beam computed tomography (CT), it is common practice to use the Defrise phantom for image quality assessment. The phantom consists of a stack of plastic plates with low frequency spacing. Because the x-ray beam may traverse multiple plates, the spacing between plates can appear blurry in the reconstruction, and hence modulation provides a measure of image quality. This study considers the potential merit of using the Defrise phantom in digital breast tomosynthesis (DBT), a modality with a smaller projection range than CT. To this end, a Defrise phantom was constructed and subsequently imaged with a commercial DBT system. It was demonstrated that modulation is dependent on position and orientation in the reconstruction. Modulation is preserved over a broad range of positions along the chest wall if the input frequency is oriented in the tube travel direction. By contrast, modulation is degraded with increasing distance from the chest wall if the input frequency is oriented in the posteroanterior (PA) direction. A theoretical framework was then developed to model these results. Reconstructions were calculated in an acquisition geometry designed to improve modulation. Unlike current geometries in which the x-ray tube motion is restricted to the plane of the chest wall, we consider a geometry with an additional component of tube motion along the PA direction. In simulations, it is shown that the newly proposed geometry improves modulation at positions distal to the chest wall. In conclusion, this study demonstrates that the Defrise phantom is a tool for optimizing DBT systems.
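Modulation of the reconstructed plate pattern can be quantified with the standard definition below (a general definition, not a formula quoted from the paper):

```latex
% Square-wave modulation of the reconstructed Defrise plate pattern,
% computed from the maximum and minimum intensities of the profile:
\[
  M = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}
\]
```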
Increased microcalcification visibility in lumpectomy specimens using a stationary digital breast tomosynthesis system
Andrew W. Tucker, Yueh Z. Lee, Cherie M. Kuzmiak, et al.
Current digital breast tomosynthesis (DBT) systems have been shown to have diminished microcalcification (MC) visibility compared to 2D mammography systems. Rotating-gantry DBT systems require mechanical motion of the x-ray source, which causes motion blurring of the focal spot and thus reduces spatial resolution. We have developed a stationary DBT (s-DBT) technology that uses a carbon nanotube (CNT) based x-ray source array in order to acquire all the projection images without any mechanical motion. It is capable of producing full tomosynthesis datasets with zero motion blur and has been shown to have significantly higher spatial resolution than continuous-motion DBT systems. An s-DBT system also allows for a wider angular span without increasing the acquisition time. A larger angular span covers a larger portion of the Fourier domain, thus decreasing tissue overlap. In this study, we compare tomosynthesis imaging of MCs in lumpectomy specimens between an s-DBT system and a rotating-gantry DBT system. Results show that s-DBT produces better MC sharpness and reduced tissue overlap compared to continuous-motion DBT systems.
Evaluation of imaging geometry for stationary chest tomosynthesis
Jing Shan, Andrew W. Tucker, Yueh Z. Lee, et al.
We have recently demonstrated the feasibility of stationary digital chest tomosynthesis (s-DCT) using a distributed carbon nanotube x-ray source array. The technology has the potential to increase imaging resolution and speed by eliminating source motion. In addition, the flexibility in the spatial configuration of the individual sources allows new tomosynthesis imaging geometries beyond the linear scanning mode used in conventional systems. In this paper, we report preliminary results on the effects of the tomosynthesis imaging geometry on image quality. The study was performed using a bench-top s-DCT system consisting of a CNT x-ray source array and a flat-panel detector. System MTF and ASF are used as quantitative measurements of the in-plane and in-depth resolution. In this study, geometries with the x-ray sources arranged in linear, square, rectangular, and circular configurations were investigated using comparable imaging doses. Anthropomorphic chest phantom images were acquired and reconstructed for image quality assessment. It is found that wider angular coverage results in better in-depth resolution, while the angular span has little impact on the in-plane resolution in the linear geometry. A 2D source array imaging geometry leads to a more isotropic in-plane resolution and better in-depth resolution compared to a 1D linear imaging geometry with comparable angular coverage.
Multi-energy CT
CT calibration and dose minimization in image-based material decomposition with energy-selective detectors
Sebastian Faby, Stefan Kuchenbecker, David Simons, et al.
Possible advantages of energy-selective photon counting detectors compared to dual energy CT are evaluated for a typical dual energy application: image-based material decomposition into an iodine and a water material image. Apart from a possibly smaller spectral overlap between the low and high energy information, a photon counting detector will typically offer more than the two necessary energy bins. In this case, additional degrees of freedom are gained that allow minimizing the noise in the material images. We propose an image-based method that determines optimal bin image weighting factors for material decomposition with respect to minimal material image noise. Results indicate that a perfect photon counting detector with four bins outperforms the dual energy CT technique, with a noise reduction of 22% in the water image and 43% in the iodine image. Limited detector energy resolution has only a small impact on the achieved noise reduction; the photon counting detector still performs better than the dual energy technique. However, if degrading detector effects such as charge sharing and K-escape peaks are taken into account, the performance of our proposed method drops below that of conventional dual energy CT.
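The noise-minimizing weighting can be sketched as a standard constrained linear-estimation problem: given a vector of per-bin material signals and the covariance of the bin image noise, the weights that preserve the material signal while minimizing variance take the familiar form below. This is a generic result offered for illustration, not necessarily the exact formulation used by the authors.

```python
import numpy as np

def optimal_bin_weights(bin_signals, bin_cov):
    """Noise-minimizing weights for combining energy-bin images into a material image.

    bin_signals : length-K vector, material signal contributed by each bin image
    bin_cov     : K x K covariance matrix of the bin image noise
    Returns weights w with w @ bin_signals == 1 and minimal variance w @ bin_cov @ w.
    """
    cinv_s = np.linalg.solve(bin_cov, bin_signals)
    weights = cinv_s / (bin_signals @ cinv_s)
    min_variance = 1.0 / (bin_signals @ cinv_s)
    return weights, min_variance
```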
Segmented targeted least squares estimator for material decomposition in multi bin PCXDs
We present a fast, noise-efficient, and accurate estimator for material separation using photon-counting x-ray detectors (PCXDs) with multiple energy bin capability. The proposed targeted least squares estimator (TLSE) improves on a previously proposed A-Table method by incorporating dynamic weighting that keeps the noise closer to the Cramér-Rao lower bound (CRLB) throughout the operating range. We explore Cartesian and average-energy segmentation of the basis material space for TLSE, and show that iso-average-energy contours require fewer segments than Cartesian segmentation to achieve similar performance. We compare the iso-average-energy TLSE to other proposed estimators, including the gold-standard maximum likelihood estimator (MLE) and the A-Table method, in terms of variance, bias, and computational efficiency. The variance and bias of this estimator from 0 to 6 cm of aluminum and 0 to 50 cm of water were simulated with Monte Carlo methods. The iso-average-energy TLSE achieves an average variance within 2% of the CRLB and a mean absolute error of (3.68 ± 0.06) × 10^-6 cm. Using the same protocol, the MLE showed a variance-to-CRLB ratio and average bias of 1.0186 ± 0.0002 and (3.10 ± 0.06) × 10^-6 cm, respectively, but was 50 times slower in our simulation. Compared to the A-Table method, TLSE gives a more homogeneous variance-to-CRLB profile in the operating region. We show that the variance-to-CRLB for TLSE is lower by as much as ~36% than for the A-Table method in the peripheral region of operation (thin or thick objects). The TLSE is a computationally efficient and fast method for implementing material separation in PCXDs, with performance comparable to the MLE.
Pooling optimal combinations of energy thresholds in spectroscopic CT
Thomas Koenig, Marcus Zuber, Elias Hamann, et al.
Photon counting detectors used in spectroscopic CT are often based on small pixels and therefore offer only limited space to include energy discriminators and their associated counters in each pixel cell. For this reason, it is important to make efficient use of the available energy discriminators in order to achieve optimized material contrast at a radiation dose as low as possible. Unfortunately, the complexity of evaluating every possible combination of energy thresholds, given a fixed number of counters, rapidly increases with the resolution at which this search is performed, making brute-force approaches to this problem infeasible. In this work, we introduce methods from machine learning, in particular sparse regression, to perform a feature selection that determines optimal combinations of energy thresholds. We demonstrate how methods enforcing row-sparsity on a linear regression's coefficient matrix can be applied to the multiple-response problem in spectroscopic CT, i.e. the case in which a single set of energy thresholds is sought to simultaneously retrieve concentrations pertaining to a multitude of materials in an optimal way. These methods are applied to CT images experimentally obtained with a Medipix3RX detector operated in charge summing mode with a CdTe sensor at a pixel pitch of 110 μm. We show that the least absolute shrinkage and selection operator (lasso), generalized to the multiple-response case, chooses four out of 20 possible threshold positions that allow discriminating PMMA, iodine, and gadolinium in a contrast agent phantom at a higher accuracy than with equally spaced thresholds. Finally, we illustrate why it might be unwise to use a higher number of energy thresholds than absolutely necessary.
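One concrete way to realize the row-sparse, multiple-response selection described above is scikit-learn's MultiTaskLasso, which penalizes each coefficient group jointly so that a candidate threshold is kept or discarded for all materials at once. The data arrays, file names, and regularization strength below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# X: (n_samples, n_thresholds) counts measured at the 20 candidate threshold positions
# Y: (n_samples, n_materials) known concentrations (e.g. PMMA, iodine, gadolinium)
X = np.load("bin_counts.npy")          # placeholder input files
Y = np.load("concentrations.npy")

model = MultiTaskLasso(alpha=0.1)      # alpha controls how many thresholds survive
model.fit(X, Y)

# coef_ has shape (n_materials, n_thresholds); a threshold is selected when its
# coefficients are nonzero for at least one material (jointly, by construction).
threshold_norms = np.linalg.norm(model.coef_, axis=0)
selected_thresholds = np.flatnonzero(threshold_norms > 1e-8)
print("selected threshold indices:", selected_thresholds)
```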
Effects of energy-bin acquisition methods on noise properties in photon-counting spectral CT
Spectral CT with photon-counting detectors has the potential to improve material decomposition and contrast-to-noise ratio (CNR) compared to conventional CT. This work compared the noise properties of two general energy-bin acquisition methods: (1) energy bins acquired from the same spectrum noise realization, and (2) energy bins acquired from different spectrum noise realizations. For both types of acquisition, the detected number of counts per bin was simulated and measured on a bench-top system. The energy-bin noise standard deviation was compared for both acquisition methods. Simulations were performed to compare both methods with respect to noise in material decomposition estimates and CNR in image-based weighted images. Both the experimental and simulation results indicated increased energy-bin noise when energy measurements were acquired from different spectrum realizations. The noise increased by a factor of 2 for the lowest energy bin, with the noise penalty decreasing with increasing bin energy. The simulation results demonstrated a factor of 1.2 to 2 increase in noise in material decomposition estimates when acquiring from different spectrum realizations. Despite the increased energy-bin noise, energy measurements from different spectrum realizations increased the CNR in image-based weighted images by 10%, potentially due to noise correlations across bins. Overall, the investigated acquisition methods demonstrated differences in noise standard deviation, affecting material decomposition estimates and CNR.
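The basic effect can be illustrated with a toy Poisson simulation in which bin counts are either obtained by differencing the cumulative counters of a single exposure or by differencing above-threshold counts taken from independent exposures. The mean counts below are illustrative, not the spectra used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
bin_means = np.array([800.0, 600.0, 400.0, 200.0])   # illustrative mean counts per energy bin
cum_means = bin_means[::-1].cumsum()[::-1]           # mean counts above each threshold
n_trials = 100_000

# (1) Bins from the SAME spectrum realization: differencing the cumulative counters
#     of one exposure yields independent Poisson bin counts.
bins_same = rng.poisson(bin_means, size=(n_trials, bin_means.size))

# (2) Bins from DIFFERENT spectrum realizations: each above-threshold measurement comes
#     from its own exposure, so differencing accumulates extra variance, worst for the
#     lowest bin.
above = rng.poisson(cum_means, size=(n_trials, cum_means.size))
bins_diff = above - np.column_stack([above[:, 1:], np.zeros(n_trials)])

print("std per bin, same realization:      ", bins_same.std(axis=0).round(1))
print("std per bin, different realizations:", bins_diff.std(axis=0).round(1))
```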
Photon counting CT at elevated X-ray tube currents: contrast stability, image noise and multi-energy performance
S. Kappler, A. Henning, B. Kreisler, et al.
The energy selectivity of photon counting detectors provides contrast enhancement and enables new material-identification techniques for clinical computed tomography (CT). Patient dose considerations and the resulting requirement of efficient x-ray detection suggest the use of CdTe or CdZnTe as detector material. The finite signal pulse duration of several nanoseconds in those detectors requires a strong reduction of the pixel size to achieve feasible count rates in the high-flux regime of modern CT scanners. Residual pulse pile-up effects in scans with high x-ray fluxes can still limit two key properties of the counting detector, namely count-rate linearity and spectral linearity. We have used our research prototype scanner with a CdTe-based counting detector and 225 μm pixels to investigate these effects in CT imaging scenarios at elevated x-ray tube currents. We present measurements of CT images and provide a detailed analysis of contrast stability, image noise, and multi-energy performance achieved with different phantom sizes at various x-ray tube settings.
Direct spectral recovery using X-ray fluorescence measurements for material decomposition applications using photon counting spectral X-ray detectors
Tom Campbell-Ricketts, Mini Das
We present investigations into direct, calibration-free recovery of distorted spectral x-ray measurements with the Medipix 2 detector. Spectral x-ray measurements using pixelated photon counting spectral x-ray detectors are subject to significant spectral distortion. For detectors with small pixel size, charge sharing between adjacent electrodes often dominates this distortion. In material decomposition applications, a popular spectral recovery technique employs a calibration phantom with known spectral properties. This works because of the similarity of the attenuation properties of the phantom and the material to be studied. However, this approach may be too simplistic for clinical imaging applications, as it assumes the homogeneity (and knowledge) of exactly the properties whose variation accounts entirely for the diagnostic content of the spectral data obtained by the photon counting detector. It may also be difficult to find the right calibration phantom for varying patient sizes and tissue densities on a case-by-case basis. Thus, it is desirable to develop direct correction strategies based on the objectively measurable response of the detector. We analytically model the distortion of a spectral signal in a photon counting spectral x-ray detector by applying Gaussian broadening and a charge-sharing model. The model parameters are fitted to the measured fluorescence of several metals. While we investigate the methodology using Medipix detectors, it should be applicable to other PCXDs as well.
Multi-energy Imaging and Detectors
Energy weighting improves the image quality of spectral mammograms: Implementation on a photon-counting mammography system
Johan Berglund, Henrik Johansson, Hanns-Ingo Maack, et al.
In x-ray imaging, contrast information content varies with photon energy. It is therefore possible to improve image quality by weighting photons according to energy. We have implemented and evaluated so-called energy weighting on a commercially available spectral photon-counting mammography system. A practical formula for calculating the optimal weight from pixel values was derived. Computer simulations and phantom measurements revealed that the contrast-to-noise ratio was improved by 3%–5%, and automatic image analysis showed that the improvement was detectable in a set of screening mammograms.
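For context, a commonly cited form of the CNR-maximizing weight for Poisson-distributed bin counts is given below; the paper derives a practical formula from pixel values, which is not necessarily identical to this textbook expression.

```latex
% CNR-maximizing weight for energy bin i, with N_{b,i} and N_{o,i} the expected
% counts behind the background and the object, respectively (generic result,
% not necessarily the exact practical formula derived in the paper):
\[
  w_i \propto \frac{N_{b,i} - N_{o,i}}{N_{b,i} + N_{o,i}}
\]
```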
Spectral lesion characterization on a photon-counting mammography system
Spectral x-ray imaging allows differentiation between two given tissue types, provided their spectral absorption characteristics differ measurably. In mammography, this method is used clinically to determine a decomposition of the breast into adipose and glandular tissue compartments, from which the glandular tissue fraction and, hence, the volumetric breast density (VBD) can be computed. Another potential application of this technique is the characterization of lesions by spectral mammography. In particular, round lesions are relatively easily detected by experienced radiologists but are often difficult to characterize. Here, a method is described that aims at discriminating cystic from solid lesions directly on a spectral mammogram, obtained with a calibrated spectral mammography system and using a hypothesis-testing algorithm based on a maximum likelihood approach. The method includes a parametric model describing the lesion shape, compression height variations, and breast composition. With the maximum likelihood algorithm, the model parameters are estimated separately under the cyst and solid hypotheses. The resulting ratio of the maximum likelihood values is used for the final tissue characterization. Initial results using simulations and phantom measurements are presented.
Amorphous selenium direct detection CMOS digital x-ray imager with 25 micron pixel pitch
We have developed a high-resolution amorphous selenium (a-Se) direct detection imager using a large-area compatible back-end fabrication process on top of a CMOS active pixel sensor with 25 micron pixel pitch. Integration of a-Se with CMOS technology requires overcoming CMOS/a-Se interfacial strain, which initiates nucleation of crystalline selenium and results in high detector dark currents. A CMOS-compatible polyimide buffer layer was used to planarize the backplane and provide a low-stress and thermally stable surface for a-Se. The buffer layer inhibits crystallization and provides detector stability, which is not only a performance factor but also critical for favorable long-term cost-benefit considerations in the application of CMOS digital x-ray imagers in medical practice. The detector structure is comprised of a polyimide (PI) buffer layer, the a-Se layer, and a gold (Au) top electrode. The PI layer is applied by spin-coating and is patterned using dry etching to open the backplane bond pads for wire bonding. Thermal evaporation is used to deposit the a-Se and Au layers, and the detector is operated in hole collection mode (i.e. a positive bias on the Au top electrode). High-resolution a-Se diagnostic systems typically use 70 to 100 μm pixel pitch and have a pre-sampling modulation transfer function (MTF) that is significantly limited by the pixel aperture. Our results confirm that, for a densely integrated 25 μm pixel pitch CMOS array, the MTF approaches the fundamental material limit, i.e. where the MTF begins to be limited by the a-Se material properties and not the pixel aperture. Preliminary images demonstrating high spatial resolution have been obtained from a first prototype imager.
Reflection properties of scintillator-septum candidates for a pixelated MeV detector
In order to predict and improve the performance of pixelated detectors, it is important to understand the optical properties of the basic unit of the scintillating structure in the detector. To measure one of the essential optical properties, reflectance, we have used a device composed of a laser and photodiode array. We have also developed an analytical model of the optical phenomena based on Snell's law and the Fresnel equations to simply analyze measured results and reflectance parameters at the interface. The computed and experimentally measured results typically have good agreement, validating the analytical model and measurements. The optical parameters are used as inputs to GEANT4 [1]. The simulations are then leveraged to optimize an imager design before a prototype is built. The optical reflectance was measured by using relatively inexpensive samples. A sample has scintillator, glue, and septum (reflector) layers, and each sample has a different scintillator surface (polished/rough) and/or reflector [ESR film/aluminum-sputtered (coated) ESR film] condition. A high-refractive-index hemisphere was attached on the top surface of a sample to increase the maximum incidence angle at the scintillator-glue interface from 27° to 52°. The sample including ESR film demonstrated average reflectance approximately 1.3 times higher than that from the sample with aluminum-sputtered ESR film as a reflector, and the polished surface condition showed higher reflectance than the rough-cut surface condition.
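The analytical model rests on Snell's law and the Fresnel equations. A minimal sketch of the unpolarized reflectance at a single scintillator-glue interface is shown below; the refractive indices are illustrative placeholders, not the measured sample values.

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i_deg):
    """Unpolarized Fresnel reflectance at a planar interface.

    n1, n2      : refractive indices of the incident and transmitting media
    theta_i_deg : incidence angle in degrees, measured from the surface normal
    Returns 1.0 beyond the critical angle (total internal reflection).
    """
    theta_i = np.radians(theta_i_deg)
    sin_t = n1 * np.sin(theta_i) / n2                      # Snell's law
    if abs(sin_t) >= 1.0:
        return 1.0
    theta_t = np.arcsin(sin_t)
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n1 * np.cos(theta_t) - n2 * np.cos(theta_i)) / (n1 * np.cos(theta_t) + n2 * np.cos(theta_i))
    return 0.5 * (rs**2 + rp**2)

# Example: scintillator (n ~ 1.8, illustrative) into optical glue (n ~ 1.5) at 40 degrees
print(fresnel_reflectance(1.8, 1.5, 40.0))
```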
Initial steps toward the realization of large area arrays of single photon counting pixels based on polycrystalline silicon TFTs
Albert K. Liang, Martin Koniczek, Larry E. Antonuk, et al.
The thin-film semiconductor processing methods that enabled the creation of inexpensive liquid crystal displays based on amorphous silicon transistors for cell phones and televisions, as well as desktop, laptop, and mobile computers, also facilitated the development of devices that have become ubiquitous in medical x-ray imaging environments. These devices, called active matrix flat-panel imagers (AMFPIs), measure the integrated signal generated by incident x rays and offer detection areas as large as ~43×43 cm². In recent years, there has been growing interest in medical x-ray imagers that record information from x-ray photons on an individual basis. However, such photon counting devices have generally been based on crystalline silicon, a material not inherently suited to the cost-effective manufacture of monolithic devices of a size comparable to that of AMFPIs. Motivated by these considerations, we have developed an initial set of small-area prototype arrays using thin-film processing methods and polycrystalline silicon transistors. These prototypes were developed in the spirit of exploring the possibility of creating large-area arrays offering single photon counting capabilities and, to our knowledge, are the first photon counting arrays fabricated using thin-film techniques. In this paper, the architecture of the prototype pixels is presented, and considerations that influenced the design of the pixel circuits, including amplifier noise, TFT performance variations, and minimum feature size, are discussed.
New Contrast Mechanisms
X-ray fluorescence molecular imaging of high-Z tracers: investigation of a novel analyzer based setup
Bernhard H. Müller, Christoph Hoeschen, Florian Grüner, et al.
A novel x-ray fluorescence imaging setup for the in vivo detection of high-Z tracer distributions is investigated for its application in molecular imaging. The setup uses an energy-resolved detection method based on a Bragg-reflecting analyzer array together with a radial collimator that reduces multiple scatter. The aim of this work is to investigate the potential application of this imaging method to in vivo imaging in humans. A proof-of-principle experiment modeling a partial setup for the detection of gold nanoparticles was conducted in order to test the feasibility of the proposed imaging method. Furthermore, a Monte Carlo simulation of the complete setup was created in order to quantify the dependence of the image quality on the applied radiation dose, the geometrical collimator parameters, and the analyzer crystal parameters. The Monte Carlo simulation quantifies the signal-to-noise ratio per radiation dose and its dependence on the collimator parameters. Thereby the parameters needed for dose-efficient in vivo imaging of gold nanoparticle based tracer distributions are quantified. However, a number of problems were also identified, such as fluorescence emission and scatter from the collimator material obscuring the tracer fluorescence, and the potentially long scan time.
Monte Carlo simulations of dose enhancement around gold nanoparticles used as X-ray imaging contrast agents and radiosensitizers
W. B. Li, M. Müllner, M. B. Greiter, et al.
Gold nanoparticles (GNPs) have been demonstrated as x-ray imaging contrast agents and radiosensitizers in mice. However, the translation of GNPs into clinical practice requires further detailed information on the biological effects related to the enhanced doses in malignant and healthy cells. The idea of improving radiotherapy with high atomic number materials, especially gold foils, was initiated in our research unit in the 1980s. Recently, experimental and theoretical efforts were made to investigate the potential improvement of imaging and radiotherapy with GNPs. The present work attempts, first, to validate the dose enhancement effects of GNPs on cancer cells and, second, to examine the possible side effects on healthy cells when using GNPs as an x-ray contrast agent. In this study, three Monte Carlo simulation programs, namely PENELOPE-2011, GEANT4, and EGSnrc, were used to simulate the local energy deposition and the resulting dose enhancement of GNPs. Diameters of the GNPs were assumed to be 2 nm, 15 nm, 50 nm, 100 nm, and 200 nm. The x-ray spectra for irradiation were 60 kVp, 80 kVp, 100 kVp, and 150 kVp with 2.7 mm Al filtration for projection radiography, and 100 kVp and 150 kVp with 8 mm Al filtration for computed tomography. An additional peak energy of 200 kVp was simulated for radiotherapy purposes. The information on energy deposition and dose enhancement can help in understanding the physical processes of medical imaging and the implications of nanoparticles in radiotherapy.
Small-animal microangiography using phase-contrast X-ray imaging and gas as contrast agent
Ulf Lundström, Daniel H. Larsson, Ulrica K. Westermark, et al.
We use propagation-based phase-contrast X-ray imaging with gas as contrast agent to visualize the microvasculature in small animals like mice and rats. The radiation dose required for absorption X-ray imaging is proportional to the minus fourth power of the structure size to be detected. This makes small vessels impossible to image at reasonable radiation doses using the absorption of conventional iodinated contrast agents. Propagation-based phase contrast gives enhanced contrast for high spatial frequencies by moving the detector away from the sample to let phase variations in the transmitted X-rays develop into intensity variations at the detector. Blood vessels are normally difficult to observe in phase contrast even with iodinated contrast agents as the density difference between blood and most tissues is relatively small. By injecting gas into the blood stream this density difference can be greatly enhanced giving strong phase contrast. One possible gas to use is carbon dioxide, which is a clinically accepted X-ray contrast agent. The gas is injected into the blood stream of patients to temporarily displace the blood in a region and thereby reduce the X-ray absorption in the blood vessels. We have shown that this method can be used to image blood vessels down to 8 μm in diameter in mouse ears. The low dose requirements of this method indicate a potential for live small-animal imaging and longitudinal studies of angiogenesis.
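To make the stated scaling concrete, the dose penalty for resolving smaller structures by absorption contrast alone grows very quickly; a short worked example of the quoted relation follows (the proportionality constant is omitted, since only the ratio matters).

```latex
% Dose required for absorption imaging scales as D \propto d^{-4} with structure size d,
% so halving the diameter of the smallest detectable vessel costs a factor of
\[
  \frac{D(d/2)}{D(d)} = \left(\frac{1}{2}\right)^{-4} = 16
\]
% i.e. roughly a sixteen-fold increase in radiation dose for absorption contrast alone.
```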
Small-animal dark-field radiography for pulmonary emphysema evaluation
Andre Yaroshenko, Felix G. Meinel, Katharina Hellbach, et al.
Chronic obstructive pulmonary disease (COPD) is one of the leading causes of morbidity and mortality worldwide, and emphysema is one of its main components. The disorder is characterized by irreversible destruction of the alveolar walls and enlargement of distal airspaces. Despite the severe changes in lung tissue morphology, conventional chest radiographs have only limited sensitivity for the detection of mild to moderate emphysema. X-ray dark-field is an imaging modality that can significantly increase the visibility of lung tissue on radiographic images. The dark-field signal is generated by coherent, small-angle scattering of x-rays at the air-tissue interfaces in the lung. Therefore, morphological changes in the lung can be clearly visualized on dark-field images. This is demonstrated by a preclinical study with a small-animal emphysema model. To generate a murine model of pulmonary emphysema, a female C57BL/6N mouse was treated with a single orotracheal application of porcine pancreatic elastase (80 U/kg body weight) dissolved in phosphate-buffered saline (PBS). A control mouse received PBS only. The mice were imaged using a small-animal dark-field scanner. While conventional x-ray transmission radiographs revealed only subtle indirect signs of the pulmonary disorder, the difference between healthy and emphysematous lungs could be visualized clearly and directly on the dark-field images. The dose applied to the animals is compatible with longitudinal studies, and the imaging results correlate well with histology. The results of this study reveal the high potential of dark-field radiography for clinical lung imaging.
Compton coincidence volumetric imaging: a new x-ray volumetric imaging modality based on Compton scattering
Compton scattering is a dominant interaction during radiographic and computed tomography x-ray imaging. However, the scattered photons are not used for extracting imaging information; instead, they seriously degrade image quality. Here we introduce a new scheme that overcomes most of the problems associated with existing Compton scattering imaging schemes and allows Compton-scattered photons to be used effectively for imaging. In our scheme, referred to as Compton coincidence volumetric imaging (CCVI), a collimated monoenergetic x-ray beam is directed onto a thin semiconductor detector. A small portion of the photons is Compton scattered by the detector and their energy loss is detected. Some of the scattered photons intersect the imaging object, where they are Compton scattered a second time. The doubly scattered photons are recorded by an areal energy-resolving detector panel around the object, and the two detectors work in coincidence mode. CCVI images the spatial electron density distribution in the imaging object. Similar to PET imaging, each event can be localized to a curve; therefore the image reconstruction algorithms are also similar to those of PET. Two statistical iterative image reconstruction algorithms are tested. Our study verifies the feasibility of CCVI in image acquisition and reconstruction, and various aspects of CCVI are discussed. If successfully implemented, it would offer great potential for imaging dose reduction compared with x-ray CT. Furthermore, a CCVI modality would have no moving parts, which potentially offers cost reduction and faster imaging speed.
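For context, the standard Compton kinematics relation below links the energy deposited in each detector to the scattering angle; this is textbook physics rather than a formula quoted from the paper, but it is the kind of constraint that lets a coincidence event be localized geometrically.

```latex
% Photon energy after a single Compton scatter through angle \theta:
\[
  E' = \frac{E}{1 + \frac{E}{m_e c^2}\,(1 - \cos\theta)}
\]
% A measured energy deposit E - E' therefore constrains the scattering angle \theta.
```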
Apparatus and fast method for cancer cell classification based on high harmonic coherent diffraction imaging in reflection geometry
Michael Zürch, Stefan Foertsch, Mark Matzas, et al.
In cancer treatment it is highly desirable to identify and/or classify individual cancer cells in real time. Nowadays, the standard method is PCR, which is costly and time-consuming. Here we present a different approach to rapidly classify cell types: we measure the pattern of coherently diffracted extreme ultraviolet radiation (XUV radiation at 38 nm wavelength), allowing different single breast cancer cell types to be distinguished. The output of our laser-driven XUV light source is focused onto a single unstained and unlabeled cancer cell, and the resulting diffraction pattern is measured in reflection geometry. As we further show, the outer shape of the object can be retrieved from the diffraction pattern with sub-micron resolution. For classification it is often not necessary to retrieve the image; it is only necessary to compare the diffraction patterns, which can be regarded as a spatial fingerprint of the specimen. For a proof-of-principle experiment, MCF7 and SKBR3 breast cancer cells were pipetted onto gold-coated silica slides. By illuminating each single cell and measuring a diffraction pattern, we could distinguish between them. Owing to the short bursts of coherent soft x-ray light, one could also image temporal changes of the specimen, i.e. study changes upon drug application once the desired specimen is found by the classification method. Using a more powerful laser, even classifying circulating tumor cells (CTCs) at high throughput seems possible. This lab-sized equipment could allow fast classification of any kind of cells, bacteria, or even viruses in the near future.
Dose
Patient-specific minimum-dose imaging protocols for statistical image reconstruction in C-arm cone-beam CT using correlated noise injection
Purpose: A new method for accurately portraying the impact of low-dose imaging techniques in C-arm cone-beam CT (CBCT) is presented and validated, allowing identification of minimum-dose protocols suitable to a given imaging task on a patient-specific basis in scenarios that require repeat intraoperative scans. Method: To accurately simulate lower-dose techniques and account for object-dependent noise levels (x-ray quantum noise and detector electronics noise) and correlations (detector blur), noise of the proper magnitude and correlation was injected into the projections from an initial CBCT acquired at the beginning of a procedure. The resulting noisy projections were then reconstructed to yield low-dose preview (LDP) images that accurately depict the image quality at any level of reduced dose in both filtered backprojection and statistical image reconstruction. Validation studies were conducted on a mobile C-arm, with the noise injection method applied to images of an anthropomorphic head phantom and cadaveric torso across a range of lower-dose techniques. Results: Comparison of preview and real CBCT images across a full range of techniques demonstrated accurate noise magnitude (within ~5%) and correlation (matching noise-power spectrum, NPS). Other image quality characteristics (e.g., spatial resolution, contrast, and artifacts associated with beam hardening and scatter) were also realistically presented at all levels of dose and across reconstruction methods, including statistical reconstruction. Conclusion: Generating low-dose preview images for a broad range of protocols gives a useful method to select minimum-dose techniques that accounts for complex factors of imaging task, patient-specific anatomy, and observer preference. The ability to accurately simulate the influence of low-dose acquisition in statistical reconstruction provides an especially valuable means of identifying low-dose limits in a manner that does not rely on a model for the nonlinear reconstruction process or a model of observer performance.
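A generic sketch of projection-domain correlated noise injection is shown below; the gain, blur, and dose-scaling parameters are illustrative placeholders, whereas in the paper these quantities are calibrated to the specific C-arm system and include an explicit electronic-noise term.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def low_dose_preview_projection(proj, dose_fraction, gain=1.0, blur_sigma=0.7, rng=None):
    """Inject correlated quantum noise into a full-dose projection so that it
    mimics an acquisition at `dose_fraction` of the original dose."""
    rng = np.random.default_rng() if rng is None else rng
    quanta = np.maximum(proj / gain, 0.0)                        # estimated detected quanta
    extra_var = gain**2 * quanta * (1.0 / dose_fraction - 1.0)   # added quantum variance
    corr = gaussian_filter(rng.normal(size=proj.shape), blur_sigma)
    corr /= corr.std()                                           # unit-variance correlated field
    return proj + corr * np.sqrt(extra_var)                      # noisy "preview" projection
```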
Prospective optimization of CT under tube current modulation: I. organ dose
Xiaoyu Tian, Xiang Li, W. Paul Segars, et al.
In an environment in which computed tomography (CT) has become an indispensable diagnostic tool employed with great frequency, dose concerns at the population level have become a subject of public attention. In that regard, optimizing radiation dose has become a core problem for the CT community. As a fundamental step toward optimizing radiation dose, it is crucial to effectively quantify the radiation dose for a given CT exam. Such dose estimates need to be patient-specific to reflect individual radiation burden. They further need to be prospective so that the scanning parameters can be adjusted before the scan is performed. The purpose of this study was to prospectively estimate organ dose in abdominopelvic CT exams under tube current modulation (TCM). CTDIvol-normalized organ dose coefficients (h_fixed) for fixed tube current were first estimated using a validated Monte Carlo simulation program and 58 computational phantoms. To account for the effect of the TCM scheme, a weighted CTDIvol was computed for each organ based on the tube current modulation profile. The organ dose was predicted by multiplying the weighted CTDIvol with the organ dose coefficients (h_fixed). To quantify prediction accuracy, each predicted organ dose was compared with the organ dose simulated from the Monte Carlo program with the TCM profile explicitly modeled. The predicted organ dose showed good agreement with the simulated organ dose across all organs and modulation strengths. For a CT exam with an average CTDIvol of 10 mGy, the absolute median error across all organs was 0.64 mGy (-0.21 and 0.97 mGy for the 25th and 75th percentiles, respectively). The percentage differences (normalized by the CTDIvol of the exam) were within 15%. This study developed a quantitative model to predict organ dose for clinical abdominopelvic scans. Such information may aid in the optimization of CT protocols.
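The prediction step described above can be summarized as follows (notation lightly simplified from the text):

```latex
% Prospective organ dose estimate under tube current modulation:
% h_organ^fixed is the CTDIvol-normalized organ dose coefficient at fixed tube current,
% and CTDIvol^w(organ) is the CTDIvol weighted by the TCM profile over the organ extent.
\[
  D_{\mathrm{organ}} \approx h^{\mathrm{fixed}}_{\mathrm{organ}} \cdot
  \mathrm{CTDI}^{\,w}_{\mathrm{vol}}(\mathrm{organ})
\]
```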
Size-specific dose estimates (SSDE) for a prototype orthopedic cone-beam CT system
Samuel Richard, Nathan Packard, John Yorkston
Patient-specific dose evaluation and reporting is becoming increasingly important for x-ray imaging systems. Even imaging systems with lower patient dose, such as CBCT scanners for extremities, can benefit from accurate, size-specific dose assessment and reporting. This paper presents CTDI dose measurements performed on a prototype CBCT extremity imaging system across a range of body part sizes (5, 10, 16, and 20 cm effective diameter) and tube voltages (70, 80, and 90 kVp, with 0.1 mm Cu added filtration). The ratios of the CTDI measurements for the 5, 10, and 20 cm phantoms to the CTDI measurement for the 16 cm phantom were calculated, and the results were compared to the size-specific dose estimate conversion factors (AAPM Report 204), which were evaluated on a conventional CT scanner. Due to the short-scan nature of the system (220 degree acquisition angle), the dependence of the CTDI values on the initial angular orientation of the phantom with respect to the imager was also evaluated. The study demonstrated that for a 220 degree acquisition sequence, the initial angular position of the conventional CTDI phantom with respect to the scanner does not significantly affect the CTDI measurements (varying by less than 2% overall across the range of possible initial angular positions). The size-specific conversion factor was found to be comparable to the Report 204 factors for the large phantom size (20 cm) but lower, by up to 12%, for the 5 cm phantom (i.e., 1.35 for CBCT vs 1.54 for CT). The dependence of the factors on kVp was minimal overall, though most pronounced for the smaller diameters. These results indicate that dedicated conversion factors need to be used for short-scan CBCT systems in order to provide more accurate dose reporting across the range of body sizes found in extremity scanners.
Monte Carlo investigation of backscatter factors for skin dose determination in interventional neuroradiology procedures
Artur Omar, Hamza Benmakhlouf, Maria Marteinsdottir, et al.
Complex interventional and diagnostic x-ray angiographic (XA) procedures may yield patient skin doses exceeding the threshold for radiation-induced skin injuries. Skin dose is conventionally determined by converting the incident air kerma free-in-air into entrance surface air kerma, a process that requires the use of backscatter factors. Subsequently, the entrance surface air kerma is converted into skin kerma using tissue-to-air mass energy-absorption coefficient ratios; for the photon energies used in XA, the skin kerma is identical to the skin dose. The purpose of this work was to investigate how the cranial bone affects backscatter factors for the dosimetry of interventional neuroradiology procedures. The PENELOPE Monte Carlo system was used to calculate backscatter factors at the entrance surface of a spherical and a cubic water phantom that includes a cranial bone layer. The simulations were performed for different clinical x-ray spectra, field sizes, and thicknesses of the bone layer. The results show a reduction of up to 15% when a cranial bone layer is included in the simulations, compared with conventional backscatter factors calculated for a homogeneous water phantom. The reduction increases for thicker bone layers, softer incident beam qualities, and larger field sizes, indicating that, due to the increased photoelectric cross-section of cranial bone compared to water, the bone layer acts primarily as an absorber of low-energy photons. For neurointerventional radiology procedures, backscatter factors calculated at the entrance surface of a water phantom containing a cranial bone layer increase the accuracy of skin dose determination.
Phantoms
Design of anthropomorphic textured phantoms for CT performance evaluation
Commercially available computed tomography (CT) technologies such as iterative reconstruction (IR) have the potential to enable reduced patient doses while maintaining diagnostic image quality. However, systematically determining safe dose reduction levels for IR algorithms is a challenging task due to their nonlinear nature. Most attempts to evaluate IR algorithms rely on measurements made in uniform phantoms. Such measurements may overstate the dose reduction potential of IR because they do not account for the complex relationship between anatomical variability and image quality. The purpose of this study was to design anatomically informed textured phantoms for CT performance evaluation. Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom includes intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft-tissue phantom was designed based on a three-dimensional clustered lumpy background with embedded low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and imaged on a modern multi-slice clinical CT scanner to assess the noise performance of a commercial IR algorithm in the context of uniform and textured backgrounds. Fifty repeat acquisitions were acquired for each background type, and noise was assessed by measuring the pixel standard deviation across the ensemble of repeated acquisitions. For pixels in uniform areas, the IR algorithm reduced the noise magnitude (standard deviation) by 60% compared to FBP. However, for edge pixels, the noise magnitude in the IR images ranged from 20% higher to 40% lower than in FBP. In all FBP images and in IR images of the uniform phantom, the noise appeared to be globally non-stationary (i.e., spatially dependent) but locally stationary (within a reasonably small region of interest). In the IR images of the textured phantoms, the noise was both globally and locally non-stationary.
The development of a population of 4D pediatric XCAT phantoms for CT imaging research and optimization
Hannah Norris, Yakun Zhang, Jack Frush, et al.
With the increased use of CT examinations, the associated radiation dose has become a large concern, especially for pediatrics. Much research has focused on reducing radiation dose through new scanning and reconstruction methods. Computational phantoms provide an effective and efficient means for evaluating image quality, patient-specific dose, and organ-specific dose in CT. We previously developed a set of highly-detailed 4D reference pediatric XCAT phantoms at ages of newborn, 1, 5, 10, and 15 years with organ and tissues masses matched to ICRP Publication 89 values. We now extend this reference set to a series of 64 pediatric phantoms of a variety of ages and height and weight percentiles, representative of the public at large. High resolution PET-CT data was reviewed by a practicing experienced radiologist for anatomic regularity and was then segmented with manual and semi-automatic methods to form a target model. A Multi-Channel Large Deformation Diffeomorphic Metric Mapping (MC-LDDMM) algorithm was used to calculate the transform from the best age matching pediatric reference phantom to the patient target. The transform was used to complete the target, filling in the non-segmented structures and defining models for the cardiac and respiratory motions. The complete phantoms, consisting of thousands of structures, were then manually inspected for anatomical accuracy. 3D CT data was simulated from the phantoms to demonstrate their ability to generate realistic, patient quality imaging data. The population of pediatric phantoms developed in this work provides a vital tool to investigate dose reduction techniques in 3D and 4D pediatric CT.
Construction of anthropomorphic hybrid, dual-lattice voxel models for optimizing image quality and dose in radiography
Nina Petoussi-Henss, Janine Becker, Matthias Greiter, et al.
In radiography there is generally a conflict between achieving the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e., phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the fine organ structures necessary for assessing imaging quality. The aim of this work is to develop hybrid, dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis for investigating, by means of simulations, the relationships between patient dose and image quality for various imaging parameters and for developing methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (polygon mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e., organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. In this context, "dual lattice" means the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified to perform dual-lattice transport. Results are presented for a thorax examination.
Population of 100 realistic, patient-based computerized breast phantoms for multi-modality imaging research
Breast imaging is an important area of research, with many new techniques being investigated to further reduce the morbidity and mortality of breast cancer through early detection. Computerized phantoms can provide an essential tool to quantitatively compare new imaging systems and techniques. Current phantoms, however, lack sufficient realism in depicting the complex 3D anatomy of the breast. In this work, we created one hundred realistic and detailed 3D computational breast phantoms based on high-resolution CT datasets from normal patients. We also developed a finite-element application to simulate different compression states of the breast, making the phantoms applicable to multi-modality imaging research. The breast phantoms and tools developed in this work were packaged into user-friendly software applications for distribution to the breast imaging research community.
A second generation of physical anthropomorphic 3D breast phantoms based on human subject data
Adam Nolte, Nooshin Kiarashi, Ehsan Samei, et al.
Previous fabrication of anthropomorphic breast phantoms has demonstrated their viability as models for 2D (mammography) and 3D (tomosynthesis) breast imaging systems. Further development of these models will be essential for the evaluation of breast x-ray systems. There is also the potential to use them as the ground truth in virtual clinical trials. The first generation of phantoms was segmented from human subject dedicated breast computed tomography data and fabricated into physical models using high-resolution 3D printing. Two variations were made. The first was a multi-material model (doublet) printed with two photopolymers to represent glandular and adipose tissues with the greatest physical contrast available, mimicking 75% and 35% glandular tissue. The second model was printed with a single 75% glandular-equivalent photopolymer (singlet) to represent glandular tissue, which can be filled independently with an adipose-equivalent material such as oil. For this study, we have focused on improving the latter, the singlet phantom. First, the temporary oil filler has been replaced with a permanent adipose-equivalent urethane-based polymer. This offers more realistic contrast compared to the multi-material approach, at the expense of air bubbles and pockets that form during the filling process. Second, microcalcification clusters have been included in the singlet model via crushed eggshells, which have a chemical composition very similar to calcifications in vivo. The results from these new prototypes demonstrate significant improvement over the first generation of anthropomorphic physical phantoms.
Automatic insertion of simulated microcalcification clusters in a software breast phantom
Varsha Shankla, David D. Pokrajac, Susan P. Weinstein, et al.
An automated method has been developed to insert realistic clusters of simulated microcalcifications (MCs) into computer models of breast anatomy. This algorithm has been developed as part of a virtual clinical trial (VCT) software pipeline, which includes the simulation of breast anatomy, mechanical compression, image acquisition, image processing, display and interpretation. An automated insertion method has value in VCTs involving large numbers of images. The insertion method was designed to support various insertion placement strategies, governed by probability distribution functions (pdf). The pdf can be predicated on histological or biological models of tumor growth, or estimated from the locations of actual calcification clusters. To validate the automated insertion method, a 2-AFC observer study was designed to compare two placement strategies, undirected and directed. The undirected strategy could place a MC cluster anywhere within the phantom volume. The directed strategy placed MC clusters within fibroglandular tissue on the assumption that calcifications originate from epithelial breast tissue. Three radiologists were asked to select between two simulated phantom images, one from each placement strategy. Furthermore, questions were posed to probe the rationale behind the observer’s selection. The radiologists found the resulting cluster placement to be realistic in 92% of cases, validating the automated insertion method. There was a significant preference for the cluster to be positioned on a background of adipose or mixed adipose/fibroglandular tissues. Based upon these results, this automated lesion placement method will be included in our VCT simulation pipeline.
Metrology and System Characterization
Cascaded systems modeling of signal, noise, and DQE for x-ray photon counting detectors
J. Xu, W. Zbijewski, G. Gang, et al.
Photon counting detector (PCD) x-ray imaging systems have seen increasing use in the past decade in applications such as low-dose radiography and tomography. A cascaded systems analysis model has been developed to describe the signal and noise transfer characteristics of such systems in a manner that accounts for unique PCD functionality (such as the application of a threshold) and explicitly considers the distribution of quanta through each stage. This model was used to predict the mean signal, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE) of a silicon-strip PCD system, and these predictions were compared to measurements across a range of exposure conditions and thresholds. Further, the model was used to investigate the impact of design parameters such as detector thickness and pulse height amplification as well as unique PCD performance effects such as charge sharing and additive noise with respect to threshold. The development of an analytical model for the prediction of such metrics provides a framework for understanding the complex imaging performance characteristics of PCD systems, which is especially important in the early development of new radiographic and tomographic applications, and a guide to task-based performance optimization.
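The metrics named above are tied together by the standard frequency-dependent DQE definition, quoted here as the conventional relation rather than from the paper; the PCD-specific threshold, charge-sharing, and additive-noise effects enter through the mean signal, MTF, and NPS themselves.

```latex
% Frequency-dependent detective quantum efficiency, with \bar{d} the mean detector
% signal (counts) per unit area and \bar{q}_0 the incident photon fluence:
\[
  \mathrm{DQE}(f) = \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}_{0}\,\mathrm{NPS}(f)}
\]
```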
Detector system comparison using relative CNR for specific imaging tasks related to neuro-endovascular image-guided interventions (neuro-EIGIs)
Brendan Loughran, S. N. Swetadri Vasan, Vivek Singh, et al.
Neuro-EIGIs require visualization of very small endovascular devices and small vessels. A microangiographic fluoroscope (MAF) x-ray detector was developed to improve on the standard flat panel detector's (FPD's) ability to visualize small objects during neuro-EIGIs. To compare the performance of the FPD and MAF imaging systems, specific imaging tasks related to those encountered during neuro-EIGIs were used to assess the contrast-to-noise ratio (CNR) of different objects. A bar phantom and a stent were placed at a fixed distance from the x-ray focal spot to mimic a clinical imaging geometry, and both objects were imaged by each detector system. Imaging was done without anti-scatter grids and under the same conditions for each system, including the same x-ray beam quality, collimator position, source-to-imager distance (SID), and source-to-object distance (SOD). For each object, relative contrasts were found for both imaging systems using the peak and trough signals. The relative noise was found using the mean background signal and background noise for varying detector exposures. Next, CNRs were calculated from these values for each object imaged and for each imaging system used. A relative CNR metric is defined and used to compare detector imaging performance. The MAF utilizes a temporal filter to reduce the overall image noise; the effect of using this filter with the MAF on the CNRs of the imaged objects is also reported. The relative CNR demonstrated that the MAF has superior CNRs for most objects and exposures investigated for this specific imaging task.
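The arithmetic behind the metric can be sketched as follows; the ROI choices and exposure normalization are the authors', and only the basic combination of contrast and noise is shown here.

```python
def cnr(peak, trough, bg_mean, bg_std):
    """CNR built from relative contrast and relative noise, as described above."""
    relative_contrast = abs(peak - trough) / bg_mean
    relative_noise = bg_std / bg_mean
    return relative_contrast / relative_noise   # reduces to |peak - trough| / bg_std

def relative_cnr(cnr_maf, cnr_fpd):
    """Relative CNR metric comparing the two detectors (values > 1 favor the MAF)."""
    return cnr_maf / cnr_fpd
```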
Method for measuring the intensity profile of a CT fan-beam filter
Research on CT systems often requires knowledge of intensity as a function of angle in the fan-beam, due to the presence of bowtie filters, for studies such as dose reduction simulation, Monte Carlo dose calculations, or statistical reconstruction algorithms. Since manufacturers consider the x-ray bowtie filter design to be proprietary information, several methods have been proposed to measure the beam intensity profile independently: 1) calculate statistical properties of noise in acquired sinograms (requires access to raw data files, which is also vendor proprietary); 2) measure the waveform of a dosimeter located away from the isocenter (requires dosimeter equipment costing > $10K). We present a novel method that is inexpensive (parts costing ~$100 from any hardware store, using Gafchromic film at ~$3 per measurement), requires no proprietary information, and can be performed in a few minutes. A fixture is built from perforated steel tubing, which forms an aperture that selectively samples the intensity at a particular fan-beam angle in a rotating gantry. Two exposures (1× and 2×) are made and self-developing radiochromic film (Gafchromic XR, Ashland Inc.) is then scanned on an inexpensive PC document scanner. An analysis method is described that linearizes the measurements for relative exposure. The resultant profile is corrected for geometric effects (1/L^2 fall-off, gantry dwell time) and background exposure, providing a noninvasive estimate of the CT fan-beam intensity present in an operational CT system. This method will allow researchers to conveniently measure parameters required for modeling the effects of bowtie filters in clinical scanners.
Prospective optimization of CT under tube current modulation: II. image quality
Xiaoyu Tian, Josh Wilson, Donald Frush, et al.
Despite the significant clinical benefits of computed tomography (CT) in providing diagnostic information for a broad range of diseases, concerns have been raised regarding the potential cancer risk induced by CT radiation exposure. In that regard, optimizing CT protocols and minimizing radiation dose have become the core problem for the CT community. To develop strategies to optimize radiation dose, it is crucial to effectively characterize CT image quality. Such image quality estimates need to be prospective to ensure that optimization can be performed before the scan is initiated. The purpose of this study was to establish a phantom-based methodology to predict quantum noise in CT images as a first step in our image quality prediction. Quantum noise was measured using a variable-sized phantom under clinical protocols. The mathematical relationship between noise and water-equivalent-diameter (Dw) was further established. The prediction was achieved by ascribing a noise value to a patient according to the patient’s water-equivalent-diameter. The prediction accuracy was evaluated in anthropomorphic phantoms across a broad range of sizes, anatomy, and reconstruction algorithms. The differences between the measured and predicted noise were within 10% for anthropomorphic phantoms across all sizes and anatomy. This study proposed a practically applicable technique to predict noise in CT images. With a prospective estimation of image quality level, the scanning parameters can then be adjusted to ensure optimized imaging performance.
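A minimal sketch of the prediction step follows, assuming an exponential dependence of quantum noise on water-equivalent diameter as the fitted relationship; the actual functional form and calibration data used in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data from a variable-sized phantom:
dw = np.array([15.0, 20.0, 25.0, 30.0, 35.0, 40.0])    # water-equivalent diameter [cm]
noise = np.array([6.1, 8.0, 10.9, 14.8, 20.3, 27.5])   # measured HU standard deviation

def noise_model(dw, a, b):
    # Assumed exponential dependence of quantum noise on Dw (a modeling choice,
    # not necessarily the functional form used in the paper).
    return a * np.exp(b * dw)

(a, b), _ = curve_fit(noise_model, dw, noise, p0=(1.0, 0.05))

# Prospective prediction for a patient with Dw = 28 cm:
print(noise_model(28.0, a, b))
```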
A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis
Ravi Mahadevan, Lynda C. Ikejimba, Yuan Lin, et al.
Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-Ray projections at various angles around the breast. DBT improves cancer detection as it minimizes tissue overlap that is present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and the Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. The task based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10mm lesion within the breast containing iodine concentrations between 0.0mg/ml and 8.6mg/ml. The TTF was calculated using the reconstruction of an edge phantom, and the NPS was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d’ was calculated to assess image quality of the reconstructed phantom images. Image quality was assessed for both conventional single-energy and dual-energy subtracted reconstructions. Dose allocation between the high and low energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than the FBP for single energy reconstructions. For dual energy subtraction, detectability index was maximized when most of the dose was allocated to the high energy image. With that dose allocation, the performance trend for reconstruction algorithms reversed; FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual energy FBP, as it provides stable results.
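One common way to compute a detectability index from a task function, TTF, and NPS is the prewhitened-observer form d'² = ∫ |W(f)|² TTF²(f) / NPS(f) df; the sketch below uses that form in 1D with made-up spectra, since the abstract does not specify the observer model or frequency sampling.

```python
import numpy as np

def detectability_prewhitened(task_w, ttf, nps, df):
    """Prewhitened-observer detectability index (one of several common forms):
    d'^2 = sum_f |W(f)|^2 * TTF(f)^2 / NPS(f) * df.
    The observer model used in the paper is not stated; this is an assumption."""
    d2 = np.sum((task_w ** 2) * (ttf ** 2) / nps) * df
    return np.sqrt(d2)

# Illustrative 1D frequency sampling (a real DBT analysis would be 2D/3D):
f = np.linspace(0.01, 5.0, 500)
df = f[1] - f[0]
task_w = np.exp(-(np.pi * 5.0 * f) ** 2 / 4)   # hypothetical task function for a ~10 mm lesion
ttf = np.exp(-f / 2.0)                          # hypothetical task transfer function
nps = 1e-5 * (1 + f)                            # hypothetical noise-power spectrum
print(detectability_prewhitened(task_w, ttf, nps, df))
```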
Performance Evaluation
A refined methodology for modeling volume quantification performance in CT
Baiyu Chen, Joshua Wilson, Ehsan Samei
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e’). The e’ was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e’ calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR’s quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e’ values and empirical precision no longer depended on reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the nonlinearity of iterative reconstruction in structured background, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
Internal noise in channelized Hotelling observer (CHO) study of detectability index: differential phase contrast CT vs. conventional CT
Xiangyang Tang, Yi Yang
The channelized Hotelling observer (CHO) model, wherein internal noise plays an important role to account for the psychophysiological uncertainty in human visual perception, has found extensive applications in the assessment of image quality in nuclear medicine, mammography and conventional CT. Recently, we extended its application to investigating the detectability index of differential phase contrast (DPC) CT, an emerging CT technology with the potential to improve soft tissue differentiation. We found that the quantitative determination of internal noise in the CHO study of DPC-CT’s detectability index should differ from that in conventional CT. It is believed that the root cause of such a difference lies in the distinct noise spectra of DPC-CT and conventional CT. In this paper, we present the preliminary results and investigate adequate strategies to quantitatively determine the internal noise of the CHO model for its application in the assessment of image quality in DPC-CT and its comparison with that of conventional CT.
Towards continualized task-based resolution modeling in PET imaging
We propose a generalized resolution modeling (RM) framework, including extensive task-based optimization, wherein we continualize the conventionally discrete framework of RM vs. no RM, to include varying degrees of RM. The proposed framework has the advantage of providing a trade-off between the enhanced contrast recovery by RM and the reduced inter-voxel correlations in the absence of RM, and to enable improved task performance. The investigated context was that of oncologic lung FDG PET imaging. Given a realistic blurring kernel of FWHM h (‘true PSF’), we performed iterative EM including RM using a wide range of ‘modeled PSF’ kernels with varying widths h′. In our simulations, h = 6mm, while h′ varied from 0 (no RM) to 12mm, thus considering both underestimation and overestimation of the true PSF. Detection task performance was assessed using prewhitened (PWMF) and nonprewhitened matched filter (NPWMF) observers. It was demonstrated that an underestimated resolution blur (h′ = 4mm) enhanced task performance, while slight over-estimation (h′ = 7mm) also achieved enhanced performance. The latter is ironically attributed to the presence of ringing artifacts. Nonetheless, in the case of the NPWMF, the increasing intervoxel correlations with increasing values of h′ degrade detection task performance, and underestimation of the true PSF provides the optimal task performance. The proposed framework also achieves significant improvement of reproducibility, which is critical in quantitative imaging tasks such as treatment response monitoring.
CT x-ray tube voltage optimisation and image reconstruction evaluation using visual grading analysis
Xiaoming Zheng, Ted Myeongsoo Kim, Rob Davidson, et al.
The purposes of this work were to find an optimal x-ray voltage for CT imaging and to determine the diagnostic effectiveness of image reconstruction techniques by using visual grading analysis (VGA). Images of the PH-5 CT abdomen phantom (Kagaku Co, Kyoto) were acquired by the Toshiba Aquilion ONE 320-slice CT system with various exposures (from 10 to 580 mAs) under different tube peak voltages (80, 100 and 120 kVp). The images were reconstructed by employing the FBP and the AIDR 3D iterative reconstructions with Mild, Standard and Strong FBP blending. Image quality was assessed by measuring noise, contrast to noise ratio and human observers’ VGA scores. The CT dose index CTDIv was obtained from the values displayed on the images. The best fit for the curves of image quality VGA vs dose CTDIv is a logistic function, as estimated in SPSS. A threshold dose Dt is defined as the CTDIv at which the image quality is just acceptable for diagnosis, and a figure of merit (FOM) is defined as the slope of the standardised logistic function. The Dt and FOM were found to be 5.4, 8.1 and 9.1 mGy and 0.47, 0.51 and 0.38 under the tube voltages of 80, 100 and 120 kVp, respectively, from images reconstructed by the FBP technique. The Dt and FOM values were lower from the images reconstructed by the AIDR 3D in comparison with the FBP technique. The optimal x-ray peak voltage for the imaging of the PH-5 abdomen phantom by the Aquilion ONE CT system was found to be 100 kVp. The images reconstructed by the FBP are more diagnostically effective than those reconstructed by the AIDR 3D, but at a higher threshold dose Dt to the patient.
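A sketch of the logistic fit and figure of merit described above is given below, assuming a three-parameter logistic for VGA versus CTDIv and taking the FOM as the midpoint slope of the unit-amplitude (standardized) curve; the exact parametrization used with SPSS in the study is not stated, and the data here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, vga_max, k, d50):
    # Assumed 3-parameter logistic; illustrative only.
    return vga_max / (1.0 + np.exp(-k * (dose - d50)))

# Hypothetical VGA scores versus CTDIv [mGy] at one tube voltage:
dose = np.array([2.0, 4.0, 6.0, 9.0, 13.0, 18.0, 25.0])
vga = np.array([1.2, 1.9, 2.8, 3.6, 4.2, 4.6, 4.8])

(vga_max, k, d50), _ = curve_fit(logistic, dose, vga, p0=(5.0, 0.3, 8.0))

fom = k / 4.0   # slope of the unit-amplitude (standardized) logistic at its midpoint
print(d50, fom) # d50 plays the role of a characteristic dose; Dt would come from
                # the just-acceptable VGA level rather than the midpoint
```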
High-performance soft-tissue imaging in extremity cone-beam CT
W. Zbijewski, A. Sisniega, J. W. Stayman, et al.
Purpose: Clinical performance studies of an extremity cone-beam CT (CBCT) system indicate excellent bone visualization, but point to the need for improvement of soft-tissue image quality. To this end, a rapid Monte Carlo (MC) scatter correction is proposed, and Penalized Likelihood (PL) reconstruction is evaluated for noise management. Methods: The accelerated MC scatter correction involved fast MC simulation with a low number of photons implemented on a GPU (10^7 photons/sec), followed by Gaussian kernel smoothing in the detector plane and across projection angles. PL reconstructions were investigated for reduction of imaging dose for projections acquired at ~2 mGy. Results: The rapid scatter estimation yielded root-mean-squared errors of scatter projections of ~15% of peak scatter intensity for 5×10^6 photons/projection (runtime ~0.5 sec/projection) and 25% improvement in fat-muscle contrast in reconstructions of a cadaveric knee. PL reconstruction largely restored soft-tissue visualization at 2 mGy dose to that of a 10 mGy FBP image. Conclusion: The combination of rapid (5-10 minutes/scan) MC-based, patient-specific scatter correction and PL reconstruction offers an important means to overcome the current limitations of extremity CBCT in soft-tissue imaging.
Analyzing the performance of ultrasonic B-mode imaging for breast lesion diagnosis
We studied the efficiency of transferring task information from each stage of the ultrasonic imaging chain to the next by computing the total task energy available at each stage. We computed the task-energy efficiency for the B-mode data with respect to the RF data as an expression to predict performance efficiency. We also compared the performance efficiency of B-mode image observers with that of the ideal observer of the RF data using both analytical SNR expressions and Monte Carlo experiments over a range of visual tasks related to breast lesion diagnosis. The performance efficiency results are compared with the prediction from task-energy efficiency and information efficiency. It is shown that task-energy efficiency can closely predict the performance efficiency, especially for large-area tasks.
Poster Session: Algorithms and Applications
Investigation of the potential causes of partial scan artifacts in dynamic CT myocardial perfusion imaging
Yinghua Tao, Michael Speidel, Timothy Szczykutowicz, et al.
In recent years, there have been several findings regarding CT number variations (partial scan artifact or PSA) across time in dynamic myocardial perfusion studies with short scan gated reconstruction. These variations are correlated with the view angle range corresponding to the short scan acquisition for a given cardiac phase, which can vary from one cardiac cycle to another due to the asynchrony between heart rate and gantry rotation speed. In this study, we investigate several potential causes of PSA, including noise, beam hardening and scatter, using numerical simulations. In addition, we investigate the partial scan artifact in in vivo data sets from a single-source 64-slice diagnostic CT scanner, and report its effect on perfusion analysis. Results indicated that among all three factors investigated, scatter can cause obvious partial scan artifact in dynamic myocardial perfusion imaging. Further, scatter is a low frequency phenomenon and is not heavily dependent on the changing contrasts, as both the frequency method and the virtual scan method are effective in reducing partial scan artifact. However, PSA does not necessarily lead to different blood volume maps compared to the full scan, because these maps are usually generated with a curve fitting procedure.
Quantification of microarchitectural anisotropy in bone with diffraction enhanced imaging
Dean M. Connor Jr., Meenal Mehrotra, Amanda C. LaRue
Purpose: The purpose of this study is to determine if diffraction enhanced imaging (DEI) can quantify anisotropy in bone microarchitecture. Background: Osteoporosis is characterized by low bone mass and microarchitectural deterioration of bone. A noninvasive tool for measuring the degree of anisotropy (DA) in bone microarchitecture will help clinicians better assess fracture risk in osteoporotic patients. DEI detects small angular deflections in an x-ray beam, and is only sensitive to angular changes in one plane. If the beam is refracted by multiple anisotropic microstructures (e.g. osteocyte lacunae and pores) in bone, the angular spreading can be measured with DEI and differences in the amount of spreading for different bone orientations is indicative of the DA in bone microarchitecture. Method: An x-ray-tube based DEI system was used to collect an array of DEI reflectivity profiles measured through bovine cortical bone samples with the bones oriented with the bone axis in the plane perpendicular to the propagation of the x-ray beam. Micro-CT images of the bones were obtained using a Scanco uCT40 ex vivo scanner, and the DA of the pore structure was quantified using BoneJ. Results: The maximum and minimum measured reflectivity profile widths through bone varied by a factor of two; this suggests that the microarchitecture is preferentially aligned with the bone axis in a 2-to-1 ratio. The DA for the cortical pores was 0.6, which agrees with DEI’s anisotropy measure. Conclusions: The preliminary findings of this study suggest that DEI is sensitive to anisotropy in bone microarchitecture.
Assessment of phase based dose modulation for improved dose efficiency in cardiac CT on an anthropomorphic motion phantom
Adam Budde, Roy Nilsen, Brian Nett
State of the art automatic exposure control modulates the tube current across view angle and Z based on patient anatomy for use in axial full scan reconstructions. Cardiac CT, however, uses a fundamentally different image reconstruction that applies a temporal weighting to reduce motion artifacts. This paper describes a phase based mA modulation that goes beyond axial and ECG modulation; it uses knowledge of the temporal view weighting applied within the reconstruction algorithm to improve dose efficiency in cardiac CT scanning. Using physical phantoms and synthetic noise emulation, we measure how knowledge of sinogram temporal weighting and the prescribed cardiac phase can be used to improve dose efficiency. First, we validated that a synthetic CT noise emulation method produced realistic image noise. Next, we used the CT noise emulation method to simulate mA modulation on scans of a physical anthropomorphic phantom where a motion profile corresponding to a heart rate of 60 beats per minute was used. The CT noise emulation method matched noise to lower dose scans across the image within 1.5% relative error. Using this noise emulation method to simulate modulating the mA while keeping the total dose constant, the image variance was reduced by an average of 11.9% on a scan with 50 msec padding, demonstrating improved dose efficiency. Radiation dose reduction in cardiac CT can be achieved while maintaining the same level of image noise through phase based dose modulation that incorporates knowledge of the cardiac reconstruction algorithm.
Image registration for motion estimation in cardiac CT
Bibo Shi, Gene Katsevich, Be-Shan Chiang, et al.
Motion estimation is an important method for improving image quality by compensating for cardiac motion at the reconstructed phase. We tackle the cardiac motion estimation problem using an image registration approach. We compare the performance of three gradient-based registration methods on clinical data. In addition to simple gradient descent, we test the Nesterov accelerated descent and conjugate gradient algorithms. The results show that accelerated gradient methods provide significant speedup over conventional gradient descent with no loss of image quality.
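For reference, a generic form of the Nesterov accelerated gradient iteration tested in the paper is sketched below on a toy quadratic cost standing in for the registration similarity term; this is not the authors' implementation, and the matrices are hypothetical.

```python
import numpy as np

def nesterov_descent(grad, x0, step, n_iter=100):
    """Nesterov accelerated gradient descent (generic textbook form).

    grad   : callable returning the gradient of the registration cost
    x0     : initial motion/deformation parameters
    step   : step size (typically 1/L for an L-Lipschitz gradient)
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = y - step * grad(y)                       # gradient step at look-ahead point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)      # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy quadratic "registration cost" 0.5*||A p - b||^2; A and b are hypothetical.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
grad = lambda p: A.T @ (A @ p - b)
p_hat = nesterov_descent(grad, np.zeros(10), step=1.0 / np.linalg.norm(A, 2) ** 2, n_iter=200)
print(np.linalg.norm(A @ p_hat - b))
```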
A novel Region of Interest (ROI) imaging technique for biplane imaging in interventional suites: high-resolution small field-of-view imaging in the frontal plane and dose-reduced, large field-of-view standard-resolution imaging in the lateral plane
Endovascular image-guided interventional (EIGI) treatment of neuro-vascular conditions such as aneurysms, stenosed arteries, and vessel thrombosis makes use of treatment devices such as stents, coils, and balloons which have very small feature sizes, tens of microns to a few hundred microns, and hence demands a high resolution imaging system. The current state-of-the-art flat panel detector (FPD) has about a 200-um pixel size with a Nyquist frequency of 2.5 lp/mm. For higher-resolution imaging a charge-coupled device (CCD) based Micro-Angio Fluoroscope (MAF-CCD) with a pixel size of 35um (Nyquist frequency of 11 lp/mm) was developed and previously reported. Although the detector addresses the high resolution needs, the Field-Of-View (FOV) is limited to 3.5 cm x 3.5 cm, which is much smaller than current FPDs. During the use of the MAF-CCD for delicate parts of the intervention, it may be desirable to have real-time monitoring outside the MAF FOV with a low dose, and lower, but acceptable, quality image. To address this need, a novel imaging technique for biplane imaging systems has been developed, using an MAF-CCD in the frontal plane and a dose-reduced standard large FOV imager in the lateral plane. The dose reduction is achieved by using a combination of ROI fluoroscopy and spatially different temporal filtering, a technique that has been previously presented. In order to evaluate this technique, a simulation using images acquired during an actual EIGI treatment on a patient, followed by an actual implementation on phantoms, is presented.
Quantitative analysis of artifacts in 4D DSA: the relative contributions of beam hardening and scatter to vessel dropout behind highly attenuating structures
James Hermus, Timothy P. Szczykutowicz, Charles M. Strother, et al.
When performing Computed Tomographic (CT) image reconstruction on digital subtraction angiography (DSA) projections, loss of vessel contrast has been observed behind highly attenuating anatomy, such as dental implants and large contrast filled aneurysms. Because this typically occurs only in a limited range of projection angles, the observed contrast time course can potentially be altered. In this work, we have developed a model for acquiring DSA projections that models both the polychromatic nature of the x-ray spectrum and the x-ray scattering interactions to investigate this problem. In our simulation framework, scatter and beam hardening contributions to vessel dropout can be analyzed separately. We constructed digital phantoms with large clearly defined regions containing iodine contrast, bone, soft tissue, titanium (dental implants) or combinations of these materials. As the regions containing the materials were large and rectangular, when the phantoms were forward projected, the projections contained uniform regions of interest (ROI) and enabled accurate vessel dropout analysis. Two phantom models were used, one to model the case of a vessel behind a large contrast filled aneurysm and the other to model a vessel behind a dental implant. Cases in which both beam hardening and scatter were turned off, only scatter was turned on, only beam hardening was turned on, and both scatter and beam hardening were turned on, were simulated for both phantom models. The analysis of this data showed that the contrast degradation is primarily due to scatter. When analyzing the aneurysm case, 90.25% of the vessel contrast was lost in the polychromatic scatter image, whereas only 50.5% of the vessel contrast was lost in the beam hardening only image. When analyzing the teeth case, 44.2% of the vessel contrast was lost in the polychromatic scatter image and only 26.2% of the vessel contrast was lost in the beam hardening only image.
Calibration-free coronary artery measurements for interventional device sizing using inverse geometry x-ray fluoroscopy: in vivo validation
Michael T. Tomkowiak, Amish N. Raval, Michael S. Van Lysel, et al.
Proper sizing of interventional devices to match coronary vessel dimensions improves procedural efficiency and therapeutic outcomes. We have developed a novel method using inverse geometry x-ray fluoroscopy to automatically determine vessel dimensions without the need for magnification calibration or optimal views. To validate this method in vivo, we compared results to intravascular ultrasound (IVUS) and coronary computed tomography angiography (CCTA) in a healthy porcine model. Coronary angiography was performed using Scanning-Beam Digital X-ray (SBDX), an inverse geometry fluoroscopy system that performs multiplane digital x-ray tomosynthesis in real time. From a single frame, 3D reconstruction of the arteries was performed by localizing the depth of vessel lumen edges. The 3D model was used to directly calculate length and to determine the best imaging plane to use for diameter measurements, where out-of-plane blur was minimized and the known pixel spacing was used to obtain absolute vessel diameter. End-diastolic length and diameter measurements were compared to measurements from CCTA and IVUS, respectively. For vessel segment lengths measuring 6 mm to 73 mm by CCTA, the SBDX length error was -0.49 ± 1.76 mm (SBDX - CCTA, mean ± 1 SD). For vessel diameters measuring 2.1 mm to 3.6 mm by IVUS, the SBDX diameter error was 0.07 ± 0.27 mm (SBDX - minimum IVUS diameter, mean ± 1 SD). The in vivo agreement between SBDX-based vessel sizing and gold standard techniques supports the feasibility of calibration-free coronary vessel sizing using inverse geometry x-ray fluoroscopy.
Necessary forward model specification accuracy for basis material decomposition in spectral CT
Hans Bornefalk, Mats Persson, Mats Danielsson
Material basis decomposition in the sinogram domain requires accurate knowledge of the forward model in spectral CT. Misspecifications over a certain limit will result in biased estimates and make quantum limited quantitative CT difficult. We present a method whereby users can determine the degree of allowed misspecification error in a spectral CT forward model, and still have quantification errors that are quantum limited.
A study of the x-ray image quality improvement in the examination of the respiratory system based on the new image processing technique
Yuichi Nagai, Mayumi Kitagawa, Jun Torii, et al.
Recently, the double contrast technique in gastrointestinal examinations and the transbronchial lung biopsy in examinations of the respiratory system [1-3] have made remarkable progress. In the transbronchial lung biopsy especially, better quality x-ray fluoroscopic images are required because the examination is performed under x-ray fluoroscopic guidance. At the same time, various image processing methods [4] for x-ray fluoroscopic images have been developed as x-ray systems with flat panel detectors [5-7] have come into wide use. Recursive filtering is an effective method for reducing random noise in x-ray fluoroscopic images. However, because recursive filtering reduces noise by averaging the last few images, its effectiveness is limited when a moving object is present: the moving object produces a residual signal after filtering, and this residual signal disturbs the smooth conduct of the examination. To improve this situation, a new noise reduction method has been developed. Adaptive Noise Reduction (ANR) is a noise reduction technique that reduces only the noise, regardless of moving objects in the x-ray fluoroscopic images. The ANR is therefore well suited to the transbronchial lung biopsy performed under x-ray fluoroscopic guidance, because no residual signal from moving objects is produced after the ANR. In this paper, we explain the advantage of the ANR by comparing the performance of ANR images with that of conventional recursive filtering images.
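The conventional recursive filter referred to above can be written as an exponentially weighted running average; the sketch below shows that baseline and how a moving object leaves a residual (lag) signal. The ANR algorithm itself is not described in enough detail in the abstract to reproduce, so only the recursive baseline is shown, with synthetic data.

```python
import numpy as np

def recursive_filter(frames, alpha=0.25):
    """Simple recursive (IIR) temporal filter: y_n = alpha*x_n + (1-alpha)*y_{n-1}.

    In stationary regions the noise standard deviation is reduced by roughly
    sqrt((2-alpha)/alpha), but moving objects leave a lag/residual trail because
    old frames persist. This is only the conventional baseline described in the
    abstract; the ANR algorithm itself is not public and is not reproduced here."""
    out = np.empty_like(frames, dtype=float)
    out[0] = frames[0]
    for n in range(1, len(frames)):
        out[n] = alpha * frames[n] + (1.0 - alpha) * out[n - 1]
    return out

# Toy fluoroscopy sequence: noisy background with an object that jumps position.
rng = np.random.default_rng(1)
frames = rng.normal(100.0, 10.0, size=(20, 64, 64))
frames[:10, 20:30, 20:30] += 50.0    # object at position A for frames 0-9
frames[10:, 40:50, 40:50] += 50.0    # object moves to position B for frames 10-19
filtered = recursive_filter(frames)
# Residual ("ghost") of the object at its old position in the last filtered frame:
print(filtered[-1, 20:30, 20:30].mean() - filtered[-1, 0:10, 0:10].mean())
```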
Relaxation times estimation in MRI
Fabio Baselice, Rocchina Caivano, Aldo Cammarota, et al.
Magnetic Resonance Imaging is a very powerful technique for soft tissue diagnosis. At present, clinical evaluation is mainly conducted on the amplitude of the recorded MR image, which in some specific cases is modified by using contrast enhancement. Nevertheless, spin-lattice (T1) and spin-spin (T2) relaxation times can play an important role in the diagnosis of many pathologies, such as cancer, Alzheimer's disease, or Parkinson's disease. Different algorithms for relaxation time estimation have been proposed in the literature. In particular, the two most adopted approaches are based on Least Squares (LS) and on Maximum Likelihood (ML) techniques. As the amplitude noise is not zero mean, the first one produces a biased estimator, while the ML is unbiased but at the cost of high computational effort. Recently, attention has focused on estimation in the complex domain instead of the amplitude domain. The advantage of working with the real and imaginary decomposition of the available data is mainly the possibility of achieving higher quality estimations. Moreover, the zero-mean complex noise makes the least-squares estimator unbiased while achieving low computational times. First results of complex-domain relaxation time estimation on real datasets are presented. In particular, a patient with an occipital lesion has been imaged on a 3.0T scanner. Globally, the evaluation of relaxation times allows us to establish a more precise topography of biologically active foci, also with respect to contrast-enhanced images.
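As a point of reference for the bias issue mentioned above, the sketch below fits a mono-exponential T2 decay to synthetic multi-echo magnitude data by least squares; the Rician magnitude operation is what makes the noise non-zero-mean. The echo times, noise level, and model are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    # Mono-exponential spin-spin relaxation model.
    return s0 * np.exp(-te / t2)

te = np.array([10.0, 30.0, 50.0, 80.0, 120.0, 160.0])   # echo times [ms]
true_s0, true_t2 = 1000.0, 70.0
rng = np.random.default_rng(2)
# Magnitude of a complex signal with Gaussian noise on each channel (Rician magnitude):
signal = np.abs(t2_decay(te, true_s0, true_t2)
                + rng.normal(0, 20, te.size) + 1j * rng.normal(0, 20, te.size))

# Conventional least-squares fit on the magnitude data; the Rician noise floor at
# late echoes is the source of the bias the abstract refers to.
(s0_hat, t2_hat), _ = curve_fit(t2_decay, te, signal, p0=(800.0, 50.0))
print(t2_hat)
```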
Poster Session: Cone Beam CT
Comparison of the effect of simple and complex acquisition trajectories on the 2D SPR and 3D voxelized differences for dedicated breast CT imaging
Jainil P. Shah, Steve D. Mann, Randolph L. McKinley, et al.
The 2D scatter-to-primary (SPR) ratios and 3D voxelized difference volumes were characterized for a cone beam breast CT scanner capable of arbitrary (non-traditional) 3D trajectories. The CT system uses a 30x30cm2 flat panel imager with 197 micron pixellation and a rotating tungsten anode x-ray source with 0.3mm focal spot, with an SID of 70cm. Data were acquired for two cylindrical phantoms (12.5cm and 15cm diameter) filled with three different combinations of water and methanol yielding a range of uniform densities. Projections were acquired with two acquisition trajectories: 1) simple-circular azimuthal orbit with fixed tilt; and 2) saddle orbit following a ±15° sinusoidal trajectory around the object. Projection data were acquired in 2x2 binned mode. Projections were scatter corrected using a beam stop array method, and the 2D SPR was measured on the projections. The scatter corrected and uncorrected data were then reconstructed individually using an iterative ordered subsets convex algorithm, and the 3D difference volumes were calculated as the absolute difference between the two. Results indicate that the 2D SPR is ~7-15% higher on projections with greatest tilt for the saddle orbit, due to the longer x-ray path length through the volume, compared to the 0° tilt projections. Additionally, the 2D SPR increases with object diameter as well as density. The 3D voxelized difference volumes are an estimate of the scatter contribution to the reconstructed attenuation coefficients on a voxel level. They help visualize minor deficiencies and artifacts in the volumes due to correction methods.
C-arm perfusion imaging with a fast penalized maximum-likelihood approach
Robert Frysch, Tim Pfeiffer, Sebastian Bannasch, et al.
Perfusion imaging is an essential method for stroke diagnostics. One of the most important factors for a successful therapy is to get the diagnosis as fast as possible. Therefore our approach aims at perfusion imaging (PI) with a cone beam C-arm system providing perfusion information directly in the interventional suite. For PI the imaging system has to provide excellent soft tissue contrast resolution in order to allow the detection of small attenuation enhancement due to contrast agent in the capillary vessels. The limited dynamic range of flat panel detectors as well as the sparse sampling of the slow rotating C-arm in combination with standard reconstruction methods results in limited soft tissue contrast. We chose a penalized maximum-likelihood reconstruction method to obtain suitable results. To minimize the computational load, the 4D reconstruction task is reduced to several static 3D reconstructions. We also include an ordered subset technique with transitioning to a small number of subsets, which adds sharpness to the image with fewer iterations while also suppressing the noise. Instead of the standard multiplicative EM correction, we apply a Newton-based optimization to further accelerate the reconstruction algorithm. The latter optimization reduces the computation time by up to 70%. Further acceleration is provided by a multi-GPU implementation of the forward and backward projection, which fulfills the demands of cone beam geometry. In this preliminary study we evaluate this procedure on clinical data. Perfusion maps are computed and compared with reference images from magnetic resonance scans. We found a high correlation between both images.
Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT
Jing Wang, Xuejun Gu
Image reconstruction and motion model estimation in four dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). The proposed SMEIR algorithm consists of two alternating steps: 1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and 2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and back-projection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.
Three dimensional image guided extrapolation for cone-beam CT image reconstruction
In cone-beam CT the range of projection views measured for each given image voxel is spatially variant. In the corners of the image volume there is less projection data available to be used by the image reconstruction algorithm, due to data truncation in the z direction (i.e. along the scanner axis). To increase the fraction of voxels that may be reconstructed from a given scan, it is desirable to incorporate some extrapolated data into the image reconstruction procedure. In this work, one approach is described that consists of a two-pass procedure: the first-pass image reconstruction is performed over a larger extent in the z direction, a non-linear transform is applied to the initial reconstruction, and a forward projection is applied in order to estimate the extrapolated image data. Initial results are presented that compare the method to zeroth-order extrapolation and demonstrate improvement in the reconstruction of the corner regions for a simple numerical phantom and for anatomical phantom data from a prototype wide-coverage CT system.
Anti-scatter grid evaluation for wide-cone CT
Roman Melnyk, John Boudry, Xin Liu, et al.
Scatter is a significant source of image artifacts in wide-cone CT. Scatter management includes both scatter rejection and scatter correction. The common scatter rejection approach is to use an anti-scatter grid (ASG). Conventional CT scanners (with detector coverage not exceeding 40mm along the patient axis) typically employ one-dimensional (1D) ASGs. Such grids are quite effective for small cone angles. For larger cone angles, however, simply increasing the aspect ratio of a 1D ASG is not sufficient. In addition, a 1D ASG offers no scatter rejection along the patient axis. To ensure adequate image quality in wide-cone CT, a two-dimensional (2D) ASG is needed. In this work, we measured the amount of scatter and the degree of image artifacts typically attributable to scatter for four prototype 2D ASG designs, and we compared those to a 1D ASG. The scatter was measured in terms of the scatter-to-primary ratio (SPR). The cupping and ghosting artifacts were assessed through quantitative metrics. For the 2D ASGs, when compared to the 1D ASG, the SPR decreased by up to 66% and 75% for the 35cm water and 48cm polyethylene phantoms, respectively, at 80-160mm apertures (referenced to isocenter), as measured by the pinhole method. As measured by the two-aperture method, the SPR reduction was 59%-68% at isocenter for the 35cm water phantom at 160mm aperture. The cupping artifact was decreased by up to ~80%. The ghosting artifact was reduced as well. The results of the evaluation clearly demonstrate the advantage of using a 2D ASG for wide-cone CT.
Variance-based iterative image reconstruction from few views in limited-angle C-arm computed tomography
Wissam El Hakimi, Georgios Sakas
C-arm cone-beam computed tomography offers CT-like 3D imaging capabilities, but with the additional advantage of being appropriate for interventional suites. Due to the limitations of the data acquisition system, projections are often acquired over a short-scan angular range, resulting in significant artifacts if conventional analytic formulas are applied. Furthermore, the presence of high-density objects, like metal parts, induces streak-like artifacts, which can obscure relevant anatomy. We present a new algorithm to reduce such artifacts and enhance the quality of the reconstructed 3D volume. We make use of the variance of estimated voxel values over all projections to decrease the overall artifact level. The proposed algorithm is less sensitive to data truncation, and does not require explicit estimation of missing data. The number of required images is very low (up to 56 projections), which has several benefits, such as a significant reduction of patient dose and a shorter acquisition time. The performance of the proposed method is demonstrated based on simulations and phantom data.
An experimental study on the noise correlation properties of CBCT projection data
Hua Zhang, Luo Ouyang, Jianhua Ma, et al.
In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second- order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation results in a lower noise level as compared to the PWLS criterion without considering the noise correlation at the matched resolution.
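A minimal sketch of how the measured correlation coefficients could enter a PWLS data term is given below: a banded covariance for one detector row is built from per-bin variances and the reported first- and second-order coefficients (0.20 and 0.06), and the weighted residual is evaluated. The construction and all numbers other than those two coefficients are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def banded_covariance(variances, rho1=0.20, rho2=0.06):
    """Covariance of one detector row built from per-bin variances and the
    reported first- and second-order neighbor correlation coefficients."""
    n = variances.size
    sigma = np.sqrt(variances)
    K = np.diag(variances)
    for i in range(n - 1):
        K[i, i + 1] = K[i + 1, i] = rho1 * sigma[i] * sigma[i + 1]
    for i in range(n - 2):
        K[i, i + 2] = K[i + 2, i] = rho2 * sigma[i] * sigma[i + 2]
    return K

def pwls_data_term(y, Ax, K):
    # Weighted data fidelity 0.5 * (y - Ax)^T K^{-1} (y - Ax) used in PWLS.
    r = y - Ax
    return 0.5 * r @ np.linalg.solve(K, r)

# Hypothetical line-integral measurements for a 16-bin detector row:
rng = np.random.default_rng(3)
y = rng.normal(2.0, 0.05, 16)
Ax = np.full(16, 2.0)
K = banded_covariance(np.full(16, 0.05 ** 2))
print(pwls_data_term(y, Ax, K))
```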
A sinogram based technique for image correction and removal of metal clip artifacts in cone beam breast CT
T. Wang, Y. Shen, Y. Zhong, et al.
Cone beam CT (CBCT) technique provides true three-dimensional (3D) images of a breast; however, metal clips and needles used for surgical planning can cause artifacts, which may extend to many adjacent slices, in the reconstructed images obtained by the Feldkamp-Davis-Kress (FDK) filtered backprojection method. In this paper, a sinogram-based method to remove the metal clips in the projection image data is described and discussed for improving the quality of reconstructed breast images. First, the original projection data was reconstructed using the FDK algorithm to obtain a volumetric image with metal clips and artifacts. Second, the volumetric image was segmented by using the threshold method to obtain a 3D map of metal objects. Third, a forward projection algorithm is applied to the metal object map to obtain a projection map of the metal objects. Finally, the original projection images and the projection map of metal objects are reorganized into sinograms for correction in the angular space on a pixel-by-pixel basis. Cone beam CT images of a mastectomy breast specimen are used to demonstrate the feasibility of using this technique for removal of metal object artifacts. Preliminary results have demonstrated that metal object artifacts in 3D images were reduced and the image quality was improved.
Preliminary study of region-of-interest image reconstruction with intensity weighting in cone-beam CT using iterative algorithm
Kihong Son, Jiseoc Lee, Younjeong Lee, et al.
In computed tomography (CT) imaging, the radiation dose delivered to the patient is one of the major concerns. Many CT developers and researchers have been making efforts to reduce radiation dose. Sparse-view CT takes projections at sparser view angles and provides a viable option for reducing radiation dose. Sparse-view CT, inspired by compressive sensing (CS) theory, acquires sparsely sampled data over the projection angles to reconstruct volumetric images of the scanned object and is under active research for low-dose imaging. Also, the region-of-interest (ROI) imaging method is a reasonable approach to reducing the integral dose to the patient and the risk of overdose. In this study, we combined the two approaches to achieve ultra-low-dose imaging: sparse-view imaging and intensity-weighted region-of-interest (IWROI) imaging. The IWROI imaging technique is particularly interesting because it can substantially reduce the radiation dose to structures away from the imaging target, while allowing a stable solution of the reconstruction problem in comparison with the interior problem. We used a total-variation (TV) minimization algorithm that exploits the sparseness of the image derivative magnitude and can reconstruct images from sparse-view data. In this study, we implemented an imaging mode that combines sparse-view imaging and ROI imaging. We obtained promising results and believe that the proposed scanning approach can help reduce radiation dose to patients while preserving good-quality images for applications such as image-guided radiation therapy. Application of the method to real data is in progress.
Poster Session: Conventional CT
Reduction of metal artifacts: beam hardening and photon starvation effects
The presence of metal artifacts in CT imaging can obscure relevant anatomy and interfere with disease diagnosis. The cause and occurrence of metal artifacts are primarily due to beam hardening, scatter, partial volume and photon starvation; however, the contribution to the artifacts from each of them depends on the type of hardware. A comparison of CT images obtained with different metallic hardware in various applications, along with acquisition and reconstruction parameters, helps understand methods for reducing or overcoming such artifacts. In this work, a metal beam hardening correction (BHC) algorithm and a projection-completion based metal artifact reduction (MAR) algorithm were developed and applied to phantom and clinical CT scans with various metallic implants. Stainless steel and titanium were used to model and correct for the metal beam hardening effect. In the MAR algorithm, the corrupted projection samples are replaced by the combination of original projections and in-painted data obtained by forward projecting a prior image. The data included spine fixation screws, hip implants, dental fillings, and body extremity fixations, covering a range of clinically used metal implants. Comparison of BHC and MAR on different metallic implants was used to characterize the dominant source of the artifacts, and conceivable methods to overcome them. Results of the study indicate that beam hardening could be a dominant source of artifact in many spine and extremity fixations, whereas dental fillings and hip implants could be a dominant source of photon starvation. The BHC algorithm could significantly improve image quality in CT scans with metallic screws, whereas the MAR algorithm could alleviate artifacts in hip implants and dental fillings.
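The projection-completion step can be sketched generically as replacing sinogram samples inside the metal trace with values forward-projected from a prior image; the exact blending of original and in-painted data used in the paper is not reproduced. The example below uses toy arrays.

```python
import numpy as np

def complete_projections(sino, metal_trace, prior_sino):
    """Generic projection-completion step for metal artifact reduction:
    samples inside the metal trace are replaced by values forward-projected
    from a prior image, while all other samples keep the measured data."""
    completed = sino.copy()
    completed[metal_trace] = prior_sino[metal_trace]
    return completed

# Toy sinograms (views x detector bins); the metal trace would normally come
# from forward-projecting a segmented metal mask.
rng = np.random.default_rng(4)
sino = rng.normal(3.0, 0.1, (180, 128))
prior_sino = np.full((180, 128), 3.0)
metal_trace = np.zeros((180, 128), dtype=bool)
metal_trace[:, 60:68] = True                  # hypothetical corrupted channels
print(complete_projections(sino, metal_trace, prior_sino)[0, 58:70])
```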
Acquiring tomographic images from panoramic X-ray scanners
We propose a new method to acquire three-dimensional tomographic images of a large object from a dental panoramic X-ray scanner which was originally designed to produce a panoramic image of the teeth and jaws on a single frame. The method consists of two processes: (i) a new acquisition scheme to acquire the tomographic projection data using a narrow detector, and (ii) a dedicated model-based iterative technique to reconstruct images from the acquired projection data. In conventional panoramic X-ray scanners, the suspension arm that holds the X-ray source and the narrow detector has two moving axes, one angular and one linear. To acquire the projection data of a large object, we develop a new data acquisition scheme that emulates the acquisition of a projection view on a large detector by stitching narrow projection images, each of which is formed by the narrow detector, and we design a trajectory to move the suspension arm accordingly. To reconstruct images from the acquired projection data, an accelerated model-based iterative reconstruction method derived from the ordered subset convex maximum-likelihood expectation-maximization algorithm is used. In this method each subset of the projection data is constructed by collecting narrow projection images to form emulated tomographic projection views on a large detector. To validate the performance of the proposed method, we tested it with a real dental panoramic X-ray system. The experimental results demonstrate that the new method has great potential to enable existing panoramic X-ray scanners to provide, as an additional CT-like function, useful tomographic images.
Impact of redundant ray weighting on motion artifact in a statistical iterative reconstruction framework
Yinghua Tao, Jie Tang, Michael Speidel, et al.
In recent years, iterative reconstruction methods have been investigated extensively with the aim of reducing radiation dose while maintaining image quality in CT exams. In such a case, redundant data is usually available. In conventional FBP-type reconstructions, redundant data has to be carefully treated by applying a redundant weighting factor, such as Parker weighting. However, such a redundant weight has not been fully studied in a statistical iterative reconstruction framework. In this work, both numerical simulations and in vivo data sets were analyzed to study the impact of redundant weighting schemes on the reconstructed images for both static and moving objects. Results demonstrated that, for a static object, there was no obvious difference in the iterative reconstructions using different redundant weighting schemes, because the redundant data was consistent, and therefore, they all converged to the same solution. On the contrary, for a moving object, due to the inconsistency of the data, different redundant weighting schemes converged to different solutions, depending on the weight given to the data. The redundant weighting, if appropriately selected, can reduce motion-induced artifacts.
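For reference, one common convention for the Parker short-scan weights mentioned above is sketched below; conventions for the angular origin and fan-angle sign vary between implementations, so this is illustrative rather than the exact weighting studied in the paper.

```python
import numpy as np

def parker_weight(beta, gamma, delta):
    """Standard Parker short-scan weight for fan-beam data (one common convention).

    beta  : projection angle in [0, pi + 2*delta]
    gamma : fan angle of the ray, in [-delta, delta]
    delta : half fan angle
    """
    if beta < 0 or beta > np.pi + 2 * delta:
        return 0.0
    if beta <= 2 * (delta - gamma):
        return np.sin(np.pi / 4 * beta / (delta - gamma)) ** 2
    if beta <= np.pi - 2 * gamma:
        return 1.0
    return np.sin(np.pi / 4 * (np.pi + 2 * delta - beta) / (delta + gamma)) ** 2

# Weights for a central ray (gamma = 0) across a short scan with a 30-degree full fan:
delta = np.deg2rad(15)
betas = np.linspace(0, np.pi + 2 * delta, 7)
print([round(parker_weight(b, 0.0, delta), 3) for b in betas])
```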
Effective noise reduction and equalization in projection domain
CT image quality is affected by various artifacts, including noise. Among these artifacts of different causes, noisy data due to photon starvation should be contained at an early processing stage, as it can cause severe streaks and noise in the reconstructed CT image and hinder the mitigation of other artifacts. For low-dose imaging, it is critical to use an effective processing method to handle the photon-starved data in order to obtain the required image quality with the desired resolution, texture, and low-contrast detectability. In this paper, two promising projection domain noise reduction methods are proposed. They are derived from (1) a noise model that connects the noise behaviors in the count and attenuation domains; (2) the predicted noise reduction from a finite impulse response (FIR) filter; and (3) two pre-determined noise reduction requirements (noise equalization and electronic noise suppression). Both methods showed significant streak and noise reduction in the tested cases while reasonably maintaining image resolution.
X-ray pulsing methods for reduced-dose computed tomography in PET/CT attenuation correction
Uwe Wiedmann, V. Bogdan Neculaes, Dan Harrison, et al.
The image quality needed for CT-based attenuation correction (CTAC) is significantly lower than what is used currently for diagnostic CT imaging. Consequently, the X-ray dose required for sufficient image quality with CTAC is relatively small, potentially smaller than the lowest X-ray dose clinical CT scanners can provide. Operating modes have been proposed in which the X-rays are periodically turned on and off during the scan in order to reduce X-ray dose. This study reviews the different methods by which X-rays can be modulated in a CT scanner, and assesses their adequacy for low-dose acquisitions as required for CTAC. Calculations and experimental data are provided to exemplify selected X-ray pulsing scenarios. Our analysis shows that low-dose pulsing is possible but challenging with clinically available CT tubes. Alternative X-ray tube designs would lift this restriction.
Dose, noise and view weights in CT helical scans
Guangzhi Cao, Edgar Chino, Roy Nilsen, et al.
The amount of X-ray dose expresses itself as the noise level in the reconstructed image volume in clinical CT scans. It is important to understand the interaction between the dose, noise and reconstruction, which helps to guide the design of CT systems and reconstruction algorithms. Based on the fact that most practical reconstruction algorithms in clinical CT systems are implemented as filtered back-projection, a unified analytical framework is proposed in this work to establish the connection between dose, noise and the view weighting functions of different reconstruction algorithms in CT helical scans. The proposed framework helps one better understand the relationship between X-ray dose and image noise and is instrumental in designing the view weighting function in reconstruction without extensive simulations and experiments. Even though certain assumptions were made in order to simplify the analytical model, experimental results using both simulation data and real CT scan data show the proposed model is reasonably accurate even for objects of human body shape. In addition, based on the proposed framework, an analytical form of the theoretically optimal dose efficiency as a function of helical pitch is derived, which suggests the somewhat unintuitive but interesting conclusion that the theoretically optimal dose efficiency generally varies with helical pitch.
Volume estimation of multi-density nodules with thoracic CT
The purpose of this work was to quantify the effect of surrounding density on the volumetric assessment of lung nodules in a phantom CT study. Eight synthetic multidensity nodules were manufactured by enclosing spherical cores in larger spheres of double the diameter and with a different uniform density. Different combinations of outer/inner diameters (20/10mm, 10/5mm) and densities (100HU/-630HU, 10HU/-630HU, -630HU/100HU, -630HU/-10HU) were created. The nodules were placed within an anthropomorphic phantom and scanned with a 16-detector row CT scanner. Ten repeat scans were acquired using exposures of 20, 100, and 200mAs, slice collimations of 16x0.75mm and 16x1.5mm, and pitch of 1.2, and were reconstructed with varying slice thicknesses (three for each collimation) using two reconstruction filters (medium and standard). The volumes of the inner nodule cores were estimated from the reconstructed CT data using a matched-filter approach with templates modeling the characteristics of the multi-density objects. Volume estimation of the inner nodule was assessed using percent bias (PB) and the standard deviation of percent error (SPE). The true volumes of the inner nodules were measured using micro CT imaging. Results show PB values ranging from -12.4 to 2.3% and SPE values ranging from 1.8 to 12.8%. This study indicates that the volume of multi-density nodules can be measured with relatively small percent bias (on the order of ±12% or less) when accounting for the properties of surrounding densities. These findings can provide valuable information for understanding bias and variability in clinical measurements of nodules that also include local biological changes such as inflammation and necrosis.
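The two precision metrics are straightforward to compute from repeat measurements; the sketch below assumes the usual definitions of percent bias and standard deviation of percent error, with hypothetical repeat volume estimates.

```python
import numpy as np

def percent_bias(estimates, true_volume):
    """Percent bias (PB) of repeated volume estimates relative to the true volume."""
    return 100.0 * (np.mean(estimates) - true_volume) / true_volume

def std_percent_error(estimates, true_volume):
    """Standard deviation of percent error (SPE) across repeat scans."""
    return np.std(100.0 * (estimates - true_volume) / true_volume, ddof=1)

# Hypothetical repeat volume estimates [mm^3] for a 10 mm inner core (true ~523.6 mm^3):
true_v = 4.0 / 3.0 * np.pi * 5.0 ** 3
est = np.array([505.0, 512.0, 498.0, 520.0, 508.0, 515.0, 501.0, 510.0, 507.0, 513.0])
print(percent_bias(est, true_v), std_percent_error(est, true_v))
```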
Poster Session: CT Reconstruction
Accelerating ordered-subsets X-ray CT image reconstruction using the linearized augmented Lagrangian framework
The augmented Lagrangian (AL) optimization method has drawn more attention recently in imaging applications due to its decomposable structure for composite cost functions and empirical fast convergence rate under weak conditions. However, for problems, e.g., X-ray computed tomography (CT) image reconstruction, where the inner least-squares problem is challenging, the AL method can be slow due to its iterative inner updates. In this paper, using a linearized AL framework, we propose an ordered-subsets (OS) accelerable linearized AL method, OS-LALM, for solving penalized weighted least-squares (PWLS) X-ray CT image reconstruction problems. To further accelerate the proposed algorithm, we also propose a deterministic downward continuation approach for fast convergence without additional parameter tuning. Experimental results show that the proposed algorithm significantly accelerates the “convergence” of X-ray CT image reconstruction with negligible overhead and exhibits excellent gradient error tolerance when using many subsets for OS acceleration.
Sinogram rebinning and frequency boosting for high resolution iterative CT reconstruction with focal spot deflection
Jiao Wang, Yong Long, Lin Fu, et al.
High resolution CT is important for qualitative feature identification and quantitative measurements for many clinical applications. To optimize the spatial resolution, filtered backprojection (FBP) based methods use various sinogram domain frequency boosting filters to provide flexible control of frequency responses in reconstructed images. In comparison, model-based iterative reconstruction (MBIR) methods usually rely on a single regularization strength parameter to control the image resolution, and there is limited flexibility in controlling the spectral response. Alternatively, MBIR can also improve the spatial resolution by sinogram preprocessing with frequency boosting filters. Focal spot deflection technology has been introduced to high-end CT scanners to increase the effective detector sampling rate. With a higher Nyquist sampling rate along detector channels, we can design frequency boosting filters with a much wider frequency range and recover higher resolution details in the reconstructed images. In this paper, we explore the potential of the sinogram rebinning and frequency boosting method for high resolution MBIR from data acquired with focal spot deflection. The proposed method is tested with phantom and clinical data. Compared with MBIR that models the native focal spot deflection geometry, our results show that MBIR from the rebinned geometry with frequency boosting filters can achieve higher resolution and better noise-resolution tradeoff. Moreover, we also demonstrate some improvement in contrast and sharpness in clinical images. The proposed method provides a way to flexibly change the noise and resolution tradeoff in MBIR images similar to adjusting the filter kernels in the FBP method.
A multi-resolution approach to retrospectively-gated cardiac micro-CT reconstruction
D. P. Clark, G. A. Johnson, C. T. Badea
In preclinical research, micro-CT is commonly used to provide anatomical information; however, there is significant interest in using this technology to obtain functional information in cardiac studies. The fastest acquisition in 4D cardiac micro-CT imaging is achieved via retrospective gating, resulting in irregular angular projections after binning the projections into phases of the cardiac cycle. Under these conditions, analytical reconstruction algorithms, such as filtered back projection, suffer from streaking artifacts. Here, we propose a novel, multi-resolution, iterative reconstruction algorithm inspired by robust principal component analysis which prevents the introduction of streaking artifacts, while attempting to recover the highest temporal resolution supported by the projection data. The algorithm achieves these results through a unique combination of the split Bregman method and joint bilateral filtration. We illustrate the algorithm’s performance using a contrast-enhanced, 2D slice through the MOBY mouse phantom and realistic projection acquisition and reconstruction parameters. Our results indicate that the algorithm is robust to undersampling levels of only 34 projections per cardiac phase and, therefore, has high potential in reducing both acquisition times and radiation dose. Another potential advantage of the multi-resolution scheme is the natural division of the reconstruction problem into a large number of independent sub-problems which can be solved in parallel. In future work, we will investigate the performance of this algorithm with retrospectively-gated, cardiac micro-CT data.
Generalized least-squares CT reconstruction with detector blur and correlated noise models
J. Webster Stayman, Wojciech Zbijewski, Steven Tilley II, et al.
The success and improved dose utilization of statistical reconstruction methods arises, in part, from their ability to incorporate sophisticated models of the physics of the measurement process and noise. Despite the great promise of statistical methods, typical measurement models ignore blurring effects, and nearly all current approaches make the presumption of independent measurements – disregarding noise correlations and a potential avenue for improved image quality. In some imaging systems, such as flat-panel-based cone-beam CT, such correlations and blurs can be a dominant factor in limiting the maximum achievable spatial resolution and noise performance. In this work, we propose a novel regularized generalized least-squares reconstruction method that includes models for both system blur and correlated noise in the projection data. We demonstrate, in simulation studies, that this approach can break through the traditional spatial resolution limits of methods that do not model these physical effects. Moreover, in comparison to other approaches that attempt deblurring without a correlation model, superior noise-resolution trade-offs can be found with the proposed approach.
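As a rough illustration of the data model described above, the sketch below writes a regularized generalized least-squares cost with a system blur operator B and a non-diagonal noise covariance K in the data term. All symbols (A, B, K_inv, R, beta) are assumptions introduced here for illustration, not the authors' implementation.

```python
# Minimal sketch of a regularized generalized least-squares objective with a
# system blur B and correlated noise covariance K in the measurement model.
import numpy as np

def gls_cost(x, A, B, K_inv, y, beta, R):
    """0.5 * (y - B A x)^T K^{-1} (y - B A x) + beta * R(x)."""
    r = y - B @ (A @ x)
    return 0.5 * r @ (K_inv @ r) + beta * R(x)
```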
LBP-based penalized weighted least-squares approach to low-dose cone-beam computed tomography reconstruction
Cone-beam computed tomography (CBCT) has attracted growing interest of researchers in image reconstruction. The mAs level of the X-ray tube current, in practical application of CBCT, is reduced in order to reduce the CBCT dose. The lowering of the X-ray tube current, however, results in the degradation of image quality. Thus, low-dose CBCT image reconstruction is in effect a noise problem. To acquire clinically acceptable image quality while keeping the X-ray tube current as low as achievable, some penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. The proposed LBP-PWLS-based algorithm adaptively encourages strong diffusion on the local spot/flat region around a voxel and less diffusion on edge/corner ones by adjusting the penalty in the cost function, after the LBP is utilized to classify the region around each voxel as spot, flat or edge. The LBP-PWLS-based reconstruction algorithm was evaluated using the sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff measurement and other quantitative measurements demonstrated its feasibility and effectiveness in edge preservation and noise suppression in comparison with a previous PWLS reconstruction algorithm.
Nonlocal means-based regularizations for statistical CT reconstruction
Statistical iterative reconstruction (SIR) methods have shown remarkable gains over the conventional filtered backprojection (FBP) method in improving image quality for low-dose computed tomography (CT). They reconstruct the CT images by maximizing/minimizing a cost function in a statistical sense, where the cost function usually consists of two terms: the data-fidelity term modeling the statistics of measured data, and the regularization term reflecting prior information. The regularization term in SIR plays a critical role for successful image reconstruction, and an established family of regularizations is based on the Markov random field (MRF) model. Inspired by the success of the nonlocal means (NLM) algorithm in image processing applications, we propose, in this work, a family of generic and edge-preserving NLM-based regularizations for SIR. We evaluated one of them where the potential function takes the quadratic form. Experimental results with both digital and physical phantoms clearly demonstrated that SIR with the proposed regularization can achieve more significant gains than SIR with the widely-used Gaussian MRF regularization and the conventional FBP method, in terms of image noise reduction and resolution preservation.
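To make the idea of an NLM-based penalty concrete, the following sketch computes patch-similarity weights and a quadratic NLM penalty around one pixel. The Gaussian weight, window sizes, and interior-pixel assumption are illustrative choices made here, not the exact form evaluated in the paper.

```python
# Illustrative sketch of a nonlocal-means-style quadratic penalty for one pixel.
# Assumes a 2D float image and interior pixels (no boundary handling).
import numpy as np

def nlm_weight(img, i, j, k, l, patch=3, h=0.05):
    """Patch-similarity weight between pixels (i, j) and (k, l)."""
    p = patch // 2
    pi = img[i - p:i + p + 1, j - p:j + p + 1]
    pk = img[k - p:k + p + 1, l - p:l + p + 1]
    return np.exp(-np.sum((pi - pk) ** 2) / (h ** 2))

def nlm_quadratic_penalty(img, i, j, search=5, patch=3, h=0.05):
    """Sum of w(i,k) * (x_i - x_k)^2 over a small search window around (i, j)."""
    s = search // 2
    total = 0.0
    for k in range(i - s, i + s + 1):
        for l in range(j - s, j + s + 1):
            if (k, l) != (i, j):
                total += nlm_weight(img, i, j, k, l, patch, h) * (img[i, j] - img[k, l]) ** 2
    return total
```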
Low-dose CT reconstruction with patch based sparsity and similarity constraints
With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming increasingly important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR depends strongly on the prior-based regularization because of the insufficiency of low-dose data. The frequently used regularization is developed from pixel-based priors, such as the smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise from structures effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional pixel-based methods. A patch is a small area of an image that expresses structural image information. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and similarity are considered in the regularization term. On one hand, patch-based sparsity is addressed by sparse representation and dictionary learning methods; on the other hand, patch-based similarity is addressed by a non-local means filtering method. We conducted a real data experiment to evaluate the proposed method. The experimental results validate that this method can produce better images, with less noise and more detail than other methods, in low-count and few-view cases.
Noise study on cone-beam CT FDK image reconstruction by improved area-simulating-volume technique
Previous studies have reported that the volume-weighting technique has advantages over the linear interpolation technique for cone-beam computed tomography (CBCT) image reconstruction. However, directly calculating the intersecting volume between the pencil beam X-ray and the object is a challenge due to the computational complexity. Inspired by previous work on the area-simulating-volume (ASV) technique for 3D positron emission tomography, we proposed an improved ASV (IASV) technique, which can rapidly calculate the geometric probability of the intersection between the pencil beam and the object. In order to show the improvements of using the IASV technique in a volume-weighting based Feldkamp–Davis–Kress (VW-FDK) algorithm compared to the conventional linear interpolation based FDK algorithm (LI-FDK), the variance images from both theoretical prediction and empirical determination are derived based on the assumption of uncorrelated and stationary noise for each detector bin. In the digital phantom study, the theoretically predicted and the empirically determined variance images concurred and demonstrated that the VW-FDK algorithm can result in uniformly distributed noise across the FOV. In the physical phantom study, the performance enhancements of the VW-FDK algorithm were quantitatively evaluated by the contrast-to-noise ratio (CNR) merit. The CNR values from the VW-FDK result were about 40% higher than those of the conventional LI-FDK result. Therefore it can be concluded that the VW-FDK algorithm can efficiently address the noise non-uniformity and suppress the noise level of the reconstructed images.
Mojette tomographic reconstruction for micro-CT: a bone and vessels quality evaluation
Micro-CT represents a modality where the quality of CT reconstruction is very high thanks to the acquisition properties. The goal of this paper is to challenge our proposed Mojette discrete reconstruction scheme with real micro-CT data. A first study was done to analyze bone image degradations by lowering the number of projections. A second study analyzes trabecular bone and the vessel tree in an animal study. Small vessels fill trabecular holes with almost the same grey levels as the bone. Therefore, the vessel detectability that the reconstruction algorithm can achieve for a given number of projections is a major issue.
Two-step iterative reconstruction of region-of-interest with truncated projection in computed tomography
Keisuke Yamakawa, Shinichi Kojima
Iteratively reconstructing data only inside the region of interest (ROI) is widely used to acquire CT images in less computation time while maintaining high spatial resolution. A method that subtracts projected data outside the ROI from full-coverage measured data has been proposed. A serious problem with this method is that the accuracy of the measured data confined inside the ROI decreases according to the truncation error outside the ROI. We propose a two-step iterative method that reconstructs the image over the full coverage, in addition to applying a conventional iterative method inside the ROI, to reduce the truncation error in the full-coverage images. Statistical information (e.g., quantum-noise distributions) acquired by detected X-ray photons is generally used in iterative methods as a photon weight to efficiently reduce image noise. Our proposed method applies one of two kinds of weights (photon or constant weights) chosen adaptively by taking into consideration the influence of truncation error. The effectiveness of the proposed method compared with that of the conventional method was evaluated in terms of simulated CT values by using elliptical phantoms and an abdomen phantom. The standard deviation of error and the average absolute error of the proposed method on the profile curve were reduced from 3.4 to 0.4 HU and from 2.8 to 0.8 HU, respectively, compared with the conventional method. As a result, applying a suitable weight on the basis of a target object made it possible to effectively reduce the errors in CT images.
Multigrid iterative method with adaptive spatial support for computed tomography reconstruction from few-view data
Ping-Chang Lee
Computed tomography (CT) plays a key role in modern medical systems, whether it be for diagnosis or therapy. As an increased risk of cancer development is associated with exposure to radiation, reducing radiation exposure in CT becomes an essential issue. Based on compressive sensing (CS) theory, iterative methods with total variation (TV) minimization have proven to be a powerful framework for few-view tomographic image reconstruction. The multigrid method is an iterative method for solving both linear and nonlinear systems, especially when the system contains a huge number of components. In medical imaging, the image background is often defined by zero intensity, which provides the spatial support of the image and is helpful for iterative reconstruction. In the proposed method, the image support is not considered as a priori knowledge. Rather, it evolves during the reconstruction process. Based on the CS framework, we propose a multigrid method with an adaptive spatial support constraint. The simultaneous algebraic reconstruction technique (SART) with TV minimization is implemented for comparison purposes. The numerical results show that: 1. the multigrid method performs better when fewer than 60 projection views are used; 2. spatial support greatly improves the CS reconstruction; and 3. when few projection views are measured, our method performs better than the SART+TV method with a spatial support constraint.
Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT
Hisashi Takahashi, Taiga Goto, Koichi Hirokawa, et al.
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied and these techniques have enabled us to reduce irradiation doses while maintaining image qualities. In low dose scanning, electronic noise becomes obvious and it results in some non-positive signals in raw measurements. The non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to the positive signal by mainly controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to the positive signal according to a function which replaces the non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
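A minimal sketch of the second step described, replacing non-positive raw measurements with a local mean so that the log transform is defined, might look as follows; the window size and the use of a simple uniform local mean are assumptions for illustration.

```python
# Illustrative sketch: make raw sinogram samples strictly positive by replacing
# non-positive values with a local mean, so that log-transformation is possible.
import numpy as np
from scipy.ndimage import uniform_filter

def make_positive(raw, window=5, floor=1e-6):
    """Replace non-positive sinogram samples with their local mean (clipped to a floor)."""
    local_mean = uniform_filter(raw.astype(float), size=window)
    out = raw.astype(float).copy()
    mask = out <= 0
    out[mask] = np.maximum(local_mean[mask], floor)  # keep strictly positive for log
    return out
```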
Poster Session: Multi-energy CT
Use of depth information from in-depth photon counting detectors for x-ray spectral imaging: a preliminary simulation study
Yuan Yao, Hans Bornefalk, Scott S. Hsieh, et al.
Purpose: Photon counting x-ray detectors (PCXD) may improve dose-efficiency but are hampered by limited count rate. They generally have imperfect energy response. Multi-layer ("in-depth") detectors have been proposed to enable higher count rates but the potential benefit of the depth information has not been explored. We conducted a simulation study to compare in-depth detectors against single layer detectors composed of common materials. Both photon counting and energy integrating modes were studied. Methods: Polyenergetic transmissions were simulated through 25cm of water and 1cm of calcium. For PCXD composed of Si, GaAs or CdTe a 120kVp spectrum was used. For energy integrating x-ray detectors (EIXD) made from GaAs, CdTe or CsI, spectral imaging was done using 80 and 140kVp and matched dose. Semi-ideal and phenomenological energy response models were used. To compare these detectors, we computed the Cramér-Rao lower bound (CRLB) of the variance of basis material estimates. Results: For PCXDs with perfect energy response, depth data provides no additional information. For PCXDs with imperfect energy response and for EIXDs the improvement can be significant. E.g., for a CdTe PCXD with realistic energy response, depth information can reduce the variance by ~50%. The improvement depends on the x-ray spectrum. For a semi-ideal Si detector and a narrow x-ray spectrum the depth information has minimal advantage. For EIXD, the in-depth detector has consistent variance reduction (15% and 17%~19% for water and calcium, respectively). Conclusions: Depth information is beneficial to spectral imaging for both PCXD and EIXD. The improvement depends critically on the detector energy response.
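For readers unfamiliar with the figure of merit used above, the sketch below computes the Cramér-Rao lower bound for two basis-material thicknesses from Poisson-distributed energy-bin counts. The expected-count model and its derivatives are treated as given inputs; the specific detector response models of the study are not reproduced.

```python
# Sketch of a CRLB computation for two basis materials under Poisson bin statistics:
# Fisher information F_ij = sum_b (1/lambda_b) * d(lambda_b)/d(t_i) * d(lambda_b)/d(t_j).
import numpy as np

def crlb(lambdas, dldt):
    """
    lambdas : expected counts per energy bin, shape (B,)
    dldt    : derivatives d lambda_b / d t_m, shape (B, 2) for two basis thicknesses
    Returns the 2x2 CRLB covariance matrix (inverse of the Fisher information).
    """
    F = np.zeros((2, 2))
    for b in range(len(lambdas)):
        F += np.outer(dldt[b], dldt[b]) / lambdas[b]
    return np.linalg.inv(F)
```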
Fast model-based restoration of noisy and undersampled spectral CT data
In this work we propose a fast, model-based restoration scheme for noisy or undersampled spectral CT data and demonstrate its potential utility with two simulation studies. First, we show how one can denoise photon counting CT images, post-reconstruction, by using a spectrally averaged image formed from all detected photons as a high SNR prior. Next, we consider a slow slew-rate kV switching scheme, where sparse sinograms are obtained at peak voltages of 80 and 140 kVp. We show how the missing views can be restored by using a spectrally averaged, composite sinogram containing all of the views as a fully sampled prior. We have chosen these examples to demonstrate the versatility of the proposed approach and because they have been discussed in the literature before [3, 6], but we hope to convey that it may be applicable to a fairly general class of spectral CT systems. Comparisons to several sparsity-exploiting, iterative reconstructions are provided for reference.
Experimental study of two material decomposition methods using multi-bin photon counting detectors
Photon-counting detectors with multi-bin pulse height analysis (PHA) are capable of extracting energy dependent information which can be exploited for material decomposition. Iterative decomposition algorithms have been previously implemented which require prior knowledge of the source spectrum, detector spectral response, and energy threshold settings. We experimentally investigated two material decomposition methods that do not require explicit knowledge of the source spectrum and spectral response. In the first method, the effective spectrum for each energy bin is estimated from calibration transmission measurements, followed by an iterative maximum likelihood decomposition algorithm. The second investigated method, first proposed and tested through simulations by Alvarez, uses a linearized maximum likelihood estimator which requires calibration transmission measurements. The Alvarez method has the advantage of being non-iterative. This study experimentally quantified and compared the material decomposition bias, as a percentage of material thickness, and standard deviation resulting from these two material decomposition estimators. Multi-energy x-ray transmission measurements were acquired through varying thicknesses of Teflon, Delrin, and neoprene at two different flux settings and decomposed into PMMA and aluminum thicknesses using the investigated methods. In addition, a series of 200 equally spaced projections of a rod phantom were acquired over 360°. The multi-energy sinograms were decomposed using both empirical methods and then reconstructed using filtered backprojection producing two images representing each basis material. The Alvarez method decomposed Delrin into PMMA with a bias of 0.5-19% and decomposed neoprene into aluminum with a bias of less than 3%. The spectral estimation method decomposed Delrin into PMMA with a bias of 0.6-16% and decomposed neoprene into aluminum with a bias of 0.1-58%. In general, the spectral estimation method resulted in larger bias than the Alvarez method. Both methods demonstrated similar standard deviations of less than 1 mm. Both decomposition methods resulted in similar bias and standard deviation when comparing performance at the two flux levels. The results suggest preliminary feasibility of two empirical methods that use calibration measurements rather than prior knowledge of system parameters to estimate thicknesses of the basis materials.
Prostate tissue decomposition via DECT using the model based iterative image reconstruction algorithm DIRA
Better knowledge of elemental composition of patient tissues may improve the accuracy of absorbed dose delivery in brachytherapy. Deficiencies of water-based protocols have been recognized and work is ongoing to implement patient-specific radiation treatment protocols. A model based iterative image reconstruction algorithm DIRA has been developed by the authors to automatically decompose patient tissues to two or three base components via dual-energy computed tomography. Performance of an updated version of DIRA was evaluated for the determination of prostate calcification. A computer simulation using an anthropomorphic phantom showed that the mass fraction of calcium in the prostate tissue was determined with accuracy better than 9%. The calculated mass fraction was little affected by the choice of the material triplet for the surrounding soft tissue. Relative differences between true and approximated values of linear attenuation coefficient and mass energy absorption coefficient for the prostate tissue were less than 6% for photon energies from 1 keV to 2 MeV. The results indicate that DIRA has the potential to improve the accuracy of dose delivery in brachytherapy despite the fact that base material triplets only approximate surrounding soft tissues.
Investigation of the polynomial approach for material decomposition in spectral X-ray tomography using an energy-resolved detector
A. Potop, V. Rebuffel, J. Rinkel, et al.
Recent advances in the domain of energy-resolved semiconductor detectors stimulate research in X-ray computed tomography (CT). However, the imperfections of these detectors induce errors that should be considered for further applications. Charge sharing and pile-up effects due to high photon fluxes can degrade image quality or yield wrong material identification. Basis component decomposition provides separate images of principal components, based on the energy related information acquired in each energy bin. The object is typically decomposed either into photoelectric and Compton components or into basis material functions. This work presents a simulation study taking into account the properties of an energy-resolved CdTe detector with flexible energy thresholds in the context of material decomposition CT. We consider the effects of a first order pile-up model with triangular pulses of a non-paralyzable detector and a realistic response matrix. We address the problem of quantifying mineral content in bone based on a polynomial approach for material decomposition in the case of two and three energy bins. The basis component line integrals are parameterized directly in the projection domain and a conventional filtered back-projection reconstruction is performed to obtain the material component images. We use figures of merit such as noise and bias to select the optimal thresholds and quantify the mineral content in bone. The results obtained with an energy resolved detector for two and three energy bins are compared with the ones obtained for the dual-kVp technique using an integrating-mode detector with filters and voltages optimized for bone densitometry.
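As an illustration of a polynomial decomposition calibration of the kind mentioned above, the sketch below fits a basis-material line integral as a low-order polynomial of the per-bin log transmissions using calibration measurements. The quadratic feature set and the two-bin case are illustrative assumptions, not the authors' exact parameterization.

```python
# Illustrative sketch: polynomial material decomposition in the projection domain.
# Calibration pairs (log transmissions per bin, known basis thickness) are assumed.
import numpy as np

def poly_features(log_bins):
    """Quadratic feature vector from per-bin log transmissions (two-bin case shown)."""
    l1, l2 = log_bins
    return np.array([1.0, l1, l2, l1 * l1, l2 * l2, l1 * l2])

def fit_decomposition(calib_log_bins, calib_thickness):
    """Least-squares fit of one basis-material thickness against polynomial features."""
    X = np.array([poly_features(lb) for lb in calib_log_bins])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(calib_thickness), rcond=None)
    return coeffs

def decompose(log_bins, coeffs):
    """Apply the calibrated polynomial to new per-bin log transmissions."""
    return poly_features(log_bins) @ coeffs
```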
Enabling photon counting detectors with dynamic attenuators
Photon-counting x-ray detectors (PCXDs) are being investigated as a replacement for conventional x-ray detectors because they promise several advantages, including better dose efficiency, higher resolution and spectral imaging. However, many of these advantages disappear when the x-ray flux incident on the detector is too high. We recently proposed a dynamic, piecewise-linear attenuator (or beam shaping filter) that can control the flux incident on the detector. This can restrict the operating range of the PCXD to keep the incident count rate below a given limit. We simulated a system with the piecewise-linear attenuator and a PCXD using raw data generated from forward projected DICOM files. We investigated the classic paralyzable and nonparalyzable PCXD as well as a weighted average of the two, with the weights chosen to mimic an existing PCXD (Taguchi et al, Med Phys 2011). The dynamic attenuator has small synergistic benefits with the nonparalyzable detector and large synergistic benefits with the paralyzable detector. Real PCXDs operate somewhere between these models, and the weighted average model still shows large benefits from the dynamic attenuator. We conclude that dynamic attenuators can reduce the count rate performance necessary for adopting PCXDs.
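The classic count-rate models referenced above have simple closed forms: for a true rate n and dead time tau, a nonparalyzable detector records n/(1 + n·tau) while a paralyzable detector records n·exp(-n·tau). The sketch below also forms a weighted average of the two, loosely mirroring the weighted-average model mentioned; the default weight is an arbitrary illustrative value.

```python
# Sketch of the classic dead-time count-rate models and a weighted average of them.
import numpy as np

def recorded_rate(true_rate, dead_time, weight_paralyzable=0.5):
    """Recorded count rate for a given true rate and dead time."""
    nonparalyzable = true_rate / (1.0 + true_rate * dead_time)
    paralyzable = true_rate * np.exp(-true_rate * dead_time)
    return weight_paralyzable * paralyzable + (1.0 - weight_paralyzable) * nonparalyzable
```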
Noise balance in pre-reconstruction decomposition in spectral CT
Spectral CT requires two or more independent measurements for each ray path in order to extract complete energy-dependent information of the object attenuation. The number of required measurements is equivalent to the number of independent basis functions needed to describe the attenuation of the imaged objects. For example, two independent measurements are sufficient if only photoelectric absorption and Compton scattering are dominating. If additional K-edge(s) are present in the energy range of interest, more than two measurements are necessary. In this study, we present a pre-reconstruction decomposition method that utilizes spectral data redundancy to improve image quality. We assume projection data are acquired with an M-energy-bin photon counting detector that generates M independent measurements, and the attenuation of the objects can be described with N (N < M) basis functions. The method addresses the unbalanced noise levels of data from different energy bins of the photon counting detector. During a CT scan, with the non-uniform attenuation of a typical patient, spectral shape and beam intensity can change drastically from detector to detector, from view to view. As a consequence, a detector unit is subject to significantly varying incident x-ray spectra. Hardware adjustment approaches are limited by current detector and mechanical technology, and hardly possible in a typical clinical CT scan with, e.g., 1800 views per 0.5 s. Our method applies adaptive noise balance weighting to data acquired from different energy bins, after data acquisition and prior to data decomposition. The results show substantially improved quality in spectral images reconstructed from photon counting detector data.
Energy-resolved CT imaging with a photon-counting silicon-strip detector
Mats Persson, Ben Huber, Staffan Karlsson, et al.
Photon-counting detectors are promising candidates for use in the next generation of x-ray CT scanners. Among the foreseen benefits are higher spatial resolution, better trade-off between noise and dose, and energy discriminating capabilities. Silicon is an attractive detector material because of its low cost, mature manufacturing process and high hole mobility. However, it is sometimes claimed to be unsuitable for use in computed tomography because of its low absorption efficiency and high fraction of Compton scatter. The purpose of this work is to demonstrate that high-quality energy-resolved CT images can nonetheless be acquired with clinically realistic exposure parameters using a photon-counting silicon-strip detector with eight energy thresholds developed in our group. We use a single detector module, consisting of a linear array of 50 detector elements of 0.5 × 0.4 mm, to image a phantom in a table-top lab setup. The phantom consists of a plastic cylinder with circular inserts containing water, fat and aqueous solutions of calcium, iodine and gadolinium, in different concentrations. We use basis material decomposition to obtain water, calcium, iodine and gadolinium basis images and demonstrate that these basis images can be used to separate the different materials in the inserts. We also present results showing that the detector has potential for quantitative measurements of substance concentrations.
Poster Session: Detectors
Characterization of a hybrid energy-resolving photon-counting detector
A. Zang, G. Pelzer, G. Anton, et al.
Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Even further possibilities for imaging applications arise, if the energy of each photon counted is measured, as for example K-edge-imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon-counting pixel detector allows energy resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: An integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement. Therefore it is not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.
X-ray light valve (XLV): a novel detectors' technology for digital mammography
Sorin Marcovici, Vlad Sukhovatkin, Peter Oakham
A novel method, based on X-ray Light Valve (XLV) technology, is proposed for making inexpensive flat panel detectors with good image quality for digital mammography. The digital mammography markets, particularly in the developing countries, demand quality machines at substantially lower prices than the ones available today. Continuous pressure is applied on x-ray detector manufacturers to reduce the flat panel detectors’ prices. XLV presents a unique opportunity to achieve the needed price-performance characteristics for direct-conversion x-ray detectors. The XLV based detectors combine the proven, superior, spatial resolution of a-Se with the simplicity and low cost of liquid crystals and optical scanning. The x-ray quanta absorbed by a 200 μm a-Se layer produce electron-hole pairs that move under an electric field to the top and bottom of the a-Se layer. This 2D charge distribution creates at the interface with the liquid crystals a continuous (analog) charge image corresponding to the impinging radiation's information. Under the influence of local electrical charges next to them, the liquid crystals twist proportionally to the charges and vary their light reflectivity. A scanning light source illuminates the liquid crystals while an associated, pixelated photo-detector, having a 42 μm pixel size, captures the light reflected by the liquid crystals and converts it into 16-bit words that are transmitted to the machine for image processing and display. The paper will describe a novel XLV, 25 cm x 30 cm, flat panel detector structure and its underlying physics as well as its preliminary performance measured on several engineering prototypes. In particular, the paper will present the results of measuring XLV detectors' DQE, MTF, dynamic range, low contrast resolution and dynamic behavior. Finally, the paper will introduce the new, low cost, XLV detector based, digital mammography machine under development at XLV Diagnostics Inc.
Characterization of a silicon strip detector for photon-counting spectral CT using monoenergetic photons from 40 keV to 120 keV
Xuejin Liu, Hans Bornefalk, Han Chen, et al.
Background: We are developing a segmented silicon strip detector that operates in photon-counting mode and allows pulse-height discrimination with 8 adjustable energy bins. In this work, we determine the energy resolution of the detector using monoenergetic x-ray radiation from 40 keV to 120 keV. We further investigate the effects of pulse pileup and charge sharing between detector channels that may lead to a decreased energy resolution. Methods: For each incident monochromatic x-ray energy, we obtain count spectra at different photon fluxes. These spectra correspond to the pulse-height response of the detector and allow the determination of energy resolution and charge-sharing probability. The energy resolution, however, is influenced by signal pileup and charge sharing. Both effects are quantified using Monte Carlo simulations of the detector that aim to reproduce the conditions during the measurements. Results: The absolute energy resolution is found to increase from 1.7 to 2.1 keV for energies increasing from 40 keV to 120 keV at the lowest measured photon flux. The effect of charge sharing is found to increase the absolute energy resolution by at most a factor of 1.025. This increase is considered negligibly small. The pileup of pulses leads to a deterioration rate of the energy resolution of 4 · 10⁻³ keV Mcps⁻¹ mm², corresponding to an increase of 0.04 keV per 10 Mcps increase of the detected count rate.
Experimental and theoretical performance analysis for a CMOS-based high resolution image detector
Increasing complexity of endovascular interventional procedures requires superior x-ray imaging quality. Present state-of-the-art x-ray imaging detectors may not be adequate due to their inherent noise and resolution limitations. With recent developments, CMOS based detectors are presenting an option to fulfill the need for better image quality. For this work, a new CMOS detector has been analyzed experimentally and theoretically in terms of sensitivity, MTF and DQE. The detector (Dexela Model 1207, Perkin-Elmer Co., London, UK) features 14-bit image acquisition, a CsI phosphor, 75 μm pixels and an active area of 12 cm x 7 cm with over 30 fps frame rate. This detector has two modes of operations with two different full-well capacities: high and low sensitivity. The sensitivity and instrumentation noise equivalent exposure (INEE) were calculated for both modes. The detector modulation-transfer function (MTF), noise-power spectra (NPS) and detective quantum efficiency (DQE) were measured using an RQA5 spectrum. For the theoretical performance evaluation, a linear cascade model with an added aliasing stage was used. The detector showed excellent linearity in both modes. The sensitivity and the INEE of the detector were found to be 31.55 DN/μR and 0.55 μR in high sensitivity mode, while they were 9.87 DN/μR and 2.77 μR in low sensitivity mode. The theoretical and experimental values for the MTF and DQE showed close agreement with good DQE even at fluoroscopic exposure levels. In summary, the Dexela detector’s imaging performance in terms of sensitivity, linear system metrics, and INEE demonstrates that it can overcome the noise and resolution limitations of present state-of-the-art x-ray detectors.
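For context, the standard measurement relation behind MTF/NPS/DQE analyses of this kind is DQE(f) = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence and NNPS is the noise power spectrum normalized by the squared mean signal. A small sketch with illustrative array inputs:

```python
# Sketch of the standard frequency-dependent DQE relation used in detector metrology.
import numpy as np

def dqe(mtf, nps, mean_signal, fluence):
    """DQE(f) from MTF(f), NPS(f), the mean detector signal, and the x-ray fluence q."""
    nnps = nps / mean_signal ** 2          # normalized noise power spectrum
    return mtf ** 2 / (fluence * nnps)
```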
Measurement of imaging properties of scintillating fiber optic plate
Scintillating Fiber Optic Plates (SFOP) or Fiber Optic Scintillator (FOS) made with scintillating fiber-glass were investigated for x-ray imaging. Two different samples (T x W x L = 2cm x 5cm x 5cm) were used; Sample A: 10μm fibers, Sample B: 50μm fibers, both with statistically randomized light absorbing fibers placed in the matrix. A customized holder was used to place the samples in close contact with photodiodes in an amorphous silicon flat panel detector (AS1000, Varian), typically used for portal imaging. The detector has a 392μm pixel pitch and in the standard configuration uses a gadolinium oxy-sulphide (GOS) screen behind a copper plate. X-ray measurements were performed at 120kV (RQA 9 spectrum), 1MeV (5mm Al filtration) and 6MeV (Flattening Filter Free) for Sample A and the latter 2 spectra for Sample B. A machined edge was used for MTF measurements. The measurements showed that the MTF degraded with increasing X-ray energy because of the increase in Compton scattering. However, at the Nyquist frequency of 1.3lp/mm, the MTF is still high (FOS value vs. Cu+GOS): (a) 37% and 21% at 120kVp for the 10μm FOS and the Cu+GOS arrays, (b) 31%, 20% and 20% at 1MeV and (c) 17%, 11% and 14% at 6MeV for the 10μm FOS, 50μm FOS and the Cu+GOS arrays. The DQE(0) value comparisons were (a) at 120kV ~24% and ~13% for the 10μm FOS and the Cu+GOS arrays (b) at 1MV 10%, 10% and 7% and (c) at 6MV 12%, ~19% and 1.6% for the 10μm FOS, 50μm FOS and Cu+GOS arrays.
Optimizing two radioluminescence based quality assurance devices for diagnostic radiology utilizing a simple model
Jan Lindström, Markus Hulthén, Gudrun Alm Carlsson, et al.
The extrinsic (absolute) efficiency of a phosphor is expressed as the ratio of light energy emitted per unit area at the phosphor surface to incident x-ray energy fluence. A model described in earlier work has shown that by knowing the intrinsic efficiency, the particle size, the thickness and the light extinction factor ξ, it is possible to deduce the extrinsic efficiency for an extended range of particle sizes and layer thicknesses for a given design. The model has been tested on Gd2O2S:Tb and ZnS:Cu fluorescent layers utilized in two quality assurance devices, respectively, aimed for the assessment of light field and radiation field congruence in diagnostic radiology. The first unit is an established device based on both fluorescence and phosphorescence containing an x-ray sensitive phosphor (ZnS:Cu) screen comprising a long afterglow. Uncertainty in field edge position is estimated to 0.8 mm (k=2). The second unit is under development and based on a linear CCD sensor which is sensitized to x-rays by applying a Gd2O2S:Tb scintillator. The field profiles and the corresponding edge location are then obtained and compared. Uncertainty in field edge location is estimated to 0.1 mm (k=2). The properties of the radioluminescent layers are essential for the functionality of the devices and have been optimized utilizing the previously developed and verified model. A theoretical description of the maximization of phosphorescence is also briefly discussed as well as an interesting finding encountered during the development processes: focal spot wandering. The oversimplistic physical assumptions made in the radioluminescence model have not been found to lead the optimizing process astray. The obtained functionality is believed to be adequate within their respective limitations for both devices.
Investigation of spatial resolution and temporal performance of SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout) with integrated electrostatic focusing
We have previously proposed SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout), a novel detector concept with potentially superior spatial resolution and low-dose performance compared with existing flat-panel imagers. The detector comprises a scintillator that is optically coupled to an amorphous selenium photoconductor operated with avalanche gain, known as high-gain avalanche rushing photoconductor (HARP). High resolution electron beam readout is achieved using a field emitter array (FEA). This combination of avalanche gain, allowing for very low-dose imaging, and electron emitter readout, providing high spatial resolution, offers potentially superior image quality compared with existing flat-panel imagers, with specific applications to fluoroscopy and breast imaging. Through the present collaboration, a prototype HARP sensor with integrated electrostatic focusing and nano-Spindt FEA readout technology has been fabricated. The integrated electron-optic focusing approach is more suitable for fabricating large-area detectors. We investigate the dependence of spatial resolution on sensor structure and operating conditions, and compare the performance of electrostatic focusing with previous technologies. Our results show a clear dependence of spatial resolution on electrostatic focusing potential, with performance approaching that of the previous design with external mesh-electrode. Further, temporal performance (lag) of the detector is evaluated and the results show that the integrated electrostatic focusing design exhibits comparable or better performance compared with the mesh-electrode design. This study represents the first technical evaluation and characterization of the SAPHIRE concept with integrated electrostatic focusing.
Imaging performance of a thin Lu2O3:Eu nanophosphor scintillating screen coupled to a high resolution CMOS sensor under X-ray radiographic conditions: comparison with Gd2O2S:Eu conventional phosphor screen
I. Seferis, C. Michail, I. Valais, et al.
The purpose of the present study was to experimentally evaluate the imaging characteristics of the Lu2O3:Eu nanophosphor thin screen coupled to a high resolution CMOS sensor under radiographic conditions. Parameters such as the Modulation Transfer Function (MTF), the Normalized Noise Power Spectrum (NNPS) and the Detective Quantum Efficiency (DQE) were investigated at 70 kVp under three exposure levels (20 mAs, 63 mAs and 90 mAs). Since Lu2O3:Eu emits light in the red wavelength range, the imaging characteristics of a 33.3 mg/cm2 Gd2O2S:Eu conventional phosphor screen were also evaluated for comparison purposes. The Lu2O3:Eu nanophosphor powder was produced by combustion synthesis, using urea as fuel. A scintillating screen of 30.2 mg/cm2 was prepared by sedimentation of the nanophosphor powder on a fused silica substrate. The CMOS/Lu2O3:Eu detector's imaging characteristics were evaluated using an experimental method proposed in the International Electrotechnical Commission (IEC) guidelines. It was found that the CMOS/Lu2O3:Eu nanophosphor system has higher MTF values compared to the CMOS/Gd2O2S:Eu sensor/screen combination in the whole frequency range examined. For low frequencies (0 to 2 cycles/mm) NNPS values of the CMOS/Gd2O2S:Eu system were found to be 90% higher than the NNPS values of the CMOS/Lu2O3:Eu nanophosphor system, whereas from medium to high frequencies (2 to 13 cycles/mm) they were 40% higher. In contrast with the CMOS/Gd2O2S:Eu system, the CMOS/Lu2O3:Eu nanophosphor system appears to retain high DQE values in the whole frequency range examined. Our results indicate that Lu2O3:Eu nanophosphor is a promising scintillator for further research in digital X-ray radiography.
Physical properties of a new flat panel detector with irradiated side sampling (ISS) technology
Martin Fiebich, Jan M. Burg, Christina Piel, et al.
Flat panel detectors have become the standard technology in projection radiography. Further progress in detector technology will result in an improvement of MTF and DQE. The new detector (FDR D-Evo plus C24i, Fuji, Japan) is based on cesium iodide crystals and has a change in the detector layout. The read-out electrodes are moved to the irradiated side of the detector. The physical properties of the detector were determined following IEC 62220-1-1 as closely as possible. The MTF showed a significant improvement compared to other cesium iodide based flat-panel detectors. Thereby the DQE is also improved relative to other cesium iodide based detectors, especially at the higher frequencies. The average distance between the point of interaction of the x-rays in the detector and the light collector is shorter, due to the exponential absorption law in the detector. This results in a reduction in light scatter and light absorption in the cesium iodide needle crystals. This might explain the improvement of the MTF and DQE results in our measurements. The new detector design results in an improvement in the physical properties of flat-panel detectors. This enables a potential for further dose reductions in clinical imaging.
MTF characterization in 2D and 3D for a high resolution, large field of view flat panel imager for cone beam CT
Jainil Shah, Steve D. Mann, Martin P. Tornai, et al.
The 2D and 3D modulation transfer functions (MTFs) of a custom made, large 40x30cm2 area, 600-micron CsI-TFT based flat panel imager having 127-micron pixelation, along with the micro-fiber scintillator structure, were characterized in detail using various techniques. The larger area detector yields a reconstructed FOV of 25cm diameter with an 80cm SID in CT mode. The MTFs were determined with 1x1 (intrinsic) binning. The 2D MTFs were determined using a 50.8 micron tungsten wire and a solid lead edge, and the 3D MTF was measured using a custom made phantom consisting of three nearly orthogonal 50.8 micron tungsten wires suspended in an acrylic cubic frame. The 2D projection data were reconstructed with an iterative OSC algorithm using 16 subsets and 5 iterations. As additional verification of the resolution, along with scatter, the Catphan® phantom was also imaged and reconstructed with identical parameters. The measured 2D MTF was ~4% using the wire technique and ~1% using the edge technique at the 3.94 lp/mm Nyquist cut-off frequency. The average 3D MTF measured along the wires was ~8% at the Nyquist frequency. At 50% MTF, the resolutions were 1.2 and 2.1 lp/mm in 2D and 3D, respectively. In the Catphan® phantom, the 1.7 lp/mm bars were easily observed. Lastly, the 3D MTF measured on the three wires has an observed 5.9% RMSD, indicating that the resolution of the imaging system is uniform and spatially independent. This high performance detector is integrated into a dedicated breast SPECT-CT imaging system.
Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors
N. Kalyvas, P. Liaparinos
Luminescent materials are employed as radiation to light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors. Amongst the most important of these is the interaction of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the region 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.
Poster Session: Dose
Radio-fluorogenic dosimetry with violet diode laser-induced fluorescence
Peter Sandwall, Henry Spitz, Howard Elson, et al.
A set of experiments is described with radio-fluorogenic detectors (RFD) and violet diode lasers. Radio-fluorogenic dosimetry is the measurement of absorbed dose by quantification of fluorescent products formed in response to ionizing radiation. Relative dosimetry was accomplished with 405nm violet diode laser-induced fluorescence (LIF) and digital imaging. Aqueous and gelatin-based solutions of radio-fluorogenic detectors were fabricated, irradiated with medical radiation devices, and the pixel intensity values of digital images were analyzed. The potential to use RFD to characterize spatial dose distributions with violet diode LIF is demonstrated.
Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems
Three-dimensional (3D) imaging with interventional fluoroscopy systems is today a common examination. The examination includes acquisition of two-dimensional projection images, used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in calculations of effective doses from these examinations. In the study the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of the projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.
Improved-resolution real-time skin-dose mapping for interventional fluoroscopic procedures
We have developed a dose-tracking system (DTS) that provides a real-time display of the skin-dose distribution on a 3D patient graphic during fluoroscopic procedures. Radiation dose to individual points on the skin is calculated using exposure and geometry parameters from the digital bus on a Toshiba C-arm unit. To accurately define the distribution of dose, it is necessary to use a high-resolution patient graphic consisting of a large number of elements. In the original DTS version, the patient graphics were obtained from a library of population body scans which consisted of larger-sized triangular elements resulting in poor congruence between the graphic points and the x-ray beam boundary. To improve the resolution without impacting real-time performance, the number of calculations must be reduced and so we created software-designed human models and modified the DTS to read the graphic as a list of vertices of the triangular elements such that common vertices of adjacent triangles are listed once. Dose is calculated for each vertex point once instead of the number of times that a given vertex appears in multiple triangles. By reformatting the graphic file, we were able to subdivide the triangular elements by a factor of 64 times with an increase in the file size of only 1.3 times. This allows a much greater number of smaller triangular elements and improves resolution of the patient graphic without compromising the real-time performance of the DTS and also gives a smoother graphic display for better visualization of the dose distribution.
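A minimal sketch of the vertex-based bookkeeping described above, computing dose once per unique vertex and then mapping it back to triangles, is given below; the data layout (an (N, 3) vertex array and an (M, 3) triangle index array) and the per-triangle averaging are assumptions for illustration.

```python
# Illustrative sketch: evaluate dose once per shared vertex, then map to triangles.
import numpy as np

def dose_per_vertex(vertices, dose_at_point):
    """vertices: (N, 3) array of unique skin points; dose_at_point: callable on one point."""
    return np.array([dose_at_point(v) for v in vertices])

def dose_per_triangle(triangles, vertex_dose):
    """triangles: (M, 3) integer array of vertex indices; average the corner doses."""
    return vertex_dose[triangles].mean(axis=1)
```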
Beam hardening and partial beam hardening of the bowtie filter: Effects on dosimetric applications in CT
Purpose: To estimate the consequences on dosimetric applications when a CT bowtie filter is modeled by means of full beam hardening versus partial beam hardening. Method: A model of source and filtration for a CT scanner as developed by Turner et al. [1] was implemented. Specific exposures were measured with the stationary CT X-ray tube in order to assess the equivalent thickness of Al of the bowtie filter as a function of the fan angle. Using these thicknesses, the primary beam attenuation factors were calculated from the energy dependent photon mass attenuation coefficients and used to include beam hardening in the spectrum. This was compared to a potentially less computationally intensive approach, which accounts only partially for beam hardening, by giving the photon spectrum a global (energy-independent), fan-angle-specific weighting factor. Percentage differences between the two methods were quantified by calculating the dose in air after passing through several water-equivalent thicknesses representative of patients having different BMI. Specifically, the maximum water equivalent thickness of the lateral and anterior-posterior dimension and of the corresponding (half) effective diameter were assessed. Results: The largest percentage differences were found for the thickest part of the bowtie filter and they increased with patient size. For a normal size patient they ranged from 5.5% at half effective diameter to 16.1% for the lateral dimension; for the most obese patient they ranged from 7.7% to 19.3%, respectively. For a complete simulation of one rotation of the x-ray tube, the proposed method was 12% faster than the complete simulation of the bowtie filter. Conclusion: The need for simulating the beam hardening of the bowtie filter in Monte Carlo platforms for CT dosimetry will depend on the required accuracy.
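To make the distinction concrete, the sketch below contrasts an energy-dependent (full beam hardening) bowtie model with an energy-independent (partial) model that applies one global, fan-angle-specific weight to the spectrum. The spectrum, mu(E), and thickness inputs are assumed arrays; this is not the authors' Monte Carlo implementation.

```python
# Illustrative sketch: full vs. partial (globally weighted) bowtie filter models.
import numpy as np

def hardened_spectrum(spectrum, mu, thickness):
    """Full model: energy-dependent attenuation exp(-mu(E) * t) applied per energy bin."""
    return spectrum * np.exp(-mu * thickness)

def weighted_spectrum(spectrum, mu, thickness):
    """Partial model: one global weight chosen to preserve the total transmitted fluence."""
    weight = np.sum(spectrum * np.exp(-mu * thickness)) / np.sum(spectrum)
    return spectrum * weight
```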
CT-guided brachytherapy of prostate cancer: reduction of effective dose from X-ray examination
Dmitriy B. Sanin, Vitaliy A. Biryukov M.D., Sergey S. Rusetskiy, et al.
Computed tomography (CT) is one of the most effective and informative diagnostic methods. Though the number of CT scans among all radiographic procedures in the USA and European countries is 11% and 4% respectively, CT makes the highest contribution to the collective effective dose from all radiographic procedures: it is 67% in the USA and 40% in European countries [1-5]. Therefore it is necessary to understand the significance of the dose a patient receives from CT imaging. Though CT dose from multiple scans and potential risk is of great concern in pediatric patients, this applies to adults as well. In this connection it is very important to develop optimal approaches to dose reduction and optimization of CT examinations. The International Commission on Radiological Protection (ICRP) in its publications recommends that radiologists be aware that CT image quality is often higher than necessary for diagnostic confidence [6], and that there is potential to reduce the dose a patient receives from a CT examination [7]. In recent years many procedures, such as minimally invasive surgery, biopsy, brachytherapy and different types of ablation, are carried out under the guidance of computed tomography [6, 7], and during a procedure multiple CT scans focusing on a specific anatomic region are performed. At the Clinics of MRRC different types of treatment for patients with prostate cancer are used, including conformal CT-guided brachytherapy with implantation of iodine microsources into the gland under the guidance of spiral CT [8]. The purpose of the study is to choose an optimal method to reduce the radiation dose from CT during CT-guided prostate brachytherapy while obtaining images of the desired quality.
Poster Session: Mammography
X-ray scatter characterization in dedicated breast CT with bowtie filters
Kimberly Kontson, Robert J. Jennings
The scatter contamination of projection images in cone-beam computed tomography (CT) degrades image quality. The use of bowtie filters in dedicated breast CT can decrease this scatter contribution. Three bowtie filter designs that compensate for one or more aspects of the beam-modifying effects due to the differences in path length in a projection have been studied. The first produces the same beam-hardening effect as breast tissue with a single-material design. The second produces the same beam quality and intensity at the detector with a two-material design and the third eliminates the beam-hardening effect by adjusting the bowtie filter thickness such that the same effective attenuation is produced at the detector. We have selected aluminum, boron carbide/beryllium oxide, and PMMA as the materials for the previously described designs, respectively. These designs have been investigated in terms of their ability to reduce the scatter contamination in projection images acquired in a dedicated breast CT geometry. The magnitude of the scatter was measured as the scatter-to-primary ratio using experimental and Monte Carlo techniques. The distribution of the scatter was also measured at different locations in the scatter image to produce scatter distribution maps for all three bowtie filter designs. The results of this study will be useful in designing scatter correction methods and understanding the benefits of bowtie filters in dedicated breast CT.
A simple scatter correction method for dual energy contrast-enhanced digital breast tomosynthesis
Dual-Energy Contrast-Enhanced Digital Breast Tomosynthesis (DE-CE-DBT) has the potential to deliver diagnostic information for vascularized breast pathology beyond that available from screening DBT. DE-CE-DBT involves a contrast (iodine) injection followed by a low-energy (LE) and a high-energy (HE) acquisition. These undergo weighted subtraction and then a reconstruction that ideally shows only the iodinated signal. Scatter in the projection data leads to “cupping” artifacts that can reduce the visibility and quantitative accuracy of the iodinated signal. The use of filtered backprojection (FBP) reconstruction ameliorates these types of artifacts, but FBP precludes the advantages of iterative reconstruction. This motivates an effective and clinically practical scatter correction (SC) method for the projection data. We propose a simple SC method, applied at each acquisition angle. It uses scatter-only data at the edge of the image to interpolate a scatter estimate within the breast region. The interpolation has an approximately correct spatial profile but is quantitatively inaccurate. We therefore further correct the interpolated scatter data with the aid of easily obtainable knowledge of the SPR (scatter-to-primary ratio) at a single reference point. We validated the SC method using a CIRS breast phantom with iodine inserts. We evaluated its efficacy in terms of SDNR and iodine quantitative accuracy. We also applied our SC method to a patient DE-CE-DBT study and showed that the SC allowed detection of a previously confirmed tumor at the edge of the breast. The SC method is quick to use and may be useful in a clinical setting.
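A minimal one-dimensional sketch of the kind of correction described above, assuming a single projection profile: scatter is interpolated from the scatter-only region outside the breast and then rescaled so that the scatter-to-primary ratio at one reference pixel matches a known value. All variable names and numbers are illustrative and not taken from the paper.

```python
import numpy as np

def scatter_correct(projection, breast_mask, spr_ref, ref_index):
    """Interpolate scatter from the region outside the breast, then rescale it so
    that the SPR at a single reference pixel equals a known value (1D sketch)."""
    x = np.arange(projection.size)
    outside = ~breast_mask
    # Initial scatter estimate: interpolate the scatter-only edge values across the breast.
    scatter = np.interp(x, x[outside], projection[outside])
    # Rescale so that scatter / primary at the reference pixel equals spr_ref.
    primary_ref = projection[ref_index] / (1.0 + spr_ref)
    scatter *= (spr_ref * primary_ref) / scatter[ref_index]
    return projection - scatter

# Toy usage with a synthetic profile (flat breast region with scatter-only margins)
proj = np.concatenate([np.full(20, 5.0), np.full(60, 40.0), np.full(20, 5.0)])
mask = np.zeros(100, dtype=bool); mask[20:80] = True
corrected = scatter_correct(proj, mask, spr_ref=0.3, ref_index=50)
```

In practice the same idea would be applied per acquisition angle on 2D projections, as the abstract describes.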
Development of mammography system using CdTe photon counting detector for the exposure dose reduction
Sho Maruyama, Naoko Niwa, Misaki Yamazaki, et al.
We propose a new mammography system using a cadmium telluride (CdTe) photon-counting detector for exposure dose reduction. In contrast to conventional mammography, this system uses high-energy X-rays. This study evaluates the usefulness of the system in terms of the absorbed dose distribution and the contrast-to-noise ratio (CNR) at an acrylic step using a Monte Carlo simulation. In addition, we built a prototype system that uses a CdTe detector and an automatic movement stage. For various conditions, we measured its properties and evaluated the quality of images produced by the system. The simulation result for a tube voltage of 40 kV and tungsten/barium (W/Ba) as the target/filter shows that the surface dose was reduced by more than 60% compared to that under conventional conditions. The CNR of our proposed system was also higher than that under conventional conditions. The point at which the CNRs coincide, for 4 cm of polymethyl methacrylate (PMMA) at the 2-mm-thick step, corresponds to a dose reduction of 30%, and these differences increased with increasing phantom thickness. To improve the image quality, we identified the problematic aspects of the scanning system. The results of this study indicate that, by using a higher X-ray energy than in conventional mammography, it is possible to obtain a significant exposure dose reduction without loss of image quality. Further, the image quality of the prototype system can be improved by optimizing the balance between the shift-and-add operation and the output of the X-ray tube. In future work, we will further examine these points for improvement.
On imaging with or without grid in digital mammography
The grids used in digital mammography to reduce scattered radiation from the breast are not perfect: they partially absorb primary radiation, while not all of the scattered radiation is absorbed. It has therefore lately been suggested to remove the grids and to correct for the effects of scattered radiation by post-processing the images. In this paper, we investigated the dose reduction that might be achieved if the grid were to be removed. Dose reduction is determined as a function of PMMA thickness by comparing the contrast-to-noise ratios (CNRs) of images acquired with and without a grid at a constant exposure. We used a theoretical model validated with Monte Carlo simulations and phantom studies. To evaluate the CNR, we applied aluminum filters of two different sizes, 4x8 cm2 and 1x1 cm2. When the large Al filter was used, the resulting CNR value for the grid-less images was overestimated as a result of a difference in the amount of scattered radiation between the background region and the region covered by the filter; this difference could be eliminated by selecting a region of interest close to the edge of the filter. When the PMMA thickness was above about 4 cm, the optimal CNR was obtained with a grid, whereas removing the grid led to a dose saving for thinner PMMA. The results suggest not removing grids in breast cancer screening.
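One common way to turn CNRs measured at constant exposure into an equivalent dose factor, assuming quantum-noise-limited images in which CNR scales with the square root of exposure, is

\[
\frac{D_{\text{no grid}}}{D_{\text{grid}}} \approx \left(\frac{\mathrm{CNR}_{\text{grid}}}{\mathrm{CNR}_{\text{no grid}}}\right)^{2},
\]

so a ratio below one indicates a dose saving from removing the grid at matched image quality. This is offered only as a plausible reading of the comparison described, not as the authors' exact formulation.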
Estimation of effective x-ray tissue attenuation differences for volumetric breast density measurement
Breast density has been identified as a risk factor for developing breast cancer and an indicator of diagnostic obstruction of lesions due to the masking effect. Volumetric density measurement evaluates the fibro-glandular volume, breast volume, and breast volume density, measures that have potential advantages over area density measurement in risk assessment. One class of volumetric density computation methods is based on finding the attenuation of fibro-glandular tissue relative to reference fat tissue, and the estimation of the effective x-ray attenuation difference between fibro-glandular and fat tissue is key to volumetric breast density computation. We have modeled the effective attenuation difference as a function of the actual x-ray skin entrance spectrum, breast thickness, fibro-glandular tissue thickness distribution, and detector efficiency. Compared to other approaches, our method has three advantages: (1) it avoids the system-calibration-based creation of effective attenuation differences, which may introduce tedious calibrations for each imaging system and may not reflect spectrum changes or scatter-induced overestimation or underestimation of breast density; (2) it obtains system-specific, separate and differential attenuation values of fibro-glandular and fat tissue for each mammographic image; and (3) it further reduces the impact of breast thickness accuracy on volumetric breast density. A quantitative breast volume phantom with a set of equivalent fibro-glandular thicknesses was used to evaluate the volumetric breast density measurement with the proposed method. The experimental results show that the method significantly improves the accuracy of breast density estimation.
Improving the spatial resolution characteristics of dedicated cone-beam breast CT technology
Prior studies have shown that breast CT (bCT) outperforms mammography in the visualization of mass lesions, yet underperforms in the detection of micro-calcifications. The Breast Tomography Project at UC Davis has successively developed and fabricated four dedicated breast CT scanners, the most recent code-named Doheny, which produce high-resolution, fully tomographic images and overcome the tissue superposition effects of mammography at an equivalent radiation dose. Over 600 patients have been imaged thus far in an ongoing clinical trial. The Doheny prototype differs from prior bCT generations in its use of a pulsed rather than continuous x-ray source and of a CMOS flat-panel fluoroscopic detector rather than a TFT-based one. Spatial resolution analysis performed on Doheny indicates that the MTF characteristics have been substantially improved.
Spectrum optimization for computed radiography systems
Johann Hummel, Friedrich Semturs, Marcus Kaar, et al.
Technical quality assurance (TQA) is one of the key issues in breast screening protocols, where the two crucial aspects are image quality and dose. While digital radiography (DR) systems can produce excellent image quality at low dose, it often appears to be difficult with computed radiography (CR) systems to fulfill the requirements for image quality and to keep the dose below the limits. Here, the choice of the optimal spectrum can be necessary to comply with the limiting values given by the standards. To determine the optimal spectrum, we calculated the contrast-to-noise ratio (CNR) for different anode/filter (a/f) combinations as a function of tube voltage. This was done for breast thicknesses of 50, 60 and 70 mm. The figure of merit to be optimized was the quotient of the squared CNR and the average glandular dose. The investigated imaging plates were made of BaFBrI:Eu, from a Fuji CR system. For comparison we repeated the measurements on a Carestream system. With respect to the Fuji system, we found that the two K-edges of iodine at 33 keV and barium at 37 keV influence the results significantly. A peak such as that found in DR systems is followed by two additional peaks resulting from the higher absorption at the K-edges. This was observed for all a/f combinations. The same effect also occurred on the Carestream system.
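Written out, the figure of merit described above is

\[
\mathrm{FOM}(kV, a/f) = \frac{\mathrm{CNR}^{2}(kV, a/f)}{\mathrm{AGD}(kV, a/f)},
\]

where AGD is the average glandular dose. As a simple worked example, a setting that doubles the CNR while tripling the AGD still improves this FOM by a factor of 4/3.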
Poster Session: New Imaging Concepts
Feasibility study of spectral computed tomography (CT) with gold as a new contrast agent
M. Müllner, H. Schlattl, U. Oeh, et al.
Newly developed spectral CT systems with photon-counting, energy-selective detectors provide the possibility of obtaining additional information about an object's absorption properties, the footprint of which can be found in the energy spectrum of the detected photons. These new CT systems are capable of yielding valuable insight into the elemental composition of tissue and open up the way for new CT contrast agents by detecting element-specific K-edge patterns. Gold could be a promising new CT contrast agent. The major goal of this study is to determine the minimum amount of gold that is needed to use it as a spectral CT contrast agent for medical imaging in humans. To reach this goal, Monte Carlo simulations with EGSnrc were performed. The energy-selective detector on which this study is based has 6 energy bins whose energy thresholds can be selected freely. First, different energy thresholds were analyzed to determine the best thresholds with respect to detecting gold. The K-edge imaging algorithm was then applied to the simulation results with these energy bins. The reconstructed images were evaluated with respect to signal-to-noise ratio, contrast-to-noise ratio and contrast. The K-edge imaging algorithm is able to convert the information in the six energy bins into three images, which correspond to the photoelectric effect, Compton scattering and gold content; however, it requires a very long computing time. The simulations indicate that at least 0.2 wt% of gold is required to use it as a CT contrast agent in humans.
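A common three-basis formulation consistent with the photoelectric, Compton and gold images described above (though not necessarily the authors' exact model) expresses the linear attenuation coefficient along each ray as

\[
\mu(E) \;\approx\; a_{\mathrm{pe}}\,E^{-3} \;+\; a_{\mathrm{C}}\,f_{\mathrm{KN}}(E) \;+\; a_{\mathrm{Au}}\left(\frac{\mu}{\rho}\right)_{\mathrm{Au}}(E),
\]

where \(f_{\mathrm{KN}}\) is the Klein-Nishina function and \((\mu/\rho)_{\mathrm{Au}}\) carries the gold K-edge at 80.7 keV. The three coefficients are estimated per detector pixel from the counts in the six energy bins (for example by maximum likelihood), and the map of \(a_{\mathrm{Au}}\) is the gold image whose detectability sets the minimum usable concentration.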
Projection-based energy weighting on photon-counting X-ray images in digital subtraction mammography: a feasibility study
Sung-Hoon Choi, Seung-Wan Lee, Yu-Na Choi, et al.
In digital subtraction mammography, where one image (with contrast medium) is subtracted from another (the anatomical background) to observe tumor structure, tumors, which contain more blood vessels than normal tissue, can be distinguished through enhancement of the contrast-to-noise ratio (CNR). In order to improve the CNR, we adopted projection-based energy weighting for iodine solutions with four different concentrations embedded in a breast phantom (50% adipose and 50% glandular tissue). In this study, a Monte Carlo simulation was used to model a 40 mm thick breast phantom, containing 15 and 30 mg/cm3 iodine solutions of two different thicknesses, and an energy-resolving photon-counting system. The input energy spectrum was simulated in the range of 20 to 45 keV in order to reject electronic noise and include the K-edge energy of iodine (33.2 keV). The results showed that projection-based energy weighting improved the CNR by factors of 1.05-1.86 compared to conventional integrating images. Consequently, the CNR of digital subtraction mammography images can be improved by projection-based energy weighting with photon-counting detectors.
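The general idea of projection-based energy weighting can be illustrated with a small numerical sketch; the per-bin counts below are invented, and the weight formula shown is one common choice from the literature, offered only as an example rather than the authors' exact implementation.

```python
import numpy as np

def weighted_cnr(target, background, weights):
    """CNR of a weighted sum of energy-bin counts, assuming Poisson noise per bin."""
    signal = abs(np.dot(weights, background - target))
    noise = np.sqrt(np.dot(weights ** 2, background + target))
    return signal / noise

# Invented mean counts per energy bin for a 5-bin photon-counting acquisition
bg = np.array([800.0, 600.0, 400.0, 300.0, 200.0])   # background (no iodine) region
tg = np.array([780.0, 570.0, 360.0, 255.0, 170.0])   # iodine-containing region

# Plain photon counting weights every photon equally; an energy-integrating
# detector would instead weight each photon by its energy.
uniform = np.ones_like(bg)
weights = (bg - tg) / (bg + tg)   # one common projection-based weighting choice
print(weighted_cnr(tg, bg, weights) / weighted_cnr(tg, bg, uniform))
```

The ratio printed at the end is the CNR gain from weighting, analogous to the 1.05-1.86 improvement factors reported above.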
High resolution X-ray fluorescence imaging for a microbeam radiation therapy treatment planning system
Pavel Chtcheprov, Christina Inscoe, Laurel Burk, et al.
Microbeam radiation therapy (MRT) uses an array of high-dose, narrow (~100 μm) beams separated by a fraction of a millimeter to treat various radio-resistant, deep-seated tumors. MRT has been shown to spare normal tissue at entrance doses of up to 1000 Gy while still being highly tumoricidal. Current methods of tumor localization for our MRT treatments require MRI and X-ray imaging, with subject motion and image registration contributing to the measurement error. The purpose of this study is to develop a novel form of imaging to quickly and accurately assist in high-resolution target positioning for MRT treatments using X-ray fluorescence (XRF). The key to this method is using the microbeam to both treat and image. A high-Z contrast medium is injected into the phantom or the blood pool of the subject prior to imaging. Using a collimated spectrum analyzer, the region of interest is scanned through the MRT beam and the fluorescence signal is recorded for each slice. The signal can be processed to show vascular differences in the tissue and isolate tumor regions. Because the radiation therapy source serves as the imaging source, repositioning and registration errors are eliminated. A phantom study showed that a spatial resolution of a fraction of the microbeam width can be achieved by precision translation of the mouse stage. Preliminary results from an animal study showed accurate iodine perfusion, confirmed by CT. The proposed image guidance method, using XRF to locate and ablate tumors, can be used as a fast and accurate MRT treatment planning system.
Development of an MRI fiducial marker prototype for automated MR-US fusion of abdominal images
C. P. Favazza, K. R. Gorny, M. J. Washburn, et al.
External MRI fiducial marker devices are expected to facilitate robust, accurate, and efficient image fusion between MRI and other modalities. Automating this process requires careful selection of a suitable marker size and material visible across a variety of pulse sequences, design of an appropriate fiducial device, and a robust segmentation algorithm. A set of routine clinical abdominal MRI pulse sequences was used to image a variety of marker materials and a range of marker sizes. The most successfully detected marker was a 12.7 mm diameter cylindrical reservoir filled with a 1 g/L copper sulfate solution. A fiducial device was designed and fabricated from four such markers arranged in a tetrahedral orientation. MRI examinations were performed with the device attached to a phantom and to a volunteer, and a custom-developed algorithm was used to detect and segment the individual markers. The individual markers were accurately segmented in all sequences for both the phantom and the volunteer. The measured intra-marker spacings matched well with the dimensions of the fiducial device. The average deviations from the actual physical spacings were 0.45 ± 0.40 mm and 0.52 ± 0.36 mm for the phantom and the volunteer data, respectively. These preliminary results suggest that this general fiducial design and detection algorithm could be used for MRI multimodality fusion applications.
Comparison between optimized GRE and RARE sequences for 19F MRI studies
Chiara Dolores Soffientini, Alfonso Mastropietro, Matteo Caffini, et al.
In 19F-MRI studies, the limiting factors are the low signal due to the low concentration of 19F nuclei required for biological applications and the inherently low sensitivity of MRI. Hence, acquiring images using the pulse sequence with the best signal-to-noise ratio (SNR), by optimizing the acquisition parameters specifically for a given 19F compound, is a core issue. In 19F-MRI, multiple-spin-echo (RARE) and gradient-echo (GRE) are the two most frequently used pulse sequence families; we therefore performed an optimization study of GRE pulse sequences based on numerical simulations and experimental acquisitions on fluorinated compounds, and compared GRE performance to an optimized RARE sequence. Images were acquired on a 7T preclinical MRI scanner on phantoms containing different fluorinated compounds. Actual relaxation times (T1, T2, T2*) were evaluated in order to predict the SNR dependence on sequence parameters. Experimental comparisons between spoiled GRE and RARE, obtained at a fixed acquisition time and in steady-state conditions, showed the RARE sequence outperforming the spoiled GRE (SNR up to 406% higher). Conversely, the use of the unbalanced SSFP showed a significant increase in SNR compared to RARE (up to 28% higher). Moreover, this sequence (as GRE in general) was confirmed to be virtually insensitive to the T1 and T2 relaxation times after proper optimization, thus improving marker independence from the biological environment. These results confirm the efficacy of the proposed optimization tool and foster further investigation addressing in-vivo applicability.
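For the spoiled GRE family mentioned above, the steady-state signal prediction that such a parameter optimization would typically build on is the standard expression

\[
S_{\mathrm{SPGR}} \;\propto\; \rho\,\sin\alpha\,\frac{1 - e^{-TR/T_1}}{1 - \cos\alpha\, e^{-TR/T_1}}\, e^{-TE/T_2^{*}},
\qquad \cos\alpha_{E} = e^{-TR/T_1},
\]

where \(\alpha_E\) is the Ernst angle that maximizes the signal for a given TR/T1. This is given as the textbook form only; the authors' simulation model may include additional terms (e.g. for unbalanced SSFP).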
A new resonance-frequency based electrical impedance spectroscopy and its application in biomedical engineering
Sreeram Dhurjaty, Yuchen Qiu, Maxine Tan, et al.
Electrical Impedance Spectroscopy (EIS) has shown promising results for differentiating between malignant and benign tumors, which exhibit different dielectric properties. However, the performance of current EIS systems has been inadequate and unacceptable in clinical practice. In the last several years, we have been developing and testing a new EIS approach using resonance frequencies for the detection and classification of suspicious tumors. From this experience, we identified several limitations of current technologies and designed a new EIS system with a number of new characteristics, including (1) an increased A/D (analog-to-digital) sampling frequency, 24-bit resolution, and a frequency resolution of 100 Hz to increase detection sensitivity; (2) automated calibration to monitor and correct variations in electronic components within the system; (3) temperature sensing and compensation algorithms to minimize the impact of environmental changes during testing; and (4) multiple inductor switching to select optimum resonance frequencies. We performed a theoretical simulation to analyze the impact of adding these new functions on the performance of the system. The system was also tested using phantoms filled with a variety of liquids. The theoretical and experimental test results are consistent with each other. The experimental results demonstrated that this new EIS device possesses improved sensitivity and signal detection resolution for detecting small impedance or capacitance variations. This provides the potential of applying this new EIS technology to different cancer detection and diagnosis tasks in the future.
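For reference, the quantity adjusted by the inductor-switching feature is, for an ideal LC circuit, the resonance frequency

\[
f_0 = \frac{1}{2\pi\sqrt{LC}},
\]

so, with illustrative (assumed, not device-specific) values of L = 10 mH and C = 1 nF, the circuit resonates near 50 kHz, and switching to a different inductor shifts \(f_0\) accordingly to match the capacitance presented by the tissue under test.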
Poster Session: Nuclear Medical Imaging
A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging
Frezghi Habte, Arutselvan Natarajan, David S. Paik, et al.
Cerenkov luminescence imaging (CLI) is an emerging, cost-effective modality that uses conventional small-animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake, such as the kidney, spleen, thymus and subcutaneous tumors, in mouse models. However, CLI has limitations for deep-tissue quantitative imaging, since the blue-weighted spectrum of Cerenkov radiation is strongly attenuated by mammalian tissue. Large organs such as the liver have also shown higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity using a priori estimates of the depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions, Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct the CLI measurements. Using calibration factors obtained from the phantom study to convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for the spleen and less than 35% for the liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable to PET data.
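A minimal sketch of this kind of correction, assuming the effective attenuation coefficient is obtained from a log-linear fit of phantom signals versus source depth; the numbers are placeholders, not the paper's data, and the handling of organ thickness is reduced here to a single representative depth.

```python
import numpy as np

# Hypothetical phantom calibration: CLI signal from the same source at several depths (cm).
depths = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
signals = np.array([9.0e5, 5.1e5, 1.7e5, 6.0e4, 2.1e4])   # placeholder radiance values

# Fit S(d) = S0 * exp(-mu_eff * d) as a straight line in log space.
slope, log_s0 = np.polyfit(depths, np.log(signals), 1)
mu_eff = -slope                                            # effective attenuation (1/cm)

def correct_cli(measured, depth_cm):
    """Undo the exponential attenuation for an organ at an a priori known depth."""
    return measured * np.exp(mu_eff * depth_cm)

print(mu_eff, correct_cli(2.0e4, 1.2))
```

The corrected values would then be converted to %ID/g with a phantom-derived calibration factor, as described above.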
Improved attenuation correction for freely moving animal brain PET studies using a virtual scanner geometry
Georgios I. Angelis, William J. Ryder, Andre Z. Kyme, et al.
Attenuation correction in positron emission tomography brain imaging of freely moving animals can be very challenging, since the body of the animal is often within the field of view and introduces a non-negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. An attractive approach that avoids the need for a transmission scan involves the generation of the convex hull of the animal's head based on the reconstructed emission images. However, this approach ignores the potential attenuation introduced by the animal's body. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal's head and discriminates between those events that traverse only the animal's head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal's body. For each pose a new virtual scanner geometry was defined, and therefore a new system matrix was calculated, leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made rat phantom. Results showed that when the animal's body is within the FOV and not accounted for during attenuation correction, it can lead to a bias of up to 10%. In contrast, attenuation correction was more accurate when the virtual scanner was employed, leading to improved quantitative estimates (bias <2%) without the need to account for the animal's body.
Optimization using detective quantum efficiency (DQE) of the high-resolution parallel-hole collimators with CdTe pixelated semiconductor SPECT system
In a previous study, to improve both sensitivity and spatial resolution, we recommended using a pixelated parallel-hole collimator with equal hole and pixel sizes for a CdTe pixelated semiconductor SPECT system. However, the tradeoff between sensitivity and spatial resolution must be considered before determining the geometric design of the pixelated parallel-hole collimator. The detective quantum efficiency (DQE) is a concept that takes both sensitivity and spatial resolution into account to provide an overall measure, and it may be better suited for optimization. The purpose of this study was to optimize and evaluate the above-mentioned collimators using the DQE to determine the best imaging performance of the CdTe pixelated semiconductor SPECT system. We conducted a simulation study using the Geant4 Application for Tomographic Emission (GATE). To evaluate the DQE from the modulation transfer function (MTF) and sensitivity, the collimator septal heights were varied from 15 to 30 mm in 5 mm increments, and the source-to-collimator distances were 4, 5, 6, and 7 cm. According to the results, the DQE decreased with increasing source-to-collimator distance and septal height. We have presented the evaluation results of pixelated parallel-hole collimators with various geometric designs. In conclusion, we successfully optimized the pixelated parallel-hole collimator and, based on our results, we recommend using a lower septal height with a smaller source-to-collimator distance for the CdTe pixelated semiconductor SPECT system.
A novel intra-operative positron imager for rapid localization of tumor margins
Hamid Sabet, Brendan C. Stack, Vivek V. Nagarkar
We have developed a compact, intra-operative imaging tool for surgeons to detect PET-positive lesions. Currently, most such probes on the market are non-imaging and provide no ancillary information about surveyed areas, such as clear delineations of malignant tissues. Our probe consists of a novel hybrid scintillator coupled to a compact silicon photomultiplier (SiPM) array with associated front-end electronics, encapsulated in an ergonomic housing. Pulse shape discrimination electronics have been implemented and integrated into the downstream data acquisition system. The hybrid scintillator consists of a 0.4 mm thick layer of CsI:Tl scintillator coupled to a 1 mm thick LYSO crystal. To achieve high spatial resolution, the CsI:Tl is pixelated into 0.5×0.5 mm2 pixels using a laser ablation technique. While CsI:Tl acts as the beta-sensitive scintillator, LYSO senses the gamma radiation and can be used to navigate the probe to the locations of interest. The gamma response is also subtracted from the beta image for improved SNR and contrast. To achieve accurate centroid position estimation and uniform beta sensitivity over the entire imaging area, the LYSO thickness is optimized such that it acts as a scintillation light diffuser, spreading the CsI:Tl light over multiple SiPM pixels. The results show that the responses of the two scintillators exposed to radiation could be easily distinguished based on their pulse shapes. The probe's spatial resolution is <1.5 mm FWHM in its 10×10 mm2 effective imaging area. The probe can rapidly detect and localize nCi levels of F-18 beta radiation even in the presence of a strong gamma background.
Image reconstruction for the new simultaneous whole-body openPET/CT geometry
A new simultaneous whole-body PET/CT imaging geometry based on the OpenPET imaging structure has been proposed. In this geometry, multiple x-ray sources are adopted to cover the same field of view at exactly the same time for simultaneous PET and CT imaging. In this paper, we conducted a further quantitative analysis of the new geometry by computer simulation. We then examined and compared iterative and analytical algorithms in terms of image quality, with respect to the features of the geometry. The results indicated that, for the whole-body range under this geometry, better images were acquired with the iterative algorithms than with the analytical method. Further improvement of the reconstructed images is expected from generalization of the modified algorithm for the proposed geometry. The technical implementation of the geometry for clinical application should also be considered further.
Poster Session: Phantoms and Radiation Transport
Including the effect of molecular interference in the coherent x-ray scattering modeling in MC-GPU and PENELOPE for the study of novel breast imaging modalities
B. Ghammraoui, R. Peng, I. Suarez, et al.
Purpose: To present upgraded versions of MC-GPU and PenEASY Imaging, two open-source Monte Carlo codes for the simulation of radiographic projections and CT. The codes have been extended with the aim of studying breast imaging modalities that rely on the accurate modeling of coherent x-ray scatter. Methods: The simulation codes were extended to account for the effect of molecular interference in coherent scattering using experimentally measured molecular interference functions. The validity of the new model was tested experimentally using the Energy Dispersive X-Ray Diffraction (EDXRD) technique with a polychromatic x-ray source and an energy-resolved Germanium detector at a fixed scattering angle. Experiments and simulations of a full field digital mammography system with and without a 1D focused antiscatter grid were conducted for additional validation. The modified MC-GPU code was also used to examine the possibility of characterizing breast cancer within a mathematical breast phantom using the EDXRD technique. Results: The measured EDXRD spectra were correctly reproduced by the simulation with the modified code while the previous code using the Independent Atomic Approximation led to large errors in the predicted diffraction spectra. There was good agreement between the simulated and measured rejection factor for the 1D focused antiscatter grid with both models. The simulation study in a whole breast showed that the x-ray scattering profiles of adipose, fibrosis, cancer and benign tissues are differentiable. Conclusion: MC-GPU and PENELOPE were successfully extended and validated for accurate modeling of coherent x-ray scatter. The EDXRD technique with pencil-cone geometry in a whole breast was investigated by a simulation study and it was concluded that this technique has potential to characterize breast cancer lesions.
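The modification referred to above can be summarized, in one common formulation, as replacing the independent-atom coherent cross section by one multiplied by a measured molecular interference function:

\[
\frac{d\sigma_{\mathrm{coh}}}{d\Omega}
= \frac{d\sigma_{\mathrm{Th}}}{d\Omega}\, F^{2}(x)\, s(x),
\qquad x = \frac{\sin(\theta/2)}{\lambda},
\]

where \(F(x)\) is the independent-atom form factor and \(s(x)\) the experimentally measured molecular interference function; setting \(s \equiv 1\) recovers the Independent Atomic Approximation that the upgraded codes move beyond. The exact implementation details inside MC-GPU and PENELOPE may differ from this schematic form.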
Evaluation of the resolving potency of a novel reconstruction filter on periodontal ligament space with dental cone-beam CT: a quantitative phantom study
Yuuki Houno, Toshimitsu Hishikawa, Ken-ichi Gotoh, et al.
Diagnosis of the alveolar bone condition is important for the treatment planning of periodontal disease. In particular, determination of the periodontal ligament space is most important because it represents the periodontal tissue support for tooth retention. However, owing to the image blur of the current cone-beam CT (CBCT) imaging technique, the periodontal ligament space is difficult to visualize. In this study, we developed an original periodontal ligament phantom (PLP) and evaluated the image quality of a simulated periodontal ligament space using a novel reconstruction filter for CBCT that emphasizes high-frequency components. The PLP was composed of two resin blocks of different materials, a bone-equivalent block and a dentine-equivalent block. They were assembled to create a continuously varying gap from 0.0 to 1.0 millimeter that mimics the periodontal ligament space. The PLP was placed in water and imaged using an Alphard-3030 dental cone-beam CT (Asahi Roentgen Industry Co., Ltd.). We then reconstructed the projection data with the novel reconstruction filter. The axial images were compared with conventionally reconstructed images. In the images reconstructed with the novel filter, a space width of 0.4 millimeter was reliably detected by calculation of pixel values, compared with 0.6 millimeter in the conventional images. With our method, the resolving potency of cone-beam CT images was improved.
Unfiltered Monte Carlo-based tungsten anode spectral model from 20 to 640 kV
A Monte Carlo-based tungsten anode spectral model, conceptually similar to the previously developed TASMIP model, was developed. This new model provides essentially unfiltered x-ray spectra with better energy resolution and significantly extends the range of tube potentials for available spectra. MCNPX was used to simulate x-ray spectra as a function of tube potential for a conventional x-ray tube configuration with several anode compositions. Thirty-five x-ray spectra were simulated and used as the basis for interpolating a complete set of tungsten x-ray spectra (at 1 kV intervals) from 20 to 640 kV. Additionally, Rh and Mo anode x-ray spectra were simulated from 20 to 60 kV. Cubic splines were used to construct piecewise polynomials that interpolate the photon fluence per energy bin as a function of tube potential for each anode material. The tungsten anode spectral model using interpolating cubic splines (TASMICS) generates minimally filtered (0.8 mm Be) x-ray spectra from 20 to 640 kV with 1 keV energy bins. The rhodium and molybdenum anode spectral models (RASMICS and MASMICS, respectively) generate minimally filtered x-ray spectra from 20 to 60 kV with 1 keV energy bins. TASMICS spectra showed no statistically significant differences when compared with the empirical TASMIP model, the semi-empirical Birch and Marshall model, and a Monte Carlo spectrum reported in AAPM TG 195. The RASMICS and MASMICS spectra showed no statistically significant differences when compared with their counterpart RASMIP and MASMIP models. Spectra from the TASMICS, MASMICS, and RASMICS models are available in spreadsheet format for interested users.
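A small sketch of the interpolation step described above, using SciPy cubic splines; the tabulated tube potentials and fluence values for a single 1 keV bin are invented placeholders, not TASMICS data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder: photon fluence in one 1 keV energy bin, tabulated at a subset of
# simulated tube potentials (kV) for one anode material.
simulated_kv = np.array([20, 40, 60, 80, 100, 140, 200, 300, 450, 640], dtype=float)
fluence_bin = np.array([0.0, 0.0, 1.2, 3.9, 6.8, 11.5, 17.0, 23.8, 30.1, 35.4])

# One spline per (anode material, energy bin); evaluating it on a 1 kV grid
# yields the interpolated fluence for this bin at every tube potential.
spline = CubicSpline(simulated_kv, fluence_bin)
kv_grid = np.arange(20.0, 641.0)
fluence_interp = spline(kv_grid)
```

Repeating this for every energy bin and stacking the results reconstructs a full spectrum at any tube potential between the simulated anchor points.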
Hybrid-model for computed tomography simulations and post-patient collimator design
Horace Xu, Kun Tao, Padmashree GK, et al.
Ray-tracing based simulation methods are widely used in modeling X-ray propagation, detection and imaging. While most of the existing simulation methods rely on analytical modeling, a novel hybrid approach comprising statistical and analytical modeling is proposed here. Our hybrid simulator is a unique combination of analytical modeling, which evokes the fundamentals of X-ray transport through ray-tracing, and a look-up-table (LUT) based approach that integrates it with Monte Carlo simulations modeling optical photon transport within the scintillator. The LUT approach for scintillation-based X-ray detection invokes depth-dependent gain factors to account for intra-pixel absorption and light transport, together with incident-angle-dependent effects for inter-pixel X-ray absorption (parallax effect). The model simulates the post-patient collimator for scatter rejection as an X-ray shadow on the scintillator, handling its position with respect to the pixel boundary by a smart over-sampling strategy for high efficiency. We have validated this simulator for computed tomography system simulations using real data from a GE Brivo CT385. The level of accuracy of image noise and spatial resolution is better than 98%. We have used the simulator for designing the post-patient collimator and measured the modulation transfer function (MTF) for different widths of the collimator plate. The validation and simulation studies clearly demonstrate that the hybrid simulator is an accurate, reliable and efficient tool for realistic system-level simulations. It could be deployed for research, design and development purposes to model any scintillator-based X-ray imaging system (2-dimensional and 3-dimensional), being equally applicable to medical and industrial imaging.
Physics-based modeling of X-ray CT measurements with energy-integrating detectors
Yong Long, Hewei Gao, Mingye Wu, et al.
Computer simulation tools for X-ray CT are important for research efforts in developing reconstruction methods, designing new CT architectures, and improving X-ray source and detector technologies. In this paper, we propose a physics-based modeling method for X-ray CT measurements with energy-integrating detectors. It accurately accounts for the energy, depth and spatial-location dependence of the X-ray detection process, which is either ignored or oversimplified in most existing CT simulation methods. Compared with methods based on Monte Carlo simulations, it is computationally much more efficient due to the use of a look-up table for optical collection efficiency. To model the CT measurements, the proposed model considers five separate effects: energy- and location-dependent absorption of the incident X-rays, conversion of the absorbed X-rays into optical photons emitted by the scintillator, location-dependent collection of the emitted optical photons, quantum efficiency of converting optical photons to electrons, and electronic noise. We evaluated the proposed method by comparing the noise levels in the reconstructed images from measured data and from simulations of a GE LightSpeed VCT system. Using the results of a 20 cm water phantom and a 35 cm polyethylene (PE) disk at various X-ray tube voltages (kVp) and currents (mA), we demonstrated that the proposed method produces realistic CT simulations. The difference in noise standard deviation between measurements and simulations is approximately 2% for the water phantom and 10% for the PE phantom.
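The five effects listed above can be strung together in a simplified per-pixel signal chain. The sketch below uses invented physics (placeholder attenuation, light yield, collection look-up table and noise values) purely to show the structure of such a model, not the parameters or implementation of the paper.

```python
import numpy as np

def detector_signal(spectrum, energies, depth_grid, mu_scint, light_yield,
                    collection_lut, qe_photodiode, sigma_electronic, rng):
    """Simplified energy-integrating detector chain: (1) depth-resolved X-ray
    absorption, (2) conversion to optical photons, (3) depth-dependent optical
    collection from a LUT, (4) photodiode quantum efficiency, (5) electronic noise."""
    dz = depth_grid[1] - depth_grid[0]
    electrons = 0.0
    for E, n_photons in zip(energies, spectrum):
        # (1) X-rays of energy E absorbed in each depth element of the scintillator
        absorbed = n_photons * mu_scint(E) * np.exp(-mu_scint(E) * depth_grid) * dz
        # (2) optical photons emitted, proportional to deposited energy
        optical = absorbed * light_yield * E
        # (3) depth-dependent optical collection efficiency from a look-up table
        collected = optical * collection_lut(depth_grid)
        # (4) optical photon to electron conversion in the photodiode
        electrons += qe_photodiode * collected.sum()
    # (5) additive electronic noise
    return electrons + rng.normal(0.0, sigma_electronic)

# Toy usage with placeholder physics
rng = np.random.default_rng(0)
energies = np.linspace(20.0, 120.0, 11)                  # keV
spectrum = np.full_like(energies, 1.0e4)                 # incident photons per bin
depth = np.linspace(0.0, 0.15, 50)                       # cm, scintillator depth grid
signal = detector_signal(
    spectrum, energies, depth,
    mu_scint=lambda E: 30.0 * (60.0 / E),                # placeholder attenuation, 1/cm
    light_yield=50.0,                                    # placeholder photons per keV
    collection_lut=lambda z: 0.8 - 0.5 * z,              # placeholder collection LUT
    qe_photodiode=0.7, sigma_electronic=500.0, rng=rng)
print(signal)
```

Replacing the optical-transport step by a precomputed look-up table, as in step (3), is what makes this kind of model much faster than a full optical Monte Carlo.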
Quantification of biological tissue and construction of patient equivalent phantom (skull and chest) for infants (1-5 years old)
A. F. Alves, D. R. Pina, F. A. Bacchim Neto, et al.
Our main purpose in this study was to quantify biological tissue in computed tomography (CT) examinations with the aim of developing a skull and a chest patient equivalent phantom (PEP), both specific to infants aged between 1 and 5 years. This type of phantom is widely used in the development of optimization procedures for radiographic techniques, especially in computed radiography (CR) systems. In order to classify and quantify the biological tissue, we used a computational algorithm developed in Matlab®. The algorithm computed a histogram of each CT slice followed by a Gaussian fit for each tissue type. The algorithm determined the mean thickness of each biological tissue (bone, soft tissue, fat, and lung) and converted them into the corresponding thicknesses of simulator materials (aluminum, PMMA, and air). We retrospectively analyzed 148 CT examinations of infant patients, 56 skull exams and 92 chest exams. The results provided sufficient data to construct phantoms that simulate the infant chest and skull in the posterior-anterior or anterior-posterior (PA/AP) view. Both patient equivalent phantoms developed in this study can be used to assess physical variables such as the noise power spectrum (NPS) and signal-to-noise ratio (SNR) or to perform dosimetric control specific to pediatric protocols.
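A compact sketch of the histogram-plus-Gaussian-fitting step (in Python rather than the Matlab used by the authors); the HU ranges, initial guesses and synthetic data are illustrative only, and the subsequent conversion to simulator-material thicknesses is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_tissue_peaks(hu_values, peak_guesses):
    """Histogram a CT slice in HU and fit one Gaussian per expected tissue peak."""
    counts, edges = np.histogram(hu_values, bins=256, range=(-1000, 1500))
    centers = 0.5 * (edges[:-1] + edges[1:])
    fits = []
    for mu0 in peak_guesses:
        window = np.abs(centers - mu0) < 100          # fit each peak in a local window
        p0 = (counts[window].max(), mu0, 50.0)
        popt, _ = curve_fit(gaussian, centers[window], counts[window], p0=p0, maxfev=5000)
        fits.append(popt)                             # (amplitude, mean, width) per tissue
    return fits

# Toy usage: synthetic HU values for fat (about -100 HU) and soft tissue (about 40 HU)
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(-100, 30, 5000), rng.normal(40, 25, 8000)])
print(fit_tissue_peaks(hu, peak_guesses=(-100.0, 40.0)))
```

The fitted peak areas give the relative amount of each tissue class, which the authors then translate into mean tissue thicknesses and, finally, into aluminum, PMMA and air equivalents.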
Guidewire path simulation using equilibrium of forces
Fernando M. Cardoso, Sergio S. Furuie
Vascular diseases are among the major causes of death in developed countries, and the treatment of these pathologies may require endovascular interventions, in which the physician navigates guidewires and catheters through the vascular system to reach the injured vessel region. Several computational studies related to endovascular procedures are in constant development, so predicting the guidewire path may be of great value for both physicians and researchers. We propose a method to simulate and predict the guidewire and catheter path inside a blood vessel based on the equilibrium of forces, which leads, iteratively, to the minimum energy configuration. This technique was validated with physical models using a Ø0.33 mm stainless steel guidewire. The method presented an RMS error, on average, of less than 1 mm. Moreover, the algorithm presented low variation (on average, σ = 0.03 mm) with respect to variation of the input parameters. Therefore, even for a wide range of parameter configurations, similar results are obtained, which makes this technique easy to work with. Since this method is based on basic physics, it is simple, intuitive, easy to learn and easy to adapt.
Optical crosstalk in CT detectors and its effects on CT images
Detectors for computed tomography (CT) typically consist of scintillator and photodiode arrays which are coupled using optical glue. Therefore, the leakage of optical photons generated in a scintillator block to neighboring pixel photodiodes through the optical glue layer is inevitable. Passivation layers protecting the silicon photodiode, as well as the silicon layer itself, which is inactive to the optical photons, are other causes of the leakage. This optical crosstalk reduces image sharpness and eventually blurs CT images. We have quantitatively investigated the optical crosstalk in CT detectors using the Monte Carlo technique. We performed optical Monte Carlo simulations for various thicknesses of optical components in a 129 × 129 CT detector array. We obtained the coordinates of optical photons hitting the user-defined detection plane. From the coordinate information, we calculated the collection efficiency at the detection plane and the collection efficiency at the single pixel located just below the scintillator in which the optical photons were generated. The difference between the two quantities gave the optical crosstalk. In addition, using the coordinate information, we calculated point-spread functions as well as modulation-transfer functions from which we estimated the effective aperture due to the optical photon spreading. The optical crosstalk was most severely affected by the thickness of the photodiode passivation layer. The effective aperture due to the optical crosstalk was about 110% of the detector pixel aperture for a 0.1 mm-thick passivation layer, and this signal blur appeared as a relative error of about 3-4% in the mismatch between CT images with and without the optical crosstalk. The detailed simulation results presented will be very useful for the design of CT detectors.
A comparison of simulation tools for photon-counting spectral CT
Radin A. Nasirudin, Petar Penchev, Kai Mei, et al.
Photon-counting detectors (PCD) not only have the advantage of providing spectral information but also offer high quantum efficiencies, producing high image quality in combination with a minimal amount of radiation dose. Because photon-counting CT is not yet clinically available, it is essential to evaluate different CT simulation tools for researching applications of photon-counting systems. In this work, we investigate two different methods to simulate PCD data: Monte Carlo based simulation (MCS) and analytical simulation (AS). The MCS is a general-purpose photon transport simulation based on the EGSnrc C++ class library. The AS uses analytical forward-projection in combination with additional acquisition parameters. MCS takes into account all physical effects, but is computationally expensive (several days per CT acquisition). AS is fast (several minutes), but lacks the accuracy of MCS with regard to physical interactions. To evaluate both techniques, an entrance spectrum of 100 kVp, a modified CTP515 module of the CatPhan 600 phantom, and a detector system with six thresholds were simulated. For evaluation, the simulated projection data are decomposed via a maximum likelihood technique and reconstructed via standard filtered backprojection (FBP). Image quality from both methods was subjectively and objectively assessed. Visually, the difference in image quality was not significant. When further evaluated, the relative difference was below 4%. In conclusion, both techniques offer different advantages, and at different stages of development the accelerated calculations via AS can make a significant difference. In the future, one could foresee a combined method joining accuracy and speed.
Poster Session: Phase Contrast Imaging
Optimization of grating-based phase-contrast imaging setup
Phase contrast imaging (PCI) technology has emerged over the last decade as a novel imaging technique capable of probing the phase characteristics of an object as complementary information to conventional absorption properties. In this work, we identified and provided a rationale for optimization of key parameters that determine the performance of a Talbot-Lau PCI system. The study used Fresnel wave propagation theory and the system geometry to predict the optimal grating alignment conditions necessary for producing maximum phase contrast. The moiré fringe pattern frequency and angular orientation produced in the X-ray detector plane were studied as functions of the gratings' axial rotation. The effect on system contrast of the source-to-phase-grating (L) and phase-to-absorption-grating (d) distances was discussed in detail. The L-d regions of highest contrast were identified, and the dependence of contrast on the energy of the X-ray spectrum was also studied. The predictions made in this study were tested experimentally and showed excellent agreement. The results indicated that PCI system performance is highly sensitive to alignment. The rationale and recommendations made should serve as guidance in the design, development, and optimization of Talbot-Lau PCI systems.
Design of a compact high-energy setup for x-ray phase-contrast imaging
Markus Schüttler, Andre Yaroshenko, Martin Bech, et al.
The main shortcoming of conventional biomedical x-ray imaging is the weak soft-tissue contrast caused by the small differences in the absorption coefficients of different materials. This issue can be addressed by x-ray phase-sensitive imaging approaches, e.g. x-ray Talbot-Lau grating interferometry. The advantage of the three-grating Talbot-Lau approach is that it allows x-ray phase-contrast and dark-field images to be acquired with a conventional lab source. However, the introduction of the grating interferometer imposes some constraints on the setup geometry. In general, the grating pitch and the mean x-ray energy determine the setup dimensions. The minimal length of the setup increases linearly with energy and is proportional to p², where p is the grating pitch. Thus, a high-energy (100 keV) compact grating-based setup for x-ray imaging could be realized only if gratings with an aspect ratio of approximately 300 and a pitch of 1-2 μm were available. However, production challenges limit the availability of such gratings. In this study we consider the use of non-binary phase gratings as a means of designing a more compact grating interferometer for phase-contrast imaging. We present simulation and experimental data for both the monochromatic and the polychromatic case. The results reveal that phase gratings with triangular-shaped structures yield visibilities that can be used for imaging purposes at significantly shorter distances than binary gratings. This opens the possibility of designing a high-energy compact setup for x-ray phase-contrast imaging. Furthermore, we discuss different techniques to achieve triangular-shaped phase-shifting structures.
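The p² and energy scaling quoted above follows directly from the fractional Talbot distance. For the common case of a standard binary π-shifting phase grating of pitch p₁ illuminated at wavelength λ,

\[
d_{n} = \frac{n\,p_{1}^{2}}{8\lambda}, \qquad
\lambda\,[\mathrm{nm}] \approx \frac{1.24}{E\,[\mathrm{keV}]},
\]

so the required inter-grating distance grows quadratically with pitch and, since λ ∝ 1/E, linearly with energy. As an illustrative (not paper-specific) example, at 100 keV (λ ≈ 12.4 pm) a 5 μm pitch gives a first fractional Talbot distance of roughly 0.25 m, while a 2.5 μm pitch brings it down to about 6 cm; the triangular-shaped gratings discussed above aim to reach short distances without such extreme pitch and aspect-ratio requirements.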
Multilayer coated gratings for phase-contrast computed tomography (CT)
Zsolt Marton, Harish B. Bhandari, Harold H. Wen, et al.
By using the principle of grating interferometry, X-ray phase contrast imaging can now be performed with incoherent radiation from a standard X-ray tube. This approach is in stark contrast with imaging methods that use coherent synchrotron X-ray sources or micro-focus sources to improve contrast. The grating interferometer imaging technique is capable of measuring the phase shift of hard X-rays travelling through a sample, which greatly enhances the contrast of weakly absorbing specimens compared to conventional amplitude contrast images. The key components in this approach are the gratings, which consist of alternating layers of high and low Z (atomic number) materials fabricated with high aspect ratios. Here we report on a novel method of fabricating the grating structures using the technique of electron-beam (e-beam) thin-film deposition. Alternating layers of silicon (Z=14) and tungsten (Z=74), each measuring 100 nm, were deposited on a specially designed echelle substrate, resulting in an aspect ratio of ~100:1. Fabrication parameters related to the thin-film deposition, such as geometry, directionality, film adhesion and stress, and the resulting scanning electron micrographs are discussed in detail. Using the e-beam method, large-area gratings with precise multilayer coating thicknesses can be fabricated economically, circumventing the expensive lithography steps.
Analysis of a deconvolution-based information retrieval algorithm in X-ray grating-based phase-contrast imaging
Florian Horn, Florian Bayer, Georg Pelzer, et al.
Grating-based X-ray phase-contrast imaging is a promising imaging modality for increasing soft-tissue contrast in comparison to conventional attenuation-based radiography. Complementary and otherwise inaccessible information is provided by the dark-field image, which shows the sub-pixel-size granularity of the measured object. This could turn out to be especially useful in mammography, where tumourous tissue is associated with the presence of very small microcalcifications. In addition to the well-established image reconstruction process, an analysis method was introduced by Modregger [1], which is based on deconvolution of the underlying scattering distribution within a single pixel, revealing information about the sample. Subsequently, the different contrast modalities can be calculated from the scattering distribution. The method has already proved to deliver additional information in the higher moments of the scattering distribution and may reach better image quality in terms of an increased contrast-to-noise ratio. Several measurements were carried out using melamine foams as phantoms. We analysed the dependence of the deconvolution-based dark-field image on different parameters such as dose, the number of iterations of the iterative deconvolution algorithm, and the dark-field signal. A disagreement was found in the reconstructed dark-field values between the FFT method and the iterative method. Usage of the resulting characteristics might be helpful in future applications.
Energy weighting in grating-based X-ray phase-contrast imaging
Georg Pelzer, Thomas Weber, Gisela Anton, et al.
With energy-resolving photon-counting detectors in grating-based x-ray phase-contrast imaging, it is possible to reduce the dose needed and to optimize the imaging chain towards better performance. The advantage of photon-counting detectors' linear energy response and absence of electronic noise in attenuation-based imaging is known. Access to the energy information of the counted photons provides even further potential for optimization by applying energy weighting factors. We have evaluated energy weighting for grating-based phase-contrast imaging. Measurements with the hybrid photon-counting detector Dosepix were performed. The concept of energy binning implemented in the pixel electronics allows individual storage of the energy information of the incoming photons in 16 energy bins for each pixel. With this technique the full spectral information can be obtained pixel-wise from a single acquisition. To the differential phase-contrast data taken, we applied different types of energy weighting factors. The results presented in this contribution demonstrate the advantages of energy-resolved photon counting in differential phase-contrast imaging. Using an x-ray spectrum centred significantly above the interferometer's design energy leads to poor image quality, but with the proposed method and detector the squared signal-to-noise ratio was enhanced by a factor of 2.8. As this quantity is proportional to dose, energy-resolved photon counting might be especially valuable for medical applications.
Comparison of propagation- and grating-based x-ray phase-contrast imaging techniques with a liquid-metal-jet source
T. Zhou, U. Lundström, Thomas Thüring, et al.
X-ray phase-contrast imaging has been developed as an alternative to conventional absorption imaging, partly for its dose advantage over absorption imaging at high resolution. Grating-based imaging (GBI) and propagation-based imaging (PBI) are two phase-contrast techniques used with polychromatic laboratory sources. We compare the two methods by experiments and simulations with respect to required dose. A simulation method based on the projection approximation is designed and verified with experiments. A comparison based on simulations of the doses required for detection of an object with respect to its diameter is presented, showing that for monochromatic radiation, there is a dose advantage for PBI for small features but an advantage for GBI at larger features. However, GBI suffers more from the introduction of polychromatic radiation, in this case so much that PBI gives lower dose for all investigated feature sizes. Furthermore, we present and compare experimental images of biomedical samples. While those support the dose advantage of PBI, they also highlight the GBI advantage of quantitative reconstruction of multimaterial samples. For all experiments a liquid-metal-jet source was used. Liquid-metal-jet sources are a promising option for laboratory-based phase-contrast imaging due to the relatively high brightness and small spot size.
Performance of X-ray grating interferometry at high energy
Matteo Abis, Thomas Thüring, Marco Stampanoni
A theoretical description of the performance of Talbot and Talbot-Lau type interferometers is developed, providing a framework for the optimization of the geometry for monochromatic and polychromatic beams. Analytical formulas for the smallest detectable refraction angle and the visibility of the setup are derived. The polychromatic visibility of the interference fringes is particularly relevant for the design of setups with conventional X-ray tubes, and it is described in terms of the spectrum of the source and the type of beamsplitter grating. We show the practical realization of such a design by imaging a metallic screw at 100 keV.
Increasing the field of view of x-ray phase contrast imaging using stitched gratings on low absorbent carriers
J. Meiser, M. Amberger, M. Willner, et al.
X-ray phase contrast imaging has become a promising biomedical imaging technique for enhancing soft-tissue contrast. In addition to an absorption contrast image it provides two further image types, a phase contrast image and a small-angle scattering contrast image, recorded at the same time. In biomedical imaging, their combination allows for the conventional investigation of, e.g., bone fractures on the one hand and for soft-tissue investigation, such as cancer detection, on the other. Among the different methods of X-ray phase contrast imaging, the grating-based approach, Talbot-Lau interferometry, currently has the highest potential for commercial use in biomedical imaging, because commercially available X-ray sources can be used in a compact setup. In Talbot-Lau interferometers, the core elements are phase and absorption gratings with challenging specifications because of their high aspect ratios (structure height over width). For the long grating lamellae, structural heights of more than 100 μm together with structural widths in the micron range are required. We are developing a fabrication process based on deep x-ray lithography and electroforming (LIGA) to fabricate these challenging structures. In the case of LIGA gratings, the structured area is currently limited to several centimeters by several centimeters, which limits the field of view in grating-based X-ray phase contrast imaging. In order to increase the grating area significantly, we are developing a stitching method for gratings using a 625 μm thick silicon wafer as a carrier substrate. In this work we compare the silicon carrier with an alternative, polyimide, in terms of transmission and image reconstruction issues, with a view to patient dose reduction and use at lower energies.
Effect of coherence loss in differential phase contrast imaging
Weixing Cai, Ruola Ning, Jiangkun Liu
The coherence of x-rays is critical in grating-based differential phase contrast (DPC) imaging because it is the physical foundation that makes any form of phase contrast imaging possible. Loss of coherence is an important experimental issue, which results in increased image noise and reduced object contrast in DPC images and DPC cone beam CT (DPC-CBCT) reconstructions. In this study, experimental results are investigated to characterize the visibility loss (a measure of coherence loss) in several different applications, including different-sized phantom imaging, specimen imaging and small animal imaging. Key measurements include coherence loss (relative intensity changes in the area of interest in phase-stepping images), contrast and noise level in retrieved DPC images, and contrast and noise level in reconstructed DPC-CBCT images. The influence of the size and composition of the imaged object (uniform objects, bones, skin hairs, tissues, etc.) is quantified. The same investigation is also applied to moiré pattern-based DPC-CBCT imaging with the same exposure dose. A theoretical model is established to relate coherence loss, the noise level in phase-stepping images (or moiré images), and the contrast and noise in the retrieved DPC images. Experimental results show that uniform objects lead to a small coherence loss even when the attenuation is high, while objects with a large amount of small structures result in a large coherence loss even when the attenuation is small. The theoretical model predicts the noise level in retrieved DPC images, and it also suggests a minimum dose required for DPC imaging to compensate for coherence loss.
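For context, the visibility measured from the phase-stepping curve and its standard relation to the noise of the retrieved differential phase (consistent with, though not necessarily identical to, the theoretical model described above) are

\[
V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad
\sigma_{\varphi} \propto \frac{1}{V\sqrt{N}},
\]

where \(I_{\max}\) and \(I_{\min}\) are the extremes of the phase-stepping curve in a pixel and N the number of detected photons. A drop in V therefore increases DPC image noise, and holding the noise fixed requires a dose that scales roughly as 1/V², which is the sense in which a minimum dose is needed to compensate for coherence loss.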
Effect of object size, position, and detector pixel size on X-ray absorption, differential phase-contrast and dark-field signal
Johannes Wolf, Michael Chabior, Jonathan Sperl, et al.
X-ray phase-contrast and dark-field imaging are two new modalities that have great potential for applications in different fields like medical diagnostics or materials science. The use of grating interferometers allows the detection of both differential phase shift and dark-field signal together with the absorption signal in a single acquisition. We present wave-optical simulations to quantitatively analyze the response of a grating-based X-ray phase-contrast and dark-field imaging setup to variations of the sample relative to the system. Specifically, we investigated changes in the size and the position of the object. Furthermore, we examined the influence of different detector pixel sizes while sample and interferometer remained unchanged. The results of this study contribute to a better understanding of the signal formation and represent a step towards the full characterization of the response of grating interferometry setups to specific sample geometries.
Poster Session: Reconstruction
Pre-computed backprojection based penalized-likelihood (PPL) reconstruction with an edge-preserved regularizer for stationary Digital Breast Tomosynthesis
Shiyu Xu, Christy Redmon Inscoe, Jianping Lu, et al.
Stationary Digital Breast Tomosynthesis (sDBT) is a carbon nanotube based breast imaging device with fast data acquisition and good projection resolution that provides three-dimensional (3-D) volume information. Tomosynthesis 3-D image reconstruction faces the challenges of the cone beam geometry and of incomplete and nonsymmetric sampling due to the sparse views and limited view angle. Among the available reconstruction methods, statistical iterative methods are particularly promising since they rely on an accurate physical and statistical model with prior knowledge. In this paper, we present the application of an edge-preserved regularizer to our previously proposed pre-computed backprojection based penalized-likelihood (PPL) reconstruction. Our experiments show that, by tuning several parameters of the edge-preserved regularizer, resolution can be retained while noise is reduced significantly. Compared to other conventional noise reduction techniques in image reconstruction, less resolution is lost for a given noise reduction, which may benefit research on low-dose tomosynthesis.
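The abstract does not specify the functional form of the edge-preserved regularizer; a common choice that behaves this way is a Huber-type potential, quadratic for small neighbor differences (smoothing noise) and linear for large ones (preserving edges). The sketch below shows such a penalty and its gradient as they might enter a penalized-likelihood update; the parameter names are illustrative assumptions.

```python
import numpy as np

def huber_potential(t, delta):
    """Huber potential: quadratic for |t| <= delta, linear beyond."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= delta, 0.5 * t ** 2,
                    delta * (np.abs(t) - 0.5 * delta))

def huber_derivative(t, delta):
    """Influence function used in the gradient of the penalty term."""
    return np.clip(np.asarray(t, dtype=float), -delta, delta)

def penalty_gradient(image, beta, delta):
    """Gradient of R(x) = beta * sum over first-order neighbor pairs of
    huber(x_j - x_k), accumulated over every axis of the image volume."""
    grad = np.zeros_like(image, dtype=float)
    for axis in range(image.ndim):
        diff = np.diff(image, axis=axis)          # x_{j+1} - x_j along this axis
        d = huber_derivative(diff, delta)
        pad_hi = [(0, 0)] * image.ndim
        pad_hi[axis] = (1, 0)                     # contribution to voxel j+1
        pad_lo = [(0, 0)] * image.ndim
        pad_lo[axis] = (0, 1)                     # contribution to voxel j
        grad += np.pad(d, pad_hi) - np.pad(d, pad_lo)
    return beta * grad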
Digital breast tomosynthesis reconstruction with an adaptive voxel grid
In digital breast tomosynthesis (DBT), volume datasets are typically reconstructed with an anisotropic voxel size, where the in-plane voxel size usually reflects the detector pixel size (e.g., 0.1 mm) and the slice separation is generally between 0.5 and 1.0 mm. Increasing the tomographic angle is expected to give better 3D image quality; however, the slice spacing in the reconstruction should then be reduced, otherwise one risks losing fine-scale image detail (e.g., small microcalcifications). An alternative strategy consists of reconstructing on an adaptive voxel grid, where the voxel height at each location is adapted based on the backprojected data at that location, with the goal of improving image quality for microcalcifications. In this paper we present an approach for generating such an adaptive voxel grid. This approach is based on an initial reconstruction step performed at a finer slice spacing, combined with the selection of an “optimal” height for each voxel. This initial step is followed by a (potentially iterative) reconstruction acting on the adaptive grid only.
List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor
W. J. Ryder, G. I. Angelis, R. Bashar, et al.
List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction with motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor, with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than for the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When comparing the purchase price of a Linux workstation with a Xeon Phi co-processor card against a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.
Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis
Statistical iterative reconstruction is particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, due to the multiple iterations needed for convergence, with each iteration involving forward/back-projections using a complex geometric system model. Optimization transfer (OT) is a general algorithm converting a high-dimensional optimization into parallel 1-D updates. OT-based algorithms provide monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimate of the spectrum of the OT convergence-rate matrix, we propose a successively increasing factor-scaled optimization transfer (OT) algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method such as the separable parabolic surrogate with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations. Each iteration retains a computational cost similar to PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that a total of 40% of computing time is saved by the proposed algorithm. In general, the successively increasing factor-scaled OT shows strong potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.
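The abstract does not give the exact update equations, so the following toy sketch only illustrates the general idea: scale a precomputed-curvature surrogate step by a successively increasing factor and fall back to the plain surrogate step whenever monotonicity would be lost. The quadratic data term, the growth schedule, and all names are assumptions for illustration.

```python
import numpy as np

def scaled_sps_reconstruction(A, y, x0, n_iter=20, growth=1.05, beta=0.0):
    """Toy separable-surrogate style update for 0.5*||A x - y||^2 + 0.5*beta*||x||^2
    with a successively increasing step-scaling factor.

    A: dense system matrix (n_rays x n_voxels), kept simple on purpose."""
    x = x0.copy()
    # Precomputed curvature, as in PC-SPS: c_j = sum_i a_ij * (sum_k a_ik) + beta
    curv = A.T @ A.sum(axis=1) + beta
    factor, prev_cost = 1.0, np.inf
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + beta * x
        step = factor * grad / np.maximum(curv, 1e-12)
        x_new = np.clip(x - step, 0.0, None)       # enforce nonnegativity
        cost = 0.5 * np.sum((A @ x_new - y) ** 2) + 0.5 * beta * np.sum(x_new ** 2)
        if cost <= prev_cost:
            x, prev_cost = x_new, cost
            factor *= growth                        # successively increase the scale
        else:
            factor = 1.0                            # retry with the safe surrogate step
    return x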
Iterative reconstruction of volumetric modulated arc radiotherapy plans using control point basis vectors
Joseph C. Barbiere, Alexander Kapulsky, Alois Ndlovu
Volumetric Modulated Arc Radiotherapy is an innovative technique currently utilized to efficiently deliver complex treatments. Dose rate, speed of rotation, and field shape are continuously varied as the radiation source rotates about the patient. Patient-specific quality assurance is performed to verify that the delivered dose distribution is consistent with the plan formulated in a treatment planning system. The purpose of this work is to present a novel methodology that uses a Gafchromic EBT3 film image of a patient plan in a cylindrical phantom to calculate the delivered MU per control point. Images of two-dimensional plan dose matrices and film scans are analyzed using MATLAB with its imaging toolbox. Dose profiles in a ring corresponding to the film position are extracted from the plan matrices for comparison with the corresponding measured film dose. The plan is made up of a series of individual static control points. If we consider these control points a set of basis vectors, then variations in the plan can be represented as a weighted sum of the basis. The weighting coefficients representing the actual delivered MU can be determined by any available optimization tool, such as downhill simplex or non-linear programming. In essence we reconstruct an image of the delivered dose. Clinical quality assurance is performed with this technique by computing a patient plan with the measured monitor units and standard plan evaluation tools such as Dose Volume Histograms. Testing of the algorithm with known changes in the reference images indicated a correlation coefficient greater than 0.99.
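As the abstract describes, the delivered dose can be treated as a weighted sum of per-control-point basis doses, with the weights (delivered MU) recovered by fitting to the measured film dose. The sketch below uses SciPy's downhill simplex (Nelder-Mead) rather than the authors' MATLAB tooling, and all variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def delivered_mu(basis_doses, measured_dose, planned_mu):
    """Estimate delivered MU per control point.

    basis_doses: (n_control_points, n_pixels) dose along the film ring per unit MU
                 for each static control point (the basis vectors).
    measured_dose: (n_pixels,) film dose extracted along the same ring.
    planned_mu: (n_control_points,) planned MU, used as the starting point."""
    def objective(w):
        residual = basis_doses.T @ w - measured_dose
        return np.sum(residual ** 2)

    # Downhill simplex, one of the optimizers mentioned in the abstract.
    result = minimize(objective, planned_mu, method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 20000})
    return result.x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    basis = rng.random((5, 200))
    true_mu = np.array([10.0, 12.0, 9.0, 11.0, 10.5])
    film = basis.T @ true_mu + rng.normal(0, 0.05, 200)
    print("recovered MU:", delivered_mu(basis, film, np.full(5, 10.0)))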
Investigation of the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection method: a phantom study
Nouf Abuhadi, David Bradley, Dev Katarey, et al.
Introduction: Single-Photon Emission Computed Tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were reconstructed using filtered back projection techniques with no attenuation or scatter corrections. The introduction of 3-D iterative reconstruction algorithms, together with the availability of both computed tomography (CT)-based attenuation correction and scatter correction, may provide more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched. It has been suggested that the combination of CT-based attenuation correction and scatter correction can allow for more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory-induced cardiac motion on SPECT images acquired using higher-resolution algorithms such as 3-D iterative reconstruction with attenuation and scatter corrections has not been investigated. Aims: To investigate the quantitative accuracy of 3D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented on cardiac SPECT/CT imaging with and without CT-based attenuation and scatter corrections; to investigate the effects of respiratory-induced cardiac motion on myocardial perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory-induced motion, and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols. Respiratory-induced cardiac motion was simulated by imaging a cardiac phantom insert while moving it with a respiratory motion motor inducing cyclical elliptical motion of the apex of the cardiac insert. Results: Our analyses revealed that the use of the Flash 3-D reconstruction algorithm without scatter or attenuation correction improved spatial resolution by 30% relative to FBP. The reduction in spatial resolution due to respiratory-induced motion was 12% and 38% for FBP and Flash 3-D, respectively. The implementation of scatter correction resulted in a reduction in resolution of up to 6%. The application of CT-based attenuation correction resulted in 13% and 26% reductions in spatial resolution for SPECT images reconstructed using the FBP and Flash 3-D algorithms, respectively. Conclusion: We conclude that iterative reconstruction (Flash 3-D) provides a significant improvement in image spatial resolution; however, as a result, the effects of respiratory-induced motion become more evident, and correction for this is required before the full potential of these algorithms can be realised for myocardial perfusion imaging. Attenuation and scatter correction can improve image contrast, but may have a significant detrimental effect on spatial resolution.
Poster Session: System Characterization
Focal spot measurements using a digital flat panel detector
Focal spot size is one of the crucial factors that affect the image quality of any x-ray imaging system. It is, therefore, important to measure the focal spot size accurately. In the past, pinhole and slit measurements of x-ray focal spots were obtained using direct exposure film. At present, digital detectors are replacing film in medical imaging so that, although focal spot measurements can be made quickly with such detectors, one must be careful to account for the generally poorer spatial resolution of the detector and the limited usable magnification. For this study, the focal spots of a diagnostic x-ray tube were measured with a 10-μm pinhole using a 194-μm pixel flat panel detector (FPD). The two-dimensional MTF, measured with the Noise Response (NR) method, was used to correct for detector blurring. The resulting focal spot sizes based on the FWTM (Full Width at Tenth Maximum) were compared with those obtained with a very high resolution detector with 8-μm pixels. This study demonstrates the possible effect of detector blurring on focal spot size measurements with digital detectors of poor resolution, and the improvement obtained by deconvolution. Additionally, using the NR method for measuring the two-dimensional MTF, any non-isotropies in detector resolution can be accurately corrected for, enabling routine measurement of non-isotropic x-ray focal spots. This work presents a simple, accurate and quick quality assurance procedure for measurement of both digital detector properties and x-ray focal spot size and distribution in modern x-ray imaging systems.
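A hedged sketch of the blur correction described above: the pinhole image is divided by the detector's 2D MTF in frequency space (with a small regularization term) before the focal-spot width is read off at one tenth of the maximum. How the MTF is obtained (here just passed in as an array) and the magnification handling are simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np

def deconvolve_pinhole(pinhole_image, detector_mtf, eps=0.05):
    """Remove detector blurring from a magnified pinhole image of the focal
    spot by regularized Fourier-domain division.

    detector_mtf: 2D MTF sampled on the image grid with DC at the center."""
    F = np.fft.fft2(pinhole_image)
    H = np.fft.ifftshift(detector_mtf)        # move DC to the FFT corner layout
    corrected = F * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(corrected))

def fwtm(profile, pixel_pitch, magnification):
    """Full width at tenth maximum of a 1D focal-spot profile, referred back
    to the focal-spot plane by dividing out the pinhole enlargement factor."""
    threshold = 0.1 * profile.max()
    above = np.where(profile >= threshold)[0]
    width_pixels = above[-1] - above[0] + 1
    return width_pixels * pixel_pitch / magnification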
Dose reduction in CT with correlated-polarity noise reduction: context-dependent spatial resolution and noise properties demonstrating two-fold dose reduction with minimal artifacts
Correlated-polarity noise reduction (CPNR) is a novel noise reduction technique that uses a statistical approach to reduce noise while maintaining excellent spatial resolution and a traditional noise appearance. It was demonstrated in application to CT imaging for the first time at SPIE 2013 and showed qualitatively excellent image quality at half of the normal CT dose. In the current work, we quantitatively measure the spatial resolution and noise properties of CPNR in CT imaging. To measure the spatial resolution, we developed a metrology approach that is suitable for nonlinear algorithms such as CPNR. We introduce the formalism of the Signal Modification Factor, SMF(u,v), which is the ratio in frequency space of the CPNR-processed image to the noise-free image, averaged over an ensemble of ROIs in a given anatomical context. The SMF is a nonlinear analog of the MTF. We used XCAT computer-generated anthropomorphic phantom images followed by projection-space processing with CPNR. The SMF revealed virtually no effect of CPNR on the spatial resolution of the images (<7% degradation at all frequencies). Corresponding context-dependent NPS measurements generated with CPNR at half dose were approximately equal to the NPS of full-dose images without CPNR. This result demonstrates for the first time the quantitative determination of a two-fold reduction in dose with CPNR with less than 7% reduction in spatial resolution. We conclude that CPNR shows strong promise as a method for reduction of noise (and hence, dose) in CT. CPNR may also be used in combination with iterative reconstruction techniques for further dose reduction, pending further investigation.
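One plausible reading of the SMF definition above is sketched below: take matched ROIs from processed and noise-free images, average their Fourier magnitudes over the ensemble, and form the ratio. Whether the averaging happens before or after the ratio, and the array names, are assumptions for illustration.

```python
import numpy as np

def signal_modification_factor(processed_rois, noisefree_rois, eps=1e-8):
    """SMF(u, v): frequency-space ratio of CPNR-processed ROIs to the
    corresponding noise-free ROIs, averaged over an ROI ensemble.

    processed_rois, noisefree_rois: arrays (n_roi, ny, nx) of ROIs taken at
    matched anatomical locations (the 'anatomical context')."""
    proc = np.fft.fft2(processed_rois, axes=(-2, -1))
    ref = np.fft.fft2(noisefree_rois, axes=(-2, -1))
    # Average magnitudes over the ensemble first, so isolated zero crossings
    # in individual ROIs do not dominate the ratio.
    smf = np.mean(np.abs(proc), axis=0) / (np.mean(np.abs(ref), axis=0) + eps)
    return np.fft.fftshift(smf)   # DC at the center, like a conventional MTF map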
Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study
Yuan Lin, Kingshuk R. Choudhury, H. Page McAdams, et al.
We previously proposed a novel image-based quality assessment technique [1] to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality, disregarding the other qualities. A range slider at the top of the images was used by observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders completely accorded with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
Relative object detectability (ROD): a new metric for comparing x-ray image detector performance for a specified object of interest
Relative object detectability (ROD) quantifies the relative performance of two image detectors for a specified object of interest by taking the following ratio: the integral of the detective quantum efficiency of one detector weighted by the frequency spectrum of the object, divided by that for the second detector. Four different detectors, namely the microangiographic fluoroscope (MAF), the Dexela Model 1207 (Dex) and Hamamatsu Model C10901D-40 (Ham) CMOS x-ray detectors, and a flat-panel detector (FPD), were compared. The ROD was calculated for six pairs of detectors: (1) Dex/FPD, (2) MAF/FPD, (3) Ham/FPD, (4) Dex/Ham, (5) MAF/Ham and (6) MAF/Dex, for wires of 5 mm fixed length, solid spheres ranging in diameter from 50 to 600 microns, and four simulated iodine-filled blood vessels of outer diameters 0.4 and 0.5 mm, each with wall thicknesses of 0.1 and 0.15 mm. Marked variation of the ROD for the wires and spheres is demonstrated as a function of object size for the various detector pairs. The ROD of all other detectors relative to the FPD was much greater than one for small features and approached 1.0 as the diameter increased. The relative detectability of simulated small iodine-filled blood vessels for all detector pairs was seen to be independent of the vessel wall thickness for the same inner diameter. In this study, the ROD is shown to have the potential to be a useful figure of merit for evaluating the relative performance of two detectors for a given imaging task.
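The ROD definition above maps directly onto a discrete sum; the sketch below assumes the two DQEs and the object's Fourier spectrum are available on a common 2D frequency grid, and that the weighting uses the squared magnitude of the object spectrum (that exponent is an assumption, since the abstract only says "weighted by the frequency spectrum of the object").

```python
import numpy as np

def relative_object_detectability(dqe_1, dqe_2, object_spectrum, df):
    """ROD of detector 1 relative to detector 2 for a specified object.

    dqe_1, dqe_2: DQE sampled on the same 2D frequency grid.
    object_spectrum: magnitude of the object's Fourier transform on that grid.
    df: frequency bin area (df_u * df_v) used for the discrete integral."""
    weight = np.abs(object_spectrum) ** 2
    num = np.sum(dqe_1 * weight) * df
    den = np.sum(dqe_2 * weight) * df
    return num / den
```

A ROD much greater than one for small objects, as reported above, corresponds to the weight concentrating at high frequencies where the first detector's DQE exceeds the second's.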
Noise performance of statistical model based iterative reconstruction in clinical CT systems
The statistical model based iterative reconstruction (MBIR) method has been introduced to clinical CT systems. Due to the nonlinearity of this method, the noise characteristics of MBIR are expected to differ from those of filtered backprojection (FBP). This paper reports an experimental characterization of the noise performance of MBIR equipped on several state-of-the-art clinical CT scanners at our institution. The thoracic section of an anthropomorphic phantom was scanned 50 times to generate image ensembles for noise analysis. Noise power spectra (NPS) and noise standard deviation maps were assessed locally at different anatomical locations. It was found that MBIR led to a significant reduction in noise magnitude and an improvement in noise spatial uniformity when compared with FBP. Meanwhile, MBIR shifted the NPS of the reconstructed CT images towards lower frequencies along both the axial and the z frequency axes. This effect was confirmed by a relaxed slice-thickness tradeoff relationship shown in our experimental data. The unique noise characteristics of MBIR imply that extra effort must be made to optimize CT scanning parameters for MBIR to maximize its potential clinical benefits.
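A hedged sketch of the kind of local noise analysis the study relies on: subtract the ensemble mean from each repeated scan, extract an ROI at the anatomical location of interest, and average the squared Fourier magnitudes with the usual 2D NPS normalization. The ROI handling and scaling follow the conventional definition rather than the authors' exact code.

```python
import numpy as np

def local_nps(image_ensemble, roi_slice, pixel_size):
    """2D local noise power spectrum from an ensemble of repeated scans.

    image_ensemble: (n_scans, ny, nx) co-registered repeat images.
    roi_slice: tuple of slices selecting the ROI, e.g. (slice(100,164), slice(200,264)).
    pixel_size: (dy, dx) in mm."""
    mean_image = image_ensemble.mean(axis=0)
    noise = image_ensemble - mean_image                 # noise-only realizations
    rois = noise[(slice(None),) + roi_slice]
    ny, nx = rois.shape[1:]
    dft = np.fft.fft2(rois, axes=(-2, -1))
    nps = np.mean(np.abs(dft) ** 2, axis=0) * (pixel_size[0] * pixel_size[1]) / (ny * nx)
    return np.fft.fftshift(nps)

# The noise standard deviation maps mentioned above follow from the same ensemble:
# sigma_map = image_ensemble.std(axis=0, ddof=1)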
Comparison of deconvolution techniques to measure directional MTF of FDK reconstruction
To measure the spatial resolution of a CT scanner, several methods have been developed using bar patterns, wires and thin plates. While these approaches are effective for measuring the two-dimensional MTF, it is not easy to measure the directional MTF with such phantoms. To overcome these limitations, Thornton et al. proposed a method to measure the directional MTF using sphere phantoms, which is effective only when the cone angle is small. Recently, Baek et al. developed a method to estimate the directional MTF even with a larger cone angle, but the proposed method was analyzed using a noiseless data set. In this work, we present Wiener and Richardson-Lucy deconvolution techniques to estimate the directional MTF, and compare the estimation performance with that of the previous methods (i.e., Thornton’s and Baek’s methods). To estimate the directional MTF, we reconstructed a sphere object centered at (0.01 cm, 0.01 cm, 10.01 cm) using the FDK algorithm, and then calculated plane integrals of the reconstructed sphere object and the ideal sphere object. The plane integrals of the sphere objects were used to estimate the directional MTF using Wiener and Richardson-Lucy deconvolution techniques. The estimated directional MTF was compared with the ideal MTF calculated from a point object, and showed excellent agreement.
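The following sketch shows one way the two deconvolution variants named above could be applied to the plane-integral profiles: a Wiener division in Fourier space, and an iterative Richardson-Lucy estimate whose spectrum is taken as the MTF. The regularization constant, iteration count, and the assumption that both profiles share the same sampling are illustrative choices.

```python
import numpy as np

def directional_mtf_wiener(recon_plane_integrals, ideal_plane_integrals, k=1e-3):
    """Directional MTF from plane integrals of the reconstructed and ideal
    spheres along one direction, via Wiener deconvolution."""
    R = np.fft.fft(recon_plane_integrals)
    I = np.fft.fft(ideal_plane_integrals)
    tf = R * np.conj(I) / (np.abs(I) ** 2 + k)   # Wiener estimate of the transfer function
    mtf = np.abs(tf)
    return mtf / mtf[0]                          # normalize so MTF(0) = 1

def directional_mtf_richardson_lucy(recon, ideal_profile, n_iter=50):
    """Richardson-Lucy alternative: iteratively estimate the response whose
    convolution with the ideal profile reproduces the measurement, then take
    its Fourier magnitude as the directional MTF."""
    est = np.full_like(recon, recon.mean(), dtype=float)
    kernel = ideal_profile / ideal_profile.sum()
    flipped = kernel[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode="same")
        ratio = recon / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, flipped, mode="same")
    mtf = np.abs(np.fft.fft(est))
    return mtf / mtf[0]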
Poster Session: System Reports
A spectral CT technique using balanced K-edge filter set
Yothin Rakvongthai, William Worstell, Georges El Fakhri, et al.
In this work, we propose a novel spectral computed tomography (CT) approach that combines a conventional CT scanner with a Ross spectrometer to obtain quasi-monoenergetic measurements. The Ross spectrometer, which is a generalization of a Ross filter pair, is a set of balanced K-edge filters whose thicknesses are chosen such that the transmitted spectra through any two filters are nearly identical except in the energy band between their respective K-edges. The proposed approach is based on these specially designed filters, which are used to synthesize a set of quasi-monoenergetic sinograms whose reconstruction yields energy-dependent attenuation coefficient (μE) images. In this way, we are able to collect data using conventional CT data acquisition electronics, and then to synthesize spectral CT datasets with highly stable, rate-independent energy bin boundaries. This approach avoids the chromatic distortion due to event pile-up, which can cause difficulties for single-photon spectrometry-based methods. To validate our Ross spectrometer CT concept, we performed phantom studies and acquired data with a balanced filter set consisting of thin foils of silver, tin, cerium, dysprosium and tungsten. For each energy bin, a synthesized quasi-monoenergetic CT image was reconstructed using the filtered back projection (FBP) algorithm operating on the logarithmic ratio of the corresponding energy-resolved intensity and blank sinogram pairs. The reconstructed attenuation coefficients showed satisfactory agreement with NIST reference values of μE for water. The proposed spectral CT technique is potentially feasible and holds promise as a more accurate and cost-effective alternative to single-photon-counting spectral CT techniques.
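A sketch of the synthesis step described above, under the assumption that for each energy bin the two balanced-filter sinograms are differenced (object and blank scans alike) before taking the logarithmic ratio; the exact weighting and sign convention the authors use are not given in the abstract.

```python
import numpy as np

def quasi_mono_sinogram(i_a, i_b, i0_a, i0_b):
    """Synthesize a quasi-monoenergetic sinogram from one balanced K-edge
    filter pair of a Ross spectrometer.

    i_a, i_b: object sinograms measured through the two filters of the pair.
    i0_a, i0_b: matching blank (air) sinograms.
    With balanced thicknesses, the difference isolates photons in the band
    between the two K-edges (which filter is subtracted from which follows
    the pair ordering). Returns line integrals -ln(I_bin / I0_bin) for FBP."""
    i_bin = np.clip(i_a - i_b, 1e-9, None)     # guard against noise-induced negatives
    i0_bin = np.clip(i0_a - i0_b, 1e-9, None)
    return -np.log(i_bin / i0_bin)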
A flat-field correction method for photon-counting-detector-based micro-CT
So E. Park, Jae G. Kim, M. A. A. Hegazy, et al.
As low-dose computed tomography becomes an important issue in clinical x-ray imaging, photon counting detectors have drawn great attention as alternative x-ray image sensors. Even though photon-counting image sensors have several advantages over integration-type sensors, such as low noise and high DQE, they are known to be more sensitive to various experimental conditions such as temperature and electric drift. In particular, a time-varying detector response during the CT scan is troublesome in photon-counting-detector-based CT. To overcome the time-varying behavior of the image sensor during the CT scan, we developed a flat-field correction method together with an automated scanning mechanism. We acquired flat-field images and projection data alternately at every view. When taking the flat-field image, we moved the imaging sample down and away from the field of view with the aid of a computer-controlled linear positioning stage. We then corrected the flat-field effects view by view with the flat-field image taken at the given view. With a CdTe photon-counting image sensor (XRI-UNO, IMATEK), we took CT images of small insects. The CT images reconstructed with the proposed flat-field correction method were much superior to those reconstructed with the conventional flat-field correction method.
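A minimal sketch of the view-wise correction described above, assuming one flat-field frame is interleaved with each projection view; the data layout, dark-frame handling, and names are illustrative assumptions.

```python
import numpy as np

def viewwise_flat_field_correction(projections, flats, dark=None):
    """Correct each projection with the flat-field frame acquired at the same
    view, rather than a single flat taken once before the scan.

    projections, flats: arrays (n_views, ny, nx), acquired alternately.
    dark: optional dark frame (ny, nx)."""
    if dark is None:
        dark = np.zeros(projections.shape[1:])
    num = projections - dark
    den = np.clip(flats - dark, 1e-9, None)
    transmission = np.clip(num / den, 1e-9, None)
    return -np.log(transmission)        # per-view line integrals for reconstruction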
Design of a nested SPECT-CT system with fully suspended CT sub-system for dedicated breast imaging
Jainil P. Shah, Steve D. Mann, Randolph L. McKinley, et al.
A fully suspended, stand-alone cone beam CT system capable of complex trajectories, in addition to a simple circular trajectory, has previously been developed and shown to minimize cone beam sampling insufficiencies and to provide better sampling close to the chest wall for pendant breast CT imaging. A hybrid SPECT-CT system with SPECT capable of complex 3D trajectories has already been implemented and is currently in use. Here, the individual systems are redesigned into one hybrid system in which each component is capable of traversing independent, arbitrary trajectories around a pendant breast and the anterior chest wall in a common field of view. The integration also involves key hardware upgrades: a new high-resolution 40 × 30 cm² flat-panel CT imager with an 8 mm bezel on two sides for closer chest wall access, a new x-ray source, and a unique tilting mechanism to enable the spherical trajectories for CT. A novel method to tilt the CT gantry about a 3D center of rotation is developed and included in the new gantry, while preserving the fully-3D SPECT system nested within the larger CT gantry. The flexibility of the integrated system is illustrated.
Phase contrast portal imaging for image-guided microbeam radiation therapy
Keiji Umetani, Takeshi Kondoh
High-dose synchrotron microbeam radiation therapy is a unique treatment technique used to destroy tumors without severely affecting circumjacent healthy tissue. We applied a phase contrast technique to portal imaging in preclinical microbeam radiation therapy experiments. Phase contrast portal imaging is expected to enable us to obtain higher-resolution X-ray images at therapeutic X-ray energies compared to conventional portal imaging. Frontal view images of a mouse head sample were acquired with propagation-based phase contrast imaging. The phase contrast images depicted edge-enhanced fine structures of the parietal bones surrounding the cerebrum. The phase contrast technique is expected to be effective in bony-landmark-based verification for image-guided radiation therapy.
Rotating and semi-stationary multi-beamline architecture study for cardiac CT imaging
Jiao Wang, Paul Fitzgerald, Hewei Gao, et al.
Over the past decade, there has been abundant research on future cardiac CT architectures and corresponding reconstruction algorithms. Multiple cardiac CT concepts have been published, including third-generation single-source CT with wide-cone coverage, dual-source CT, and electron-beam CT. In this paper, we apply a Radon space analysis method to two multi-beamline architectures: triple-source CT and semi-stationary ring-source CT. In our studies, we have considered more than thirty cardiac CT architectures, and triple-source CT was identified as a promising solution, offering approximately a three-fold advantage in temporal resolution, which can significantly reduce motion artifacts due to the moving heart and lungs. In this work, we describe a triple-source CT architecture with all three beamlines (i.e., source-detector pairs) limited to the cardiac field of view in order to eliminate the radiation dose outside the cardiac region. We also demonstrate the capability of performing full field-of-view imaging when desired, by shifting the detectors. Ring-source dual-rotating-detector CT is another architecture of interest, which offers the opportunity to provide high temporal resolution using a full-ring stationary source. With this semi-stationary architecture, we found that the azimuthal blur effect can be greater than in a fully-rotating CT system. We therefore propose novel scanning modes to reduce the azimuthal blur in ring-source rotating-detector CT. The Radon space analysis method proves to be a useful tool in CT system architecture studies.
Determination of minor and trace elements in kidney stones by x-ray fluorescence analysis
Anjali Srivastava, Brianne J. Heisinger, Vaibhav Sinha, et al.
The determination of the accurate material composition of a kidney stone is crucial for understanding the formation of the stone as well as for preventive therapeutic strategies. Radiation-based instrumental activation analysis techniques are excellent tools for identifying the materials present in a kidney stone. In particular, x-ray fluorescence (XRF) can be very useful for the determination of minor and trace materials in the kidney stone. The X-ray fluorescence measurements were performed at the Radiation Measurements and Spectroscopy Laboratory (RMSL) of the Department of Nuclear Engineering of the Missouri University of Science and Technology, and the kidney stones were acquired from the Mayo Clinic, Rochester, Minnesota. In the present work, experimental studies in conjunction with analytical techniques were used to determine the exact composition of the kidney stone. A new type of experimental set-up was developed and utilized for XRF analysis of the kidney stone. The correlation of applied radiation source intensity, emission of the X-ray spectrum from the elements involved, and absorption coefficient characteristics was analyzed. To verify the experimental results against analytical calculation, several sets of kidney stones were analyzed using the XRF technique. The elements identified with this technique are silver (Ag), arsenic (As), bromine (Br), chromium (Cr), copper (Cu), gallium (Ga), germanium (Ge), molybdenum (Mo), niobium (Nb), rubidium (Rb), selenium (Se), strontium (Sr), yttrium (Y), and zirconium (Zr). This paper presents a new approach for accurate determination of the material composition of kidney stones using the XRF instrumental activation analysis technique.
Workflow for the use of a high-resolution image detector in endovascular interventional procedures
R. Rana, B. Loughran, S. N. Swetadri Vasan, et al.
Endovascular image-guided intervention (EIGI) has become the primary interventional therapy for the most widespread vascular diseases. These procedures involve the insertion of a catheter into the femoral artery, which is then threaded under fluoroscopic guidance to the site of the pathology to be treated. Flat Panel Detectors (FPDs) are normally used for EIGIs; however, once the catheter is guided to the pathological site, high-resolution imaging capabilities can be used for accurately guiding a successful endovascular treatment. The Micro-Angiographic Fluoroscope (MAF) detector provides needed high-resolution, high-sensitivity, and real-time imaging capabilities. An experimental MAF enabled with a Control, Acquisition, Processing, Image Display and Storage (CAPIDS) system was installed and aligned on a detector changer attached to the C-arm of a clinical angiographic unit. The CAPIDS system was developed and implemented using LabVIEW software and provides a user-friendly interface that enables control of several clinical radiographic imaging modes of the MAF including: fluoroscopy, roadmap, radiography, and digital-subtraction-angiography (DSA). Using the automatic controls, the MAF detector can be moved to the deployed position, in front of a standard FPD, whenever higher resolution is needed during angiographic or interventional vascular imaging procedures. To minimize any possible negative impact to image guidance with the two detector systems, it is essential to have a well-designed workflow that enables smooth deployment of the MAF at critical stages of clinical procedures. For the ultimate success of this new imaging capability, a clear understanding of the workflow design is essential. This presentation provides a detailed description and demonstration of such a workflow design.
Poster Session: Tomosynthesis and Multi-energy Imaging
Feasibility of active sandwich detectors for single-shot dual-energy imaging
Seungman Yun, Jong Chul Han, Dong Woon Kim, et al.
We revisit the doubly-layered sandwich detector configuration for single-shot dual-energy x-ray imaging. In order to understand its proper operation, we investigated the contrast-to-noise performance as a function of the x-ray beam setup using Monte Carlo methods. Using a pair of active photodiode arrays coupled to phosphor screens, we built a sandwich detector. For better spectral separation between the projection images obtained from the front and rear detectors during a single x-ray exposure, we inserted a copper sheet between the two detectors. We successfully obtained soft tissue- and bone-enhanced images of a postmortem mouse with the developed sandwich detector using weighted logarithmic subtraction, and the image quality was comparable to that achieved by the conventional kVp-switching technique. Although some problems remain to be addressed for optimal and practical use, for example the scatter effect and image registration, the performance of the sandwich detector for single-shot dual-energy x-ray imaging is promising. We expect that the active sandwich detector will provide motion-artifact-free dual-energy images with reasonable image quality.
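A sketch of the weighted logarithmic subtraction used above to form tissue- and bone-enhanced images; the weighting factor w would in practice be tuned (or derived from effective attenuation coefficients), and the function and argument names are illustrative.

```python
import numpy as np

def weighted_log_subtraction(front_image, rear_image, w, eps=1e-9):
    """Single-shot dual-energy subtraction for a sandwich detector.

    front_image: low-energy image from the front (entrance-side) detector.
    rear_image: high-energy image from the rear detector, hardened by the
                front detector and the copper interlayer.
    w: weighting factor; choosing w to cancel bone yields a soft-tissue image,
       and choosing it to cancel soft tissue yields a bone image."""
    return (np.log(np.clip(rear_image, eps, None))
            - w * np.log(np.clip(front_image, eps, None)))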
Scatter correction method with primary modulator for dual energy digital radiography: a preliminary study
In conventional digital radiography (DR) using a dual-energy subtraction technique, a significant fraction of the detected photons are scattered within the body, resulting in a scatter component. Scattered radiation can significantly deteriorate image quality in diagnostic X-ray imaging systems. Various methods of scatter correction, including both measurement-based and non-measurement-based methods, have been proposed in the past. Both kinds of methods can reduce scatter artifacts in images; however, non-measurement-based methods require a homogeneous object and provide insufficient correction of the scatter component. Therefore, we employed a measurement-based method to correct for the scatter component of inhomogeneous objects in dual-energy DR (DEDR) images. We performed a simulation study using a Monte Carlo simulation with a primary modulator, which is a measurement-based method, for the DEDR system. The primary modulator, which has a checkerboard pattern, was used to modulate the primary radiation. Cylindrical phantoms of variable size were used to quantify imaging performance. For scatter estimation, we used discrete Fourier transform filtering. The primary modulation method was evaluated using a cylindrical phantom in the DEDR system. The scatter components were accurately removed using the primary modulator. When the results acquired with and without scatter correction were compared, the average contrast-to-noise ratio (CNR) with the correction was 1.35 times higher than that obtained without correction, and the average root mean square error (RMSE) with the correction was 38.00% better than that without correction. In the subtraction study, the average CNR with correction was 2.04 (aluminum subtraction) and 1.38 (polymethyl methacrylate (PMMA) subtraction) times higher than that obtained without correction. The analysis demonstrated the accuracy of the scatter correction and the improvement in image quality using a primary modulator, and showed the feasibility of introducing the primary modulation technique into dual-energy subtraction. Therefore, we suggest that the scatter correction method with a primary modulator is useful for the DEDR system.
Assessing and improving cobalt-60 digital tomosynthesis image quality
Image guidance capability is an important feature of modern radiotherapy linacs, and future cobalt-60 units will be expected to have similar capabilities. Imaging with the treatment beam is an appealing option, for reasons of simplicity and cost, but the dose needed to produce cone beam CT (CBCT) images in a Co-60 treatment beam is too high for this modality to be clinically useful. Digital tomosynthesis (DT) offers a quasi-3D image, of sufficient quality to identify bony anatomy or fiducial markers, while delivering a much lower dose than CBCT. A series of experiments were conducted on a prototype Co-60 cone beam imaging system to quantify the resolution, selectivity, geometric accuracy and contrast sensitivity of Co-60 DT. Although the resolution is severely limited by the penumbra cast by the ~2 cm diameter source, it is possible to identify high-contrast objects on the order of 1 mm in width, and bony anatomy in anthropomorphic phantoms is clearly recognizable. Low-contrast sensitivity down to electron density differences of 3% is obtained for uniform features of similar thickness. The conventional shift-and-add reconstruction algorithm was compared to several variants of the Feldkamp-Davis-Kress filtered backprojection algorithm. The Co-60 DT images were obtained with a total dose of 5 to 15 cGy each. We conclude that Co-60 radiotherapy units upgraded for modern conformal therapy could also incorporate imaging using filtered backprojection DT in the treatment beam. DT is a versatile and promising modality that would be well suited to image guidance requirements.
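For reference, a sketch of the conventional shift-and-add reconstruction mentioned above, for a simplified linear-shift tomosynthesis geometry: each projection is translated in proportion to the source displacement and the height of the plane being reconstructed, then the views are averaged. The parallel-shift approximation and the geometry variables are simplifying assumptions, not the prototype's actual geometry.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def shift_and_add(projections, source_positions, plane_height, sid, pixel_size):
    """Reconstruct one plane at height `plane_height` above the detector.

    projections: (n_views, ny, nx) cone beam projections.
    source_positions: lateral source positions (mm) for each view.
    sid: source-to-detector distance (mm); pixel_size in mm.
    Structures in the chosen plane line up after shifting and add coherently;
    structures at other heights are blurred out."""
    n_views = projections.shape[0]
    plane = np.zeros(projections.shape[1:])
    for k in range(n_views):
        # Parallax of a point at plane_height for this lateral source position.
        shift_mm = source_positions[k] * plane_height / (sid - plane_height)
        shift_px = shift_mm / pixel_size
        plane += nd_shift(projections[k], (0.0, shift_px), order=1, mode="nearest")
    return plane / n_views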
2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis
Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motion. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that the iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively and structures within the breast (e.g., blood vessels, a surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
Poster Session: X-ray Imaging
Model predictions for the WAXS signals of healthy and malignant breast duct biopsies
A wide-angle x-ray scatter (WAXS) measurement could potentially be used to determine whether a biopsy of a breast duct is healthy or malignant. A ductal carcinoma in situ (DCIS) occurs when the epithelial cells lining the duct wall start to replicate and invade the duct interior. Since cells are composed mainly of water, a WAXS signal of DCIS could contain a larger component due to water. A model approximates a breast duct biopsy as consisting of connective tissue (c.t.) and cells. For a 2 mm diameter, 3.81 mm thick healthy duct biopsy, the volumes in cubic mm are 11.56 c.t. and 0.41 cells, whereas for DCIS they are 6.64 c.t. and 5.33 cells. The differential linear scattering coefficients (μs) for both types of biopsies were calculated using the sum v_c.t. μs(c.t.) + v_cell μs(cell), where v denotes fractional volume. The cell was assumed to be composed of water, lipids (fat), and other atoms associated with RNA, DNA, proteins, and carbohydrates. The μs(cell) was calculated using the sum 0.771 μs(water) + 0.023 μs(fat) + 0.206 μs(other). The μs of c.t., water, and fat were available from the literature, whereas the independent atomic model approximation was used to calculate values for μs(other). A WAXS model provided predictions of the number of 6 degree scattered photons Ns for 50 kV beams incident on healthy and malignant ducts. The sums of Ns for 31.5 ≤ E ≤ 45 keV were 1402 and 1529 for the healthy and malignant biopsies, respectively. Using Poisson statistics, two Gaussian distributions, and a decision threshold set at their intersection, the false positive and false negative probabilities were 4.7% and 5.0%. This work suggests that DCIS could potentially be diagnosed via energy-dispersive WAXS measurements.
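The quoted error probabilities follow from approximating the two Poisson count distributions by Gaussians with variance equal to the mean and placing the decision threshold at the intersection of the two densities. The sketch below reproduces that calculation with the counts from the abstract (1402 and 1529); the helper names are illustrative.

```python
import math

def misclassification_probabilities(mean_healthy, mean_dcis):
    """Gaussian approximation N(mu, mu) to the two Poisson counts; the decision
    threshold is placed at the intersection of the two densities.
    Returns (false positive, false negative) probabilities."""
    s1, s2 = math.sqrt(mean_healthy), math.sqrt(mean_dcis)
    # Intersection of two Gaussian densities: solve a*t^2 + b*t + c = 0.
    a = 1.0 / (2 * s1 ** 2) - 1.0 / (2 * s2 ** 2)
    b = mean_dcis / s2 ** 2 - mean_healthy / s1 ** 2
    c = (mean_healthy ** 2 / (2 * s1 ** 2) - mean_dcis ** 2 / (2 * s2 ** 2)
         - math.log(s2 / s1))
    roots = [(-b + sgn * math.sqrt(b * b - 4 * a * c)) / (2 * a) for sgn in (+1, -1)]
    t = next(r for r in roots
             if min(mean_healthy, mean_dcis) < r < max(mean_healthy, mean_dcis))
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    false_positive = 1 - phi((t - mean_healthy) / s1)   # healthy called malignant
    false_negative = phi((t - mean_dcis) / s2)          # malignant called healthy
    return false_positive, false_negative

if __name__ == "__main__":
    fp, fn = misclassification_probabilities(1402, 1529)
    print(f"false positive ~ {fp:.1%}, false negative ~ {fn:.1%}")  # ~4.7%, ~5.0%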
X-ray coherent scatter imaging for surgical margin detection: a Monte Carlo study
Manu N. Lakshmanan, Anuj J. Kapadia, Brian P. Harrawood, et al.
Instead of having the entire breast removed (a mastectomy), breast cancer patients often receive breast-conserving surgery (BCS) for removal of only the breast tumor. If post-surgery analysis reveals a missed margin around the tumor tissue excised in the BCS procedure, the physician must often call the patient back for another surgery, which is both difficult and risky for the patient. If this “margin detection” could be performed during the BCS procedure itself, the surgical team could use the analysis to ensure that all tumor tissue was removed in a single surgery, thereby potentially reducing the number of call-backs from breast cancer surgery. We describe here a potential technique to detect surgical tumor margins in breast cancer using x-ray coherent scatter imaging. In this study, we demonstrate the imaging ability of this technique using Monte Carlo simulations.
Limitations of anti-scatter grids when used with high resolution image detectors
Anti-scatter grids are used in fluoroscopic systems to improve image quality by absorbing scattered radiation. A stationary Smit Rontgen X-ray grid (line density: 70 lines/cm, grid ratio: 13:1) was used with a flat panel detector (FPD) with a pixel size of 194 microns and with a high-resolution CMOS detector, the Dexela 1207, with a pixel size of 75 microns. To investigate the effectiveness of the grid, a simulated artery block was placed in a modified uniform frontal head phantom and imaged with both the FPD and the Dexela for an approximately 15 x 15 cm field of view (FOV). The contrast improved for both detectors with the grid. However, the contrast-to-noise ratio (CNR) does not increase as much for the Dexela as it does for the FPD. Since the total noise in a single frame increases substantially for the Dexela compared to the FPD when the grid is used, the CNR is degraded. The increase in quantum noise per frame due to the attenuation of radiation would be similar for both detectors when the grid is used, but the fixed-pattern noise caused by the grid was substantially higher for the Dexela than for the FPD and hence caused a severe reduction in CNR. Without further corrective methods this grid should not be used with high-resolution fluoroscopic detectors, because the CNR does not improve significantly and the visibility of low-contrast details may be reduced. Either an anti-scatter grid of different design or an additional image processing step when using a similar grid would be required to deal with the problem of scatter for high-resolution detectors and the structured noise of the grid pattern.
The beam stop array method to measure object scatter in digital breast tomosynthesis
Haeng-hwa Lee, Ye-seul Kim, Hye-Suk Park, et al.
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One approach to measuring scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the radiation scattered within the object (object scatter), but also contributions from external scatter sources. The components of the external scatter source include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Excluding the background scattered radiation can be applied to different scanner geometries by simple parameter adjustments without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter generated in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to a BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the direction perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicates that the scatter measured by the BSA includes background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter. This method can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective in correcting object scatter.
Scatter reduction for high resolution image detectors with a region of interest attenuator
Compton scatter is the main interaction of x-rays with objects undergoing radiographic and fluoroscopic imaging procedures. Such scatter is responsible for reducing image signal to noise ratio which can negatively impact object detection especially for low contrast objects. To reduce scatter, possible methods are smaller fields-of-view, larger air gaps and the use of an anti-scatter grid. Smaller fields of view may not be acceptable and scanned-beam radiography is not practical for real-time imaging. Air gaps can increase geometric unsharpness and thus degrade image resolution. Deployment of an anti-scatter grid is not well suited for high resolution imagers due to the unavailability of high line density grids needed to prevent grid-line artifacts. However, region of interest (ROI) imaging can be used not only for dose reduction but also for scatter reduction in the ROI. The ROI region receives unattenuated x-rays while the peripheral region receives x-rays reduced in intensity by an ROI attenuator. The scatter within the ROI part of the image originates from both the unattenuated ROI and the attenuated peripheral region. The scatter contribution from the periphery is reduced in intensity because of the reduced primary x-rays in that region and the scatter fraction in the ROI is thus reduced. In this study, the scatter fraction for various kVp’s, air-gaps and field sizes was measured for a uniform head equivalent phantom. The scatter fraction in the ROI was calculated using a derived scatter fraction formula, which was validated with experimental measurements. It is shown that use of a ROI attenuator can be an effective way to reduce both scatter and patient dose while maintaining the superior image quality of high resolution detectors.
Potential use of a single scatter model in breast CBCT applications
C. Laamanen, R. J. LeClair
A model based on singly scattered photons could potentially be used to correct for scatter effects in breast CBCT applications. Consider a simple phantom consisting of a 14 cm diameter, 10.5 cm long cylindrical 50:50 mixture of fibroglandular and fat tissue with 21 cylindrical segments embedded along its central axis. One group of segments was 2 mm in diameter with compositions 0:100, 20:80, 35:65, 50:50, 65:35, 80:20, and 100:0. The remaining two groups had diameters of 5 mm and 10 mm. In order to reduce the computational time required, GEANT4 was used to simulate a scatter profile for a single projection, which was then utilized in generating the large number of unique projections required for CBCT reconstruction. The scatter model was applied in an attempt to correct the cupping artifact caused by x-ray scatter in the reconstructed images. The model assumed a homogeneous 50:50 phantom. The scatter-to-primary ratio (SPR) generated by the model near the phantom center was at most 8% below that simulated by GEANT4. The scatter-corrected images showed an almost complete removal of the cupping artifact. This simple model shows considerable promise for scatter correction, though more research is required to determine its validity in more realistic imaging tasks.