- Front Matter: Volume 7961
- Keynote and Imaging and Health Economics
- X-ray Imaging
- Metrology
- Iterative and Statistical Reconstruction
- Detectors I
- Detectors II
- Breast Imaging
- Tomosynthesis I: Reconstruction
- Tomosynthesis II
- X-ray Imaging: Phase Contrast, Diffraction
- Image Reconstruction
- CT III: Multi-energy
- Novel Systems
- CT IV: Cone Beam
- Dose
- Special Session I: Dose
- Special Session II: Dose
- Poster Session: CT
- Poster Session: Noise and Dose, Measurement and Reduction
- Poster Session: MRI
- Poster Session: PET and SPECT
- Poster Session: X-ray Imaging
- Poster Session: Detectors
- Poster Session: Novel Systems, Other
- Poster Session: Breast Imaging
- Poster Session: Applications
Front Matter: Volume 7961
This PDF file contains the front matter associated with SPIE Proceedings Volume 7961, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Keynote and Imaging and Health Economics
Lateral organic photodetectors for imaging applications
Organic semiconductor detectors have long attracted research interest because of their low fabrication cost. Vertical organic detectors have been studied in the past, but comparatively little work has been done on lateral organic detectors. The lateral design has the advantage over the vertical design that it is easy to fabricate and can be readily integrated with the backplane TFT imager circuit. Integrating an organic photodetector with a TFT imager can improve the overall sensitivity of the imager. However, the lateral design limits the fill factor.
In this work we propose a new bilayered lateral organic photodetector with copper phthalocyanine (CuPc) as the top layer and perylene tetracarboxylic bis-benzimidazole (PTCBI) as the bottom organic layer. The bottom organic semiconductor layer works as both the charge-transport layer and the photon-absorption layer. The heterojunction between the top and bottom layers provides a potential gradient sufficient to separate the photogenerated excitons into electrons and holes. Incident photons are absorbed in the two active layers, generating excitons. These excitons encounter a potential barrier at the CuPc-PTCBI heterojunction and are separated into holes and electrons. The separated electrons are swept out by the externally applied electric field, giving an increase in photocurrent.
Lateral organic photodetectors are simple to design and have low dark current. The photoresponse of these photodetectors is observed to be approximately three orders of magnitude higher than their dark response. The dual-layer design has the advantage that devices can be tuned for different absorption wavelengths, and these devices were observed to be more stable than vertical devices.
Design and optimization of a dedicated cone-beam CT system for musculoskeletal extremities imaging
The design, initial imaging performance, and model-based optimization of a dedicated cone-beam CT (CBCT) scanner
for musculoskeletal extremities are presented. The system offers a compact scanner that complements conventional CT
and MR by providing sub-mm isotropic spatial resolution, the ability to image weight-bearing extremities, and the
capability for integrated real-time fluoroscopy and digital radiography. The scanner employs a flat-panel detector and a
fixed anode x-ray source and has a field of view of ~ (20x20x20) cm3. The gantry allows a "standing" configuration for
imaging of weight-bearing lower extremities and a "sitting" configuration for imaging of upper extremities and unloaded
lower extremities. Cascaded systems analysis guided the selection of x-ray technique (e.g., kVp, filtration, and dose) and
system design (e.g., magnification factor), yielding input-quantum-limited performance at detector signal of 100 times
the electronic noise, while maintaining patient dose below 5 mGy (a factor of ~2-3 less than conventional CT). A
magnification of 1.3 optimized tradeoffs between source and detector blur for a 0.5 mm focal spot. A custom antiscatter
grid demonstrated significant reduction of artifacts without loss of contrast-to-noise ratio or increase in dose. Image
quality in cadaveric specimens was assessed on a CBCT bench, demonstrating exquisite bone detail, visualization of
intra-articular morphology, and soft-tissue visibility approaching that of diagnostic CT. The capability to image loaded
extremities and conduct multi-modality CBCT/fluoroscopy with improved workflow compared to whole-body CT could
be of value in a broad spectrum of applications, including orthopaedics, rheumatology, surgical planning, and treatment
assessment. A clinical prototype has been constructed for deployment in pilot study trials.
X-ray Imaging
A laser-driven undulator x-ray source: simulation of image formation and dose deposition in mammography
Since overcoming some of the inherent limitations of x-ray tubes becomes increasingly harder, it is important to consider new ways of x-ray generation and to study their applications in the field of medical imaging. In the present work we investigate a novel table-top-sized x-ray source, developed in a joint project within the Cluster of Excellence "Munich Center for Advanced Photonics". It uses laser-accelerated electrons emitting x-ray radiation in a short period undulator. This source has the potential to deliver tunable x-rays with a very narrow spectral bandwidth. The main purpose of this contribution is to investigate the performance of this source in the field of mammography and to compare it to that of conventional x-ray tubes. We simulated the whole imaging process from the electron beam dynamics through the generation of the synchrotron radiation in the undulator up to the x-ray-matter interaction and detection in the mammographic setting. A Monte Carlo simulation of the absorption and scattering processes based on the Geant4 software toolkit has been developed that uses a high-resolution voxel phantom of the female breast for the accurate simulation of mammography. We present simulated mammograms generated by using quasi-monochromatic undulator radiation and by using the polychromatic spectrum of a conventional x-ray tube.
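The attenuation-and-detection step of such a simulation can be illustrated with a simple polychromatic Beer-Lambert projection through a voxelized phantom. This is only a sketch of the idea (no scatter, ideal detector); the function name and data layout are hypothetical, not taken from the paper's Geant4 code:

```python
import numpy as np

def project_polychromatic(phantom_mu, spectrum, dz):
    """Illustrative ray projection through a voxel phantom.

    phantom_mu: dict mapping energy (keV) -> array of shape
                (n_rays, n_voxels) of linear attenuation
                coefficients (1/cm) along each ray.
    spectrum:   dict mapping energy (keV) -> relative fluence,
                e.g. a single entry for quasi-monochromatic
                undulator radiation, many entries for a tube.
    dz:         voxel size along the ray (cm).
    Returns detected intensity per ray (Beer-Lambert, no scatter).
    """
    detected = None
    for energy, weight in spectrum.items():
        mu = phantom_mu[energy]
        line_integral = mu.sum(axis=1) * dz      # sum of mu*dz per ray
        intensity = weight * np.exp(-line_integral)
        detected = intensity if detected is None else detected + intensity
    return detected
```

A quasi-monochromatic source corresponds to a spectrum dictionary with one dominant entry, which is what makes the comparison against a polychromatic tube spectrum straightforward in this framework.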
The case for single-exposure angiography using energy-resolving photon-counting detectors: a theoretical comparison of signal and noise with conventional subtraction angiography
The use of energy-resolving photon-counting (EPC) x-ray detectors creates an opportunity for material-specific x-ray imaging. An exciting potential application of this technique is energy-resolved angiography (ERA), in which the injected-iodine signal is determined from an analysis of x-ray energies in a single exposure rather than by subtracting a mask image from a post-contrast-injection image, as in conventional digital subtraction angiography (DSA). We explore the possibility of single-exposure angiography using EPC detectors by theoretically calculating the signal-difference-to-noise ratio (SDNR) per root entrance exposure (X) in an iodine-specific image that could be determined using spectral methods, and comparing this with the corresponding SDNR using DSA for the same x-ray exposure. We found that angiography using ideal EPC x-ray detectors can separate iodine-filled cavities of variable concentrations from a water-only background in a single x-ray exposure. For high iodine concentrations, the best SDNR for DSA is approximately 1.5 times the best SDNR for ERA, and at low concentrations this factor reduces to 1.3. This difference of approximately 1.3 to 1.5 times is surprisingly small. While DSA in general provides better image quality for the same x-ray exposure, DSA has been unsuccessful in coronary applications because of motion-related image artifacts. X-ray imaging using ideal energy-resolving photon-counting detectors has the potential to provide DSA-like angiographic images of iodinated vasculature in a single x-ray exposure, thereby eliminating the motion-related image artifacts that limit the use of DSA in cardiac applications. ERA may therefore be useful for background removal in situations where DSA cannot be used, such as cardiac imaging.
Electron field emission Particle-In-Cell (PIC) coupled with MCNPX simulation of a CNT-based flat-panel x-ray source
A novel x-ray source based on carbon nanotube (CNT) field emitters is being developed as an alternative for medical
imaging diagnostic technologies. The design is based on an array of millions of micro-sized x-ray sources, similar to the
way pixels are arranged in flat panel displays. The trajectory and focusing characteristics of the field emitted electrons,
as well as the x-ray generation characteristics of each one of the proposed micro-sized x-ray tubes are simulated. The
electron field emission is simulated using the OOPIC PRO particle-in-cell code. The x-ray generation is analyzed with
the MCNPX Monte Carlo code. MCNPX is used both to optimize the bremsstrahlung radiation energy spectra and to
verify the angular distribution for 0.25-12 μm thick molybdenum, rhodium, and tungsten targets. Also, different
extracting, accelerating and focusing voltages, as well as different focusing structures and geometries of the micro cells
are simulated using the OOPIC Pro particle-in-cell code. The electron trajectories, beam spot sizes, I-V curves,
bremsstrahlung radiation energy spectra, and angular distribution are all analyzed for a given cell. The simulation results
show that micro x-ray cells can be used to generate suitable electron currents using CNT field emitters and strike a thin
tungsten target to produce an adequate bremsstrahlung spectrum. The shape and trajectory of the electron beam were
modified using focusing structures in the microcell. Further modifications to the electron beam are possible and can help
design a better x-ray transmission source.
The effects of compensator design on scatter distribution and magnitude: a Monte Carlo study
X-ray scatter has a significant impact on image quality in kV cone-beam CT (CBCT); its effects include CT number inaccuracy, streak and cupping artifacts, and loss of contrast. Compensators provide a method for not only decreasing the magnitude of the scatter distribution, but also reducing the structure found in the scatter distribution. Recent Monte Carlo (MC) simulations examining x-ray scatter in CBCT projection images have shown that the scatter distribution in x-ray imaging contains structure largely induced by coherent scattering. In order to maximize the reduction of x-ray scatter induced artifacts, a decrease in the magnitude and structure of the scatter distribution is sought through optimal compensator design. A flexible MC model that allows for separation of scattered and primary photons has been created to simulate the CBCT imaging process. The CBCT MC model is used to investigate the effectiveness of compensators in decreasing the magnitude and structure of the scatter distribution in CBCT projection images. The influence of the compensator designs on the scatter distribution is evaluated for different anatomy (abdomen, pelvis, and head and neck) and viewing angles using a voxelized anthropomorphic phantom. The effect of compensator material composition on the amount of contamination photons in an open field is also investigated.
Correlated-polarity noise reduction: feasibility of a new statistical approach to reduce image noise
Reduction of image noise is an important goal in producing the highest quality medical images. A very important
benefit of reducing image noise is the ability to reduce patient exposure while maintaining adequate image quality.
Various methods have been described in the literature for reducing image noise by means of image processing, both
deterministic and statistical. Deterministic methods tend to degrade image resolution or lead to artifacts or non-uniform
noise texture that does not look "natural" to the observer. Statistical methods, including Bayesian estimation, have been
successfully applied to image processing, but may require more time-consuming steps of computing priors.
The approach described in this paper uses a new statistical method we have developed in our laboratory to reduce image
noise. This approach, Correlated-Polarity Noise Reduction (CPNR), makes an estimate of the polarity of noise at a
given pixel, and then subtracts a random value from a normal distribution having a sign that matches the estimated
polarity of the noise in the pixel. For example, if the noise is estimated to be positive in a given pixel, then a random
number that is also positive will be subtracted from that pixel.
The CPNR method reduces the noise in an image by about 20% per iteration, with little negative impact on image
resolution, few artifacts, and final image noise characteristics that appear "normal." Examples of the feasibility of this
approach are presented in application to radiography and CT, but it also has potential utility in tomosynthesis and
fluoroscopy.
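A minimal numerical sketch of one CPNR pass, assuming a 3×3 local mean as the polarity estimator and a user-supplied noise sigma (the published method's polarity estimator and amplitude model may differ):

```python
import numpy as np

def local_mean3(img):
    """3x3 local mean with edge padding, used as a crude noise-free estimate."""
    padded = np.pad(img, 1, mode="edge")
    acc = np.zeros_like(img, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / 9.0

def cpnr_iteration(image, noise_sigma, rng):
    """One CPNR-style pass (sketch): estimate the polarity of the noise in
    each pixel as the sign of (pixel - local mean), then subtract a random
    half-normal draw carrying the matching sign."""
    polarity = np.sign(image - local_mean3(image))
    draw = np.abs(rng.normal(0.0, noise_sigma, size=image.shape))
    return image - polarity * draw
```

When the polarity estimate is right most of the time, the signed subtraction cancels part of the noise while leaving the local mean, and hence resolution, largely untouched.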
Optimization of the grid frequencies and angles in digital radiography imaging
To reduce the grid artifacts caused by using an antiscatter grid when acquiring x-ray digital images, we analyze the grid artifacts based on a multiplicative image formation model instead of the traditional additive model, and apply filters to suppress the artifact terms. The artifact terms are aliases of the modulated terms arising from the harmonics of the grid frequency; hence, several filters are required to suppress the grid artifacts efficiently. However, applying filters also distorts the original image that is to be recovered. If the distance between the origin and the center frequency of an artifact term is relatively large, then we can suppress that artifact term while distorting the original image less. In this paper, by increasing these distances for a given sampling frequency of the image detector, we design antiscatter grids that are effective in terms of grid artifact reduction. To design optimal grids, we formulate min-max optimization problems and provide optimal grid frequencies for a fixed grid angle with respect to the sampling direction of the image detector, and optimal grid angles for a fixed grid frequency. We then propose using rotated grids with the optimal grid angles in digital radiography imaging. By applying band-rejection filters to the artifact terms, we can considerably reduce the grid artifacts compared to the traditional non-rotated grid case for real x-ray images.
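The band-rejection step can be illustrated as a frequency-domain notch placed at the (possibly rotated) grid frequency. This toy filter handles a single conjugate pair of artifact peaks, whereas the designs in the paper must also cover the aliases of the grid harmonics:

```python
import numpy as np

def notch_reject(image, grid_freq, angle_deg, width):
    """Gaussian band-rejection (notch) filter at the grid frequency.

    Frequencies are in cycles/pixel; the notch is placed at plus/minus the
    frequency vector implied by grid_freq and angle_deg (sketch only)."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    theta = np.deg2rad(angle_deg)
    gx, gy = grid_freq * np.cos(theta), grid_freq * np.sin(theta)
    # reject both conjugate-symmetric artifact peaks
    d1 = (fx - gx) ** 2 + (fy - gy) ** 2
    d2 = (fx + gx) ** 2 + (fy + gy) ** 2
    h = (1 - np.exp(-d1 / (2 * width ** 2))) * (1 - np.exp(-d2 / (2 * width ** 2)))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * h))
```

Rotating the grid moves the artifact peaks farther from the origin (and from the image's low-frequency content), which is exactly why a larger origin-to-peak distance lets the notch remove the artifact with less distortion of the underlying image.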
Metrology
A novel method to measure the zero-frequency DQE of a non-linear imaging system
A new method of measuring the zero-frequency value of the detective quantum efficiency (DQE) of x-ray detectors is described. The method is unique in that it uses what we call a "simulated neutral-attenuator" method to determine the system gain derived from image-based measurements of x-ray transmission through a thin copper foil of known thickness. Since this method uses only low-contrast image structure, it is a true measure of the "small-signal" system gain which is assumed piece-wise linear. A theoretical expression is derived for the linearized pixel value which is the pixel value that a linear system would have for the test conditions. Combining this with the measured detector exposure and zero-frequency value of the Wiener noise power spectrum provides the DQE. It is shown this method gives a DQE value that is in agreement with conventional test methods on a linear flat-panel detector, and that the same DQE value is obtained when using both raw (linear) and processed (non-linear) images.
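A sketch of how a small-signal gain estimated from a known-transmission foil could feed a zero-frequency DQE estimate. The exact linearization procedure of the paper may differ; these helper functions and their arguments are illustrative and only require mutually consistent units:

```python
def small_signal_gain(d_open, d_foil, transmission, exposure_open):
    """Small-signal gain from mean pixel values with and without a thin Cu
    foil of known x-ray transmission (sketch of the 'simulated
    neutral-attenuator' idea: the foil provides a known low-contrast
    exposure difference (1 - T) * X)."""
    return (d_open - d_foil) / ((1.0 - transmission) * exposure_open)

def dqe_zero(gain, exposure, photons_per_exposure_per_area, nps_zero):
    """DQE(0) = d_lin^2 / (q * NPS(0)), where d_lin = gain * exposure is
    the pixel value an equivalent linear system would produce and q is the
    incident photon fluence."""
    d_lin = gain * exposure
    q = photons_per_exposure_per_area * exposure
    return d_lin ** 2 / (q * nps_zero)
```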
Use of sphere phantoms to measure the 3D MTF of FDK reconstructions
To assess the resolution performance of modern CT scanners, a method to measure the 3D MTF is needed.
Computationally, a point object is an ideal test phantom but is difficult to apply experimentally. Recently, Thornton et al.
described a method to measure the directional MTF using a sphere phantom. We tested this method for FDK
reconstructions by simulating a sphere and a point object centered at (0.01, 0.01, 0.01) cm and at (0.01, 0.01, 10.01) cm,
and compared the directional MTF estimated from the reconstructed sphere with that measured from an ideal
point object. While the estimated MTF from the sphere centered at (0.01, 0.01, 0.01) cm showed excellent
agreement with that from the point object, the estimated MTF from the sphere centered at (0.01, 0.01, 10.01) cm
had significant errors, especially along the fz axis. We found that this is caused by the long tails of the impulse response
of the FDK reconstruction far off the central plane. We developed and tested a new method to estimate the directional
MTF using the sphere data. The new method showed excellent agreement with the MTF from an ideal point object.
Caution should be used when applying the original method in cases where the impulse response may be wide.
3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters
The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography
(CT) images. To evaluate the noise properties of clinical multidetector (MDCT) scanners, local 2D and 3D NPSs were
computed for different acquisition and reconstruction parameters.
A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as pitch, reconstruction filter (soft, standard, and bone), and reconstruction algorithm (filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. 2D and 3D NPSs were then computed.
In axial acquisition mode, the 2D axial NPS showed a large magnitude variation along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed, while the magnitude of the NPS remained constant. Strong effects of the reconstruction filter, pitch, and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements.
The noise properties of volumes measured on latest-generation MDCTs were studied using a local 3D NPS metric. However, the impact of noise non-stationarity may need further investigation.
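The core of a local NPS measurement can be sketched as an ensemble average of mean-subtracted ROI power spectra (2D case shown; a 3D NPS replaces fft2 with fftn over a volume stack):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    """Ensemble-averaged 2D noise power spectrum from square noise ROIs:
    NPS = <|DFT(ROI - mean)|^2> * (dx * dy) / (Nx * Ny)."""
    rois = np.asarray(noise_rois, dtype=float)     # shape (n_rois, ny, nx)
    n_rois, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(rois)) ** 2       # fft2 acts on last two axes
    return spectra.mean(axis=0) * (pixel_size ** 2) / (nx * ny)
```

For white noise of variance σ² the spectrum is flat with value σ²·dx·dy, a convenient sanity check on the normalization.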
NPS comparison of anatomical noise characteristics in mammography, tomosynthesis, and breast CT images using power law metrics
Digital mammography is the current standard for breast cancer screening, however breast tomosynthesis and breast CT
(bCT) have been studied in clinical trials. At our institution, 30 women (BIRADS 4 and 5) underwent IRB-approved
imaging by mammography, breast tomosynthesis, and bCT on the same day. Twenty three data sets were used for
analysis. The 2D noise power spectrum (NPS) was computed and averaged for each data set. The NPS was computed
for different slice thicknesses of dx × N, where dx ≈ 0.3 mm and N = 1-64, on the bCT data. Each 2D NPS was radially averaged, and the 1D data were fit using a power-law function as proposed by Burgess: NPS(f) = αf^(−β). The value of β was determined over a range of frequencies corresponding to anatomical noise, for each patient and each modality.
Averaged over the analyzed cases (26 for bCT, 28 for tomosynthesis, 28 for mammography), β = 3.06 (0.25) for mammography,
β = 2.91 (0.35) for CC tomosynthesis, and β = 1.72 (0.47) for axial bCT. For sagittal bCT, β = 1.77 (0.36), and for
coronal bCT, β=1.88 (0.45). The computation of β versus slice thickness on the coronal bCT data set led to β≈1.7 for
N=1, asymptotically reaching β ≈ 3 for larger slice thickness. These results suggest that there is a fundamental
difference in breast anatomic noise as characterized by β, between thin slices (<2 mm) and thicker slices. Tomosynthesis
was found to have anatomic noise properties closer to mammography than breast CT, most likely due to the relatively
thick slice sensitivity profile of tomosynthesis.
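Estimating β reduces to a straight-line fit in log-log coordinates over the anatomical-noise frequency band; a minimal sketch:

```python
import numpy as np

def fit_power_law(freqs, nps_1d):
    """Fit NPS(f) = alpha * f**(-beta) by linear regression in log-log
    space: log NPS = log(alpha) - beta * log(f)."""
    slope, intercept = np.polyfit(np.log(freqs), np.log(nps_1d), 1)
    return np.exp(intercept), -slope   # (alpha, beta)
```

In practice the fit is restricted to the low-to-mid frequency range where anatomical (rather than quantum or electronic) noise dominates, as the abstract describes.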
Imaging properties of the magnification factor in digital mammography by the generalized MTF (GMTF)
Our aim in this study was to examine the resolution effects of breast thickness in the magnification technique by evaluating
the generalized modulation transfer function (GMTF), including the effects of focal spot, effective pixel size, and scatter.
The PMMAs ranging from 10 to 40 mm in thickness were placed on a standard supporting platform that was positioned
to achieve magnification factors ranging from 1.2 to 2.0. As the magnification increased, the focal spot MTF degraded
while the detector MTF improved. A small focal spot resulted in an improvement of the GMTF due to the smaller effective
pixel size under magnification. In contrast, a large focal spot resulted in significant degradation of the GMTF because
focal-spot blurring dominates. The resolution with a small focal spot improved slightly with increasing PMMA thickness
for magnification factors less than 1.8. System resolution decreased with increasing PMMA thickness for magnification
factors greater than 1.8, since focal-spot blur begins to dominate spatial resolution. In particular, breast thickness had a
large effect on the resolution at lower frequencies as a low frequency drop effect. Hence, the effect of compressed breast
thickness should be considered for the standard magnification factor of 1.8 that is most commonly used in clinical
practice. Our results should provide insights for determining optimum magnification in clinical application of digital
mammography, and our approaches can be extended to a wide diversity of radiological imaging systems.
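The magnification trade-off described above can be sketched with textbook component models (Gaussian focal spot, sinc pixel aperture; scatter and the low-frequency drop omitted). Focal-spot blur referred to the object plane scales as (m−1)/m, while detector blur scales as 1/m:

```python
import numpy as np

def gmtf(f_obj, m, focal_spot_mm, pixel_mm):
    """Geometric (scatter-free) part of a generalized MTF at the object
    plane for magnification m. Component models are illustrative only:
    a Gaussian focal spot of the given FWHM and a sinc pixel aperture."""
    # focal spot: Gaussian MTF, frequency scaled by (m - 1) / m
    sigma = focal_spot_mm / 2.355                    # FWHM -> std dev
    mtf_focal = np.exp(-2.0 * (np.pi * sigma * f_obj * (m - 1.0) / m) ** 2)
    # detector: pixel-aperture sinc, referred back to the object plane (1/m)
    mtf_det = np.abs(np.sinc(pixel_mm * f_obj / m))
    return mtf_focal * mtf_det
```

Sweeping m for a given focal spot and pixel size reproduces the qualitative behavior in the abstract: with a small focal spot, larger m helps (smaller effective pixel), while with a large focal spot the focal-spot term collapses the combined MTF.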
Iterative and Statistical Reconstruction
Predictive models for observer performance in CT: applications in protocol optimization
The relationship between theoretical descriptions of imaging performance (Fourier-based) and the
performance of real human observers was investigated for detection tasks in multi-slice CT. The detectability
index for the Fisher-Hotelling model observer and non-prewhitening model observer (with and without
internal noise and eye filter) was computed using: 1) the measured modulation transfer function (MTF) and
noise-power spectrum (NPS) for CT; and 2) a Fourier description of imaging task. Based upon CT images of
human patients with added simulated lesions, human observer performance was assessed via an observer
study in terms of the area under the ROC curve (Az). The degree to which the detectability index correlated
with human observer performance was investigated and results for the non-prewhitening model observer with
internal noise and eye filter (NPWE) were found to agree best with human performance over a broad range of
imaging conditions. Results provided initial validation that CT image acquisition and reconstruction
parameters can be optimized for observer performance rather than system performance (i.e., contrast-to-noise
ratio, MTF, and NPS). The NPWE model was further applied to compare FBP with a novel model-based
iterative reconstruction algorithm and to assess its potential for dose reduction.
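A 1D radial sketch of the NPWE-style detectability index from measured MTF, NPS, and task function W(f). Internal noise is omitted and the full model integrates over 2D frequency space, so this is illustrative only:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule (avoids NumPy-version naming differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def detectability_npwe(f, task_w, mtf, nps, eye=None):
    """NPWE detectability index:
        d'^2 = [int W^2 MTF^2 E^2 df]^2 / [int W^2 MTF^2 E^4 NPS df]
    With eye=None, E = 1 and this reduces to the plain NPW observer."""
    e = np.ones_like(f) if eye is None else eye
    num = _trapz(task_w ** 2 * mtf ** 2 * e ** 2, f) ** 2
    den = _trapz(task_w ** 2 * mtf ** 2 * e ** 4 * nps, f)
    return np.sqrt(num / den)
```

Feeding in MTF and NPS curves measured for different kVp, dose, or reconstruction settings then ranks protocols by predicted task performance rather than by contrast-to-noise ratio alone.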
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging.
However, medical doctors hesitate to accept this new technology because the visual impression of IRT images
differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the
mean and standard deviation of a homogeneous region in the image, do not provide a sufficient characterization
of the noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments
of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art
clinical scanners (GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both.
High- and low-dose scans (the latter at 10% of the high dose) were acquired on each scanner, and L-moments of noise
patches were calculated for comparison.
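Sample L-moments can be computed from probability-weighted moments of the order statistics; a minimal sketch (λ1 characterizes location, λ2 scale, λ3 skewness, and λ4 kurtosis, more robustly than conventional central moments for non-Gaussian noise):

```python
import numpy as np

def l_moments(sample):
    """First four sample L-moments via unbiased probability-weighted
    moments b_r of the sorted sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) * x) / (n * (n - 1))
    b2 = np.sum((j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4
```

Comparing L-moments of low-dose IRT noise patches against those of full-dose FBP patches exposes the non-Gaussian departures that mean and standard deviation alone would miss.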
Adaptive iterative reconstruction
It is well known that, in CT reconstruction, Maximum A Posteriori (MAP) reconstruction based on a Poisson noise
model can be well approximated by Penalized Weighted Least Square (PWLS) minimization based on a data dependent
Gaussian noise model. We study minimization of the PWLS objective function using the Gradient Descent (GD) method,
and show that if an exact inverse of the forward projector exists, the PWLS GD update equation can be translated into an
update equation which entirely operates in the image domain. In case of non-linear regularization and arbitrary noise
model this means that a non-linear image filter must exist which solves the optimization problem. In the general case of
non-linear regularization and arbitrary noise model, the analytical computation is not trivial and might lead to image
filters which are computationally very expensive. We introduce a new iteration scheme in image space, based on a
regularization filter with an anisotropic noise model. Basically, this approximates the statistical data weighting and
regularization in PWLS reconstruction. If needed, e.g., to compensate for the non-exactness of the backprojector, the
image-based regularization loop can be preceded by a raw-data-based loop without regularization and statistical data
weighting. We call this combined iterative reconstruction scheme Adaptive Iterative Reconstruction (AIR). It will be
shown that in terms of low-contrast visibility, sharpness-to-noise and contrast-to-noise ratio, PWLS and AIR
reconstruction are similar to a high degree of accuracy. In clinical images the noise texture of AIR is also superior to the
more artificial texture of PWLS.
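The PWLS gradient-descent starting point can be sketched on a small dense problem, with a simple quadratic (Tikhonov) regularizer standing in for the paper's non-linear one:

```python
import numpy as np

def pwls_gd(A, y, weights, beta, n_iter=50, step=None):
    """Gradient descent on the PWLS objective
        Phi(x) = 0.5 * (Ax - y)' W (Ax - y) + 0.5 * beta * ||x||^2,
    where W = diag(weights) models the data statistics. Quadratic
    regularizer used here only for illustration."""
    m, n = A.shape
    x = np.zeros(n)
    H = A.T @ np.diag(weights) @ A + beta * np.eye(n)  # Hessian (quadratic)
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2)              # safe step < 2/L
    for _ in range(n_iter):
        grad = A.T @ (weights * (A @ x - y)) + beta * x
        x -= step * grad
    return x
```

The paper's observation is that when an exact inverse of the forward projector exists, this data-domain update can be mapped into a purely image-domain filtering iteration, which AIR then approximates with an anisotropic regularization filter.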
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An
accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution
image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge
storage requirement. Here we present a method to address this problem by using sparse matrix factorization and parallel
computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram
blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models
the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring
matrix is to compensate for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The
geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing
the difference between the factored system matrix and the original system matrix. The resulting factored system matrix
has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage
and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs,
which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce
the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image
reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
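Applying the factored system matrix is then just a chain of sparse products; a sketch with SciPy sparse matrices (random factors stand in for the estimated blur and geometry matrices):

```python
import numpy as np
from scipy import sparse

def factored_project(B_sino, G, B_img, x):
    """Forward projection with the factored system matrix
    P ~= B_sino @ G @ B_img: apply three sparse factors in sequence
    instead of one large matrix, cutting storage and FLOPs."""
    return B_sino @ (G @ (B_img @ x))

def factored_backproject(B_sino, G, B_img, y):
    """Matched back projection: the transpose of the factored chain."""
    return B_img.T @ (G.T @ (B_sino.T @ y))
```

Because each factor is sparse and small, the three products map naturally onto GPU kernels that would not fit the full unfactored system matrix in device memory.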
Precision of iodine quantification in hepatic CT: effects of reconstruction (FBP and MBIR) and imaging parameters
In hepatic CT imaging, the lesion enhancement after the injection of contrast media is of quantitative
interest. However, the precision of this quantitative measurement may depend on the imaging
technique, such as dose and reconstruction algorithm. To determine the impact of different techniques,
we scanned an iodinated liver phantom with acquisition protocols of different dose levels, and
reconstructed images with different algorithms (FBP and MBIR) and slice thicknesses. The contrast of
lesions was quantified from the images, and its precision was calculated for each protocol separately.
Results showed that precision was improved by increasing dose, increasing slice thickness, and using
MBIR reconstruction. When using MBIR instead of FBP, the same precision can be achieved at 50% less
dose. To our knowledge, this is the first investigation of the quantification precision in hepatic CT
imaging using iterative reconstructions.
An iterative dual energy CT reconstruction method for a K-edge contrast material
M. Depypere, J. Nuyts, N. van Gastel, et al.
We present and evaluate an iterative dual energy CT reconstruction algorithm for a K-edge contrast material in
microCT imaging. This allows improved discrimination of contrast enhanced structures such as vasculature from
surrounding bony structures. The energy dependence of the attenuation is modeled by decomposing the linear
attenuation coefficient into three basis functions. Any material without a K-edge in the imaging energy range can
be modeled by two basis functions describing the Compton scatter and the photoelectric effect respectively. A K-edge
material is described by using its mass attenuation coefficient as third basis function. During reconstruction
the basis function coefficients are determined for each voxel of the image by maximizing the likelihood of the data.
The relative weights of the Compton and photoelectric components are constrained to those of expected materials
to reduce the number of unknowns to two per voxel. The proposed method is validated on simulated and real
microCT projections. The presented method was found to perform better than a typical post-reconstruction
approach with respect to beam-hardening and noise at the expense of increased computation time.
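The three-basis decomposition of μ(E) can be sketched per voxel as a linear least-squares fit. The basis shapes here are simplified stand-ins (a constant for Compton scatter, E⁻³ for the photoelectric effect), and the paper instead fits the projection data by maximum likelihood with constrained Compton/photoelectric weights:

```python
import numpy as np

def decompose_attenuation(energies_kev, mu_measured, kedge_mass_atten):
    """Least-squares fit of mu(E) = a*f_C(E) + b*f_PE(E) + c*kedge(E),
    with textbook basis approximations: Compton roughly energy-independent
    over the imaging range, photoelectric ~ 1/E^3, and the K-edge material
    represented by its tabulated mass attenuation curve."""
    E = np.asarray(energies_kev, dtype=float)
    basis = np.column_stack([
        np.ones_like(E),      # Compton component (approx. constant)
        E ** -3,              # photoelectric component
        kedge_mass_atten,     # K-edge contrast material
    ])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(mu_measured, float), rcond=None)
    return coeffs             # (a, b, c); c maps to contrast concentration
```

The K-edge basis is what lets the fit separate contrast agent from bone: only the third column carries the characteristic discontinuity at the K-edge energy.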
Detectors I
Novel synthesis of large area ZnTe:O films for high-resolution imaging applications
Oxygen-doped zinc telluride is a bright scintillator with one of the highest x-ray conversion efficiencies. These
properties make it an ideal choice for a wide range of x-ray imaging applications in biology and medicine. With an
emission wavelength of 680 nm, it is ideally suited for use with silicon imagers such as CCDs. In this paper we report a
new co-evaporation process in which the oxygen dopant concentration in the evaporated film is controlled by simultaneous
evaporation of zinc oxide and zinc telluride charges. To date we have fabricated films as large as 40 cm² in area and
measuring 50 μm to 500 μm in thickness. The fabrication and characterization details of these and other films are discussed in this
paper.
12-inch-wafer-scale CMOS active-pixel sensor for digital mammography
This paper describes the development of an active-pixel sensor (APS) panel, which has a field-of-view of 23.1×17.1 cm
and features 70-μm-sized pixels arranged in a 3300×2442 array format, for digital mammographic applications. The
APS panel was realized on 12-inch wafers based on the standard complementary metal-oxide-semiconductor (CMOS)
technology without physical tiling processes of several small-area sensor arrays. Electrical performance of the developed
panel is described in terms of dark current, full-well capacity and leakage current map. For mammographic imaging, the
optimized CsI:Tl scintillator is experimentally determined by combining it with the developed panel and analyzing
imaging characteristics, such as the modulation-transfer function, noise-power spectrum, detective quantum efficiency,
image lag, and contrast-detail analysis using the CDMAM 3.4 phantom. With these results, we suggest that the developed
CMOS-based detector can be used for conventional and advanced digital mammographic applications.
Noise performance limits of advanced x-ray imagers employing poly-Si-based active pixel architectures
Show abstract
A decade after the clinical introduction of active matrix, flat-panel imagers (AMFPIs), the performance of this
technology continues to be limited by the relatively large additive electronic noise of these systems, resulting in
significant loss of detective quantum efficiency (DQE) under conditions of low exposure or high spatial frequencies. An
increasingly promising approach for overcoming such limitations involves the incorporation of in-pixel amplification
circuits, referred to as active pixel (AP) architectures, based on low-temperature polycrystalline silicon (poly-Si) thin-film
transistors (TFTs). In this study, a methodology for theoretically examining the limiting noise and DQE
performance of circuits employing 1-stage in-pixel amplification is presented. This methodology involves sophisticated
SPICE circuit simulations along with cascaded systems modeling. In these simulations, a device model based on the RPI
poly-Si TFT model is used with additional controlled current sources corresponding to thermal and flicker (1/f) noise.
From measurements of transfer and output characteristics (as well as current noise densities) performed upon individual,
representative poly-Si TFT test devices, model parameters suitable for these simulations are extracted. The input
stimuli and operating-point-dependent scaling of the current sources are derived from the measured current noise
densities (for flicker noise), or from fundamental equations (for thermal noise). Noise parameters obtained from the
simulations, along with other parametric information, are input to a cascaded systems model of an AP imager design to
provide estimates of DQE performance. In this paper, this method of combining circuit simulations and cascaded
systems analysis to predict the lower limits on additive noise (and upper limits on DQE) for large area AP imagers with
signal levels representative of those generated at fluoroscopic exposures is described, and initial results are reported.
Characterization and comparison of lateral amorphous semiconductors with embedded Frisch grid detectors on 0.18 μm CMOS processed substrate for medical imaging applications
Show abstract
An indirect digital x-ray detector is designed, fabricated, and tested. The detector integrates a high speed, low noise
CMOS substrate with two types of amorphous semiconductors on the circuit surface. Using a laterally oriented layout,
a-Si:H or a-Se can be used to coat the CMOS circuit and provide a high-speed photoresponse to complement the high-speed
circuits possible in CMOS technology. The circuit also aims to reduce the effect of slow carriers by integrating a Frisch-style
grid into the photoconductive layer to screen out the slow carriers. Simulations show a uniform photoresponse for
photons absorbed on the top layer and an enhanced response when using a Frisch grid. EQE and noise results are
presented. Finally, possible applications and improvements to the area of indirect x-ray imaging that are capable of easily
being implemented on the substrate are suggested.
Low-noise thin-film transistor array for digital x-ray imaging detectors
Show abstract
A novel detector structure is being investigated to minimize the readout noise of large area
TFT arrays. A conventional TFT panel consists of orthogonal arrays of gate lines and data lines. The
parasitic capacitance from the crossover of these lines results in a sizable data line capacitance. During
image readout, the thermal noise of the charge integrator is greatly magnified by the ratio of the data line
capacitance to the feedback capacitance of the charge amplifier. The swinging of the gate voltage also
injects charge into and out of the charge-holding pixel capacitors and contributes switching noise
to the readout image. By redesigning the layout of the TFT arrays and by coupling a linear light source to
the bottom side of the TFT array in the same direction as the gate lines, the crossover of gate lines and
data lines can be avoided and the data line capacitance can be greatly reduced. Instead of addressing
each row of transistors by switching the gate control voltage, linear light sources with collimators are
used to optically switch on and off the amorphous silicon transistors. The transistor switching noise from
the swinging of the gate voltages is reduced. By minimizing the data line capacitance and avoiding the
swinging of the gate control voltage, the basic TFT readout noise is minimized and lower-dose x-ray
images can be obtained. This design is applicable to both direct-conversion and
indirect-conversion panels.
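The noise-gain argument in this abstract can be put in numbers. The capacitance values below are hypothetical, chosen only to show the scale of the effect, not taken from the paper:

```python
# The charge integrator's input-referred thermal noise is amplified roughly
# by the ratio of total input (data line) capacitance to feedback capacitance.
def integrator_noise_gain(c_data_pF, c_fb_pF):
    """Approximate noise gain 1 + C_data / C_fb of a charge integrator."""
    return 1.0 + c_data_pF / c_fb_pF

# Conventional panel: long data lines with many gate-line crossovers.
conventional = integrator_noise_gain(c_data_pF=100.0, c_fb_pF=1.0)
# Crossover-free, optically gated layout: greatly reduced data-line capacitance.
optically_gated = integrator_noise_gain(c_data_pF=10.0, c_fb_pF=1.0)
print(conventional, optically_gated)   # 101.0 11.0
```

Under these assumed values, eliminating the crossovers reduces the amplifier noise gain by roughly an order of magnitude, which is the mechanism the abstract exploits.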
Detectors II
Performance characterization of a silicon strip detector for spectral computed tomography utilizing a laser testing system
Show abstract
A new silicon strip detector with sub-millimeter pixel size operated in single photon-counting mode has been
developed for use in spectral computed tomography (CT). An ultra-fast application-specific integrated circuit
(ASIC), specially designed for fast photon-counting applications, is used to process the pulses and sort them into
eight energy bins. This report characterizes the ASIC and detector in terms of thermal noise (0.77 keV RMS),
energy resolution when electron-hole pairs are generated in the detector diode (1.5 keV RMS), and the Poissonian
count rate at which count-rate linearity and energy resolution are retained (200 Mcps·mm⁻²).
The performance of the photon-counting detector has been tested using a picosecond pulsed laser system to
inject energy into the detector, simulating x-ray interactions. The laser testing results indicate a good energy-discriminating
capability of the detector, which assigns pulses to progressively higher energy bins as the intensity
of the laser pulses is increased.
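The eight-bin sorting performed by the ASIC's comparators can be mimicked in software. The thresholds and synthetic pulse energies below are arbitrary examples, not the detector's actual settings:

```python
import numpy as np

# Synthetic pulse heights, in keV, standing in for processed detector pulses.
rng = np.random.default_rng(0)
pulse_energies = rng.uniform(5.0, 120.0, size=10_000)

# Nine comparator thresholds define eight contiguous energy bins.
edges = np.linspace(10.0, 115.0, 9)
counts, _ = np.histogram(pulse_energies, bins=edges)
print(counts)   # eight per-bin photon counts
```

Pulses below the lowest threshold (here, anything under 10 keV) are rejected, which is also how a counting ASIC suppresses electronic noise events.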
Quantum-counting CT in the regime of count-rate paralysis: introduction of the pile-up trigger method
Show abstract
The application of quantum-counting detectors in clinical Computed Tomography (CT) is challenged by extreme
X-ray fluxes provided by modern high-power X-ray tubes. Scanning of small objects or sub-optimal patient
positioning may lead to situations where those fluxes impinge on the detector without attenuation. Even in
operation modes optimized for high-rate applications, with small pixels and high bias voltage, CdTe/CdZnTe
detectors deliver pulses in the range of several nanoseconds. This can result in severe pulse pile-up causing
detector paralysis and ambiguous detector signals. To overcome this problem we introduce the pile-up trigger,
a novel method that provides unambiguous detector signals in rate regimes where classical rising-edge counters
run into count-rate paralysis. We present detailed CT image simulations assuming ideal sensor material not
suffering from polarization effects at high X-ray fluxes. This way we demonstrate the general feasibility of the
pile-up trigger method and quantify resulting imaging properties such as contrasts, image noise and dual-energy
performance in the high-flux regime of clinical CT devices.
6-Li enriched Cs2LiYCl6:Ce based thermal neutron detector coupled with CMOS solid-state photomultipliers for a portable detector unit
Chad Whitney,
Christopher Stapels,
Erik Johnson,
et al.
Show abstract
For neutron detection, 3-He tubes provide high sensitivity and a unique capability for discriminating
neutron signals from background gamma-ray signals. A solid-state scintillation-based detector provides an alternative to
3-He for neutron detection. A real-time, portable, and low cost thermal neutron detector has been constructed from a
6Li-enriched Cs2LiYCl6:Ce (CLYC) scintillator crystal coupled with a CMOS solid-state photomultiplier (SSPM).
These components are fully integrated with a miniaturized multi-channel analyzer (MCA) unit for calculation and
readout of the counts and count rates.
CLYC crystals and several other elpasolites including Cs2LiLaCl6:Ce (CLLC) and Cs2LiLaBr6:Ce (CLLB) have
been considered for their unique properties in detecting neutrons and discriminating gamma ray events along with
providing excellent energy resolution comparable to NaI(Tl) scintillators. CLYC's slower rise and decay times for
neutrons (70 ns and 900 ns, respectively) relative to the faster rise and decay times for gamma-ray events (6 ns and 55 ns,
respectively) allow for pulse shape discrimination in mixed radiation fields.
Light emissions from CLYC crystals are detected using an array of avalanche photodiodes referred to as solid-state
photomultipliers. SSPMs are binary photon counting devices where the number of pixels activated is directly
proportional to the light output of the CLYC scintillator which is proportional to the energy deposited from the radiation
field. SSPMs can be fabricated using standard CMOS processes and inherently offer the low-noise performance
associated with ordinary photomultiplier tubes (PMTs) while providing a lightweight and compact solution for portable neutron
detectors.
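The decay-time contrast quoted above is what makes charge-comparison pulse-shape discrimination work. A minimal sketch on idealized single-exponential pulses (real CLYC pulses have multiple decay components; the tail-integration window here is an arbitrary choice):

```python
import numpy as np

t = np.arange(0.0, 3000.0, 2.0)   # ns, sampling grid

def pulse(decay_ns):
    """Idealized unit-amplitude scintillation pulse with a single decay constant."""
    return np.exp(-t / decay_ns)

def psd_ratio(p, tail_start_ns=200.0):
    """Charge-comparison PSD figure: tail charge divided by total charge."""
    return p[t >= tail_start_ns].sum() / p.sum()

gamma_psd = psd_ratio(pulse(55.0))     # gamma-like decay (abstract: 55 ns)
neutron_psd = psd_ratio(pulse(900.0))  # neutron-like decay (abstract: 900 ns)
print(gamma_psd, neutron_psd)
```

The slow neutron pulse leaves most of its charge in the tail while the fast gamma pulse does not, so a single cut on the PSD ratio separates the two populations.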
Integration of an amorphous silicon passive pixel sensor array with a lateral amorphous selenium detector for large area indirect conversion x-ray imaging applications
Show abstract
Previously, we reported on a single-pixel detector based on a lateral a-Se metal-semiconductor-metal structure, intended
for indirect conversion X-ray imaging. This work continues that effort toward the first prototype of an indirect
conversion X-ray imaging sensor array utilizing lateral amorphous selenium. To replace the structurally sophisticated
vertical multilayer amorphous silicon photodiode, a lateral a-Se MSM photodetector is employed, which can be easily
integrated with an amorphous silicon thin film transistor passive pixel sensor array. In this work, both 2×2 macro-pixel
and 32×32 micro-pixel arrays were fabricated and tested, and the results are discussed.
Simulation of one-dimensionally polarized x-ray semiconductor detectors
Show abstract
A pixelated X-ray semiconductor detector (a "direct converter") is studied which contains an inhomogeneous electric
field parallel to the depth axis caused by different concentrations of ionized dopants. X-ray energy depositions and
charge movements within the detector are modeled in Monte-Carlo simulations giving access to a statistical analysis of
electron drift times and current pulse widths for various degrees of static polarization. Charges induced on the pixel
electrodes and pulse heights are evaluated and compiled into histograms of spectral detector responses and pulse height spectra,
respectively, considering energy measurements before and after electronic pulse shaping. For n-doped semiconductors,
the detector performance degrades due to pulse broadening. In contrast, a moderate p-doping can improve the detector
performance in terms of shorter electron pulses, as long as the detector is not limited by dynamic polarization.
Electrical interface characteristics (I-V), optical time of flight measurements, and the x-ray (20 keV) signal response of amorphous-selenium/crystalline-silicon heterojunction structures
David M. Hunter,
Chu An Ho,
George Belev,
et al.
Show abstract
We have investigated the dark current, optical TOF (time of flight) properties, and the X-ray response of amorphous
selenium (a-Se)/crystalline-silicon (c-Si) heterostructures for application in digital radiography. The structures have been
studied to determine if an x-ray generated electron signal, created in an a-Se layer, could be directly transferred to a c-Si
based readout device such as a back-thinned CCD (charge coupled device). A simple first order band-theory of the structure
indicates that x-ray generated electrons should transfer from the
a-Se to the c-Si, while hole transfer from p-doped c-Si to
the a-Se should be blocked, permitting a low dark signal as required. The structures we have tested have a thin metal bias
electrode on the x-ray facing side of the a-Se which is deposited on the c-Si substrate. The heterostructures made with
pure a-Se deposited on epitaxial p-doped (5×10¹⁴ cm⁻³) c-Si exhibited a very low dark current of 15 pA cm⁻² at a negative
bias field of 10 V μm⁻¹ applied to the a-Se. The optical TOF measurements show that the applied bias
drops almost entirely across the a-Se layer and that the a-Se hole and electron mobilities are within the range of commonly
accepted values. The x-ray signal measurements demonstrate the structure has the expected x-ray quantum efficiency. We
have made a back-thinned CCD coated with a-Se and although most areas of the device show a poor x-ray response, it does
contain small regions which do work properly with the expected x-ray sensitivity. Improved understanding of the a-Se/c-Si
interface and preparation methods should lead to properly functioning devices.
Breast Imaging
Comparison of 3D and 2D breast density estimation from synthetic ultrasound tomography images and digital mammograms of anthropomorphic software breast phantoms
Show abstract
Breast density descriptors were estimated from ultrasound tomography (UST) and digital mammogram (DM) images of
46 anthropomorphic software breast phantoms. Each phantom simulated a 450 ml or 700 ml breast with volumetric
percent density (PD) values between 10% and 50%. The UST based volumetric breast density (VBD) estimates were
calculated by thresholding the reconstructed UST images. Percent density (PD) values from DM images were estimated
interactively by a clinical breast radiologist using Cumulus software. The obtained UST VBD and Cumulus PD
estimates were compared with the ground truth VBD values available from phantoms. The UST VBD values showed a
high correlation with the ground truth, as evidenced by the Pearson correlation coefficient of r=0.93. The Cumulus PD
values also showed a high correlation with the ground truth (r=0.84), as well as with the UST VBD values (r=0.78).
The consistency in measuring the UST VBD and Cumulus PD values was analyzed using the standard error of the
estimate from linear regression (σE). The σE value for Cumulus PD was 1.5 times higher than that for the UST VBD
(6.54 vs. 4.21). The σE calculated from two repeated Cumulus estimation sessions (σE=4.66) was comparable with the
UST. Potential sources of the observed errors in density measurement are the use of global thresholding and (for
Cumulus) the human observer variability. This preliminary study of simulated phantom UST images showed promise
for non-invasive estimation of breast density.
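The global-thresholding step used for the UST VBD estimates can be sketched on a toy volume. The voxel values, threshold, and "dense half" construction below are invented for illustration only:

```python
import numpy as np

# Synthetic reconstructed volume: fatty background around 1.45 (arbitrary
# units), with one half of the volume shifted up to mimic dense tissue.
rng = np.random.default_rng(1)
volume = rng.normal(1.45, 0.01, size=(32, 32, 32))
volume[:16, :, :] += 0.1   # "dense" half

# Volumetric breast density by a single global threshold.
dense = volume > 1.50
vbd = 100.0 * dense.sum() / volume.size
print(round(vbd, 1))   # ~50.0 by construction
```

This also illustrates the limitation the abstract names: a single global threshold misclassifies voxels whenever the two tissue distributions overlap, which is one source of the observed estimation error.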
The effect of characteristic x-rays on the spatial and spectral resolution of a CZT-based detector for breast CT
Show abstract
In an effort to improve the early stage detection and diagnosis of breast cancer, a number of research groups have been
investigating the use of x-ray computerized tomography (CT) systems dedicated for use in imaging the breast.
Preliminary results suggest that dedicated breast CT systems can provide improved visualization of 3D breast tissue as
compared to conventional mammography. However, current breast CT prototypes that are being investigated have
limitations resulting in less-than-desirable spatial resolution, lesion contrast, and signal-to-noise ratio (SNR). Another
option is a CT breast imaging system that uses a cadmium zinc telluride (CZT) based detector operating in a photon
counting mode. This paper uses a Monte Carlo simulation to evaluate the effect of characteristic x-rays on spatial and
spectral resolution for a CZT detector used for breast CT. It is concluded that using CZT of 500-750 μm would not cause
significant differences in spatial or spectral resolution, nor in stopping power as compared to using CZT with thickness
2-3 mm.
Analysis of multilayer and single layer X-ray detectors for contrast-enhanced mammography using imaging task
Show abstract
A multilayer (single-shot) detector has previously been proposed for contrast-enhanced mammography. The
multilayer detector has the benefit of avoiding motion artifacts due to simultaneous acquisition of both high
and low energy images. A single layer (dual-shot) detector has the benefit of better control over the energy
separation since the incident beams can be produced and filtered separately. In this paper the performance of
the multilayer detector is compared to that of a single layer detector using an ideal observer detectability index
which is determined from an extended cascaded systems model and a defined imaging task. The detectors are
assumed to have amorphous selenium direct conversion layers, however the same theoretical techniques used here
may be applied to other types of integrating detectors. The anatomical noise caused by variation of glandularity
within the breast is known to dominate the noise power spectrum at low frequencies due to its inverse power
law dependence and is thus taken into account in our model to provide an accurate estimate of the detectability
index. The conditions leading to the optimal detectability index, such as tube voltage, filtration, and weight
factor are reported for both detector designs.
Optimization of mammography with respect to anatomical noise
Show abstract
Beam quality optimization in mammography traditionally considers detection of a target obscured by quantum
noise on a homogeneous background. It can be argued that this scheme does not correspond well to the
clinical imaging task because real mammographic images contain a complex superposition of anatomical structures,
resulting in anatomical noise that may dominate over quantum noise. Using a newly developed spectral
mammography system, we measured the correlation and magnitude of the anatomical noise in a set of mammograms.
The results from these measurements were used as input to an observer-model optimization that included
quantum noise as well as anatomical noise. We found that, within this framework, the detectability of tumors
and microcalcifications behaved very differently with respect to beam quality and dose. The results for small
microcalcifications were similar to what traditional optimization methods would yield, which is to be expected
since quantum noise dominates over anatomical noise at high spatial frequencies. For larger tumors, however,
low-frequency anatomical noise was the limiting factor. Because anatomical structure has a similar energy dependence
to tumor contrast, the optimal x-ray energy was significantly higher and the useful energy region wider than
traditional methods suggest. Measurements on a tissue phantom confirmed these theoretical results. Furthermore,
since quantum noise constitutes only a small fraction of the noise, the dose could be reduced substantially
without sacrificing tumor detectability. Exposure settings used clinically are therefore not necessarily optimal
for this imaging task. The impact of these findings on the mammographic imaging task as a whole is, however,
at this stage unclear.
Issues in characterizing anatomic structure in digital breast tomosynthesis
Show abstract
Normal mammographic backgrounds have power spectra that can be described using a power law
P(f) = c/f^β, where β ranges from 1.5 to 4.5. Anatomic noise can be the dominant noise source
in a radiograph. Many researchers are characterizing anatomic noise by β, which can be measured
from an image. We investigated the effect of sampling distance, offset, and region of interest (ROI)
size on β. We calculated β for tomosynthesis projection view and reconstructed images, and we
found that ROI size affects the value of β. We evaluated four different square ROI sizes (1.28, 2.56,
3.2, and 5.12 cm), and we found that the larger ROI sizes yielded larger β values in the projection
images.
The β values change rapidly across a single projection view; however, despite the variation
across the breast, different sampling schemes (which include a variety of sampling distances and
offsets) produced average β values with less than 5% variation. The particular location and number
of samples used to calculate β does not matter as long as the whole image is covered, but the size
of the ROI must be chosen carefully.
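Measuring β from an image reduces to a straight-line fit of the power spectrum in log-log coordinates. A minimal sketch on a noiseless synthetic spectrum (real measurements would first radially average the 2D spectrum of an ROI):

```python
import numpy as np

# Synthetic power-law spectrum P(f) = c / f**beta with beta = 3 (within the
# 1.5-4.5 range quoted for mammographic backgrounds).
f = np.linspace(0.05, 5.0, 200)   # spatial frequency, cycles/mm
beta_true, c = 3.0, 2.0
P = c / f ** beta_true

# log P = log c - beta * log f, so beta is minus the log-log slope.
slope, intercept = np.polyfit(np.log(f), np.log(P), 1)
beta_est = -slope
print(round(beta_est, 2))   # 3.0
```

On real ROIs the fit is noisy and, as the abstract reports, the estimated β depends on the ROI size, so the fitting window must be chosen carefully.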
Evaluation of photon-counting spectral breast tomosynthesis
Show abstract
We have designed a mammography system that for the first time combines photon-counting spectral imaging with
tomosynthesis. The present study is a comprehensive physical evaluation of the system; tomosynthesis, spectral
imaging, and the combination of both are compared using an ideal-observer model that takes anatomical noise
into account. Predictions of signal and noise transfer through the system are verified by contrast measurements
on a tissue phantom and 3D measurements of MTF and NPS. Clinical images acquired with the system are
discussed in view of the model predictions.
Tomosynthesis I: Reconstruction
Tomosynthesis imaging with 2D scanning trajectories
Show abstract
Tomosynthesis imaging in chest radiography provides volumetric information with the potential for improved diagnostic
value when compared to the standard AP or LAT projections. In this paper we explore the image quality benefits of 2D
scanning trajectories when coupled with advanced image reconstruction approaches. It is intuitively clear that 2D
trajectories provide projection data that is more complete in terms of Radon space filling, when compared with
conventional tomosynthesis using a linearly scanned source. Incorporating this additional information for obtaining
improved image quality is, however, not a straightforward problem. The typical tomosynthesis reconstruction algorithms
are based on direct inversion methods, e.g., filtered backprojection (FBP), or iterative algorithms that are variants of the
Algebraic Reconstruction Technique (ART). The FBP approach is fast and provides high frequency details in the image
but at the same time introduces streaking artifacts degrading the image quality. The iterative methods can reduce the
image artifacts by using image priors but suffer from a slow convergence rate, thereby producing images lacking high
frequency details. In this paper we propose using a fast converging optimal gradient iterative scheme that has advantages
of both the FBP and iterative methods in that it produces images with high frequency details while reducing the image
artifacts. We show that using favorable 2D scanning trajectories along with the proposed reconstruction method has the
advantage of providing improved depth information for structures such as the spine and potentially producing images
with more isotropic resolution.
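The "optimal gradient" scheme referred to above is in the family of Nesterov-accelerated gradient methods. A generic sketch on a toy least-squares problem, where a random matrix merely stands in for the tomosynthesis projection operator (this is not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))   # stand-in for the projection operator
x_true = rng.normal(size=20)
b = A @ x_true                  # noiseless "projection" data

# Constants for f(x) = ||Ax - b||^2: gradient Lipschitz and strong convexity.
s = np.linalg.svd(A, compute_uv=False)
L, mu = 2 * s[0] ** 2, 2 * s[-1] ** 2
momentum = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)

x = x_prev = np.zeros(20)
for _ in range(1000):
    y = x + momentum * (x - x_prev)           # Nesterov extrapolation step
    x_prev = x
    x = y - (2.0 / L) * (A.T @ (A @ y - b))   # gradient step of size 1/L
print(float(np.linalg.norm(x - x_true)))      # ~0: exact recovery, noiseless data
```

The momentum term is what gives the accelerated linear convergence rate; plain gradient descent on the same problem converges markedly more slowly, which is the trade-off the abstract describes between FBP speed and iterative-method quality.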
Dynamic reconstruction and rendering of 3D tomosynthesis images
Johnny Kuo,
Peter A. Ringer,
Steven G. Fallows,
et al.
Show abstract
Dynamic Reconstruction and Rendering (DRR) is a fast and flexible tomosynthesis image reconstruction and display
implementation. By leveraging the computational efficiency gains afforded by off-the-shelf GPU hardware,
tomosynthesis reconstruction can be performed on demand at real-time, user-interactive frame rates. Dynamic
multiplanar reconstructions allow the user to adjust reconstruction and display parameters interactively, including axial
sampling, slice location, plane tilt, magnification, and filter selection. Reconstruction on-demand allows tomosynthesis
images to be viewed as true three-dimensional data rather than just a stack of two-dimensional images. The speed and
dynamic rendering capabilities of DRR can improve diagnostic accuracy and lead to more efficient clinical workflows.
Adaptive diffusion regularization for enhancement of microcalcifications in digital breast tomosynthesis (DBT) reconstruction
Show abstract
Digital breast tomosynthesis (DBT) has been shown to increase mass detection. Detection of microcalcifications in DBT
is challenging because of the small, subtle signals to be searched for in the large breast volume and the noise in the
reconstructed volume. We developed an adaptive diffusion (AD) regularization method that can differentially regularize
noise and potential signal regions during reconstruction based on local contrast-to-noise ratio (CNR) information. This
method adaptively applies different degrees of regularity to signal and noise regions, as guided by a CNR map for each
DBT slice within the image volume, such that potential signals will be preserved while noise is suppressed. DBT scans
of an American College of Radiology phantom and the breast of a subject with biopsy-proven calcifications were
acquired with a GE prototype DBT system at 21 angles in 3° increments over a ±30° range. Simultaneous algebraic
reconstruction technique (SART) was used for DBT reconstruction. The AD regularization method was compared to the
non-convex total p-variation (TpV) method and SART with no regularization (NR) in terms of the CNR and the full
width at half maximum (FWHM) of the central gray-level line profile in the focal plane of a calcification. The results
demonstrated that the SART regularized by the AD method enhanced the CNR and preserved the sharpness of
microcalcifications compared to reconstruction without regularization. The AD regularization was superior to the TpV
method for subtle microcalcifications in terms of the CNR while the FWHM was comparable. The AD regularized
reconstruction has the potential to improve the CNR of microcalcifications in DBT for human or machine detection.
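The SART backbone used above can be written compactly. A minimal unregularized sketch on a 3-pixel toy system (matching the paper's "NR" baseline; the AD regularization step, applied between iterations, is omitted):

```python
import numpy as np

# Toy system matrix A (3 rays x 3 pixels) and projections b of a known image.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = A @ np.array([1.0, 2.0, 3.0])

# SART: backproject the row-normalized residual, normalized by column sums.
x = np.zeros(3)
row_sums = A.sum(axis=1)
col_sums = A.sum(axis=0)
for _ in range(200):
    residual = (b - A @ x) / row_sums
    x = x + (A.T @ residual) / col_sums
print(np.round(x, 3))   # [1. 2. 3.]
```

In DBT the same update runs with the (much larger, sparse) cone-beam system matrix, and a regularizer such as the AD method of this paper is interleaved between SART passes.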
Comparison of model-observer and human-observer performance for breast tomosynthesis: effect of reconstruction and acquisition parameters
Show abstract
The problem of optimizing the acquisition and reconstruction parameters for breast-cancer detection with digital breast
tomosynthesis (DBT) is becoming increasingly important due to the potential of DBT for clinical screening. Ideally, one
wants a set of parameters suitable for both microcalcification (MC) and mass detection that specifies the lowest possible
radiation dose to the patient. Attacking this multiparametric optimization problem using human-observer studies (which
are the gold standard) would be very expensive. On the other hand, there are numerous limitations to using existing
mathematical model observers as replacements. Our aim is to develop a model observer that can reliably mimic human
observers at clinically realistic DBT detection tasks. In this paper, we present a novel visual-search (VS) model observer
for MC detection and localization. Validation of this observer against human data was carried out in a study with
simulated DBT test images. Radiation dose was a study parameter, with tested acquisition levels of 0.7, 1.0 and 1.5
mGy. All test images were reconstructed with a penalized-maximum-likelihood reconstruction method. Good agreement
at all three dose levels was obtained between the VS and human observers. We believe that this new model observer has
the potential to take the field of image-quality research in a new direction with a number of practical clinical
ramifications.
A second pass correction method for calcification artifacts in digital breast tomosynthesis
Show abstract
Digital breast tomosynthesis (DBT) allows a quasi-3D reconstruction of the breast with high in-plane and poor
depth resolution by the principles of limited angle tomography. The limited angular range and the coarse
angular sampling result in prominent streak artifacts arising from high-contrast structures such as calcifications.
These artifacts do not only degrade the image quality but also hold the risk of overlaying suspicious tissue
structure in neighbouring slices, which might therefore be overlooked. This work presents a second pass method
for correcting these kinds of high-contrast streak artifacts. In a first pass reconstruction the candidate high-contrast
calcifications are segmented and subtracted from the original projection data to generate a subsequent
artifact-free second pass reconstruction. The method is demonstrated in a simulation study using software breast
phantoms, which have been derived from segmented MRI data.
Tomosynthesis II
3D task-based performance assessment metrics for optimization of performance and dose in breast tomosynthesis
Show abstract
This study aimed to investigate a method for empirically evaluating 3D imaging task performance of breast
tomosynthesis imaging systems. A simulation and experimental approach was used to develop a robust
method for performance assessment. To identify a method for experimentally assessing the 3D modulation
transfer function (MTF), a breast tomosynthesis system was first simulated using cascaded system analysis
to model the signal and noise characteristics of the projections. A range of spheres with varying contrast and
size were reconstructed using filtered backprojection, from which the 3D MTF was evaluated. Results
revealed that smaller spheres produce fewer artifacts in the measured MTF, and a 0.5 mm sphere was
found ideal for experimental purposes. A clinical tomosynthesis unit was used as a platform for quantifying
the effect of acquisition and processing parameters (e.g., angular extent and sampling, dose, and voxel size)
on breast imaging performance. The 3D noise-power spectrum (NPS) was measured using a uniform phantom
and 3D MTF was measured using 0.5 mm ruby spheres. These metrics were combined with a mathematical
description of imaging task to generate a figure of merit called the detectability index for system evaluation
and optimization. Clinically relevant imaging tasks were considered, such as the detection and localization of
a spherical mass. The detectability index was found to provide a useful metric that accounts for the complex
3D imaging characteristics of breast tomosynthesis. Results highlighted the dependence of optimal technique
on the imaging task. They further provided initial validation of an empirically assessed figure of merit for
clinical performance assessment and optimization of breast tomosynthesis systems.
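The figure of merit described above combines the measured MTF and NPS with a mathematical task function. A sketch with illustrative analytic stand-ins for all three (the functional forms, frequency range, and scale factors are assumptions, not measured data; the real computation uses the measured 3D MTF and NPS):

```python
import numpy as np

f = np.linspace(0.01, 5.0, 500)   # spatial frequency, cycles/mm
df = f[1] - f[0]

mtf = np.exp(-f / 2.0)            # toy system transfer function
nps = 1e-6 * (1.0 + 1.0 / f)      # toy quantum + low-frequency noise power
w_task = np.exp(-(f * 1.0) ** 2)  # task: detection of a ~1 mm Gaussian object

# Frequency-domain ideal-observer detectability:
#   d'^2 = integral of |MTF(f) * W_task(f)|^2 / NPS(f) df
d_prime = np.sqrt(np.sum((mtf * w_task) ** 2 / nps) * df)
print(float(d_prime))
```

Because the task function enters explicitly, the same measured MTF and NPS yield different optima for different tasks, which is the dependence on imaging task that the study highlights.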
Dose and diagnostic image quality in digital tomosynthesis imaging of facial bones in pediatrics
Show abstract
The purpose of this study was to evaluate the use of digital tomosynthesis (DT) for pediatric facial bone imaging. We
compared the eye lens dose and diagnostic image quality of DT facial bone exams relative to digital radiography (DR)
and computed tomography (CT), and investigated whether we could modify our current DT imaging protocol to reduce
patient dose while maintaining sufficient diagnostic image quality. We measured the dose to the eye lens for all three
modalities using high-sensitivity thermoluminescent dosimeters (TLDs) and an anthropomorphic skull phantom. To
assess the diagnostic image quality of DT compared to the corresponding DR and CT images, we performed an observer
study in which the visibility of anatomical structures in the DT phantom images was rated on a four-point scale. We then
acquired DT images at lower doses and had radiologists indicate whether the visibility of each structure was adequate for
diagnostic purposes. For typical facial bone exams, we measured eye lens doses of 0.1-0.4 mGy for DR, 0.3-3.7 mGy for
DT, and 26 mGy for CT. In general, facial bone structures were visualized better with DT than with DR, and the majority of
structures were visualized well enough to avoid the need for CT. DT imaging provides high quality diagnostic images of
the facial bones while delivering significantly lower doses to the lens of the eye compared to CT. In addition, we found
that by adjusting the imaging parameters, the DT effective dose can be reduced by up to 50% while maintaining
sufficient image quality.
A 3D linear system model for the optimization of dual-energy contrast-enhanced digital breast tomosynthesis
Show abstract
Digital breast tomosynthesis (DBT) is a three-dimensional (3D) x-ray imaging modality that has been shown to decrease
the obscuring effect of breast structural noise, thereby increasing lesion conspicuity. To further improve breast cancer
detection, much recent work has been devoted to the development of contrast enhanced DBT (CEDBT). Taking
advantage of angiogenesis in malignant tissue, CEDBT involves the injection of radio-opaque material (e.g., iodine) and
measures the relative increase in uptake of contrast in breast cancer. Either temporal or dual energy subtraction
techniques may be used to implement CEDBT. Our present work is to develop a cascaded linear system model for DBT
with a CEDBT option to calculate the ideal observer signal to noise ratio (SNR) of lesions in the presence of structural
noise, evaluate the efficacy of CEDBT in the removal of structural noise, and examine the associated increase in x-ray
quantum noise. Our model will include the effects of dual energy subtraction on signal and noise transfer, and transfer of
power-law form anatomical noise through a DBT system using a modified filtered backprojection (FBP) algorithm. This
model will be used for the optimization of x-ray techniques and reconstruction filters in CEDBT.
Effects of image lag and scatter for dual-energy contrast-enhanced digital breast tomosynthesis using a CsI flat-panel based system
Show abstract
Dual-energy contrast-enhanced digital breast tomosynthesis (CE-DBT) using an iodinated contrast agent is an imaging
technique providing 3D functional images of breast lesion vascularity and tissue perfusion. The iodine uptake in the
breast is very small and causes only small changes in x-ray transmission, typically less than 5%, which places
stringent demands on imaging system performance. The purpose of this paper was to characterize
image lag and scattered radiation and their effects on image quality for dual-energy CE-DBT using a CsI(Tl) phosphor-based
detector. Lag was tested using typical clinical acquisition sequences and exposure parameters and under various
detector read-out modes. The performance of a prototype anti-scatter grid and its potential benefit on the magnitude and
range of the cupping artifact were investigated. Analyses were performed through phantom experiments. Our results
illustrate that the magnitude of image lag is negligible and breast texture cancellation is almost perfect when the detector
is read out several times between x-ray exposures. The anti-scatter grid effectively reduces scatter and the cupping
artifact.
Investigation of the effect of tube motion in breast tomosynthesis: continuous or step and shoot?
Show abstract
Digital breast tomosynthesis (DBT) is a 3D modality that may have the potential to complement or replace 2D
mammography. One major design aspect of DBT systems is the choice of tube motion: continuous tube motion during x-ray
exposure or the step and shoot method where the tube is held fixed while x-rays are released. Systems with
continuous tube motion experience focal spot motion blurring but reduced patient motion blurring, owing to potentially
faster total acquisition times compared to the step-and-shoot approach. In order to examine the influence of focus
motion on lesion detectability, a simulation environment was developed where lesions such as microcalcifications and
masses are inserted into different thicknesses of theoretical materials. A version of the power law noise method was
employed to approximate realistic anatomical breast volumes. The simulated projection images were reconstructed and
appropriate metrics (peak contrast, contrast and signal-difference-to-noise ratio) of the lesions in the two different modes
were compared. Results suggest an 8-9% increase in peak contrast in the microcalcification data sets for the
step-and-shoot method compared to the continuous mode (p < 0.05), while the contrast and signal-difference-to-noise
ratio for the two modes nearly overlapped for the mass data sets, differing by only 1-2%.
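The power-law noise method used above to approximate anatomical breast backgrounds is commonly implemented by spectrally shaping white Gaussian noise in the Fourier domain. A minimal sketch (function name and the exponent β = 3, a commonly cited value for breast backgrounds, are illustrative and not taken from the paper):

```python
import numpy as np

def power_law_noise(shape, beta=3.0, seed=0):
    """Generate a noise volume with a 1/f^beta power spectrum.

    White Gaussian noise is filtered in the Fourier domain so its power
    spectrum falls off as f^-beta, mimicking breast structural noise.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    f = np.sqrt(sum(g**2 for g in freqs))
    f[(0,) * len(shape)] = f.flat[1]        # avoid divide-by-zero at DC
    filt = f ** (-beta / 2.0)               # amplitude filter = sqrt(power)
    noise = np.fft.ifftn(np.fft.fftn(white) * filt).real
    return (noise - noise.mean()) / noise.std()

vol = power_law_noise((64, 64, 64))
```

Lesions can then be inserted into such volumes and forward-projected to form the simulated projection images.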
Real-time scanning beam digital x-ray image guidance system for transbronchial needle biopsy
Show abstract
We investigate a real-time digital tomosynthesis (DTS) imaging modality, based on the scanning beam digital
x-ray (SBDX) hardware, used in conjunction with an electromagnetic navigation bronchoscopy (ENB) system
to provide improved image guidance for minimally invasive transbronchial needle biopsy (TBNbx). Because the
SBDX system source uses electron beams, steered by electromagnets, to generate x-rays, and the ENB system
generates an electromagnetic field to localize and track steerable navigation catheters, the two systems will affect
each other when operated in proximity. We first investigate the compatibility of the systems by measuring the
ENB system localization error as a function of distance between the two systems. The SBDX system reconstructs
DTS images, which provide depth information, and so we investigate the improvement in lung nodule visualization
using SBDX system DTS images and compare them to fluoroscopic images currently used for biopsy verification.
Target localization error remains below 2 mm (or virtually error free) if the volume-of-interest (VOI) is at least
50 cm away from the SBDX system source and detector. Inside this region, the tomographic angle ranges from 3° to
10° depending on the VOI location. Improved lung nodule (≤ 20 mm diameter) contrast is achieved by imaging
the VOI near the SBDX system detector, where the tomographic angle is maximized. The combination of the
SBDX image guidance with an ENB system would provide real-time visualization during biopsy with improved
localization of the target and needle/biopsy instruments, thereby increasing the average and lowering the variance
of the yield for TBNbx.
X-ray Imaging: Phase Contrast, Diffraction
Towards x-ray differential phase contrast imaging on a compact setup
T. Thüring,
P. Modregger,
B. R. Pinzer,
et al.
Show abstract
A new imaging setup, aimed to perform differential X-ray phase contrast (DPC) imaging with a Talbot interferometer
on a microfocus X-ray tube, is demonstrated. The main features compared to recently proposed setups
are an extremely short source-to-detector distance, high spatial resolution, and a large field of view. The setup
is designed for immediate integration into an industrial micro-CT scanner. In this paper, technical challenges
of a compact setup, namely the critical source coherence and divergence, are discussed. A theoretical analysis
using wave optics based computer simulations is performed to estimate the DPC signal visibility and the size
of the field of view for a given setup geometry. The maximization of the signal visibility as a function of the
inter-grating distance yields the optimal grating parameters. Imaging results using the optimized grating parameters
are presented. The reduction of the field of view, being a consequence of the high beam divergence,
was solved by fabricating new, cylindrically bent diffraction gratings. The fabrication process of these gratings
required a change of the currently used wafer materials and an adaptation of the manufacturing techniques. The
implementation of the new setup represents a major step forward for the industrial application of the DPC
technique.
Beam hardening in x-ray differential phase contrast computed tomography
Show abstract
The effects of beam hardening have been an issue from the beginning of x-ray computed tomography. Polyenergetic
beams are attenuated more at lower energies, resulting in the so-called hardening of the beam. Beam
hardening artifacts in diagnostic CT images are a result of data inconsistency in the fundamental imaging equation
in conventional absorption CT. In theory, in phase contrast imaging, the fundamental imaging equation is
related only to a line integral of electron density, which is energy independent. However, due to unaccounted
absorption in the imaging equation for phase contrast, beam hardening artifacts will make their way into phase
contrast images. In this work, we use grating based differential phase contrast imaging, which uses a polyenergetic
source, and extracts phase information from a set of intensity images. The energy dependence in the imaging
equation for differential phase contrast imaging, coupled with the beam hardening present in the measured intensity
data, results in beam hardening artifacts in the reconstructed results. We demonstrate the magnitude of the
beam hardening effects in phase contrast reconstructions and compare it to standard absorption reconstructions.
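The data inconsistency that produces beam hardening can be illustrated numerically: for a polyenergetic beam, the effective attenuation coefficient −ln(I/I0)/t depends on path length, violating the linear model assumed by reconstruction. A toy sketch with made-up spectral weights and attenuation values (not from the paper):

```python
import numpy as np

# Illustrative two-bin spectrum and attenuation values, chosen only to
# show the effect: the low-energy bin is attenuated more strongly.
energies_keV = np.array([40.0, 80.0])
weights = np.array([0.5, 0.5])       # normalized spectral weights
mu_per_cm = np.array([0.7, 0.2])     # water-like: higher mu at low energy

def effective_mu(thickness_cm):
    """-ln(I/I0)/t for a polyenergetic beam; drops as the beam hardens."""
    I = np.sum(weights * np.exp(-mu_per_cm * thickness_cm))
    return -np.log(I) / thickness_cm

mu_thin, mu_thick = effective_mu(0.1), effective_mu(10.0)
# The effective attenuation coefficient is smaller for the thick path:
# this nonlinearity is the source of beam-hardening artifacts.
```

In differential phase contrast, the analogous energy dependence of the measured intensities propagates the same inconsistency into the extracted phase data.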
Spectroscopic measurements concerning grating-based x-ray phase-contrast imaging
Thomas Weber,
Peter Bartl,
Florian Bayer,
et al.
Show abstract
We present energy-dependent measurement results regarding grating-based X-ray phase-contrast imaging. These
were obtained using a Talbot grating interferometer, following the proposal of Weitkamp et al.,1 with a
commercial microfocus X-ray tube producing a standard tungsten spectrum and a Talbot-Lau grating interferometer
with a medical X-ray tube. The spectroscopic, pixelated, photon-counting Timepix detector was used
for photon detection. Using these setups, we measured the visibilities for different energies at different source
to phase-grating distances. The results showed a constant maximum visibility that shifts to higher energies with increasing
distance; this behaviour can be explained by the fact that changing the distance changes the setup's
effective design energy. The results also showed that the expected drop in visibility for a misaligned setup does
not take place when a polychromatic X-ray source is used. Additionally, images obtained with phase-contrast
imaging at different energy thresholds are presented and evaluated. This knowledge about the energy-dependency
of the setup's parameters will help to optimise X-ray phase-contrast imaging towards the clinical application.
3D diffraction tomography for visualization of contrast media
Vinay M. Pai,
Ashley Stein,
Megan Kozlowski,
et al.
Show abstract
In x-ray CT, the ability to selectively isolate a contrast agent signal from the surrounding soft tissue and bone can greatly
enhance contrast visibility and enable quantification of contrast concentration. We present here a 3D diffraction
tomography implementation for selectively retaining volumetric diffraction signal from contrast agent particles that are
within a banded size range while suppressing the background signal from soft tissue and bone. For this purpose, we
developed a CT implementation of a single-shot x-ray diffraction imaging technique utilizing gratings. This technique
yields both diffraction and absorption images from a single grating-modulated projection image through analysis in the
spatial frequency domain. A solution of iron oxide nano-particles, having very different x-ray diffraction properties from
tissue, was injected into an ex vivo chicken wing and an in vivo rat specimen and imaged in a 3D diffraction CT
setup. Following parallel-beam reconstruction, we note that while soft tissue, bone, and contrast media are all observed
in the absorption volume reconstruction, only the contrast media are observed in the diffraction volume reconstruction.
This 3D diffraction tomographic reconstruction permits the visualization and quantification of the contrast agent isolated
from the soft tissue and bone background.
Image Reconstruction
Penalized-likelihood reconstruction for sparse data acquisitions with unregistered prior images and compressed sensing penalties
Show abstract
This paper introduces a general reconstruction technique for using unregistered prior images within model-based penalized-
likelihood reconstruction. The resulting estimator is implicitly defined as the maximizer of an objective composed
of a likelihood term that enforces a fit to data measurements and that incorporates the heteroscedastic statistics of the
tomographic problem; and a penalty term that penalizes differences from the prior image. Compressed sensing (p-norm)
penalties are used to allow for differences between the reconstruction and the prior. Moreover, the penalty is parameterized
with registration terms that are jointly optimized as part of the reconstruction to allow for mismatched images. We
apply this novel approach to synthetic data using a digital phantom as well as tomographic data derived from a conebeam
CT test bench. The test bench data includes sparse data acquisitions of a custom modifiable anthropomorphic lung
phantom that can simulate lung nodule surveillance. Sparse reconstructions using this approach demonstrate the simultaneous
incorporation of prior imagery and the necessary registration to utilize those priors.
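The implicitly defined estimator described above can be sketched symbolically (the symbols are illustrative; the paper's exact notation may differ):

```latex
% mu: reconstructed image; Lambda: registration parameters;
% L: heteroscedastic likelihood of the data y given forward projection A mu;
% W(Lambda): warp applied to the prior image; 0 < p <= 1: CS norm;
% beta: regularization strength.
(\hat{\mu}, \hat{\Lambda}) = \operatorname*{arg\,max}_{\mu,\,\Lambda}
  \; L\bigl(y;\, \mathbf{A}\mu\bigr)
  \;-\; \beta \,\bigl\| \mu - W(\Lambda)\,\mu_{\mathrm{prior}} \bigr\|_p^p
```

Jointly maximizing over the registration parameters Λ is what allows the prior image to remain unregistered: the penalty only has to match the prior after warping.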
Quantification of temporal resolution and its reliability in the context of TRI-PICCS and dual source CT
Show abstract
Temporal resolution is an important issue, especially in cardiac CT. To quantify it, often merely the time
needed to acquire the raw data that contribute to a reconstructed image is used. In combination with more complex
reconstruction algorithms that aim to improve temporal resolution (e.g., TRI-PICCS), this procedure
has proven inadequate. This study proposes and evaluates a more accurate simulation-based technique to
assess the temporal resolution of a CT system (including its reconstruction algorithm). To calculate the temporal
resolution of the system on a single point within the field of measurement, a vessel which performs a cardiac
motion pattern is simulated at this position. The motion pattern is adapted such that the accuracy loss caused
by motion exactly meets a defined threshold and then the temporal resolution can be taken from that motion
pattern. Additionally, the dependence of the temporal resolution on the direction of the motion is evaluated to
obtain a measure of reliability. The method is applied to single-source and dual-source full-scan and short-scan
reconstructions as well as to TRI-PICCS reconstructions. The results give an accurate impression of the
system response to motion. In conclusion, the proposed method allows quantifying the temporal resolution of a
CT system as a function of many parameters (motion direction, source position, object position, reconstruction
algorithm).
Evaluation of a novel CT image reconstruction algorithm with enhanced temporal resolution
H. Schöndube,
T. Allmendinger,
K. Stierstorfer,
et al.
Show abstract
We present an evaluation of a novel algorithm that is designed to enhance temporal resolution in CT beyond the
short-scan limit by making use of a histogram constraint.
A minimum scan angle of 180° plus fan angle is needed to acquire complete data for reconstructing an image.
Conventionally, this means that a temporal resolution of half the gantry rotation time is achievable in the isocenter
and that an enhancement of temporal resolution can only be accomplished by a faster gantry rotation or by using
a dual-source system. In this work we pursue a different approach, namely employing an iterative algorithm to
reconstruct images from less than 180° of projections and using a histogram constraint to prevent the occurrence
of limited-angle artifacts. The method is fundamentally different from previously published approaches using
prior images and TV minimization. Furthermore, motion detection is used to enhance dose usage in those parts
of the image where temporal resolution is not critical. We evaluate the technique with patient and phantom
scans as well as using simulated data.
The proposed method yields good results and image quality with both simulated and clinical data. Our
evaluations show that enhancing temporal resolution to a value equivalent to about 120° of projections is
viable, corresponding to an improvement of about 30%. Furthermore, by employing
motion detection, a substantial noise reduction can be achieved in those parts of the image where no motion
occurs.
A Compton imaging algorithm for on-line monitoring in hadron therapy
J. E. Gillam,
C. Lacasta,
I. Torres-Espallardo,
et al.
Show abstract
Hadron therapy, a subject of study by the ENVISION project, promises to provide enhanced accuracy in the
treatment of cancer. The Bragg peak, characteristic of the hadron beam, provides a larger dose to the
tumor while sparing surrounding tissue, even tissue in the beam path beyond the tumor site.
However, increased dose gradients require more precise treatment, as small beam misalignment can result in dose
to healthy, often delicate, surrounding tissue. The requirement for accuracy necessitates imaging during therapy,
yet the lack of a transmitted beam makes this difficult. The particulate beam interacts with the target material
producing neutrons, positron-emitting isotopes, and a broad spectrum of gamma radiation. Photons from positron
annihilation allow in-beam PET to provide on-line measurements of dose deposition during therapy. However,
in-beam PET suffers from low statistics and lost projections, due to low sensitivity and detector constraints respectively.
Instead, Compton imaging of gamma radiation is proposed to provide on-line monitoring for hadron therapy.
Compton imaging suffers similarly from low statistics, especially, as is the case here, when incident energy is
unknown. To surmount this problem, a method of Compton image reconstruction is proposed and tested using
simulated data, which reconstructs incident energy along with the spatial variation in emission density. Through
incident energy estimation, a larger range of measurements is available for image reconstruction, greatly
increasing the sensitivity of the system. This preliminary study shows that, even with low statistics, a
reasonable estimate of the beam path can be calculated.
Method for reducing windmill artifacts in multislice CT images
Show abstract
Thin-slice images reconstructed from helical multi-slice CT scans typically display artifacts known as windmill
artifacts, which arise from not satisfying the Nyquist sampling criterion in the patient longitudinal direction.
Since these are essentially aliasing artifacts, they can be reduced or removed by trading off resolution, either
globally (by reconstructing thicker slices) or locally (by local smoothing of the strong gradients). The obvious
drawback to this approach is the associated loss in resolution. Another approach is to utilize an x-ray tube with
the capability to modulate the focal spot in the z-direction, to effectively improve the sampling rate.
This work presents a new method for windmill artifact reduction based on total variation minimization in the
image domain, which is capable of removing windmill artifacts while at the same time preserving the resolution
of anatomic structures within the images. This is a big improvement over previous reconstruction methods that
sacrifice resolution, and it provides practically the same benefits as a z-switching x-ray tube with a much simpler
impact to the overall CT system.
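Image-domain total-variation minimization of the kind described above can be sketched with a simple smoothed-TV gradient descent. This is a toy stand-in, not the paper's algorithm; parameter values and the smoothing constant `eps` are illustrative:

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.2, eps=1e-2, n_iter=200):
    """Image-domain total-variation smoothing by gradient descent.

    Minimizes 0.5*||x - img||^2 + lam * sum(sqrt(|grad x|^2 + eps)),
    a smoothed TV objective; eps keeps the normalized gradient well
    defined in flat regions.
    """
    x = img.astype(float).copy()
    for _ in range(n_iter):
        dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag                 # normalized gradient field
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        x -= step * ((x - img) - lam * div)         # gradient step
    return x
```

Because TV penalizes oscillation but not piecewise-constant structure, aliasing streaks are suppressed while sharp anatomical edges are largely preserved, which is the behavior the method exploits.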
Helical x-ray differential phase contrast computed tomography
Show abstract
Helical computed tomography revolutionized the field of x-ray computed tomography two decades ago. Translating the object simultaneously with a standard computed tomography acquisition allows fast volumetric scans of long objects. X-ray phase-sensitive imaging methods have been studied over the past few decades to provide new contrast mechanisms for imaging an object. A differential phase contrast imaging method based on a Talbot-Lau grating interferometer has recently demonstrated its potential for clinical and industrial applications. In this work, the principles of helical computed tomography are extended to differential phase contrast imaging to produce volumetric reconstructions from fan-beam data. The method demonstrates the potential of helical differential phase contrast CT to scan long objects with relatively small detector coverage in the axial direction.
CT III: Multi-energy
Synthetic CT: simulating arbitrary low dose single and dual energy protocols
Show abstract
CT protocol selection (kVp, mAs, filtration) can greatly affect the dose delivered to a patient and the quality of
the resulting images. While it is imperative to get diagnostic quality images from a study, the dose to the patient
should be minimized. With synthetic CT, protocol optimization is made simple by simulating realistic scans
of arbitrary low dose protocols from a previously acquired dual energy scan. For single energy protocols, the
simulated projections have the same statistical properties as projections from an actual scan. The reconstruction
of these synthesized projections then provides realistic images at a different protocol. For dual energy protocols,
the material decomposition of the simulated protocol is directly synthesized. Moreover, the dose distribution
from an arbitrary protocol (single or dual energy) can be found and used in conjunction with the predicted image
quality for protocol design.
We demonstrate single energy synthetic CT on a clinical study by synthesizing a 120 kVp image from a dual
energy dataset. The synthesized image is compared to an actual 120 kVp image on the same patient, showing
excellent agreement. We also describe a framework for implementing synthetic CT in software that is intuitive
to use and allows radiologists to see the impact of protocol selection on image quality and dose distribution. A
simple GUI demonstrates the vision for synthetic CT by allowing for the comparison of several dose reduction
techniques: filtration, mA modulation, partial scan, or shielding. In particular, objects such as a breast shield
can be simulated and virtually inserted as part of the original scan. In each case, the kVp and mAs can be
adjusted while the synthesized image and dose profile are updated in real-time. With such software, synthetic
CT can be applied as an educational and scientific tool for radiologists concerned with dose and image quality.
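The core idea of simulating a lower-dose protocol from acquired data can be illustrated in a simplified single-energy setting (the paper itself synthesizes protocols from a dual-energy scan; the function below is an assumed, generic stand-in): binomial thinning of Poisson-distributed counts yields counts that are again Poisson with the proportionally reduced mean, so the simulated projection has the same statistical character as a real low-dose scan.

```python
import numpy as np

def simulate_lower_dose(counts, dose_fraction, seed=0):
    """Emulate a scan at `dose_fraction` of the original tube current.

    If `counts` are Poisson photon counts from the original acquisition,
    keeping each photon independently with probability `dose_fraction`
    (binomial thinning) produces Poisson counts at the reduced mean.
    """
    rng = np.random.default_rng(seed)
    return rng.binomial(counts.astype(np.int64), dose_fraction)

orig = np.random.default_rng(1).poisson(1e4, size=100_000)
half = simulate_lower_dose(orig, 0.5)
```

Reconstructing the thinned projections then yields a realistic image at the simulated protocol, which is the premise behind comparing dose-reduction techniques in software.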
A tabletop clinical x-ray CT scanner with energy-resolving photon counting detectors
Show abstract
Photon counting x-ray detectors (PCXDs) are an emerging technology in x-ray computed tomography (CT) as they have
the potential to overcome some of the most significant limitations of current CT with energy integrating detectors.
Among these are insufficient tissue contrast, relatively high radiation dose, tissue non-specificity, and the non-quantitative
nature of the images. In contrast, CT with PCXDs has shown promise in producing higher-contrast, tissue-specific,
quantitative images at lower dose. Novel applications for PCXDs include k-edge and functional imaging and material
decomposition. A limiting factor, however, is the high photon flux in clinical applications, which results in signal
pulse pile-up in the detector. Faster detectors and new strategies for data corrections and image reconstruction algorithms
are needed to overcome these limitations. A research tabletop x-ray CT scanner was developed with the following aims:
1) to characterize and calibrate the PCXD; 2) to acquire CT projection data under conditions similar to those of clinical
CT; and 3) to reconstruct images using correction schemes specific for PCXDs. The scanner employs a commercial
clinical x-ray tube, a PCXD with two energy thresholds, and allows scanning of objects of up to 40 cm in diameter. This
paper presents measurements of detector quantities crucial for data corrections and calibration, such as energy response,
deadtime, and count rates. Reconstructed CT images are presented and qualitative results from material decomposition
are shown.
Investigating possible improvements in image quality with energy-weighting photon-counting breast CT
Show abstract
In an effort to improve the early stage detection and diagnosis of breast cancer, a number of research groups have been
investigating the use of x-ray computerized tomography (CT) systems dedicated for use in imaging the breast. For a
number of reasons, the performance of energy-integrating detectors is sub-optimal for CT imaging of the breast.
It is expected that the next generation of x-ray detectors for digital radiography and CT will have the capability of
counting individually measured photons and recording their energy. In this paper, we used computer simulations to
evaluate improvements in image quality that can be attained using energy-weighting photon-counting detectors for breast
CT and lower kVp settings. Results from this study suggest that improvements in SNR performance can be attained
with photon counting detectors as compared to energy integrating detectors.
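The benefit of energy weighting can be illustrated with a toy two-bin detector model (all numbers below are made up, not from the study): because subject contrast is concentrated at low energies, energy-integrating weights (proportional to E) down-weight the most informative photons, while weights matched to the per-bin contrast maximize the SNR of the weighted signal.

```python
import numpy as np

# Toy two-bin model comparing photon-counting (w = 1), energy-integrating
# (w proportional to E), and contrast-matched weights. Illustrative only.
E = np.array([30.0, 60.0])           # representative bin energies (keV)
n_bg = np.array([1000.0, 1000.0])    # mean counts behind background
n_obj = np.array([900.0, 980.0])     # mean counts behind background + detail

def snr(w):
    """SNR of the weighted signal difference under Poisson statistics."""
    signal = np.sum(w * (n_bg - n_obj))
    noise = np.sqrt(np.sum(w**2 * n_bg))
    return signal / noise

snr_counting = snr(np.ones_like(E))        # equal weights
snr_integrating = snr(E)                   # overweights high-energy photons
snr_matched = snr((n_bg - n_obj) / n_bg)   # weights matched to contrast
```

With these numbers the matched weighting beats simple counting, which in turn beats energy integration, mirroring the SNR ordering reported for photon-counting versus energy-integrating detectors.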
Temporal and spectral reconstruction algorithms for x-ray CT
Show abstract
X-ray CT imaging of dynamic physiological processes entails the reconstruction of volumetric images of objects with x-ray
attenuation properties that vary over time and energy. We show how the same algebraic model can be used to
represent both temporal and spectral information. This model enables the formulation of algorithms capable of
recovering information in either dimension. These dimensions can also be combined to develop algorithms that recover
both dimensions simultaneously. We present such an algorithm, describe its implementation, and test it in simulations.
Material separation in x-ray CT with energy resolved photon-counting detectors
Show abstract
The objective of the study was to demonstrate that more than two types of materials can be effectively separated with x-ray
CT using a recently developed energy resolved photon-counting detector. We performed simulations and physical
experiments using an energy resolved photon-counting detector with six energy thresholds. For comparison, dual-kVp
CT with an integrating detector was also simulated. Iodine- and gadolinium-based contrast agents, as well as several
soft-tissue- and bone-like materials were imaged. We plotted the attenuation coefficients for the various materials in a
scatter plot for pairs of energy windows. In both simulations and physical experiments, the contrast agents were easily
separable from other non-contrast-agent materials in the scatter plot between two properly chosen energy windows. This
separation was due to discontinuities in the attenuation coefficient around their unique K-edges. The availability of more
than two energy thresholds in a photon-counting detector allowed the separation with one or more contrast agents
present. Compared with dual-kVp methods, CT with an energy resolved photon-counting detector provided a larger
separation and the freedom to use different energy window pairs to specify the desired target material. We concluded
that an energy resolved photon-counting detector with more than two thresholds allowed the separation of more than two
types of materials, e.g., soft-tissue-like, bone-like, and one or more materials with K-edges in the energy range of
interest. They provided advantages over dual-kVp CT in terms of the degree of separation and the number of materials
that can be separated simultaneously.
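The separation criterion described above, a K-edge discontinuity between two properly chosen energy windows, can be sketched as follows. The attenuation values are made up for illustration; in practice tabulated coefficients (e.g., from NIST XCOM) and measured window counts would be used:

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) in two energy windows
# chosen to bracket the iodine K-edge (~33.2 keV). Values are invented.
mu = {
    "soft_tissue": (0.35, 0.25),   # decreases smoothly with energy
    "bone":        (1.10, 0.60),   # decreases smoothly with energy
    "iodine":      (8.0, 25.0),    # jumps UP across its K-edge
}

def k_edge_score(mu_low, mu_high):
    """Materials whose attenuation rises across the window boundary
    (K-edge in between) score > 1; smoothly decreasing materials score < 1."""
    return mu_high / mu_low

scores = {m: k_edge_score(*v) for m, v in mu.items()}
contrast_agents = [m for m, s in scores.items() if s > 1.0]
```

With more than two thresholds, additional window pairs can be chosen to bracket other K-edges (e.g., gadolinium), which is what allows several contrast agents to be separated simultaneously.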
Novel Systems
An inverse geometry CT system with stationary source arrays
Show abstract
Traditional CT systems face a tradeoff between temporal resolution, volumetric coverage and cone beam artifacts and
also have limited ability to customize the distribution of incident x-rays to the imaging task. Inverse geometry CT
(IGCT) can overcome some of these limitations by placing a small detector opposite a large, rotating scanned source
array. It is difficult to quickly rotate this source array to achieve rapid imaging, so we propose using stationary source
arrays instead and investigate the feasibility of such a system. We anticipate that distinct source arrays will need to be
physically separated, creating gaps in the sinogram. Symmetry can be used to fill the missing rays except those
connecting gaps. With three source arrays, a large triangular field of view emerges. As the small detector orbits the
patient, each source spot must be energized at multiple specifically designed times to ensure adequate sampling. A
timing scheme is proposed that avoids timing clashes, efficiently uses the detector, and allows for simple collimation.
The two-dimensional MTF, noise characteristics, and artifact levels are all found to be comparable to parallel-beam
systems. A complete, 100 millisecond volumetric scan may be feasible.
Dual-energy micro-CT imaging for differentiation of iodine- and gold-based nanoparticles
Show abstract
Spectral CT imaging is expected to play a major role in the diagnostic arena as it provides material decomposition on
an elemental basis. One fascinating possibility is the ability to discriminate multiple contrast agents targeting different
biological sites. We investigate the feasibility of dual energy micro-CT for discrimination of iodine (I) and gold (Au)
contrast agents when simultaneously present in the body. Simulations and experiments were performed to measure
the CT enhancement for I and Au over a range of voltages from 40 to 150 kVp using a dual-source micro-CT system.
The selected voltages for dual energy micro-CT imaging of Au and I were 40 kVp and 80 kVp. On a mass-concentration
basis, the relative average enhancement of Au to I was 2.75 at 40 kVp and 1.58 at 80 kVp. We have
demonstrated the method in a preclinical model of colon cancer to differentiate vascular architecture and
extravasation. The concentration maps of Au and I allow quantitative measure of the bio-distribution of both agents.
In conclusion, dual energy micro-CT can be used to discriminate probes containing I and Au with immediate impact
in pre-clinical research.
Design and development of MR-compatible SPECT systems for simultaneous SPECT-MR imaging of small animals
Show abstract
We describe a continuing design and development of MR-compatible SPECT systems for simultaneous SPECT-MR
imaging of small animals. A first generation prototype SPECT system was designed and constructed to fit inside a MRI
system with a gradient bore inner diameter of 12 cm. It consists of 3 angularly offset rings of 8 detectors (1"x1", 16x16
pixels MR-compatible solid-state CZT). A matching 24-pinhole collimator sleeve, made of a tungsten-compound,
provides projections from a common FOV of ~25 mm. A birdcage RF coil for MRI data acquisition surrounds the
collimator. The SPECT system was tested inside a clinical 3T MRI system. Minimal interference was observed on the
simultaneously acquired SPECT and MR images. We developed a sparse-view image reconstruction method based on
accurate modeling of the point response function (PRF) of each of the 24 pinholes to provide artifact-free SPECT
images. The stationary SPECT system provides relatively low resolution of 3-5 mm but high geometric efficiency of
0.5-1.2% for fast dynamic acquisition, as demonstrated in a SPECT renal kinetics study using Tc-99m DTPA. Based on these
results, a second generation prototype MR-compatible SPECT system with an outer diameter of 20 cm that fits inside a
mid-sized preclinical MRI system is being developed. It consists of 5 rings of 19 CZT detectors. The larger ring diameter
allows the use of multi-pinhole collimator designs optimized for different goals, such as high system resolution up to ~1 mm, high
geometric efficiency, or lower system resolution without collimator rotation. The anticipated performance of the new
system is supported by simulation data.
Freehand SPECT in low uptake situations
Show abstract
3D functional imaging in the operating room can be extremely useful for some procedures like SLN mapping
or SLN biopsies. Freehand SPECT is an example of such an imaging modality, combining manually scanned,
hand-held 1D gamma detectors with spatial positioning systems in order to reconstruct localized 3D SPECT
images, for example in the breast or neck region. Standard series expansion methods are applied together with
custom physical models of the acquisition process and custom filtering procedures to perform 3D tomographic
reconstruction from sparse, limited-angle and irregularly sampled data. A Freehand SPECT system can easily be
assembled on a mobile cart suitable for use in the operating room. This work addresses in particular the problem
of objects with low uptake (like sentinel lymph nodes), where reconstruction tends to be difficult due to a low
signal-to-noise ratio. In a neck-like phantom study, we show that four simulated nodes of 250 microliter volume with
0.06% and 0.03% uptake, respectively, of a virtual 70 MBq injection of Tc-99m (the typical activity for SLN procedures
at our hospital) in a background of water can be reconstructed successfully using careful filtering procedures in
the reconstruction pipeline. Ten independent Freehand SPECT scans of the phantom were performed by several
different operators, with an average scan duration of 5.1 minutes. The resulting reconstructions show an average
spatial accuracy within the voxel dimensions (2.5 mm) compared to CT and exhibit correct relative quantification.
Forward model of Cerenkov luminescence tomography with the third-order simplified spherical harmonics approximation
Show abstract
Applying Cerenkov luminescence tomography (CLT) to localizing Cerenkov light sources in situ is still in its
nascent stage. One of the obstacles hindering the development of CLT is the lack of a dedicated imaging
model. In this contribution, we present a Cerenkov optical imaging model in which the propagation through tissue of
optical photons generated by the Vavilov-Cerenkov effect is modeled using the simplified spherical
harmonics approximation. As a significantly more transport-like and computationally efficient approximation
theory, the performance of the third-order simplified spherical harmonics approximation (SP3) in the CLT
forward model is investigated in stages. Finally, the proposed forward model is validated using
numerical phantoms and compared against simulation data from the Monte Carlo method.
A preclinical SPECT camera with depth-of-interaction compensation using a focused-cut scintillator
Show abstract
Preclinical SPECT offers a powerful means to understand the molecular pathways of metabolic activity in animals.
SPECT cameras using pinhole collimators offer high resolution that is needed for visualizing small structures in
laboratory animals. One of the limitations of pinhole geometries is that increased magnification causes some rays to
travel through the scintillator detector at steep angles, introducing parallax errors due to variable depth-of-interaction
in the scintillator, especially towards the edges of the detector field of view. These parallax errors
ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can
easily penetrate through millimeters of scintillator material. A pixellated, focused-cut scintillator, with its pixels
laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus
open up a new regime of sub-mm preclinical SPECT. We have built a 4-pinhole prototype gamma camera for
preclinical SPECT imaging, using an EMCCD camera coupled to a 3 mm thick CsI(Tl) scintillator whose pixels are
focused towards each of the four 500 μm-diameter pinhole apertures. The focused-cut scintillator was
fabricated using a laser ablation process that allows for cuts with very high aspect ratios. We present preliminary
results from our phantom experiments.
CT IV: Cone Beam
Evaluation of an erbium modulator in x-ray scatter correction using primary modulation
Show abstract
A primary modulator made of erbium is evaluated in X-ray scatter correction using primary modulation. Our
early studies have shown that erbium is the optimal modulator material for an X-ray cone-beam computed tomography
(CBCT) system operated at 120 kVp, exhibiting minimal beam hardening, which otherwise weakens the
modulator's ability to separate scatter from primary. In this work, the accuracy of scatter correction is compared
for two copper modulators (105 and 210 μm thick) and one erbium modulator (25.4 μm thick) with
the same modulation frequencies. The variations in the effective transmission factors of these three modulators
as a function of object filtration are first measured to show the magnitude of beam hardening caused by the
modulators themselves. Their scatter correction performances are then tested using a Catphan©600 phantom
on our tabletop CBCT system. With and without 300 μm of copper in the beam, the measured variations for
these three modulators are 4.3%, 7.8%, and 0.9%, respectively. Using the 105- and 210-μm copper modulators,
our scatter correction method reduces the average CT number error from 327.3 Hounsfield units (HU) to 19.4
and 20.9 HU in the selected regions of interest, and enhances the contrast-to-noise ratio (CNR) from 10.7 to 16.5
and 15.9, respectively. With the 25.4-μm erbium modulator, the CT number error is markedly reduced to 2.8
HU and the CNR is further increased to 17.4.
Analysis of vertical and horizontal circular C-arm trajectories
Show abstract
C-arm angiography systems offer great flexibility in the acquisition of trajectories for computed tomography.
Theoretically, these systems are able to scan patients standing in an upright position. This would allow novel
insights into structural changes of the human anatomy under weight bearing. However, it would require a scan along a
horizontal trajectory parallel to the floor, which is currently not supported by standard C-arm CT acquisition
protocols.
In this paper, we compared the standard vertical and the new horizontal scanning trajectories by analysis of the source
positions and source to detector distances during the scan. We employed a C-arm calibration phantom to compute the
exact scan geometry. Based on the analysis of the projection matrices, we computed the source position in 3D and the
source to detector distance for each projection. We then used the calibrated scan geometries to reconstruct the calibration
phantom. Based on this reconstruction in comparison to the ideal phantom geometry we also evaluated the geometric
reconstruction error.
As expected, both the vertical and the horizontal scan trajectories exhibit a significant C-arm "wobble". But in both kinds
of trajectories, the reproducibility over several scans was comparable. We were able to reconstruct the calibration
phantom with satisfactory geometric reconstruction accuracy. With a reconstruction error of 0.2 mm, we conclude that
horizontal C-arm scans are possible and show properties similar to those of vertical C-arm scans.
The remaining challenge is compensation for the involuntary movement of the standing subject during a weight-bearing
acquisition. We investigated this using an optical tracking system and found that the average movement at the knee while
standing upright for 5 seconds is between 0.42 mm and 0.54 mm, and goes up to as much as 12 mm when the subject is
holding a 60° squat. This involuntary motion is much larger than the reconstruction accuracy. Hence, we expect artifacts
in reconstructions to be significant for upright positions, and overwhelming in squat positions if no motion correction is
applied.
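The source-position recovery from projection matrices described in this abstract can be sketched numerically: for a 3×4 projection matrix P, the source (camera center) c is the right null vector of P, i.e. P·[c; 1] = 0. The geometry below is an invented toy example, not the authors' calibration setup.

```python
import numpy as np

def source_position(P):
    """Recover the 3D X-ray source position from a 3x4 projection matrix.

    The source (camera center) c satisfies P @ [c; 1] = 0, so it is the
    right null vector of P, obtained here from the SVD of P.
    """
    _, _, vt = np.linalg.svd(P)
    c = vt[-1]                  # homogeneous null-space vector
    return c[:3] / c[3]         # dehomogenize

# Synthetic geometry to check: source at (0, 0, -600) mm, identity rotation.
K = np.array([[1000.0, 0.0, 256.0],
              [0.0, 1000.0, 256.0],
              [0.0, 0.0, 1.0]])                       # intrinsics (made up)
c_true = np.array([0.0, 0.0, -600.0])
Rt = np.hstack([np.eye(3), -c_true.reshape(3, 1)])    # [R | -R c]
P = K @ Rt
print(source_position(P))  # ≈ [0, 0, -600]
```

The same null-space computation applies per projection, which is how the per-view source positions and source-to-detector distances in the analysis above can be tabulated.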
Functional phase-correlated micro-CT imaging of small rodents with low dose
Stefan Sawall,
Frank Bergner,
Andreas Hess,
et al.
Show abstract
Functional imaging of an animal's thoracic region requires cardiac and respiratory gating. The information on
respiratory motion and the ECG required for double gating is extracted from the raw data and used to select the
projections appropriate for a given motion phase. A conventional phase-correlated (PC) reconstruction therefore
uses only a small fraction of the total projections acquired. Thus the resulting images exhibit a high noise
level unless acquired with very high dose, and streak artifacts may occur due to the sparse angular sampling.
Here, we aim at obtaining high-fidelity images even at relatively low dose values. To overcome these issues
we implemented an iterative reconstruction method encompassing a five-dimensional (three spatial, cardiac-temporal,
and respiratory-temporal dimensions) edge-preserving filter. This new low-dose phase-correlated (LDPC) reconstruction
method is evaluated using retrospectively gated, contrast-enhanced micro-CT data of mice. The scans performed comprise
7200 projections within 10 rotations over 5 minutes. A tube voltage of 65 kV was used, resulting in an
administered dose of about 500 mGy. 20 respiratory phases and 10 cardiac phases are reconstructed. Using
LDPC reconstruction the image noise is typically reduced by a factor of about six and artifacts are almost completely
removed. Reducing the number of projections available for reconstruction shows that comparable image quality
can be obtained with only 200 mGy. LDPC enables high-fidelity low-dose double-gated imaging of free-breathing
rodents without compromises in image quality. Compared to PC, image noise is significantly reduced with LDPC
and the administered dose can be reduced accordingly.
Scatter correction for cone-beam computed tomography using moving blocker strips
Show abstract
One well-recognized challenge of cone-beam computed tomography (CBCT) is the presence of scatter contamination
within the projection images. Scatter degrades the CBCT image quality by decreasing the contrast, introducing shading
artifacts and leading to inaccuracies in the reconstructed CT numbers. We propose a blocker-based approach to
simultaneously estimate the scatter signal and reconstruct the complete volume within the field of view (FOV) from a single
CBCT scan. A physical strip attenuator (i.e., a "blocker") consisting of lead strips is inserted between the x-ray source and
the patient. The blocker moves back and forth along the z-axis during the gantry rotation. The two-dimensional (2D) scatter
fluence is estimated by interpolating the signal from the blocked regions. A modified Feldkamp-Davis-Kress (FDK)
algorithm and an iterative reconstruction based on constrained optimization are used to reconstruct CBCT images from
un-blocked projection data after the scatter signal is subtracted. An experimental study is performed to evaluate the
performance of the proposed scatter correction scheme. The scatter-induced shading/cupping artifacts are substantially
reduced in CBCT using the proposed strategy. In the experimental study using a CatPhan©600 phantom, CT number
errors in the selected regions of interest are reduced from 256 HU to less than 20 HU. The proposed method allows us to
simultaneously estimate the scatter signal in projection data, reduce the imaging dose and obtain complete volumetric
information within the FOV.
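The interpolation step at the heart of this approach can be sketched as follows. This is a simplified illustration with an invented toy projection and strip layout, not the authors' implementation (which must also handle the blocked-primary regions during reconstruction): the signal measured under the lead strips is taken as scatter-only and interpolated across the unblocked rows.

```python
import numpy as np

def estimate_scatter(projection, blocked_rows):
    """Estimate a smooth 2D scatter field from a blocked-strip projection.

    projection   : 2D detector signal array (rows x cols)
    blocked_rows : row indices under the lead strips, where the detected
                   signal is approximately scatter only
    """
    rows = np.arange(projection.shape[0])
    scatter = np.empty_like(projection, dtype=float)
    # Interpolate the scatter samples along each detector column.
    for c in range(projection.shape[1]):
        scatter[:, c] = np.interp(rows, blocked_rows,
                                  projection[blocked_rows, c])
    return scatter

# Toy projection: smooth primary plus a constant scatter floor of 5.0.
prim = np.outer(np.hanning(64), np.hanning(64)) * 100.0
proj = prim + 5.0
blocked = np.arange(4, 64, 12)   # hypothetical strip positions
proj[blocked, :] = 5.0           # strips remove the primary signal
est = estimate_scatter(proj, blocked)
corrected = proj - est           # scatter-subtracted projection
```

In this toy case the estimate recovers the constant scatter floor exactly; in practice the scatter field is only smooth, not constant, which is why the strip spacing and the interpolation scheme matter.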
Single-scan scatter correction for cone-beam CT using a stationary beam blocker: a preliminary study
Show abstract
The performance of cone-beam CT (CBCT) is greatly limited by scatter artifacts. The existing measurement-based
methods have promising advantages as a standard scatter correction solution, except that they currently require multiple
scans or moving the beam blocker during data acquisition to compensate for the missing primary data. These approaches
are therefore impractical in clinical applications. In this work, we propose a new measurement-based scatter correction
method that achieves accurate reconstruction with a single scan and a stationary beam blocker, two seemingly
incompatible features, enabling simple and effective scatter correction without an increase in scan time or patient dose.
Based on CT reconstruction theory, we distribute the blocked areas over the projections where primary signals are
considered to be redundant in a full scan. The CT image quality is thus not degraded even with the primary loss. Scatter is
accurately estimated by interpolation, and scatter-corrected CT images are obtained using an FDK-based reconstruction.
In a Monte Carlo simulation study, we first optimize the beam blocker geometry using projections of the Shepp-Logan
phantom and then carry out a complete simulation of a CBCT scan of a water phantom. With a scatter-to-primary ratio
around 1.0, our method reduces the CT number error from 293 to 2.9 Hounsfield units (HU) around the phantom center.
The proposed approach is further evaluated on a CBCT tabletop system. On the Catphan©600 phantom, the
reconstruction error is reduced from 202 to 10 HU in the selected region of interest after the proposed correction.
Dose
Verification of the performance accuracy of a real-time skin-dose tracking system for interventional fluoroscopic procedures
Show abstract
A tracking system has been developed to provide real-time feedback of skin dose and dose rate during interventional
fluoroscopic procedures. The dose tracking system (DTS) calculates the radiation dose rate to the patient's skin using
the exposure technique parameters and exposure geometry obtained from the x-ray imaging system digital network
(Toshiba Infinix) and presents the cumulative results in a color mapping on a 3D graphic of the patient. We performed a
number of tests to verify the accuracy of the dose representation of this system. These tests included comparison of
system-calculated dose-rate values with ionization-chamber (6 cc PTW) measured values with change in kVp, beam
filter, field size, source-to-skin distance and beam angulation. To simulate a cardiac catheterization procedure, the
ionization chamber was also placed at various positions on an Alderson Rando torso phantom and the dose agreement
compared for a range of projection angles with the heart at isocenter. To assess the accuracy of the dose distribution
representation, Gafchromic film (XR-RV3, ISP) was exposed with the beam at different locations. The DTS and film
distributions were compared and excellent visual agreement was obtained within the cm-sized surface elements used for
the patient graphic. The dose and dose-rate values agreed to within about 10% for the range of variables tested. Correction factors
could be applied to obtain even closer agreement since the variable values are known in real-time. The DTS provides
skin-dose values and dose mapping with sufficient accuracy for use in monitoring diagnostic and interventional x-ray
procedures.
Energy deposition in the breast during CT scanning: quantification and implications for dose reduction
Show abstract
Studies suggest that dose to the breast leads to a higher lifetime attributable cancer incidence risk from a chest CT scan
for women compared to men. Numerous methods have been proposed for reducing dose to the breast during CT
scanning, including bismuth shielding, tube current modulation, partial-angular scanning, and reduced kVp. These
methods differ in how they alter the spectrum and fluence across projection angle. This study used Monte Carlo CT
simulations of a voxelized female phantom to investigate the energy (dose) deposition in the breast as a function of both
photon energy and projection angle. The resulting dose deposition matrix was then used to investigate several questions
regarding dose reduction to the breast: (1) Which photon energies deposit the most dose in the breast? (2) How does
increased filtration compare to tube current reduction in reducing breast dose? (3) Do reduced kVp scans reduce dose
to the breast, and if so, by what mechanism? The results demonstrate that while high-energy photons deposit more dose per
emitted photon, the low-energy photons deposit more dose to the breast for a 120 kVp acquisition. The results also
demonstrate that decreasing the tube current for the AP views to match the fluence exiting a shield deposits nearly the
same dose to the breast as when using a shield (within ~1%). Finally, results suggest that the dose reduction observed
during lower kVp scans is caused by reduced photon fluence rather than the elimination of high-energy photons from the
beam. Overall, understanding the mechanisms of dose deposition in the breast as a function of photon energy and
projection angle enables comparisons of dose reduction methods and facilitates further development of optimized dose
reduction schemes.
Uncertainties of organ absorbed doses to patients from 18F-choline
Show abstract
Radiation doses of radiopharmaceuticals to patients in nuclear medicine are, by the standard method, estimated from the
administered activity, medical imaging (e.g., PET imaging), compartmental modeling and Monte Carlo simulation of
radiation with reference digital human phantoms. However, in each of the contributing terms, individual uncertainties due
to measurement techniques, patient variability and computation methods may propagate into the uncertainties of the
organ doses calculated for the individual patient. To evaluate the overall uncertainties and the quality assurance of internal
absorbed doses, a method was developed within the framework of the MADEIRA Project (Minimizing Activity and
Dose with Enhanced Image quality by Radiopharmaceutical Administrations) to quantitatively analyze the uncertainties
in each component of the organ absorbed doses after administration of 18F-choline to prostate cancer patients undergoing
nuclear medicine diagnostics.
First, on the basis of the organ PET and CT images of the patients as well as blood and urine samples, a model structure
of 18F-choline was developed and the uncertainties of the model parameters were determined. Second, the model
parameter values were sampled and biokinetic modeling using these sampled values was performed. Third,
the uncertainties of the new specific absorbed fraction (SAF) values derived with different phantoms representing
individual patients were presented. Finally, the uncertainties of absorbed doses to the patients were calculated by
applying the ICRP/ICRU adult male reference computational phantom. In addition to the uncertainty analysis, the
influence of the model parameters on the organ PET images and absorbed doses was assessed by relating the model
inputs and outputs using regression and partial correlation analysis.
The results showed that the uncertainty factors of absorbed dose to patients are in most cases less than a factor of 2
without taking into account the uncertainties caused by the variability and uncertainty of individual human phantoms.
The sensitivity study showed that the metabolic transfer parameter from blood to soft tissues strongly influences the
blood samples collected up to 500 min post administration, while the transfer pathways between blood and liver strongly
affect the liver images over the time course. The results of this study suggest that organ image acquisition of the liver
and kidneys after 100 min, as well as blood and urine sample collection, is necessary to reduce the uncertainties of
absorbed dose estimates to patients.
The feasibility of universal DLP-to-risk conversion coefficients for body CT protocols
Show abstract
The effective dose associated with computed tomography (CT) examinations is often estimated from the dose-length
product (DLP) using scanner-independent conversion coefficients. Such conversion coefficients are available for a small
number of examinations, each covering an entire region of the body (e.g., head, neck, chest, abdomen and/or pelvis).
Similar conversion coefficients, however, do not exist for examinations that cover a single organ or a sub-region of the
body, as in the case of a multi-phase liver examination. In this study, we extended the DLP-to-effective dose conversion
coefficient (k factor) to a wide range of body CT protocols and derived the corresponding DLP-to-cancer risk conversion
coefficient (q factor). An extended cardiac-torso (XCAT) computational model was used, which represented a reference
adult male patient. A range of body CT protocols used in clinical practice were categorized into 10 protocol classes based
on the anatomical regions examined. A validated Monte Carlo program was used to estimate the organ dose associated
with each protocol class. Assuming the reference model to be 20 years old, effective dose and risk index (an index of the
total risk of cancer incidence) were then calculated and normalized by DLP to obtain the k and q factors. The k and q
factors varied across protocol classes; the coefficients of variation were 28% and 9%, respectively. The small variation
exhibited by the q factor suggests the feasibility of universal q factors for a wide range of body CT protocols.
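As a sketch of how such DLP-based conversion coefficients are applied in practice: effective dose is E = k · DLP and the risk index is RI = q · DLP, with protocol-class-specific coefficients. The k and q values below are illustrative placeholders, not the coefficients derived in the study.

```python
# DLP-to-effective-dose and DLP-to-risk conversion, E = k * DLP, RI = q * DLP.
# The coefficient values and protocol-class names are hypothetical examples.
K_FACTOR = {"chest": 0.014, "abdomen_pelvis": 0.015}    # mSv per mGy*cm
Q_FACTOR = {"chest": 2.2e-4, "abdomen_pelvis": 2.4e-4}  # risk index per mGy*cm

def effective_dose(protocol: str, dlp_mgy_cm: float) -> float:
    """Effective dose (mSv) from the protocol-class k factor and DLP."""
    return K_FACTOR[protocol] * dlp_mgy_cm

def risk_index(protocol: str, dlp_mgy_cm: float) -> float:
    """Risk index from the protocol-class q factor and DLP."""
    return Q_FACTOR[protocol] * dlp_mgy_cm

print(effective_dose("chest", 400.0))  # 0.014 * 400 = 5.6 mSv
```

The study's observation that q varies little across protocol classes is what would let a single q value replace the per-protocol table in this lookup.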
X-ray dose reduction by adaptive source equalization and electronic region-of-interest control
Show abstract
Radiation dose is a particular concern in pediatric cardiac fluoroscopy procedures, which account for 7% of
all cardiac procedures performed. The Scanning-Beam Digital X-ray (SBDX) fluoroscopy system has already
demonstrated reduced dose in adult patients owing to its high-DQE photon-counting detector, reduced detected
scatter, and the elimination of the anti-scatter grid. Here we show that the unique flexible illumination platform
of the SBDX system enables further dose-area-product reduction; we are currently developing this capability for
pediatric patients, but it will ultimately benefit all patients. The SBDX system has a small-area detector
array and a large-area X-ray source with up to 9,000 individually-controlled X-ray focal spots. Each focal spot
illuminates a small fraction of the full field of view. To acquire a frame, each focal spot is activated for a fixed
number of 1-microsecond periods. Dose reduction is made possible by reducing the number of activations of
some of the X-ray focal spots during each frame time. This can be done dynamically to reduce the exposure
in areas of low patient attenuation, such as the lung field. This spatially-adaptive illumination also reduces the
dynamic range in the full image, which is visually pleasing. Dose can also be reduced by the user selecting a
region of interest (ROI) where full image quality is to be maintained. Outside the ROI, the number of activations
of each X-ray focal spot is reduced and the image gain is correspondingly increased to maintain consistent image
brightness. Dose reduction is dependent on the size of the ROI and the desired image quality outside the ROI.
We have developed simulation software that is based on real data and can simulate the performance of the
equalization and ROI filtration. This software represents a first step toward real-time implementation of these
dose-reduction methods. Our simulations have shown that dose area product reductions of 40% are possible
using equalization, and dose savings as high as 74% are possible with the ROI approach. The dose reduction
achieved in clinical use will depend on patient anatomy.
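The per-focal-spot exposure control described above can be sketched roughly as follows. The scheduling rule, attenuation scaling and parameter values are invented for illustration and are not the SBDX system's actual control logic: activations are reduced where estimated attenuation is low (equalization) and restored to full inside a user-selected ROI, with electronic gain compensating outside it.

```python
import numpy as np

def activations(atten, roi_mask, n_max=100, floor=0.25):
    """Per-focal-spot activation counts for one frame (illustrative only).

    atten    : 2D map of estimated patient attenuation per focal spot,
               scaled to [0, 1] (0 = lung-like, 1 = most attenuating)
    roi_mask : True where full image quality must be preserved
    n_max    : activations per frame at full exposure
    floor    : minimum fraction of n_max allowed outside the ROI
    """
    # Equalization: fewer activations where attenuation is low.
    n = np.clip(atten, floor, 1.0) * n_max
    # ROI control: full exposure inside the region of interest.
    n[roi_mask] = n_max
    return np.rint(n).astype(int)

atten = np.linspace(0.0, 1.0, 9).reshape(3, 3)   # toy attenuation map
roi = np.zeros((3, 3), dtype=bool)
roi[1, 1] = True                                  # user-selected ROI
n = activations(atten, roi)
gain = 100.0 / n   # electronic gain restores display brightness outside ROI
```

Summing `n` over the frame relative to `n_max` everywhere gives the frame's dose-area-product reduction under this toy rule.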
Effect of contrast magnitude and resolution metric on noise-resolution tradeoffs in x-ray CT imaging: a comparison of non-quadratic penalized alternating minimization and filtered backprojection algorithms
Show abstract
Purpose: To assess the impact of contrast magnitude and spatial resolution metric choices on the noise-resolution
tradeoff of a non-quadratic penalized statistical iterative algorithm, Alternating Minimization (AM), in x-ray
transmission CT.
Methods: Monoenergetic Poisson-counting CT data were simulated for a water phantom containing circular inserts of
varying contrast (7% to 238%). The data were reconstructed with conventional filtered backprojection (FBP) and two
non-quadratic penalty parameterizations of AM. A range of smoothing strengths was reconstructed for each algorithm to
quantify the noise-resolution tradeoff curve. Modulation transfer functions (MTFs) were estimated from the circular
contrast-insert edges and then integrated up to a cutoff frequency as a single-parameter measure of local spatial
resolution. Two cutoff frequencies and two resolution comparison values are investigated for their effect on reported
tradeoff advantage.
Results: The noise-resolution tradeoff curve was always more favorable for AM than FBP. For strongly edge-preserving
penalty functions, this advantage was found to be dependent upon the contrast for which resolution is quantified for
comparison. The magnitude of the reported dose reduction potential of the AM algorithm was shown to be dependent on
the resolution metric choices, though the general contrast-dependence was always evident.
Conclusions: The penalized AM algorithm shows the potential to reconstruct images of comparable quality using a
fraction of the dose required by FBP. The contrast dependence of the tradeoff advantage implies that statistical
algorithms using non-quadratic penalty functions should be assessed using contrasts relevant to the intended clinical
task.
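The cutoff-dependent resolution metric used above can be illustrated with a toy MTF; the Gaussian MTF shape and the cutoff frequencies below are assumptions for the example, not the study's measured curves.

```python
import numpy as np

# Single-parameter resolution measure: integrate the MTF from 0 up to a
# cutoff frequency. MTF shape and cutoffs here are hypothetical.
f = np.linspace(0.0, 1.0, 201)        # spatial frequency axis
mtf = np.exp(-(f / 0.35) ** 2)        # illustrative Gaussian-like MTF

def integrated_mtf(f, mtf, cutoff):
    """Trapezoidal integral of the MTF from 0 up to the cutoff frequency."""
    sel = f <= cutoff
    fs, ms = f[sel], mtf[sel]
    return float(np.sum(0.5 * (ms[1:] + ms[:-1]) * np.diff(fs)))

low, high = integrated_mtf(f, mtf, 0.5), integrated_mtf(f, mtf, 1.0)
# A larger cutoff admits more of the MTF tail, changing the reported value,
# which is why the cutoff choice affects the reported tradeoff advantage.
```

Since the two cutoffs yield different single numbers for the same MTF, a noise-resolution comparison between algorithms can shift with the metric choice, as the abstract reports.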
Special Session I: Dose
Definitions and outlook targeting x-ray exposure of patients in diagnostic imaging
Dieter F. Regulla
Show abstract
Computed tomography (CT) is vital and currently irreplaceable in diagnostic radiology. But CT operates with ionizing
radiation, which may cause cancer and non-cancer diseases in humans. The degree of radiation impact depends on the dose
administered by an investigation. And this is the core issue: even CT exams executed lege artis administer doses to
patients that are far beyond the doses known from conventional film-screen techniques.
Patients undergoing one or multiple CT examinations, digital angiographies or interventions will be exposed to effective
doses between roughly several mSv and several hundred mSv, depending on the type and frequency of the diagnostic
investigations. From the radiation protection point of view, there is therefore the worldwide problem of formulating firm
rules for the control of these high-dose investigations, as dose limits cannot be established because of the medical
benefit. This is the difference compared with radiation protection for occupationally exposed persons. What remains
is "software", namely justification and optimization. Justification requires balancing the health benefit against the
potential harm of an exam, which must be responsibly weighed by the physician; the radiologists' associations therefore
have a duty to prepare practicable rules for justification. Optimization, in turn, needs a cooperative solution, namely
the establishment of reference doses for diagnostic examinations, to be checked by the technical service of the
manufacturers. Experts and authorities have long been aware of the high-dose dilemma in diagnostic imaging.
It is time to reflect on active solutions and to implement them in practice.
How do we measure dose and estimate risk?
Show abstract
Radiation exposure due to medical imaging is a topic of emerging importance. In Europe this topic has been addressed
for a long time; in other countries it is becoming increasingly important and has gained public interest in recent
years. This is mainly because the average dose per person in developed countries is increasing
rapidly as three-dimensional imaging becomes more widely available and useful for diagnosis. This paper
introduces the most common dose quantities used to characterize medical radiation exposure, discusses the usual ways
of determining such quantities, and offers some considerations on how these values are linked to radiation risk
estimation. For this last aspect the paper refers to the linear no-threshold theory for an imaging application.
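One of the dose quantities discussed above, effective dose, is defined as the tissue-weighted sum of organ equivalent doses, E = Σ_T w_T H_T. The sketch below uses only a small subset of the ICRP 103 tissue weighting factors, not the full table.

```python
# Effective dose as the tissue-weighted sum of organ equivalent doses,
# E = sum_T w_T * H_T (ICRP formalism). The weights below are a small
# illustrative subset of the ICRP 103 table, not the complete set.
TISSUE_WEIGHTS = {"lung": 0.12, "breast": 0.12, "liver": 0.04, "thyroid": 0.04}

def effective_dose(organ_doses_msv):
    """Weighted sum of organ equivalent doses (mSv) over the known tissues."""
    return sum(TISSUE_WEIGHTS[t] * h for t, h in organ_doses_msv.items())

print(effective_dose({"lung": 10.0, "breast": 8.0, "liver": 6.0}))
# 0.12*10 + 0.12*8 + 0.04*6 = 2.4 mSv
```

This weighted sum is what links the measurable organ doses to a single risk-related quantity, which is also where the linear no-threshold assumption enters the risk estimate.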
The accuracy of estimated organ doses from Monte Carlo CT simulations using cylindrical regions of interest within organs
Show abstract
The purpose of this study was to investigate the accuracy of Monte Carlo simulated organ doses using cylindrical ROIs
within the organs of patient models as an alternative to full organ segmentation. Full segmentation and
placement of circular ROIs at the approximate volumetric centroid of the liver, kidneys and spleen were performed for 20
patient models. For the liver and spleen, ROIs with a 2 cm diameter were placed on 5 consecutive slices; for the kidneys,
1 cm ROIs were used. Voxelized models were generated, both fixed and modulated tube current simulations were
performed, and organ doses for each method (full segmentation and ROIs) were recorded. For the fixed tube current
simulations, doses simulated using circular ROIs differed from those simulated using full segmentations: for the liver,
these differences ranged from -5.6% to 10.8% with a root mean square (RMS) difference of 5.9%. For the spleen, these
differences ranged from -9.5% to 5.7% with an RMS of 5.17%; for the kidneys, the differences ranged from -12.9% to
14.4% for the left kidney with an RMS of 6.8%, and from -12.3% to 12.8% for the right kidney with an RMS of 6.6%. Full
organ segmentations require expertise and are time consuming; using circular ROIs to approximate the full
segmentation instead would simplify the task and make dose calculations feasible for a larger set of models. It was shown
that dose calculations using ROIs are comparable to those using full segmentations: for the fixed current simulations the
maximum RMS value was 6.8% and for the tube current modulated (TCM) simulations it was 6.9%.
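The RMS percent difference reported above can be computed as follows; the organ dose values in this example are invented, not the study's data.

```python
import math

def rms_percent_difference(roi_doses, full_doses):
    """RMS of per-patient percent differences between ROI-based and
    fully segmented organ dose estimates (illustrative calculation)."""
    diffs = [100.0 * (r - f) / f for r, f in zip(roi_doses, full_doses)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical liver doses (mGy) for four patient models:
full = [12.0, 15.5, 9.8, 11.2]
roi  = [12.6, 14.9, 10.3, 11.0]
print(round(rms_percent_difference(roi, full), 2))
```

Each per-patient difference is expressed as a percentage of the fully segmented dose before squaring, matching how the ranges and RMS values in the abstract are reported.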
An algorithm for intelligent sorting of CT-related dose parameters
Show abstract
Imaging centers nationwide are seeking innovative means to record and monitor CT-related radiation dose in
light of multiple instances of patient over-exposure to medical radiation. As a solution, we have developed RADIANCE,
an automated pipeline for extraction, archival and reporting of CT-related dose parameters. Estimation
of whole-body effective dose from the CT dose-length product (DLP), an indirect estimate of radiation dose, requires
anatomy-specific conversion factors that cannot be applied to the total DLP but instead necessitate individual
anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple
separate examinations (e.g., a chest CT followed by an abdominopelvic CT). Furthermore, the individual reported
series DLPs may not be clearly or consistently labeled. For example, "Arterial" could refer to the arterial phase
of a triple-phase liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an
intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP
into its anatomic components. The algorithm uses information from the departmental PACS to determine how
many distinct CT examinations were performed concurrently. It then matches the number of distinct accession
numbers to the series that were acquired and anatomically matches individual series DLPs to their appropriate
CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic
sorting is not feasible. To ultimately improve radiology patient care, we must standardize series and exam names
so that exams can be unequivocally sorted by anatomy and whole-body effective dose correctly estimated.
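A keyword-based series-to-anatomy matcher of the kind described might look like the following sketch. The keyword table and series labels are hypothetical; RADIANCE's actual algorithm additionally uses PACS accession-number information to resolve ambiguous labels such as "Arterial".

```python
# Illustrative sketch of anatomy-based DLP sorting; the keyword table and
# series labels are invented, not RADIANCE's actual rules.
ANATOMY_KEYWORDS = {
    "chest": ("chest", "thorax", "lung"),
    "abdomen_pelvis": ("abd", "pelvis", "liver", "arterial", "venous"),
    "head": ("head", "brain"),
}

def sort_series_dlps(series):
    """Group per-series DLPs (mGy*cm) by anatomy inferred from their labels."""
    totals = {}
    for label, dlp in series:
        name = label.lower()
        region = next((r for r, kws in ANATOMY_KEYWORDS.items()
                       if any(k in name for k in kws)), "unknown")
        totals[region] = totals.get(region, 0.0) + dlp
    return totals

exam = [("Chest w/o contrast", 310.0),
        ("Arterial liver", 220.0),
        ("Venous liver", 215.0)]
totals_by_region = sort_series_dlps(exam)
print(totals_by_region)
```

The per-anatomy DLP subtotals produced this way are what the anatomy-specific conversion factors can then be applied to, in place of the unusable total DLP.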
Special Session II: Dose
Dose reduction using prior image constrained compressed sensing (DR-PICCS)
Show abstract
A technique for dose reduction using prior image constrained compressed sensing (DR-PICCS) in computed
tomography (CT) is proposed in this work. In DR-PICCS, a standard FBP reconstructed image is forward
projected to get a fully sampled projection data set. Meanwhile, it is low-pass filtered and used as the prior
image in the PICCS reconstruction framework. Next, the prior image and the forward projection data are
used together by the PICCS algorithm to obtain a low noise DR-PICCS reconstruction, which maintains the
spatial resolution of the original FBP images. The spatial resolution of DR-PICCS was studied using a Catphan
phantom by MTF measurement. The noise reduction factor, CT number change and noise texture were studied
using human subject data consisting of 20 CT colonography exams performed under an IRB-approved protocol.
In each human subject study, six ROIs (two soft tissue, two colonic air columns, and two subcutaneous fat)
were selected for the CT number and noise measurements study. Skewness and kurtosis were used as figures of
merit to indicate the noise texture. A Bland-Altman analysis was performed to study the accuracy of the CT
number. The results showed that, compared with FBP reconstructions, the MTF curve changes very little in
DR-PICCS reconstructions (spatial resolution loss is less than 0.1 lp/cm), while the noise standard deviation
can be reduced by a factor of 3 with DR-PICCS. The CT numbers in FBP and DR-PICCS reconstructions agree
well, indicating that DR-PICCS does not alter CT numbers. The noise texture indicators measured
from DR-PICCS images are in a similar range to those from FBP images.
A clinical comparison study of a novel statistical iterative and filtered backprojection reconstruction
Show abstract
The conventional filtered backprojection (FBP) algorithm employed in reduced-dose MDCT acquisitions yields low
reconstruction quality, e.g., high noise levels and numerous artifacts. Thus, there is a need for efficient reconstruction
methods that have dose-reduction potential while providing high reconstruction quality. In this work we present a
comparison study between a statistical iterative reconstruction algorithm called iDose and the FBP algorithm. iDose is a
hybrid iterative reconstruction algorithm that provides enhanced image quality while reducing the radiation dose
compared to conventional algorithms. We report on the performance of the two algorithms with respect to uniformity,
noise characteristics, spatial resolution, and patient studies. With respect to the uniformity of the Hounsfield Units (HU),
we found that the mean HU value remains stable when employing iDose. With iDose the noise is significantly reduced,
which is reflected by an improvement in the contrast-to-noise ratio and in the noise power spectrum compared to FBP.
Measurements of the modulation transfer function confirm that with iDose there is no decline in spatial resolution.
In clinical studies, slices reconstructed with the iDose algorithm showed significantly lower mean noise. Based on our
phantom and clinical results, we conclude that iDose is an important tool for reducing radiation dose in CT;
nevertheless, efforts to reduce radiation dose further should continue.
Poster Session: CT
Iterative CT reconstruction integrating SART and conjugate gradient
Show abstract
Iterative CT reconstruction methods have advantages over analytical reconstruction methods because
of their robustness to both noise and incomplete projection data, which gives them great potential
for dose reduction in real applications. The SART algorithm, one of the well-established
iterative reconstruction methods, has been examined extensively, and GPUs have been applied to
improve its efficiency. Although SART has been proven to converge globally, its convergence
is very slow, especially after the first several iterations; hundreds of iterations may be
needed for accurate reconstruction. This slow convergence requires heavy data transfer between
global memory and texture memory inside the GPU. Therefore, the preconditioned conjugate gradient (CG)
method, which converges much faster than SART, may be combined with SART for better performance.
Since CG is sensitive to initialization, the reconstruction result from SART after a few
iterations may be used as the initialization for CG. Preliminary experimental results on CPU show
that this framework converges much faster than either SART or CG alone, which demonstrates its potential
in real applications.
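The hybrid idea (a few SART sweeps supplying the initialization for CG) can be sketched on a toy dense system. This is only an illustrative sketch: the matrix, relaxation factor, and iteration counts stand in for a real projector and tuning, and the CG step here runs on the plain normal equations rather than the preconditioned variant the abstract refers to.

```python
import numpy as np

def sart(A, b, x, n_iters=5, relax=0.5):
    # SART: backproject row-normalized residuals with a column-normalized update
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    for _ in range(n_iters):
        r = (b - A @ x) / row_sums
        x = x + relax * (A.T @ r) / col_sums
    return x

def cg_normal(A, b, x, n_iters=50, tol=1e-10):
    # conjugate gradient on the normal equations A^T A x = A^T b
    AtA, Atb = A.T @ A, A.T @ b
    r = Atb - AtA @ x
    p = r.copy()
    for _ in range(n_iters):
        Ap = AtA @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

# a few SART sweeps supply a good starting point for CG
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (30, 10))   # toy dense, positive "system matrix"
x_true = rng.uniform(0.0, 1.0, 10)
b = A @ x_true
x0 = sart(A, b, np.zeros(10), n_iters=3)
x_rec = cg_normal(A, b, x0)
```

On this toy problem the CG refinement drives the residual far below what the initial SART sweeps achieve.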
Iterative helical cone-beam CT reconstruction using graphics hardware: a simulation study
Show abstract
This paper presents a simulation study on iterative reconstruction methods for helical cone-beam
CT using graphics hardware. While analytic methods have been proposed to achieve exact
reconstruction for helical cone-beam CT, these methods may not perform well under projection noise
and imaging-geometry variations, especially spiral variations. Iterative methods such as SART
are advantageous in this respect. Since SART is computationally intensive, graphics hardware (GPU)
may be utilized to handle the computations and increase its efficiency. GPU SART
reconstruction results are presented for spiral cone-beam CT and compared with
Katsevich reconstructions in the presence of projection noise and spiral variations. The comparison
shows that SART is accurate, efficient, and more robust to projection noise and spiral variations
than the Katsevich method.
Iterative volume of interest image reconstruction in helical cone beam X-Ray CT using a stored system matrix approach
Show abstract
We present an efficient scheme for the forward and backward projector implementation for helical cone-beam
x-ray CT reconstruction using a pre-calculated and stored system matrix approach. Because of the symmetry of
a helical source trajectory, it is sufficient to calculate and store the system matrix entries for one image slice only
and for all source positions illuminating it. The system matrix entries for other image slices are copies of those
stored values. In implementing an iterative image reconstruction method, the internal 3D image volume can be
based on a non-Cartesian grid so that no system matrix interpolation is needed for the repeated forward and
backward projection calculation. Using the proposed scheme, the memory requirement for the reconstruction of
a full field-of-view of clinical scanners is manageable on current computing platforms. The same storage principle
can be generalized and applied to iterative volume-of-interest image reconstruction for helical cone-beam CT.
We demonstrate by both computer simulations and clinical patient data the speed and image quality of VOI
image reconstruction using the proposed stored system matrix approach. We believe the proposed method may
contribute to bringing iterative reconstruction into clinical practice.
Accelerate multi-dimensional CT scanner simulation with GPU
Yingjie Han,
Jiangtao Gao,
Osamu Miyazaki
Show abstract
CT scanner simulation virtually reproduces the projection process of CT without actually scanning. It is very useful for
designing, evaluating, and developing CT systems, which continue to evolve in several directions. However, simulating
multiple detector rows, multiple x-ray energies, and other dimensions simultaneously becomes time-consuming because
of the large amount of computation involved. In this paper, we present a solution to this problem using the CUDA
architecture on GPU. Our solution contains three steps. First, the CPU prepares the data that will be used by the GPU.
Then, a GPU kernel is launched to calculate the projections of all rays through the phantom data in parallel. To obtain
maximum memory bandwidth, we optimized the data storage by padding 2D arrays to ensure coalesced global memory
access. Finally, post-processing is done on the CPU. Our experimental environment includes a dual-core CPU and an
NVIDIA Quadro FX 1800 GPU with CUDA compute capability 1.1. We used three kinds of phantom data to test the
performance. We found that our solution achieves the same image quality in double precision while running more than
10 times faster than the CPU-only implementation.
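The 2D-array padding mentioned above rounds each row up to an alignment boundary so that every row starts on an aligned address, which is what enables coalesced access. A minimal arithmetic sketch, with an illustrative 128-byte alignment (the actual boundary depends on the device):

```python
def padded_pitch(row_bytes, align=128):
    # round the row size up to the next multiple of the alignment boundary
    return ((row_bytes + align - 1) // align) * align

def index(row, col, pitch, elem_size=4):
    # byte offset of element (row, col) in the pitched (padded) layout
    return row * pitch + col * elem_size
```

For example, a 300-byte row is stored with a 384-byte pitch, trading a little memory for aligned row starts.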
OpenCL: a viable solution for high-performance medical image reconstruction?
Show abstract
Reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. For
interventional image reconstruction, hardware optimization is mandatory. Manufacturers of medical equipment
use a variety of high-performance computing (HPC) platforms, like FPGAs, graphics cards, or multi-core CPUs.
A problem of this diversity is that many different frameworks and (vendor-specific) programming languages are
used. Furthermore, it is costly to switch the platform, since the code has to be re-written, verified, and optimized.
OpenCL, a relatively new industry standard for HPC, promises to enable portable code. Its key idea is to
abstract hardware in a way that allows an efficient mapping onto real CPUs, GPUs, and other hardware. The
code is compiled for the actual target by the device driver.
In this work we investigated the suitability of OpenCL as a tool to write portable code that runs efficiently
across different hardware. The problems chosen are back- and forward-projection, the most time-consuming
parts of (iterative) reconstruction. We present results on three platforms, a multi-core CPU system and two
GPUs, and compare them against manually optimized native implementations.
We found that OpenCL allows a common framework in one language to be shared across platforms. However,
given the differences in the underlying architectures, a hardware-oblivious implementation cannot be expected
to deliver maximal performance. By optimizing the OpenCL code for the specific hardware, we reached over 90%
of native performance for both problems, back- and forward-projection, on all platforms.
Improved total variation regularized image reconstruction (iTV) applied to clinical CT data
Show abstract
Compressed sensing appears very promising for image reconstruction in computed tomography. In recent
years it has been shown that these algorithms can handle incomplete data sets quite well. As the cost function,
these algorithms use the l1-norm of the image after it has been transformed by a sparsifying transformation,
which leads to an inequality-constrained convex optimization problem.
Due to the large size of the optimization problem, several heuristic optimization algorithms have been proposed
in recent years. The most popular approach optimizes the raw-data and sparsity cost functions separately in an
alternating manner.
In this paper we follow this strategy and present a new method to adapt these optimization steps.
Compared to existing methods that perform similarly, the proposed method needs no a priori knowledge about
the raw-data consistency. It ensures that the algorithm converges to the best possible value of the raw-data cost
function while holding the sparsity constraint at a low value. This is achieved by transferring both optimization
procedures into the raw-data domain, where they are adapted to each other.
To evaluate the algorithm, we process measured clinical datasets. To cover a wide field of possible applications, we
focus on the problems of angular undersampling, data loss due to metal implants, limited-view-angle tomography,
and interior tomography. In all cases the presented method reaches convergence within fewer than 25 iteration
steps while using a constant set of algorithm control parameters. The image artifacts caused by incomplete
raw data are mostly removed without introducing new effects such as staircasing. All scenarios are compared to an
existing implementation of the ASD-POCS algorithm, which realizes the step-size adaptation in a different way.
Additional prior information, as proposed by the PICCS algorithm, can easily be incorporated into the optimization
process.
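The alternating strategy (a raw-data fidelity step followed by a sparsity step) can be sketched on a toy problem. This is only an illustrative sketch: the random matrix stands in for a real projector, the step sizes and iteration counts are toy choices, and the step-size adaptation that is the paper's actual contribution is not reproduced here.

```python
import numpy as np

def tv_descent(u, n_steps=2, step=0.005, eps=1e-3):
    # a few explicit descent steps on a smoothed isotropic TV (sparsity step)
    for _ in range(n_steps):
        ux = np.roll(u, -1, axis=0) - u
        uy = np.roll(u, -1, axis=1) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        div = px - np.roll(px, 1, axis=0) + py - np.roll(py, 1, axis=1)
        u = u + step * div    # -grad(TV) is the divergence of the normalized gradient
    return u

def alternating_recon(A, b, shape, n_outer=20):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the fidelity gradient
    for _ in range(n_outer):
        x = x - (1.0 / L) * (A.T @ (A @ x - b))        # raw-data fidelity step
        x = tv_descent(x.reshape(shape)).ravel()       # sparsity step
    return x.reshape(shape)

# toy experiment: piecewise-constant 8x8 image, 40 random ray sums
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (40, 64))
truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0
b = A @ truth.ravel()
recon = alternating_recon(A, b, (8, 8))
```

Adapting the relative size of these two steps to each other is exactly where the method described above differs from this fixed-step sketch.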
Ring artifact corrections in flat-panel detector based cone beam CT
Show abstract
The use of flat-panel detectors (FPDs) is becoming increasingly popular in cone-beam volume and multi-slice
CT imaging. However, due to imperfections in semiconductor array processing, the diagnostic quality of FPD-based CT
images in both types of system is degraded by artifacts known as ring and radiant artifacts. Several
techniques have already been published for eliminating stripe artifacts from the projection data of multi-slice
CT systems, i.e., from the sinogram image, in order to suppress the ring and radiant artifacts in
the 2-D reconstructed CT images. By contrast, few articles to date have addressed removing these
artifacts from cone-beam CT images. In this paper, an effective approach is presented to eliminate the artifacts
from cone-beam projection data using sinogram-based stripe artifact removal methods. The improvement in
diagnostic quality is achieved by applying them to both horizontal and vertical sinograms constituted
sequentially from the stacked cone-beam projections. Finally, real CT images are used to demonstrate
the effectiveness of the proposed technique in eliminating ring and radiant artifacts from cone-beam volume
CT images, and a comparative study with conventional sinogram-based approaches is presented.
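The basic sinogram-based idea, that a defective detector element shows up as a constant offset down one sinogram column, can be sketched as follows. This is an illustrative minimal version (column means smoothed by a median filter), not the specific stripe-removal methods the paper applies.

```python
import numpy as np

def remove_stripes(sino, kernel=5):
    # views along axis 0, detector columns along axis 1; a defective
    # detector element appears as a constant offset down one column
    col_mean = sino.mean(axis=0)
    pad = kernel // 2
    padded = np.pad(col_mean, pad, mode="edge")
    smooth = np.array([np.median(padded[i:i + kernel])
                       for i in range(col_mean.size)])
    stripe = col_mean - smooth          # narrow deviations = stripe offsets
    return sino - stripe[None, :]

# toy flat-field sinogram with two defective detector columns
clean = np.ones((30, 40))
dirty = clean.copy()
dirty[:, 10] += 0.5
dirty[:, 25] -= 0.3
fixed = remove_stripes(dirty)
```

After backprojection, each removed stripe would otherwise have produced a ring centered on the rotation axis.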
Backprojection-filtration image reconstruction from partial cone-beam data for scatter correction
Show abstract
In this work, we propose a novel scatter correction method for circular cone-beam computed tomography (CBCT)
using a hardware-based approach that completes both data acquisition and scatter correction in a single rotation. We
utilized the (quasi-)redundancy in circular cone-beam data and applied the chord-based backprojection-filtration (BPF)
algorithm to avoid the problem of filtering discontinuous data that would occur if conventional filtered-backprojection
(FBP) algorithms were used. A single scan was performed on a cylindrical uniform phantom with beam-block strips
between the source and the phantom, and the scatter was estimated for each projection from the data under the blocked
regions. The beam-block strips (BBSs) were aligned parallel to the rotation axis, and the spacing between the strips was
chosen so that the data within the spaces constitute at least slightly more than the minimum data required for image
reconstruction. The results showed that the image error due to scatter (about 30% of the attenuation coefficient value)
was successfully corrected by the proposed algorithm.
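The scatter-estimation step can be sketched as follows: under the beam-block strips only scatter reaches the detector, so those samples are interpolated across each detector row and subtracted. This is an illustrative 1-D-interpolation sketch with toy data, not the authors' estimation scheme.

```python
import numpy as np

def scatter_correct(proj, blocked_cols):
    # under the strips only scatter reaches the detector; interpolate those
    # samples across each detector row and subtract the estimate
    cols = np.arange(proj.shape[1])
    corrected = np.empty_like(proj, dtype=float)
    for r in range(proj.shape[0]):
        scatter = np.interp(cols, blocked_cols, proj[r, blocked_cols])
        corrected[r] = proj[r] - scatter
    return corrected

# toy projection: primary signal plus a smooth scatter field
rows, cols = 8, 32
primary = np.zeros((rows, cols)); primary[:, 4:28] = 2.0
blocked = np.array([0, 8, 16, 24, 31])
primary[:, blocked] = 0.0                         # strips block the primary beam
scatter = np.tile(np.linspace(0.2, 0.6, cols), (rows, 1))
corrected = scatter_correct(primary + scatter, blocked)
```

Because scatter varies slowly across the detector, the sparse under-strip samples are sufficient to recover it.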
Fast 4D cone-beam reconstruction using the McKinnon-Bates algorithm with truncation correction and nonlinear filtering
Show abstract
A challenge in using on-board cone-beam computed tomography (CBCT) to image lung tumor motion prior to radiation
therapy treatment is acquiring and reconstructing high-quality 4D images in a sufficiently short time for practical use.
For the one-minute rotation times typical of linacs, severe view-aliasing artifacts, including streaks, are created if a
conventional phase-correlated FDK reconstruction is performed. The McKinnon-Bates (MKB) algorithm provides an
efficient means of reducing streaks from static tissue but can suffer from low SNR and other artifacts due to data
truncation and noise. We have added truncation correction and bilateral nonlinear filtering to the MKB algorithm to
reduce streaking and improve image quality. The modified MKB algorithm was implemented on a graphics processing
unit (GPU) to maximize efficiency. Results show that a nearly 4x improvement in SNR is obtained compared to the
conventional FDK phase-correlated reconstruction and that high-quality 4D images with 0.4-second temporal resolution
and 1 mm³ isotropic spatial resolution can be reconstructed in less than 20 seconds after data acquisition completes.
Contrast adaptive total p-norm variation minimization approach to CT reconstruction for artifact reduction in reduced-view brain perfusion CT
Show abstract
Perfusion CT (PCT) examinations are increasingly used for the diagnosis of acute brain diseases such as
hemorrhage and infarction, because the functional map images they produce, such as regional cerebral blood flow (rCBF),
regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency
work-up of patient care. However, a typical PCT scans the same slices several tens of times after injection of contrast
agent, which leads to a much increased radiation dose and inevitably raises concern about radiation-induced cancer
risk. Reducing the number of projection views, in combination with a TV-minimization reconstruction technique, is
regarded as an option for radiation reduction. However, reconstruction artifacts due to an insufficient number of X-ray
projections become problematic, especially when high-contrast enhancement signals are present or patient motion
occurs.
In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that can reduce
reconstruction artifacts effectively by using different p-norms in high contrast and low contrast objects. In the proposed
method, high contrast components are first reconstructed using thresholded projection data and low p-norm total
variation to reflect sparseness in both projection and reconstruction spaces. Next, projection data are modified to contain
only low contrast objects by creating projection data of reconstructed high contrast components and subtracting them
from the original projection data. The low-contrast projection data are then reconstructed using a relatively high p-norm
TV minimization technique and combined with the reconstructed high-contrast component images to produce the final
reconstructed images.
The proposed algorithm was applied to a numerical phantom and a clinical brain PCT data set, and the resulting
images were compared with those obtained using filtered backprojection (FBP) and a conventional TV reconstruction
algorithm. Our results show the potential of the proposed algorithm for image quality improvement, which in turn may
lead to dose reduction.
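The role of the p-norm can be made concrete with a toy penalty. In the sketch below (an illustration, not the paper's regularizer), a sharp edge and a smeared ramp with the same total contrast cost the same under p = 1, while p < 1 penalizes the smeared version more, which is why a low p favors sparse, high-contrast structures.

```python
import numpy as np

def tpv(u, p, eps=1e-12):
    # total p-norm variation: sum over pixels of |grad u|^p (periodic diffs)
    ux = np.roll(u, -1, axis=0) - u
    uy = np.roll(u, -1, axis=1) - u
    mag = np.sqrt(ux**2 + uy**2 + eps)
    return (mag**p).sum()

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0        # one crisp edge
ramp = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))    # same contrast, smeared out
```

Under p = 0.5 the ramp's many small gradients are relatively expensive, so minimization would push the solution toward the sharp edge.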
Expectation maximization and total variation-based model for computed tomography reconstruction from undersampled data
Show abstract
Computerized tomography (CT) plays an important role in medical imaging, especially for diagnosis and therapy.
However, higher radiation doses from CT result in increased radiation exposure in the population; therefore,
reducing radiation from CT is an essential issue. Expectation maximization (EM) is an iterative
method used for CT image reconstruction that maximizes the likelihood function under a Poisson noise assumption.
Total variation regularization is a technique used frequently in image restoration to preserve edges, given
the assumption that most images are piecewise constant. Here, we propose a method combining expectation
maximization and total variation regularization, called EM+TV. This method can reconstruct a better image
using fewer views in the computed tomography setting, thus reducing the overall dose of radiation. The numerical
results in two and three dimensions show the efficiency of the proposed EM+TV method by comparison with
those obtained by filtered back projection (FBP) or by EM only.
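The EM+TV combination can be sketched on a toy 1-D problem: a multiplicative MLEM update (which preserves positivity under the Poisson model) alternated with a small TV descent step. The matrix, signal, step sizes, and iteration counts below are illustrative stand-ins, not the authors' formulation.

```python
import numpy as np

def em_step(A, b, x):
    # MLEM multiplicative update (keeps x positive)
    Ax = np.maximum(A @ x, 1e-12)
    return x * (A.T @ (b / Ax)) / A.sum(axis=0)

def tv_grad_1d(u, eps=1e-6):
    # gradient of a smoothed 1-D total variation
    d = np.diff(u)
    s = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(u)
    g[:-1] -= s
    g[1:] += s
    return g

def em_tv(A, b, n_iters=50, tv_step=0.002):
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        x = em_step(A, b, x)                                  # likelihood step
        x = np.maximum(x - tv_step * tv_grad_1d(x), 1e-12)    # TV step
    return x

# toy experiment: piecewise-constant signal, dense positive "projector"
rng = np.random.default_rng(4)
A = rng.uniform(0.1, 1.0, (40, 16))
x_true = np.concatenate([np.full(8, 1.0), np.full(8, 3.0)])
b = A @ x_true
x_rec = em_tv(A, b)
```

The TV step nudges the EM iterate toward piecewise-constant solutions, which is the stated prior assumption.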
A comparison of four algorithms for metal artifact reduction in CT imaging
Show abstract
Streak artifacts caused by the presence of metal have been a significant problem in CT imaging since its inception in
1972. With the fast-evolving medical device industry, the number of metal objects implanted in patients is increasing
annually. This correlates directly with an increased likelihood of encountering metal in a patient CT scan,
necessitating an effective and reproducible metal artifact reduction (MAR) algorithm. Previous comparisons
between MAR algorithms have typically evaluated only a small number of patients and a limited range of metal
implants. Although the results of many methods are promising, the reproducibility of these results is key to
providing more tangible evidence of their effectiveness. This study presents a direct comparison between the
performances, assessed by board-certified radiologists, of four MAR algorithms (three non-iterative and one
iterative), all applied and compared to the original clinical images. The evaluation indicated a negative mean
score in almost all cases for two of the non-iterative methods, signifying an overall decrease in the diagnostic quality of
the images, generally due to perceived loss of detail. One non-iterative algorithm showed a slight improvement. The
iterative algorithm was superior in all studies, producing a considerable improvement in all cases.
A study on regularization parameter choice for interior tomography based on truncated Hilbert transform
Show abstract
For interior tomography based on the truncated Hilbert transform (THT), the recently proposed truncated singular value
decomposition (TSVD) reconstruction method uses a directly specified regularization parameter. In this paper, a method
of choosing the regularization parameter based on the L-curve is presented, to obtain a theoretically optimal value.
Furthermore, we develop a Tikhonov regularization method for comparison with TSVD. Simulation results
indicate that both regularization methods, with optimal regularization parameters, yield good image quality
for both noise-free and noisy projections.
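The L-curve idea can be sketched generically: sweep candidate regularization parameters, plot log residual norm against log solution norm, and pick the point of maximum discrete curvature (the "corner"). The sketch below uses Tikhonov on a small ill-conditioned toy system; it illustrates the selection principle only, not the paper's THT setting.

```python
import numpy as np

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def l_curve_lambda(A, b, lams):
    # L-curve: (log residual norm, log solution norm) over candidate lambdas;
    # the corner is taken as the point of maximum discrete curvature
    pts = np.array([[np.log(np.linalg.norm(A @ x - b) + 1e-15),
                     np.log(np.linalg.norm(x) + 1e-15)]
                    for x in (tikhonov(A, b, l) for l in lams)])
    d1 = np.gradient(pts, axis=0)
    d2 = np.gradient(d1, axis=0)
    num = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-15
    return lams[int(np.argmax(num / den))]

# demo: ill-conditioned (Hilbert) system with noisy data
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
rng = np.random.default_rng(0)
b = A @ np.ones(n) + 1e-3 * rng.standard_normal(n)
lams = np.logspace(-8, 0, 25)
lam_star = l_curve_lambda(A, b, lams)
```

Larger lambdas shrink the solution norm while growing the residual; the corner balances the two without requiring knowledge of the noise level.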
Interior tomography from low-count local projections and associated Hilbert transform data
Show abstract
This paper presents a statistical interior tomography approach that incorporates an optimization of the truncated Hilbert
transform (THT) data. Building on compressed sensing (CS) based interior tomography, a statistical
iterative reconstruction (SIR) regularized by total variation (TV) has been proposed to reconstruct an interior region
of interest (ROI) with less noise from low-count local projections. After each update of the CS-based SIR, a THT
constraint is incorporated via an optimization strategy. Since the noisy differentiated backprojection (DBP) and its
corresponding noise variance on each chord can be calculated from the Poisson projection data, an objective function is
constructed to find an optimal THT of the ROI from the noisy DBP and the current reconstructed image. The
inversion of this optimized THT on each chord is then performed, and the resulting ROI serves as the initial image for
the next update of the CS-based SIR. In addition, a parameter in the THT optimization step can be used to determine the
stopping rule of the iteration heuristically. Numerical simulations are performed to evaluate the proposed approach. Our
results indicate that this approach can reconstruct an ROI with high accuracy while reducing noise effectively.
Compressed sensing algorithms for fan-beam CT image reconstruction
Show abstract
Compressed sensing can recover a signal that is sparse in some domain from a small number of
samples. For CT imaging, this offers the potential to obtain good reconstructions from a smaller
number of projections or views, thereby reducing the radiation delivered to the patient. In this
work, we applied compressed sensing to fan-beam CT image reconstruction, a special
case of an important 3D CT problem (cone-beam CT). We compared the performance of
two compressed sensing algorithms, denoted LP and QP, in simulation. Our results
indicate that LP generally provides smaller reconstruction error and converges faster, and is
therefore preferable.
Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm
Show abstract
Dual-energy cone-beam CT is an important imaging modality in diagnostic applications and may also find use
in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of
a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type
of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An
empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data acquired at
high and low tube voltages are converted into a set of basis functions that can be linearly combined to produce
material-specific data using coefficients obtained through the calibration process. From far fewer views than are
conventionally used, material-specific images are reconstructed by means of the total-variation minimization algorithm.
An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system.
We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp
each. Aluminum-only and acrylic-only images were successfully decomposed. A low-dose dual-energy CBCT can thus
be realized via the proposed method by greatly reducing the number of projections.
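The empirical calibration step can be sketched generically: fit the known material thicknesses of a calibration phantom as polynomial combinations of the low- and high-kVp projections, then apply the fitted coefficients to new data. Everything below is a toy stand-in; in particular, the linear forward model replaces the real polychromatic physics, and the basis choice is illustrative.

```python
import numpy as np

def basis(pl, ph):
    # polynomial basis of the low/high-kVp log projections
    return np.stack([np.ones_like(pl), pl, ph,
                     pl * pl, ph * ph, pl * ph], axis=-1)

# --- calibration: "step wedges" of known thicknesses t1, t2 ---
rng = np.random.default_rng(0)
t1 = rng.uniform(0.0, 2.0, 200)     # e.g. aluminum path lengths
t2 = rng.uniform(0.0, 2.0, 200)     # e.g. acrylic path lengths
# toy linear forward model standing in for the real polychromatic physics
pl = 0.50 * t1 + 0.90 * t2
ph = 0.30 * t1 + 0.50 * t2
B = basis(pl, ph)
c1, *_ = np.linalg.lstsq(B, t1, rcond=None)
c2, *_ = np.linalg.lstsq(B, t2, rcond=None)

def decompose(pl_new, ph_new):
    # material-specific line integrals from the calibrated coefficients
    Bn = basis(pl_new, ph_new)
    return Bn @ c1, Bn @ c2
```

The decomposed material-specific sinograms would then feed the few-view TV-minimization reconstruction described above.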
Refinement of motion correction strategies for lower-cost CT for under-resourced regions of the world
Show abstract
This paper describes a recently developed post-acquisition motion correction strategy for application to lower-cost
computed tomography (LCCT) for under-resourced regions of the world. Increased awareness regarding global health
and its challenges has encouraged the development of more affordable healthcare options for underserved people
worldwide. In regions such as sub-Saharan Africa, intermediate level medical facilities may serve millions with
inadequate or antiquated equipment due to financial limitations. In response, the authors have proposed a LCCT design
which utilizes a standard chest x-ray examination room with a digital flat panel detector (FPD). The patient rotates on a
motorized stage between the fixed cone-beam source and FPD, and images are reconstructed using a Feldkamp
algorithm for cone-beam scanning.
One of the most important proofs of concept in determining the feasibility of this system is the successful correction of
undesirable motion. A 3D motion correction algorithm was developed to correct for potential patient motion,
stage instabilities, and detector misalignments, all of which can lead to motion artifacts in reconstructed images. Motion
is monitored via the radiographic positions of fiducial markers to correct for rigid-body motion in three dimensions.
Based on simulation studies, projection images corrupted by motion were re-registered with average errors of 0.080 mm,
0.32 mm and 0.050 mm in the horizontal, vertical and depth dimensions, respectively. The overall absence of motion
artifacts in motion-corrected reconstructions indicates that reasonable amounts of motion may be corrected using this
novel technique without significant loss of image quality.
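Estimating a rigid-body pose from tracked fiducial positions is classically done with a least-squares (Kabsch/Procrustes) fit. The sketch below shows that generic fit on synthetic marker positions; it is an illustration of the principle, not the authors' correction algorithm.

```python
import numpy as np

def rigid_fit(P, Q):
    # least-squares rotation R and translation t with Q ≈ P @ R.T + t (Kabsch)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# demo: synthetic marker positions before/after a known rigid motion
ang = 0.4
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(2).uniform(-1.0, 1.0, (6, 3))
Q = P @ Rz.T + t_true
R_est, t_est = rigid_fit(P, Q)
```

With noisy marker measurements the same fit returns the least-squares pose, which can then be used to re-register each projection.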
Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing
Show abstract
Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g.,
lower radiation dose. However, their adoption in practical CT scanners requires extra computation power, which is
traditionally provided by incorporating additional computing hardware (e.g., CPU clusters, GPUs, or FPGAs) into a
scanner. An alternative solution is to access the required computation power over the internet from a cloud computing
service, which can be orders of magnitude more cost-effective: users pay only a small pay-as-you-go fee for the
computation resources used (CPU time, storage, etc.) and completely avoid purchase, maintenance, and upgrade costs.
In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image
reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors,
using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large
speedup is possible at very low cost, but communication overheads inside MapReduce can limit the maximum speedup,
and a better MapReduce implementation may become necessary in the future. All the experiments for this paper,
including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.
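Why the backprojector maps onto MapReduce can be sketched in a few lines: partition the views (map phase), backproject each chunk independently, and sum the partial results (reduce phase). The single-process sketch below only illustrates the decomposition; a real MapReduce job would distribute the chunks across cloud workers, and the toy matrix stands in for a real projector.

```python
import numpy as np
from functools import reduce

def backproject_chunk(chunk):
    # "map": partial backprojection for one chunk of views
    A_rows, p_rows = chunk
    return A_rows.T @ p_rows

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, (60, 25))    # toy projector (views x pixels)
p = rng.uniform(0.0, 1.0, 60)          # measured projections
n_workers = 4
chunks = [(A[i::n_workers], p[i::n_workers]) for i in range(n_workers)]
partials = map(backproject_chunk, chunks)     # map phase
bp = reduce(np.add, partials)                 # reduce phase: sum partial images
```

Because the reduce step is a simple elementwise sum, the result is independent of how the views are partitioned, which is what makes the algorithm cloud-friendly; shipping the partial images between nodes is the communication overhead the abstract mentions.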
Quantitative evaluation method of noise texture for iteratively reconstructed x-ray CT images
Show abstract
Recently, iterative image reconstruction algorithms have been extensively studied in x-ray CT in order to produce
images with lower noise variance and high spatial resolution. However, the images thus reconstructed often
have unnatural image noise textures, the potential impact of which on diagnostic accuracy is still unknown.
This is particularly pronounced in total-variation-minimization-based image reconstruction, where the noise
background often manifests itself as patchy artifacts. In this paper, a quantitative noise texture evaluation
metric is introduced to evaluate the deviation of the noise histogram from that of images reconstructed using
filtered backprojection. The proposed texture similarity metric is tested using a TV-based compressive sampling
algorithm (CSTV). It was demonstrated that the metric is sensitive to changes in the noise histogram independent
of changes in noise level. The results demonstrate the existence of a tradeoff between the texture similarity metric
and the noise level for the CSTV algorithm, which suggests a potentially optimal amount of regularization. The
same noise texture quantification method can also be utilized to evaluate the performance of other iterative
image reconstruction algorithms.
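A level-independent texture comparison can be sketched with histogram shape statistics: skewness and excess kurtosis are unchanged by rescaling the noise, so they separate "how much noise" from "what the noise looks like". This toy sketch illustrates that idea only; the paper's metric compares full noise histograms against an FBP reference.

```python
import numpy as np

def skew_kurt(x):
    # skewness and excess kurtosis of a noise sample
    x = np.asarray(x, dtype=float) - np.mean(x)
    s = x.std()
    return (x**3).mean() / s**3, (x**4).mean() / s**4 - 3.0

def texture_distance(noise_a, noise_b):
    # compares the *shape* of the noise histograms, so it is
    # insensitive to the noise level itself
    sa, ka = skew_kurt(noise_a)
    sb, kb = skew_kurt(noise_b)
    return float(np.hypot(sa - sb, ka - kb))

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 100_000)       # Gaussian, FBP-like texture
scaled = rng.normal(0.0, 3.0, 100_000)    # same texture, higher noise level
patchy = rng.exponential(1.0, 100_000)    # skewed histogram ("patchy")
```

The rescaled Gaussian scores as the same texture despite its 3x noise level, while the skewed sample does not.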
An efficient scatter correction algorithm based on pre-reconstructed images of contrast enhancement and sparse-viewed Monte Carlo simulation
Show abstract
X-ray scatter is a major degrading factor in x-ray cone-beam (CB) CT imaging. In scatter corrections based on Monte
Carlo (MC) simulation, the initial MC model is inaccurate because it is built from scatter-polluted volume images, so the
correction must be performed iteratively, which is one reason MC-based correction is computationally
demanding. In this paper, we found that the ratio of IS (scatter) to IP+S (primary plus scatter) can be
approximated by weighted data in Radon space. On this basis, we developed a strategy called Projection Contrast
Enhancement based Pre-Correction (PCEPC). PCEPC is efficient, achieving a scatter pre-correction with an enhanced
image quality (Q) of ~0.7 (Q=1 for scatter-free images; Q=0 for scatter-contaminated images without correction). By
using the results of PCEPC, more accurate MC modeling of the scanned object is feasible with fewer iterations or even
non-iteratively, in what we call the PCEPC-MC method. An exemplary non-iterative PCEPC-MC is implemented, in
which the scatter fluence of eighteen views equally distributed over 2π is simulated with the MC toolkit EGSnrc,
enhancing Q further to ~0.8.
Task-based comparative study of iterative image reconstruction methods for limited-angle x-ray tomography
Show abstract
For tomography that has available only projection views from a limited angular span, such as
is the case in an x-ray tomosynthesis system, the image reconstruction problem is ill-posed.
Reconstruction methods play an important role in optimizing the image quality for human
interpretation. In this work we compare three popular iterative image reconstruction methods that have
been applied to digital tomosynthesis systems: the simultaneous algebraic reconstruction technique
(SART), the maximum-likelihood (ML) method, and the total-variation regularized least-squares (TVLS)
method. The quality of the images reconstructed with these three methods is assessed through
task-based performance. Two tasks are considered in this work: lesion detection and shape
discrimination. The area under the ROC curve (AUC) is used as the figure of merit. Our simulation results
indicate that TVLS and SART perform very similarly and better than ML in terms of lesion
detectability, while ML performs better than the other two in terms of shape discrimination.
Limited data tomographic image reconstruction via dual formulation of total variation minimization
X-ray mammography is the primary imaging modality for breast cancer screening. For dense breasts,
however, the mammogram is often difficult to read because of the tissue overlap caused by the superposition
of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a
limited angular range, may be an alternative modality for breast imaging, since it allows visualization of the
cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise
corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm
is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we
derived a gradient-type algorithm based on statistical model of X-ray tomography. The objective function is
comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient
of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After
a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be
implemented without sophisticated operations such as matrix inverse, it provides an efficient way to include the
TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low
dose projections over a limited angular range. Initial tests with an experimental DBT system confirmed our
findings.
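The dual formulation referenced above can be illustrated in the simpler denoising setting. The sketch below is a generic Chambolle-style dual projection iteration for TV denoising, not the authors' statistical reconstruction algorithm; the regularization weight `lam`, step `tau`, and iteration count are illustrative.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero rows/columns at the boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = px.copy()
    dx[1:, :] -= px[:-1, :]
    dy = py.copy()
    dy[:, 1:] -= py[:, :-1]
    return dx + dy

def tv_denoise(f, lam=20.0, n_iter=400, tau=0.125):
    """Chambolle-style dual iteration for min_u ||u - f||^2 / (2*lam) + TV(u)."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)   # gradient w.r.t. the dual field
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)   # reprojection onto |p| <= 1
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)
```

As in the abstract, each update uses only simple operations on auxiliary (dual) variables — gradients and divergences — with no matrix inverse; the paper's algorithm couples the same machinery to a statistical data-fidelity term rather than the quadratic one used here.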
Cone-beam CT data-driven pose correction for analytic reconstruction methods
Precise knowledge of the acquisition geometry of a cone-beam CT system is required for high-quality
image reconstruction. In some applications, however, the acquisition geometry is either not well
characterized or not repeatable, for example, in the case of gantry vibration and patient motion.
An offline correction using a calibration pattern and an online calibration using an external tracking
system may be used to measure and correct CT geometry parameters during reconstruction.
Both approaches have limitations, however. A new method is proposed in this paper to estimate
pose parameters using only the acquired cone-beam projection data. During the pose estimation
process, each 2D projection is registered to the 3D volume reconstructed from the current,
inaccurate pose estimate. Pose parameters are then corrected incrementally using the registration
results. Applying this 2D-3D registration method to the FDK reconstruction method, we are able
to estimate rotational parameters to within an average total angular deviation of 0.5 degrees, and
center-of-rotation to an average of 0.02% of the source-to-detector distance (SID) in the detector
plane. The image quality of CT reconstructions is comparable to those using exact geometry.
A simple image based method for obtaining electron density and atomic number in dual energy CT
The extraction of electron density and atomic number information in computed tomography is possible when
image values can be sampled using two different effective energies. The foundation for this extraction lies in
the ability to express the linear attenuation coefficient using two basis functions that are dependent on electron
density and atomic number over the diagnostic energy range used in CT. Material basis functions separate images
into clinically familiar quantities such as 'bone' images and 'soft tissue' images. Physically, all basis function
choices represent the expression of the linear attenuation coefficient in terms of a photoelectric and a Compton
scattering term. The purpose of this work is to develop a simple dual energy decomposition method that requires
no a priori knowledge about the energy characteristics of the imaging system. It is shown that the weighted sum
of two basis images yields an electron density image where the weights for each basis image are the electron density
of that basis image's basis material. Using the electron density image, effective atomic number information can
also be obtained. These methods are performed solely in the image domain and require no spectrum or detector
energy response information as required by some other dual energy decomposition methods.
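The decomposition step described above can be stated in a few lines. In this sketch the basis-material electron densities are illustrative stand-in values (in units of 10^23 electrons/cm^3), not numbers from the paper.

```python
# Hedged sketch: the electron-density image is the weighted sum of the two
# basis images, with weights equal to the basis materials' electron densities.
RHO_E_WATER = 3.34  # water-like basis material (illustrative value)
RHO_E_BONE = 5.19   # bone-like basis material (illustrative value)

def electron_density_image(basis_water, basis_bone):
    """Weighted sum of two basis images (nested lists of equal shape)."""
    return [
        [RHO_E_WATER * w + RHO_E_BONE * b for w, b in zip(rw, rb)]
        for rw, rb in zip(basis_water, basis_bone)
    ]

# A voxel that is pure water basis material recovers water's electron density.
img = electron_density_image([[1.0, 0.5]], [[0.0, 0.5]])
```

Pixel-wise, the electron density is just the basis-weighted sum; per the abstract, effective atomic number information can then be derived from the resulting electron density image.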
A scatter artifact reduction technique in dual-energy computed tomography systems
Spectral CT research and development has recently become a hot topic in industry and in academia. Different
approaches have been developed for spectral CT imaging. As a result of the capability to generate monochromatic-energy
images, beam hardening artifacts have been largely reduced. However, X-ray scatter is still present, and the
associated artifacts can still appear in the basis material images. This paper proposes an approach for scatter
artifact reduction for dual-energy CT. Phantom and clinical data have been evaluated to demonstrate the
effectiveness of this approach.
Investigation of a method to estimate the MTF and NPS of CT towards creating an international standard
The current IEC standard method for characterizing noise in CT scanners is based on the pixel standard deviation of
the CT image of a water-equivalent uniform phantom. However, the standard deviation does not account for
correlations in the noise, potentially generating misleading results about image quality. In this paper we
investigate a method for estimating the Fourier-based noise power spectrum (NPS) for the characterization of noise
in CT scanners with linear, non-adaptive reconstruction algorithms. The IEC currently evaluates the deterministic
properties of CT scanners with the Fourier-based modulation transfer function (MTF). By accounting for the spatial
correlations in both the stochastic and deterministic descriptions of an imaging system, the system signal-to-noise
ratio (SNR) can be determined more accurately. We investigate a method for estimating the MTF and the NPS of a CT
scanner in the axial plane, and present examples of the Fourier SNR calculated from the MTF and the NPS in order to
demonstrate that it gives more reasonable results than the pixel SNR. The MTF was estimated following methods
available in the current literature. For the characterization of noise we used a standard water phantom, while for
the point spread function (PSF) we used a tungsten wire phantom in air. Images were taken at four different source
current settings and reconstructed with four different filters. We showed that the pixel SNR ranks the
reconstruction filters differently from the Fourier SNR.
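A Fourier NPS estimate of the kind investigated here is commonly computed by averaging periodograms of mean-subtracted ROIs taken from the uniform water-phantom images. The sketch below uses an illustrative ROI size and pixel pitch and simple DC removal, not any standard's exact detrending.

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Average-periodogram NPS estimate from noise-only ROIs:
    NPS(u, v) = (dx*dy / (Nx*Ny)) * <|DFT of mean-subtracted ROI|^2>."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2   # periodogram
    return (pixel_mm ** 2 / (nx * ny)) * acc / n

# Sanity check on synthetic white noise: the NPS should integrate to the
# pixel variance (here sigma^2 = 100).
rng = np.random.default_rng(0)
nps = nps_2d(rng.normal(0.0, 10.0, size=(64, 32, 32)))
```

For white noise the estimate integrates to the pixel variance, which is the consistency check used here; on real scanner data the low-frequency bins would instead reveal the noise correlations that the pixel standard deviation hides.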
XCAT/DRASIM: a realistic CT/human-model simulation package
The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended
CArdiac-Torso (XCAT) phantom, a computer generated NURBS surface based phantom that provides a realistic model
of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation
program. Unlike other CT simulation tools which are based on simple mathematical primitives or voxelized phantoms,
this new simulation package has the advantages of utilizing a realistic model of human anatomy and physiological
motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First,
we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library that
consists of functions to read in the NURBS surfaces of anatomical objects along with their overlapping order and
material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation
in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through
the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated
simulation package by performing a number of sample simulations of multiple x-ray projections from different views
followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In
conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the
design and optimization of CT scanners, and the development of scanning protocols and image reconstruction methods
for improving CT image quality and reducing radiation dose.
Poster Session: Noise and Dose, Measurement and Reduction
Longitudinal tube modulation for chest and abdominal CT examinations: impact on effective patient doses calculations
F. Zanca,
K. Michielsen,
M. Depuydt,
et al.
Purpose: In multi-slice CT, manufacturers have implemented automatic tube current modulation (TCM) algorithms.
These adjust tube current in the x-y plane (angular modulation) and/or along the z-axis (z-axis modulation) according to
the size and attenuation of the scanned body part. Current methods for estimating effective dose (ED) values in CT do
not account for such new developments. This study investigated the need to take TCM into account when calculating ED
values, using clinical data.
Methods: The effect of TCM algorithms as implemented on GE BrightSpeed 16, Philips Brilliance 64 and Siemens
Sensation 64 CT scanners was investigated. Here, only z-axis modulation was addressed, considering thorax and
abdomen CT examinations collected from 534 adult patients. Commercially available CT dosimetry software (CT expo
v.1.7) was used to compute EDTCM (ED accounting for TCM) as the sum of ED of successive slices. A two-step
approach was chosen: first we estimated the relative contribution of each slice assuming a constant tube current. Next a
weighted average was taken based upon the slice-specific tube current values. EDTCM was then compared to the patient
ED estimated using the average mA of all slices.
Results and Conclusions: The proposed method is relatively simple and uses as input: the parameters of each protocol,
a fitted polynomial function of weighting factors for each slice along the scan length and mA values of the individual
patient examination. Results show that z-axis modulation does not have a strong impact on ED for the Siemens and the
GE scanner (difference ranges from -4.1 to 3.3 percent); for the Philips scanner the effect was more important,
(difference ranges from -8.5 to 6.9 percent), but still all median values approached zero (except for one case, where the
median reached -5.6%), suggesting that ED calculation using average mA is in general a good approximation for EDTCM.
Higher difference values for the Philips scanner are due to stronger current modulation with respect to the other
scanners investigated. It would be interesting to repeat the study by collecting patients prospectively, for whom
the weight and height are known, and to use dedicated patient dosimetry software to calculate the dose. If the use of TCM has a larger
impact on calculated effective dose, appropriate correction factors should be used.
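The two-step estimate described in the Results can be written compactly: per-slice fractional contributions computed at constant current are rescaled by each slice's actual mA and summed. The weights and mA values below are illustrative, not patient data.

```python
# Hedged sketch of the two-step ED-with-TCM calculation described above.
def ed_with_tcm(ed_constant_ma, slice_weights, slice_ma, reference_ma):
    """Rescale each slice's constant-current dose contribution (fraction
    slice_weights[i] of the total ED) by its actual tube current, then sum."""
    return ed_constant_ma * sum(
        w * (ma / reference_ma) for w, ma in zip(slice_weights, slice_ma)
    )
```

With a constant current the expression collapses to the unmodulated ED, which is why an average-mA calculation approximates EDTCM well whenever the modulation is weak.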
Dosimetric quality control of Eclipse treatment planning system using pelvic digital test object
Last year, we demonstrated the feasibility of a new method to perform dosimetric quality control of Treatment Planning
Systems in radiotherapy. This method is based on Monte Carlo simulations and uses anatomical Digital Test Objects
(DTOs). The pelvic DTO was used in order to assess this new method on an ECLIPSE VARIAN Treatment Planning
System. Large dose variations were observed, particularly in air- and bone-equivalent materials.
In the current work, we discuss the results of the previous paper and provide an explanation for the observed dose
differences; the VARIAN Eclipse (Anisotropic Analytical) algorithm was investigated. Monte Carlo (MC) simulations
were performed with the PENELOPE code (version 2003). To increase the efficiency of the MC simulations, we used our
parallelized version based on the standard MPI (Message Passing Interface). The parallel code was run on a 32-
processor SGI cluster. The study was carried out using pelvic DTOs and was performed for low- and high-energy photon
beams (6 and 18 MV) on a 2100CD VARIAN linear accelerator. A square field (10x10 cm2) was used. Taking the MC
data as reference, a χ-index analysis was carried out. For this study, the distance to agreement (DTA) was set to 7 mm
while the dose difference was set to 5%, as recommended in TRS-430 and TG-53 (on the beam axis in 3-D
inhomogeneities). When using Monte Carlo PENELOPE, the absorbed dose is computed to the medium, whereas
the TPS computes dose to water. We used the method described by Siebers et al., based on Bragg-Gray
cavity theory, to convert the MC-simulated dose to medium into dose to water. Results show a strong consistency between
ECLIPSE and MC calculations on the beam axis.
Estimation of organ and effective dose to the patient during spinal surgery with a cone-beam O-arm system
The purpose of this study was to estimate organ and effective dose to the patient during spinal surgery with a cone-beam
O-arm system. The absorbed doses to radiosensitive organs and the effective dose were calculated on a mathematically
simulated phantom corresponding to a 15-year-old patient using PCXMC 2.0. Radiation doses were calculated at every
15° of the x-ray tube projection angle at two regions: thoracic spine and lumbar spine. Two different scan settings were
investigated: 120 kV/128 mAs (standard) and 80 kV/80 mAs (low-dose). The effect on effective dose by changing the
number of simulated projection angles (24, 12 and 4) was investigated. Estimated effective dose with PCXMC was
compared with calculated effective dose using conversion factors between dose length product (DLP) and effective dose.
The highest absorbed doses were received by the breast, lungs (thoracic spine) and stomach (lumbar spine). The effective
doses using standard settings were 5 times higher than those delivered with low-dose settings (2-3 scans: 7.9-12 mSv
versus 1.5-2.4 mSv). There was no difference in estimated effective dose using 24 or 12 projection angles. Using 4
projection angles at every 90° was not enough to accurately simulate the x-ray tube rotating around the patient. Conversion
factors between DLP and effective dose were determined. Our conclusion is that the O-arm has the potential to deliver
high radiation doses and consequently there is a strong need to optimize the clinical scan protocols.
Monte Carlo modeling of the scatter radiation doses in IR
Purpose: To use Monte Carlo techniques to compute the scatter radiation dose distribution patterns around patients
undergoing Interventional Radiological (IR) examinations.
Method: MCNP was used to model the scatter radiation air kerma (AK) per unit kerma area product (KAP) distribution
around a 24 cm diameter water cylinder irradiated with monoenergetic x-rays. Normalized scatter fractions (SF) were
generated defined as the air kerma at a point of interest that has been normalized by the Kerma Area Product incident on
the phantom (i.e., AK/KAP). Three regions surrounding the water cylinder were investigated consisting of the area
below the water cylinder (i.e., backscatter), above the water cylinder (i.e., forward scatter) and to the sides of the water
cylinder (i.e., side scatter).
Results: Immediately above and below the water cylinder and in the side scatter region, values of normalized SF
decreased with the inverse square of the distance. For z-planes further away, the decrease was exponential. Values of
normalized SF around the phantom were generally less than 10^-4. Changes in normalized SF with x-ray energy were less
than 20% and generally decreased with increasing x-ray energy. At a given distance from the region where the x-ray beam
enters the phantom, the normalized SF was higher in the backscatter regions and smaller in the forward scatter regions.
The ratio of forward to back scatter normalized SF was lowest at 60 keV and highest at 120 keV.
Conclusion: Computed SF values quantify the normalized fractional radiation intensities at the operator location relative
to the radiation intensities incident on the patient, where the normalization refers to the beam area that is incident on the
patient. SF values can be used to estimate the radiation doses received by personnel within the procedure room,
which depend on the imaging geometry, patient size and location within the room. Monte Carlo techniques have the potential
for simulating normalized SF values for any arrangement of imaging geometry, patient size and personnel location and
are therefore an important tool for minimizing operator doses in IR.
Fluence estimation by deconvolution via l1-norm minimization
Advances in radiotherapy irradiation techniques have led to very complex treatments requiring more stringent
control. The dosimetric properties of electronic portal imaging devices (EPIDs) have encouraged their use for treatment
verification. Two main approaches have been proposed: the forward approach, where measured portal dose images
are compared to predicted dose images and the backward approach, where EPID images are used to estimate the
dose delivered to the patient. Both approaches need EPID images to be converted into a fluence distribution
by deconvolution. However, deconvolution is an ill-posed problem that is very sensitive to small variations in the
input data. This study presents the application of a deconvolution method based on l1-norm minimization, a
method known for being very stable when working with noisy data. The algorithm was first evaluated on
synthetic images with different noise levels, and the results were satisfactory. The deconvolution algorithm was then applied
to experimental portal images; the required EPID response kernel and energy fluence images were computed
by Monte-Carlo calculation, accelerator treatment head and EPID models had already been commissioned in
a previous work. The obtained fluence images were in good agreement with simulated fluence images. This
deconvolution algorithm may be generalized to an inverse problem with a general operator, where image formation
is no longer modeled by a convolution but by a linear operation that might be seen as a position-dependent
convolution. Moreover, this procedure would be detector independent and could be used for any detector type
provided its response function is known.
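One standard solver for this kind of l1-regularized deconvolution is ISTA (iterative shrinkage-thresholding). The sketch below is a generic 1-D version with a toy blur kernel standing in for the EPID response kernel; it is not necessarily the authors' exact algorithm, and `lam` and the iteration count are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (componentwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconvolve(measured, kernel, lam=1e-4, n_iter=500):
    """Minimize 0.5 * ||K x - measured||^2 + lam * ||x||_1 by ISTA.
    `kernel` must be symmetric and odd-length so that the 'same'-mode
    convolution operator K is self-adjoint."""
    K = lambda x: np.convolve(x, kernel, mode="same")
    L = np.sum(np.abs(kernel)) ** 2          # upper bound on ||K^T K||
    x = np.zeros_like(measured)
    for _ in range(n_iter):
        x = soft_threshold(x - K(K(x) - measured) / L, lam / L)
    return x
```

Replacing `K` (and its adjoint) with a general linear operator gives exactly the generalization the authors mention — image formation as a position-dependent "convolution".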
A novel noise suppression solution in cone-beam CT images
The Feldkamp (FDK) algorithm is the most popular algorithm for circular cone-beam computed tomography (CBCT).
However, owing to the spatially invariant interpolation process and the pre-weighting factor adopted in numerical
implementations of the algorithm, non-uniform noise propagation has been observed across the field of view. In this
study, instead of using spatially invariant interpolation, we consider spatially variant weighting to compensate for
the non-uniformity. A simulation study demonstrates the effectiveness of the presented solution.
Noise reduction by projection direction dependent diffusion for low dose fan-beam x-ray computed tomography
We propose a novel method to reduce the noise in fan-beam computed tomography (CT) imaging. First, the inverse
Radon transform is derived for a family of differential expressions of the projection function. Second, the diffusion
partial differential equation (PDE) is generalized from image space to projection space in parallel-beam geometry.
Third, the diffusion PDE is further extended from parallel-beam to fan-beam geometry. Finally, projection-direction-
dependent diffusion is developed to reduce CT noise, which arises from quantum variation in the low-dose exposure
of a medical x-ray CT (XCT) system. The proposed noise reduction processes projections iteratively and dependently on
x-ray path position, followed by a general CT reconstruction. Numerical simulation studies have demonstrated its
feasibility in the noise reduction of low dose fan-beam XCT imaging.
Radiation dose reduction in computed tomography (CT) using a new implementation of wavelet denoising in low tube current acquisitions
Radiation dose reduction remains at the forefront of research in computed tomography. X-ray tube parameters such as
tube current can be lowered to reduce dose; however, images become prohibitively noisy when the tube current is too
low. Wavelet denoising is one of many noise reduction techniques. However, traditional wavelet techniques have the
tendency to create an artificial noise texture, due to the nonuniform denoising across the image, which is undesirable
from a diagnostic perspective. This work presents a new implementation of wavelet denoising that is able to achieve
noise reduction while still preserving spatial resolution. Further, the proposed method has the potential to reduce
those unnatural noise textures. The technique was tested on both phantom and animal datasets (a Catphan phantom and
a time-resolved swine heart scan) acquired on a GE Discovery VCT scanner. A number of tube currents were used to
investigate the potential for dose reduction.
Noise characteristics of x-ray differential phase contrast CT
The noise characteristics of x-ray differential phase contrast computed tomography (DPC-CT) were investigated.
Both theoretical derivation and experimental results demonstrated that the dependence of noise variance on spatial
resolution in DPC-CT follows an inverse linear law. This behavior distinguishes DPC-CT from conventional
absorption based x-ray CT, where the noise variance varies inversely with the cube of the spatial resolution.
This anomalous noise behavior in DPC-CT is due to the Hilbert filtering kernel used in the CT reconstruction
algorithm, which weights all spatial frequency content equally. Additionally, we demonstrate that the noise
power of DPC-CT scales with the inverse of spatial frequency and is highly concentrated at low spatial
frequencies, whereas the noise power of conventional absorption CT increases at high spatial frequencies.
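The stated scaling laws follow schematically from the reconstruction kernels (ramp for absorption CT, Hilbert for DPC-CT). With constants and apodization suppressed, and f_c ∝ 1/Δ the cutoff frequency set by the spatial resolution Δ:

```latex
% 2-D NPS after filtered backprojection scales as |K(f)|^2 / f:
\mathrm{NPS}_{\mathrm{abs}}(f) \;\propto\; \frac{|f|^{2}}{f} = f,
\qquad
\mathrm{NPS}_{\mathrm{DPC}}(f) \;\propto\; \frac{1}{f}

% Integrating over the frequency disc of radius f_c \propto 1/\Delta:
\sigma^{2}_{\mathrm{abs}} \;\propto\; \int_{0}^{f_c} f \cdot f \,\mathrm{d}f
\;\propto\; \Delta^{-3},
\qquad
\sigma^{2}_{\mathrm{DPC}} \;\propto\; \int_{0}^{f_c} \frac{1}{f} \cdot f \,\mathrm{d}f
\;\propto\; \Delta^{-1}
```

which reproduces both the inverse-linear variance law and the 1/f concentration of DPC-CT noise power at low frequencies.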
Contrast-to-noise of a non-ideal multi-bin photon counting x-ray detector
The contrast-to-noise ratio (CNR) is optimized over the weights and energy threshold settings of the energy bins in a
photon counting detector using "delta-pulse" model simulations. Comparison is made to single-bin photon counting and
energy integration detectors. The CNR^2 for iodine imaging is about a factor of 2.5 higher for a perfect photon
counting detector compared to energy integration. Monte Carlo simulations are used to determine the impact of
pile-up and other factors that degrade the spectral performance. The benefits of multi-bin photon counting vanish at
about 40-60% tail fraction and 20-30 keV RMS noise. Because of pile-up, the CNR^2 benefit also decreases as the
incident count rate approaches the maximum periodic rate (MPR). However, the impact of pile-up is less for a
three-bin detector than for a two-bin detector when the multiple bins are weighted optimally.
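The optimal-weighting step implied above follows from maximizing CNR^2 = (sum_i w_i dS_i)^2 / (sum_i w_i^2 var_i) over the weights, which for independent bins gives w_i proportional to dS_i / var_i. The bin contrasts and variances below are made-up illustrative numbers, not detector data.

```python
# Hedged sketch of optimal bin weighting for a multi-bin photon counting
# detector with statistically independent bins.
def optimal_bin_weights(contrasts, variances):
    """w_i proportional to contrast_i / variance_i, normalized to max |w| = 1."""
    w = [c / v for c, v in zip(contrasts, variances)]
    scale = max(abs(x) for x in w) or 1.0
    return [x / scale for x in w]

def cnr2(weights, contrasts, variances):
    """Squared contrast-to-noise ratio of the weighted bin sum."""
    signal = sum(w * c for w, c in zip(weights, contrasts))
    noise2 = sum(w * w * v for w, v in zip(weights, variances))
    return signal * signal / noise2
```

With these weights the achievable CNR^2 equals sum_i dS_i^2 / var_i, which is why spectral degradation (pile-up, spectral tailing) that blurs the bin contrasts erodes the multi-bin benefit.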
MCNP simulation of radiation doses distributions in water phantoms simulating interventional radiology patients
Purpose: To investigate the dose distributions in water cylinders simulating patients undergoing Interventional
Radiological examinations.
Method: The irradiation geometry consisted of an x-ray source, dose-area-product chamber, and image intensifier as
currently used in Interventional Radiology. Water cylinders of diameters ranging between 17 and 30 cm were used to
simulate patients weighing between 20 and 90 kg. X-ray spectra data with peak x-ray tube voltages ranging from 60 to
120 kV were generated using XCOMP3R. Radiation dose distributions inside the water cylinder (Dw) were obtained
using MCNP5. The depth dose distribution along the x-ray beam central axis was normalized to free-in-air air kerma
(AK) that is incident on the phantom. Scattered radiation within the water cylinders but outside the directly irradiated
region was normalized to the dose at the edge of the radiation field. The total energies absorbed in the directly
irradiated volume (Ep) and the indirectly irradiated volume (Es) were also determined and investigated as a function of x-ray tube
voltage and phantom size.
Results: At 80 kV, the average Dw/AK near the x-ray entrance point was 1.3. The ratio of Dw near the entrance point to
Dw near the exit point increased from ~ 26 for the 17 cm water cylinder to ~ 290 for the 30 cm water cylinder. At 80 kV,
the relative dose for a 17 cm water cylinder fell to 0.1% at 49 cm away from the central ray of the x-ray beam. For a 30
cm water cylinder, the relative dose fell to 0.1% at 53 cm away from the central ray of the x-ray beam. At a fixed x-ray
tube voltage of 80 kV, increasing the water cylinder diameter from 17 to 30 cm increased the Es/(Ep+Es) ratio by about
50%. At a fixed water cylinder diameter of 24 cm, increasing the tube voltage from 60 kV to 120 kV increased the
Es/(Ep+Es) ratio by about 12%. The absorbed energy from scattered radiation was between 20-30% of the total energy
absorbed by the water cylinder, and was affected more by patient size than x-ray beam energy.
Conclusion: MCNP offers a powerful tool to study the absorption and transmission of x-ray energy in phantoms that can
be designed to represent patients undergoing Interventional Radiological procedures. This ability will permit a
systematic investigation of the relationship between patient dose and diagnostic image quality, and thereby keep patient
doses As Low As Reasonably Achievable (ALARA).
Noise reduction in dual-source CT scanning
M. Petersilka,
B. Krauss,
K. Stierstorfer
With dual source CT, special attention has to be paid to scattered radiation. X-ray quanta from the second source (tube
B) get scattered from the object in the scan field and are registered by the first detector (detector A) and vice versa
("cross scattered radiation"). Depending on object and CT scan parameters, the ratio of scattered radiation
intensity s over primary radiation intensity p can even exceed unity. In order to restore contrast and to avoid artifacts,
the scattered radiation signal needs to be determined and subtracted from the measured signal. However, the thus
corrected projections experience a noise increase proportional to √(1+s/p). In CT, line integrals are often redundantly
measured. With dual source CT, the rays within redundant projection data can be differently affected by cross-scattered
radiation. Typically, rays which are complementary to those that experience maximum scattered intensity are affected
by rather low scattered intensity. The current work presents a noise-weighted reconstruction scheme, depending on the
scatter-to-primary ratio s/p, which leads to a minimal noise increase in these situations. The proposed weighting
scheme can also be combined with other means of noise reduction in the case of scattered radiation correction, e.g. a
subsequent spatial signal filtering. The effect of the weighting scheme is evaluated with phantom scans using typical
dual energy CT parameter settings. For a water cylinder of 40 cm diameter, the weighting scheme leads to a pixel noise
reduction of up to 16%. For a semi-anthropomorphic liver phantom a noise decrease of up to 5% was observed.
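The weighting idea can be sketched generically: if a scatter-corrected ray's variance grows as (1 + s/p), redundant measurements should be combined with inverse-variance (minimum-variance) weights. This is an illustrative reduction of the scheme, not the authors' exact reconstruction weighting, and the s/p values below are made up.

```python
# Hedged sketch: minimum-variance combination of two redundant line
# integrals whose post-correction variances scale as (1 + s/p).
def combine_redundant(ray_a, ray_b, sp_a, sp_b, base_var=1.0):
    var_a = base_var * (1.0 + sp_a)   # variance after cross-scatter subtraction
    var_b = base_var * (1.0 + sp_b)
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)  # inverse-variance weights
    w_b = 1.0 - w_a
    value = w_a * ray_a + w_b * ray_b
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)            # combined variance
    return value, var
```

For s/p = 1.5 on one ray and 0.1 on its redundant partner, the combined variance falls below either ray's individual variance, which is the effect the noise-weighted reconstruction exploits.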
Relative dose in dual energy fast-kVp switching and conventional kVp imaging: spatial frequency dependent noise characteristics and low contrast imaging
Dual energy computed tomography offers unique diagnostic value by enabling access to material density, effective
atomic number, and energy specific spectral characteristics, which remained indeterminate with conventional kVp
imaging. Gemstone Spectral Imaging (GSI) is one of the dual energy methods based on fast kVp switching between two
x-ray spectra, 80 kVp and 140 kVp nominal, in adjacent projections. The purpose of this study was to compare relative
dose between GSI monochromatic and conventional kVp imaging for equivalent image noise characteristics. A
spatial-frequency domain noise power spectrum (NPS) was used as a more complete noise descriptor for the comparison of the
two image types. Uniform 20cm water phantom images from GSI and conventional 120 kVp scans were used for NPS
calculation. In addition, a low contrast imaging study of the two image types with equivalent noise characteristics was
conducted for contrast-to-noise-ratio (CNR) and low contrast detectability (LCD) in the Catphan600® phantom. From
three GSI presets ranging from medium to low dose, we observed that the conventional 120 kVp scan requires a ~7%-
18% increase in dose to match the noise characteristics of the optimal-noise GSI monochromatic image, and that the 65 keV
monochromatic image CNR for a 0.5% contrast object is 22% higher compared to the corresponding 120 kVp scan. Optimal
use of the two energy spectra within GSI results in reduced noise and improved CNR in the monochromatic images,
indicating the potential for use of this image type in routine clinical applications.
Poster Session: MRI
Determination of 3D flow velocity distributions from single-plane angiographic sequences
Understanding 3D flow-velocity fields may be valuable during interventional procedures. Thus, we are developing
methods to calculate 3D flow fields from single-plane angiographic sequences. The vessel geometry is selected. Flow
fields are generated based on laminar flow conditions. X-ray-attenuating contrast is propagated through the vessel using
the flow fields. Angiograms are generated at 30 frames/second using ray-casting. Vessel profile data are extracted from
the angiograms along lines perpendicular to the vessel axis. The conversion from image intensity to contrast pathlength
is determined. The contrast pathlength is calculated for each vessel-profile point, and the contrast is centered about the
vessel's central plane generating a 3D contrast distribution. This procedure is repeated for each acquired angiogram.
Corresponding points on the surface of the calculated contrast distributions are established for temporally adjacent
distributions using estimated streamlines. Distances between corresponding points are calculated from which average
velocities are calculated. These average velocities are placed at points along the streamlines, thereby generating a 3D
velocity flow field in the vessel lumen. Simulations for steady flow conditions for straight vessels, curved (in-plane)
vessels, and vessels with stenoses, for noiseless and noisy (10% peak contrast) angiograms were performed. The
calculated and simulated 3D contrast distributions agree well for both noiseless and noisy conditions (errors < 2 voxels ~
0.2 mm). Average absolute error of the calculated 3D flow velocities is approximately 10%. These promising initial
results indicate that this technique may form the basis for calculating 3D-contrast and 3D-flow-velocity distributions
from standard single-plane angiographic sequences.
Susceptibility quantification in MRI using modified conjugate gradient least square method
Luning Wang,
Jason Langley,
Qun Zhao
MR susceptometry provides a new approach to enhance contrast in MR imaging and quantify substances such as iron,
calcium, blood oxygenation in various organs for the clinical diagnosis of many diseases. Susceptibility is closely related
to the magnetic field inhomogeneity in MRI, and calculation of susceptibility from the measured magnetic field is an ill-posed
inverse problem. The conjugate gradient least squares (CGLS) method with Tikhonov regularization is a powerful tool
for solving such inverse problems. However, the quantity estimated by CGLS is usually one-dimensional (e.g., a
column vector), while most MR data are three-dimensional. In this work, the least squares problem is
modified to enable the calculation of susceptibility directly from three-dimensional MR phase maps, which has the
benefit of reducing the size of associated matrices and usage of computer memories. Numerical simulations were used to
find the proper regularization parameters and study the influence of different noise levels of the magnetic field on the
regularization parameters. Experiments with a superparamagnetic iron oxide phantom were also conducted. Both sets of
results demonstrate the validity and accuracy of the method.
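The idea of running CGLS with Tikhonov regularization directly on 3D arrays, as described above, can be sketched as follows. This is a minimal illustration only: a simple self-adjoint k-space filter (the hypothetical kernel H) stands in for the actual field-to-susceptibility forward model, and all sizes and parameter values are placeholders.

```python
import numpy as np

def cgls_tikhonov(A, AT, b, lam=1e-6, n_iter=100):
    """CGLS with Tikhonov regularization acting directly on 3D arrays
    (no flattening): solves min ||A x - b||^2 + lam * ||x||^2."""
    x = np.zeros_like(b)
    r = b - A(x)
    s = AT(r) - lam * x          # residual of the normal equations
    p = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(n_iter):
        q = A(p)
        alpha = gamma / (np.vdot(q, q).real + lam * np.vdot(p, p).real)
        x = x + alpha * p
        r = r - alpha * q
        s = AT(r) - lam * x
        gamma_new = np.vdot(s, s).real
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Hypothetical self-adjoint forward operator: a strictly positive
# k-space filter applied by FFT (stand-in for the dipole model).
n = 16
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
H = np.exp(-(kx**2 + ky**2 + kz**2) / 0.02) + 0.1
A = lambda x: np.fft.ifftn(H * np.fft.fftn(x)).real
chi_true = np.zeros((n, n, n))
chi_true[6:10, 6:10, 6:10] = 1.0     # a cubic susceptibility inclusion
b = A(chi_true)                      # simulated "field" data
chi_est = cgls_tikhonov(A, A, b)     # A is self-adjoint, so AT = A
```

Because the iterates, residuals, and inner products are all computed on the 3D arrays themselves, no large system matrix is ever formed, which is the memory saving the abstract refers to.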
Direct reconstruction of T1 from k-space using a radial saturation-recovery sequence
Show abstract
Contrast agent concentration ([CA]) must be known accurately to quantify dynamic contrast-enhanced (DCE) MR
imaging. Accurate concentrations can be obtained if the longitudinal relaxation rate constant T1 is known both pre- and
post-contrast injection. Post-contrast signal intensity in the images is often saturated and an approximation to T1 can be
difficult to obtain. One method that has been proposed for accurate T1 estimation effectively acquires multiple images
with different effective saturation recovery times (eSRTs) and fits the images to the equation for T1 recovery to obtain T1 values. This was done with a radial saturation-recovery sequence for 2D imaging of myocardial perfusion with DCE
MRI. This multi-SRT method assumes that the signal intensity is constant for different readouts in each image. Here this
assumption is not necessary as a model-based reconstruction method is proposed that directly reconstructs an image of
T1 values from k-space. The magnetization for each ray at each readout pulse is modeled in the reconstruction with
Bloch equations. Computer simulations based on a 72 ray cardiac DCE MRI acquisition were used to test the method.
The direct model-based reconstruction gave accurate T1 values and was slightly more accurate than the multi-SRT
method that used three sub-images.
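The multi-SRT fitting step that the proposed model-based reconstruction replaces can be sketched as a saturation-recovery fit; the recovery times, S0, and T1 values below are hypothetical placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def sr_signal(t, s0, t1):
    """Ideal saturation-recovery signal: S(t) = S0 * (1 - exp(-t/T1))."""
    return s0 * (1.0 - np.exp(-t / t1))

# Hypothetical effective saturation-recovery times (s) and noiseless data
esrt = np.array([0.1, 0.3, 0.6, 1.0, 2.0])
signal = sr_signal(esrt, 100.0, 0.8)        # S0 = 100, T1 = 0.8 s
popt, _ = curve_fit(sr_signal, esrt, signal, p0=[80.0, 0.5])
s0_fit, t1_fit = popt
```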
Histogram analysis of ADC in brain tumor patients
Show abstract
At various stages of progression, most brain tumors are not homogeneous. In this presentation, we retrospectively studied
the distribution of ADC values inside tumor volume during the course of tumor treatment and progression for a selective
group of patients who underwent an anti-VEGF trial. Complete MRI studies were obtained for this selected group of
patients including pre- and multiple follow-up, post-treatment imaging studies. In each MRI imaging study, multiple
scan series were obtained as a standard protocol which includes T1, T2, T1-post contrast, FLAIR and DTI derived
images (ADC, FA etc.) for each visit. All scan series (T1, T2, FLAIR, post-contrast T1) were registered to the
corresponding DTI scan at patient's first visit. Conventionally, hyper-intensity regions on T1-post contrast images are
believed to represent the core tumor region while regions highlighted by FLAIR may overestimate tumor size. Thus we
annotated tumor regions on the T1-post contrast scans and ADC intensity values for pixels were extracted inside tumor
regions as defined on the T1-post scans. We fit a Gaussian mixture (GM) model to the extracted pixels using the
Expectation-Maximization (EM) algorithm, which produced a set of parameters (means, variances and mixture coefficients)
for the GM model. This procedure was performed for each visit, resulting in a series of GM parameters. We studied the
parameters fitted for ADC to see whether they can be used as indicators of tumor progression. Additionally, we studied the
ADC characteristics in the peri-tumoral region as identified by hyper-intensity on FLAIR scans. The results show that
ADC histogram analysis of the tumor region supports the two-compartment model, which suggests that the low-ADC
subregion corresponds to densely packed cancer cells while the higher-ADC region corresponds to a mixture of
viable and necrotic cells with superimposed edema. Careful study of the composition and relative volume of the two
compartments in the tumor region may provide insights for the early assessment of tumor response to therapy in
patients with recurrent brain cancer.
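The EM fit of a two-component Gaussian mixture to tumor ADC values can be sketched as follows; the ADC means, variances, and sample counts below are hypothetical placeholders, not data from the study.

```python
import numpy as np

def gmm_em_1d(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture with EM.
    Returns (means, variances, mixing weights)."""
    # Initialize by splitting the sorted samples in half
    xs = np.sort(x)
    mu = np.array([xs[: len(xs) // 2].mean(), xs[len(xs) // 2:].mean()])
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        p = w * np.exp(-((x[:, None] - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, and mixing weights
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)
    return mu, var, w

rng = np.random.default_rng(0)
# Hypothetical ADC samples (1e-6 mm^2/s): a dense-tumour compartment
# and a higher-ADC compartment with superimposed edema.
adc = np.concatenate([rng.normal(800, 100, 2000), rng.normal(1600, 150, 2000)])
mu, var, w = gmm_em_1d(adc)
```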
The development and application of calculated readout in spectral parallelism in magnetic resonance imaging
Linda Vu,
Simon S. So,
Sergei Obruchkov,
et al.
Show abstract
In magnetic resonance imaging (MRI), an object within a field-of-view (FOV) is spatially encoded with a broad
spectrum of frequency components generating signals that decohere with one another to create a decaying echo
with a large peak amplitude. The echo is short and decays at a rapid rate relative to the readout period when
performing high resolution imaging of a sizable object where many frequency components are encoded resulting
in faster decoherence of the generated signals. This makes it more difficult to resolve fine details as the echo
quickly decays down to the quantization limit. Samples collected away from the peak signal, which are required
to produce high resolution images, have very low amplitudes and therefore, poor dynamic range.
We propose a novel data acquisition system, Calculated Readout in Spectral Parallelism (CRISP), that
spectrally separates the radio frequency (RF) signal into multiple narrowband channels before digitization. The
frequency bandwidth of each channel is smaller than the FOV and centered over a part of the image with minimal
overlap with the other channels. The power of the corresponding temporal signal in each channel is reduced
and spread across a broader region in time with a slower decay rate. This allows the signal from each channel
to be independently amplified such that a larger portion of the signal is digitized at higher bits. Therefore, the
dynamic range of the signal is improved and sensitivity to quantization noise is reduced. We present a realization
of CRISP using inexpensive analog filters and preliminary results from high resolution images.
Voxel magnetic field disturbance from remote vasculature in BOLD fMRI
Show abstract
The mechanism of blood oxygenation-level dependent (BOLD) functional magnetic resonance imaging
(fMRI) lies in the detection of blood-induced magnetic field disturbance during brain activity. A magnetic dipole induces
a magnetic field in 3D space, which is represented by a 3D kernel that shows orientation-dependent decay in space (with
a radial decay factor of 1/r^3), bipolar values, and rotational symmetry. By representing the intravascular blood
space with a pack of magnetic dipoles, we can use the 3D kernel to calculate the BOLD fieldmap by a 3D convolution.
In our implementation, a vasculature-laden voxel of interest (VOI) is represented by a matrix at a grid
resolution (~1 μm), and the intravascular space is filled with macroscopic blood magnetic dipoles (each is defined for
a matrix element sitting in the blood space). Based on the magnetic dipole model of blood magnetization and the
convolution algorithm, we calculate the effect of exterior vasculature (from nearest neighborhood as well as from farther
or remote surrounding) on the BOLD fieldmap at the VOI. Our results show that only vessels at the VOI boundary
region impose a noticeable influence, and this effect increases slightly with vessel size. The effect of remote vasculature
(sitting in voxels outside the nearest neighborhood) is negligible. We also discuss the case of asymmetrical surroundings.
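The dipole-kernel convolution underlying the fieldmap calculation can be sketched with an FFT implementation; the grid size and susceptibility values below are illustrative only.

```python
import numpy as np

def dipole_fieldmap(chi):
    """Field perturbation from a 3D susceptibility map chi via FFT
    convolution with the k-space dipole kernel D = 1/3 - kz^2/|k|^2."""
    n = chi.shape[0]
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0   # undefined DC term, set to zero by convention
    return np.fft.ifftn(D * np.fft.fftn(chi)).real

# A uniform susceptibility yields no field perturbation: all of its
# spectral weight sits at the zeroed DC term.
chi = np.full((32, 32, 32), 1e-6)
field = dipole_fieldmap(chi)
```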
Multiresolution voxel decomposition of complex-valued BOLD signals reveals phasor turbulence
Show abstract
High spatial resolution functional MRI (fMRI) technology continues to enable smaller voxel sizes, providing
details about neuronal activity in terms of spatial localization and specificity. The BOLD contrast mechanism can be
examined by numerical simulation. By representing a complex timeseries with a dynamic phasor in a polar coordinate
system, a complex-valued BOLD signal manifests as an inward spiral: the radial distance represents the signal amplitude
and the polar angle the signal phase angle. For normal fMRI settings (millimeter resolution, 3T main field, and 30ms
relaxation time), the BOLD phasors usually evolve in a form of inward spiraling. Under some extreme parameter
settings (high resolution, high field, or long relaxation time), the phasors may become turbulent (i.e. exhibit disordered
spiraling). In this paper, we will report on a BOLD phasor turbulence phenomenon resulting from coarse-to-fine
multiresolution voxel decomposition. In our implementation, a vasculature-laden voxel (320×320×320 μm³) is
decomposed into a sequence of subvoxels (160×160×160, 80×80×80, 40×40×40, 20×20×20 μm³). Our simulations
reveal the following phenomena: 1) Ultrahigh spatial resolution (several tens of microns, e.g., 20 μm) BOLD fMRI
may cause signal turbulence, primarily occurring at vessel boundaries; 2) The intravascular signal is prone to turbulence
but its contribution to the voxel signal is greatly suppressed by the blood volume fraction; 3) There is no signal
turbulence under small angle condition; 4) For millimeter-resolution fMRI with small relaxation time (< 30ms), signal
turbulence is unlikely to occur. We explain the high-resolution BOLD signal turbulence from the perspective of the
emergence of BOLD field irregularity.
Wavelet encoded MR image reconstruction with compressed sensing
Show abstract
Traditional MRI scanners acquire samples of the encoded image in a spatial frequency (Fourier) domain, known as k-space.
The recently developed theory of Compressed Sensing (CS) has shown that a natural image can be acquired and
reconstructed from a reduced number of linear projection measurements at sub-Nyquist sample rates. CS is well suited to
fast acquisition of MRI by sparse encoding in k-space. However, the commonly used Fourier-encoding MRI scheme only
weakly satisfies the incoherent measurements constraint required for CS. MR images can be reconstructed from a more
sparse wavelet-encoded k-space than from Fourier-encoded k-space with the same resolution. Wavelet-encoded MRI is
flexible in spatially-selective excitations to yield an encoding scheme similar to the "universal encoding". In this work, a
CS-MRI scheme has been investigated to accurately reconstruct MR images from sparsely wavelet-encoded k-space.
This sparse encoding is achieved with tailored spatially-selective RF pulses, which are designed with Battle-Lemarie
wavelet functions. The Fourier transforms of the Battle-Lemarie wavelet functions used as RF pulse profiles are smooth
and decay rapidly to zero. This technique provides short RF pulses with relatively precise spatial excitation profiles.
Simulator results show that the proposed encoding scheme has different characteristics than conventional Fourier and
wavelet encoding methods, and this scheme could be applied to fast MR image acquisition at relatively high resolution.
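A minimal CS reconstruction in this spirit can be sketched with iterative soft-thresholding (ISTA). For brevity the sketch assumes the image itself is sparse (standing in for wavelet-domain sparsity) and uses a random Fourier sampling mask, so it illustrates the principle rather than the proposed wavelet-encoded scheme; all sizes and parameters are placeholders.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    mag = np.abs(z)
    return np.maximum(1.0 - t / np.maximum(mag, 1e-12), 0.0) * z

rng = np.random.default_rng(1)
n = 32
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 8), rng.integers(0, n, 8)] = 1.0  # sparse "image"
mask = rng.random((n, n)) < 0.4                 # 40% random k-space samples
y = mask * np.fft.fft2(x_true, norm="ortho")    # undersampled measurements

x = np.zeros((n, n), dtype=complex)
for _ in range(200):
    # gradient of 0.5*||mask * F(x) - y||^2, then shrink (ISTA step)
    grad = np.fft.ifft2(mask * (mask * np.fft.fft2(x, norm="ortho") - y),
                        norm="ortho")
    x = soft(x - grad, 0.02)
x_rec = x.real
```

Compared with a simple zero-filled inverse FFT of the undersampled data, the thresholded iteration suppresses the incoherent aliasing and recovers the sparse image much more accurately.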
Poster Session: PET and SPECT
Modified total variation norm for the maximum a posteriori ordered subsets expectation maximization reconstruction in fan-beam SPECT brain perfusion imaging
Show abstract
The anisotropic geometry of Fan-Beam Collimator (FBC) Single Photon Emission Tomography (SPECT) is used in
brain perfusion imaging with clinical goals to quantify regional cerebral blood flow and to accurately determine the
location and extent of the brain perfusion defects. One of the difficult issues that needs to be addressed is the partial volume
effect. The purpose of this study was to minimize the partial volume effect while preserving the optimal tradeoff between
noise and bias, and maintaining spatial resolution in the reconstructed images acquired in FBC geometry. We modified
conventional isotropic TV (L1) norm, which has only one hyperparameter, and replaced it with two independent TV
(L1u) norms (TVxy and TVz) along two orthogonal basis vectors (XY, Z) in 3D reconstruction space. We investigated if
the anisotropic norm with two hyperparameters (βxy and βz, where z is parallel to the axis-of-rotation) performed better
in FBC-SPECT reconstruction, as compared to the conventional isotropic norm with one hyperparameter (β) only. We
found that MAP-OSEM reconstruction with the modified TV norm produced images with a smaller partial volume effect, as
compared to the conventional TV norm, at the cost of a slight increase in bias and noise.
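The split of the TV norm into independent in-plane and axial terms, each with its own hyperparameter, can be sketched with forward differences; the test volume below is illustrative.

```python
import numpy as np

def tv_aniso(vol):
    """Anisotropic TV split into in-plane and axial terms (forward
    differences). Returns (TV_xy, TV_z); in the prior each term would
    carry its own weight (beta_xy, beta_z)."""
    tv_x = np.abs(np.diff(vol, axis=0)).sum()
    tv_y = np.abs(np.diff(vol, axis=1)).sum()
    tv_z = np.abs(np.diff(vol, axis=2)).sum()
    return tv_x + tv_y, tv_z

# A ramp along the axis of rotation (z) has purely axial variation
vol = np.broadcast_to(np.arange(4.0), (4, 4, 4)).copy()
tv_xy, tv_z = tv_aniso(vol)
```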
New method for tuning hyperparameter for the total variation norm in the maximum a posteriori ordered subsets expectation maximization reconstruction in SPECT myocardial perfusion imaging
Show abstract
In order to improve the tradeoff between noise and bias, and to improve uniformity of the reconstructed myocardium
while preserving spatial resolution in parallel-beam collimator SPECT myocardial perfusion imaging (MPI) we
investigated the most advantageous approach to provide reliable estimate of the optimal value of hyperparameter for the
Total Variation (TV) norm in the iterative Bayesian Maximum A Posteriori Ordered Subsets Expectation Maximization
(MAP-OSEM) one step late tomographic reconstruction with Gibbs prior. Our aim was to find the optimal value of
hyperparameter corresponding to the lowest bias at the lowest noise while maximizing uniformity and spatial resolution
for the reconstructed myocardium in SPECT MPI. We found that the L-curve method that is by definition a global
technique provides good guidance for selection of the optimal value of the hyperparameter. However, for a
heterogeneous object such as human thorax the fine-tuning of the hyperparameter's value can be only accomplished by
means of a local method such as the proposed bias-noise distance (BND) curve. We established that our BND-curve method provides accurate optimized hyperparameter's value estimation as long as the region of interest volume for
which it is defined is sufficiently large and is located sufficiently close to the myocardium.
Effect of de-noising and DDRV correction on cone-beam SPECT reconstruction with non-uniform attenuation
Show abstract
SPECT (single photon emission computed tomography) is a non-invasive, cost-effective means for assessment of
tissue/organ functions in nuclear medicine. For more accurate diagnosis, quantitative reconstruction of radiotracer
concentration at any location inside the body is desired. To achieve this goal, we have to address a number of factors that
significantly degrade the acquired projection data. The cone-beam SPECT system has higher resolution comparing with
parallel-beam and fan-beam SPECT, which is highly advantageous in small object detection. In this paper, we used four
analytical reconstruction schemes for cone-beam SPECT that allow simultaneous compensation for non-uniform
attenuation and distance-dependent resolution variation (DDRV), as well as accurate treatment of Poisson noise. The
simulation results show that the reconstruction scheme 1 and 4 both can obtain good reconstruction results.
Quality controls and delineation protocol of PET/CT gated acquisition in function of the movement amplitude, size of spheres and signal over background ratio
Show abstract
This study presents quality controls and segmentation tools for gated acquisition imaging of moving tumors. We
study the effect of different movement amplitudes within a bin as a function of sphere size and signal-to-background
ratio (SBR). Simple rules are then derived to establish which bins are appropriate for quantifying tumor activity and
delineating volume. Finally, thresholding techniques for gated acquisition exams are discussed as a function of the
different parameters. Our experimental setup consisted of a movable platform, a thorax phantom with 6 fillable spheres,
and a real-time position management device to synchronize the PET/CT image with the movement. The
spheres were filled with F-Fluoro-2-deoxy-glucose and the activity in the tank was adjusted to obtain signal-to-background
ratios from 3.5 to 20 (228 combinations of experimental parameters were studied). Maximal activity,
optimal threshold, and elongation of the sphere images were then compared between static and moving conditions
using our own MATLAB program. Significant changes appeared for movements greater than 7.5 mm within a bin,
leading to a decrease in activity, an increase in the optimal threshold, and an elongation in the movement direction.
These effects were accentuated for low SBR and sphere diameters smaller than 20 mm. The optimal threshold
value was around 35% for large spheres and high SBR; this value increases as the sphere size and the SBR
decrease. We then deduced from our measurements the relevant parameters for the delineation procedure. In
conclusion, the effects of movement were successfully quantified as a function of sphere size, SBR, and movement
within a bin. Calibration threshold curves are now available for clinical routine use with gated acquisitions.
Using spherical basis functions on a polar grid for iterative image reconstruction in small animal PET
Jorge Cabello,
Josep F. Oliver,
Magdalena Rafecas
Show abstract
Statistical iterative methods have been extensively demonstrated to outperform analytical methods in terms
of image quality in nuclear imaging. In these methods, the mathematically unknown biodistribution is usually
represented by cubic basis functions in 3D. Alternatively, spherical basis functions have been shown to produce
lower noise in the reconstructed images. Additionally, the system response matrix (SRM), a key
element required by iterative methods, is usually too large to be stored in random access memory of a regular
computer. The SRM can be calculated prior to reconstruction and stored on-disk, and thus be directly accessed
during the reconstruction process. But this approach usually makes the process too time consuming. To reduce
the number of elements to be computed and stored, a common approach uses the scanner symmetries. In this
work we use polar voxels, reducing the number of non-zero elements to be computed by a factor 72, allowing
us to speed up the Monte-Carlo simulations in which the computation of the SRM is based. In this work we
combine the use of blobs, a type of spherical basis function, with a polar representation of the
SRM. The latter is especially important for blobs, because their overlap significantly increases the size of the
SRM. This work will show a quantitative comparison of reconstructed images using Cartesian
voxels, polar voxels and blobs. We will show that blobs reduce image noise compared to voxels, while producing
lower spatial resolution degradation than polar voxels after Gaussian filtering.
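A common family of spherical basis functions is the generalized Kaiser-Bessel blob; the sketch below uses illustrative parameter values (radius a, shape alpha, order m), not those chosen in the study.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def kb_blob(r, a=2.0, alpha=10.4, m=2):
    """Generalized Kaiser-Bessel blob profile: radially symmetric,
    equal to 1 at the centre and falling smoothly to 0 at radius a."""
    r = np.asarray(r, dtype=float)
    z = np.sqrt(np.clip(1.0 - (r / a) ** 2, 0.0, None))
    return np.where(r <= a, z**m * iv(m, alpha * z) / iv(m, alpha), 0.0)

profile = kb_blob(np.array([0.0, 1.0, 2.0, 3.0]))
```

For m >= 1 both the profile and its derivative vanish at the blob radius, which is why blobs yield smoother images than sharp-edged voxels.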
Full modeling of AX-PET: a new PET device with axially oriented crystals based on Geant4 and GATE
Show abstract
AX-PET is a novel PET detector based on long axially arranged crystals and orthogonal Wavelength shifter
(WLS) strips, both individually readout by Geiger-mode Avalanche Photo Diodes (G-APD). Its design was
conceived in order to reduce the parallax error and simultaneously improve spatial resolution and sensitivity.
The sensitivity can be further enhanced by adding additional crystal layers as well as by including Inter-Crystal
Scatter (ICS) events, identified and processed post-acquisition. Its unique features require dedicated Monte Carlo
(MC) simulations and its non-conventional design makes modeling rather challenging. We developed an AX-PET
model based on Geant4 and GATE packages. Simulations were extensively validated against experimental data
obtained from both small scale laboratory and full module setups. The first simulations aimed at developing
an analytical model of the WLS behavior which was afterwards coupled to GATE. Full AX-PET acquisitions
were used to test the GATE simulations. The agreement between data and simulations was very good. AX-PET
simulations are employed to test and optimize image reconstruction software and, at the same time, train ICS
identification and reconstruction algorithms.
Observing the high resolution capabilities of a silicon PET insert probe
K. Brzezinski,
J. F. Oliver,
J. Gillam,
et al.
Show abstract
A high resolution silicon detector probe in coincidence with a conventional PET scanner should be able to increase
the scanner's spatial resolution. The MADEIRA PET probe is such a device; it was simulated in coincidence
with a clinical scanner using the Monte Carlo package GATE. The device consists of ten layers
of silicon with 1×1×1 mm³ pixels in an 80×52 array. Simulations were run with various activity distributions.
No attenuation was included to reduce the amount of scatter in the simulated data. The simulations were
conducted in air. Coincidence sorting was performed on the singles list mode data. Random coincidences were
rejected and no time or energy blurring was used in order to isolate the events which determine the highest
achievable spatial resolution. Sinograms were calculated from the sorted data, with one sinogram containing
events where both annihilation photons were detected in the PET ring and another for probe-ring events. The
probe-ring sinograms identified the limited FOV of the probe. The point-spread function was calculated from
the sinograms. The full-width half-maximum decreased from 5.5 mm in the ring-ring sinogram to 2.7 mm in the
ring-probe sinogram. The full-width third-maximum decreased from 7 mm to 3 mm. Images were reconstructed
using the Maximum Likelihood-Expectation Maximization (ML-EM) algorithm using the list-mode coincidence
data. The improvement in spatial resolution seen in the sinograms is reflected in the images.
Ultrafast image reconstruction of a dual-head PET system by use of CUDA architecture
Show abstract
Positron emission tomography (PET) is an important imaging modality in both clinical usage and research
studies. For small-animal PET imaging, it is of major interest to improve the sensitivity and resolution. We
have developed a compact high-sensitivity PET system that consisted of two large-area panel PET detector
heads. The highly accurate system response matrix can be computed by use of Monte Carlo simulations, and
stored for iterative reconstruction methods. The detector head employs 2.1x2.1x20 mm3 LSO/LYSO crystals of
pitch size equal to 2.4 mm, and thus will produce more than 224 million lines of response (LORs). By exploiting
the symmetry property in the dual-head system, the computational demands can be dramatically reduced.
Nevertheless, the tremendously large system size and repetitive reading of system response matrix from the hard
drive will result in extremely long reconstruction times. The implementation of an ordered subset expectation
maximization (OSEM) algorithm on a CPU system (four Athlon x64 2.0 GHz PCs) took about 2 days for 1
iteration. Consequently, it is imperative to significantly accelerate the reconstruction process to make it more
useful for practical applications. Specifically, the graphic processing unit (GPU), which possesses highly parallel
computational architecture of computing units can be exploited to achieve a substantial speedup. In this work, we
employed a state-of-the-art GPU, the NVIDIA Tesla C2050, based on the Fermi generation of the Compute Unified
Device Architecture (CUDA), to complete the reconstruction process within a few minutes. We demonstrated
that reconstruction times can be drastically reduced by using the GPU. The OSEM reconstruction algorithms
were implemented employing both GPU-based and CPU-based codes, and their computational performance was
quantitatively analyzed and compared.
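The ML-EM update at the core of OSEM (OSEM applies the same update cyclically over subsets of the data to accelerate convergence) can be sketched with a small dense system matrix; the matrix and activity values below are random placeholders, not the Monte Carlo system response of the dual-head scanner.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """ML-EM update for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity image, A^T 1
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x

rng = np.random.default_rng(2)
A = rng.random((40, 10)) + 0.01           # placeholder system matrix
x_true = rng.random(10) + 0.5             # placeholder activity
y = A @ x_true                            # noiseless, consistent projections
x_est = mlem(A, y)
```

In the GPU implementation, the forward projection `A @ x` and backprojection `A.T @ (...)` are the operations parallelized across CUDA threads, since each LOR and each voxel can be processed independently.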
Evaluation of image gating as an approach for noise estimation and optimisation of SPECT images
Khalid Alzimami,
Salem Sassi,
Abduallah Alshehri,
et al.
Show abstract
This study aims to evaluate the potential for using image gating
techniques for accurately estimating random noise in clinical SPECT images.
Phantom and patient bone SPECT scans were acquired in a gated mode using
standard acquisition protocols with a dummy electronic trigger. The acquired
projection data was divided into time slots and reconstructed using the same
algorithm to produce independent sets of data. Our results showed that this
method can be used for accurate estimation of the level of random noise in
phantom and clinical SPECT studies, and it avoids the problem of pixel-value
correlation faced when using small ROIs to estimate noise in SPECT images.
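The gating idea (several statistically independent data sets of the same static activity) can be sketched with Poisson replicates; the counts, gate number, and image size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Eight statistically independent "gates" of the same static activity:
# every voxel count is Poisson-distributed with the same mean (100).
gates = rng.poisson(lam=100.0, size=(8, 64, 64)).astype(float)

# Per-voxel noise estimate: sample standard deviation across gates,
# which avoids the pixel-value correlations of small-ROI estimates.
noise_map = gates.std(axis=0, ddof=1)
mean_noise = noise_map.mean()   # expected near sqrt(100) = 10
```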
Singles-prompt: a novel method to estimate random coincidences by using prompts and singles information
Josep F. Oliver,
M. Rafecas
Show abstract
Random coincidences are one of the main sources of image degradation in Positron Emission Tomography (PET)
imaging. This aspect becomes especially important when high image quality is needed or accurate quantitative
analysis is undertaken. To correct for these degradation effects, an accurate method to estimate the contribution
of random events is necessary. A common method of choice is the so called "Singles Rate" method (SR), widely
used because of its good statistical properties. SR is based on the measurement of the singles count rate in each
detector, R_ij^SR = 2τ S_i S_j, where τ is the coincidence window and S_i, S_j are the singles rates of detectors i and j.
However, the SR method always overestimates the true randoms rate, especially for non-extended
sources close to the center of the scanner's field of view. In this work an extension of the SR method is proposed.
The novel method, called "Singles-Prompts" (SP) takes advantage of the extra information provided by the
measured prompts. Changing SR by SP is straightforward since only two simple replacements in the SR formula
are required. To validate the method, Monte Carlo simulations have been performed. A small animal PET has
been used together with three source geometries that cover two limiting cases: from a point-like source to an
extended source.
The results show that the SP estimation is more accurate than that of SR and always has a smaller variance.
More importantly, for energy windows that prevent the registration of multiple events caused by inter-crystal scatter
(ICS), the SP method provides an estimation compatible with the correct value regardless of the geometry of
the source.
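The SR estimate itself is a one-liner; the coincidence window and singles rates below are illustrative values (the SP replacements are not specified in the abstract and are therefore not sketched).

```python
def randoms_rate_sr(tau, s_i, s_j):
    """Singles Rate (SR) estimate of the random-coincidence rate for a
    detector pair: R_ij = 2 * tau * S_i * S_j, where tau is the
    coincidence window (s) and S_i, S_j are singles rates (counts/s)."""
    return 2.0 * tau * s_i * s_j

# e.g. a 6 ns window with singles rates of 1e5 and 2e5 counts/s
rate = randoms_rate_sr(6e-9, 1e5, 2e5)   # 240.0 randoms per second
```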
An investigation of an application specific PET prototype with inhomogeneous-energy resolution detectors
Show abstract
Designed for general purpose with nearly fixed performance, traditional PET systems are constructed with almost
identical and unmovable detectors. In this work, we are developing an application specific PET with detectors with
inhomogeneous performances, which can be adaptively rearranged for different objects and regions of interest (ROIs).
This article reports our initial investigation on a prototype system consisting of inhomogeneous detectors with two levels
of energy resolution. In this system, the high performance detectors and the normal performance detectors are arranged
in one scanner, and the high performance detectors are continuously distributed on the scanner. A liver phantom is
constructed as our object of detection. The coincidence data and image quality are analyzed with different distribution
schemes of the high performance detectors. Preliminary results indicate that the proposed prototype obtains higher true
counts and lower scatter counts than the system with normal performance detectors, resulting in lower scatter fractions
for every region and the whole object. The extent of the scatter fraction reduction varies with the distribution
schemes of the high performance detectors, which is related to the distribution of activity. Better signal-to-noise ratio for
every region in the object and better percent contrast are also obtained in some schemes of the high performance
detectors.
Basic design and simulation of a SPECT microscope for in vivo stem cell imaging
Show abstract
The need to understand the behavior of individual stem cells at the various stages of their differentiation and to assess
the resulting reparative action in pre-clinical model systems, which typically involves laboratory animals, provides the
motivation for imaging of stem cells in vivo at high resolution. Our initial focus is to image cells and cellular events at
single cell resolution in vivo in shallow tissues (a few mm of intervening tissue) in laboratory mice and rats. In order to
accomplish this goal we are building a SPECT-based microscope. We based our design on earlier theoretical work with
near-field coded apertures and have adjusted the components of the system to meet the real-world demands of instrument
construction and of animal imaging. Our instrumental design possesses a reasonable trade-off between field-of-view,
sensitivity, and contrast performance (photon penetration). A layered gold aperture containing 100 pinholes and
intended for use in coded aperture imaging applications has been designed and constructed. A silicon detector connected
to a TimePix readout from the CERN collaborative group was selected for use in our prototype microscope because of
its ultra-high spatial and energy resolution capabilities. The combination of the source, aperture, and detector has been
modeled and the coded aperture reconstruction of simulated sources is presented in this work.
Poster Session: X-ray Imaging
K-edge subtraction imaging using a pixellated energy-resolving detector
Silvia Pani,
Sarene C. Saifuddin,
Christiana Christodoulou,
et al.
Show abstract
This paper presents preliminary work aimed at assessing the feasibility of K-edge subtraction imaging using the
spectroscopic information provided by a pixellated energy-resolving Cadmium Zinc Telluride detector, having an active
area of 20×20 pixels, each 250 μm in size. Images of a test object containing different amounts of Iodine-based contrast agent
were formed above and below the K-edge of Iodine (33.2 keV) by integrating, pixel by pixel, different windows of the
spectrum. The results show that the optimum integration window for details 1-2 mm in diameter is between 2 keV and 5
keV. Concentrations of down to 50 μg Iodine/ml were detected in a 1-mm diameter tube with an entrance dose of 100
μGy.
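K-edge subtraction can be sketched as a two-energy log-attenuation inversion; the attenuation coefficients and thicknesses below are illustrative placeholders, not calibrated values for the detector or energies used in the study.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) just below and
# above the iodine K-edge (33.2 keV); illustrative values only.
MU = np.array([[ 8.0, 0.35],    # below edge: [iodine, background]
               [36.0, 0.33]])   # above edge: iodine jumps at the K-edge

def kedge_decompose(i_below, i_above, i0=1.0):
    """Recover iodine and background thicknesses (cm) from one
    measurement below and one above the K-edge (log subtraction)."""
    logs = -np.log(np.array([i_below, i_above]) / i0)
    return np.linalg.solve(MU, logs)

# Forward-simulate a detail and invert
t_true = np.array([0.005, 4.0])        # 50 um iodine path in 4 cm tissue
i_meas = np.exp(-MU @ t_true)
t_est = kedge_decompose(i_meas[0], i_meas[1])
```

The abrupt jump in the iodine attenuation at the K-edge is what makes the 2×2 system well conditioned, while the background coefficients barely change between the two energy windows.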
Verification of nonlinearity in digital x-ray images using surrogate method
Show abstract
We investigate the nonlinearity in digital X-ray images to determine the feasibility of a noise reduction process using a
mathematical model, which realizes an accurate digital X-ray imaging system. To develop this mathematical model, it is
important to confirm whether the system is linear or nonlinear. We have verified the nonlinearity of the imaging system
through an analysis of computed radiography (CR) images by using the surrogate method, a statistical test of
nonlinearity, and the Wayland test. Specifically, we use the Fourier transform surrogate method. The
Wayland test can be used for evaluating the complexity of the orbit of a signal aggregate called the attractor
reconstructed in a high-dimensional phase space using a nonlinear statistical parameter called the translation error.
Nonlinearity is determined by statistically comparing the translation error of the original data with that of the surrogate
data. X-ray images are obtained under different conditions to investigate the effects of various tube voltages (50 and 80
kV) and dose settings (2 and 10 mAs). We extract 30 profiles in each of two directions, vertical (V-direction)
and horizontal (H-direction) to the X-ray tube. In the H-direction, nonlinearity is found at all voltage and dose settings.
On the other hand, nonlinearity is found only at 10 mAs and 80 kV in the V-direction. Hence, it can be concluded that
nonlinearity is indicated by a decrease in the quantum mottle, and the factors of nonlinearity exhibit the comprehensive
variation produced by the digital X-ray imaging system.
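Fourier transform surrogate generation can be sketched as phase randomization that preserves the power spectrum (the linear properties) while destroying any nonlinear structure; the example profile is a placeholder for an extracted image profile.

```python
import numpy as np

def fourier_surrogate(x, rng):
    """Fourier transform surrogate of a real 1D series: randomize the
    phases while preserving the power spectrum."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    new = np.abs(spec) * np.exp(1j * phases)
    new[0] = spec[0]            # keep the DC term (the mean)
    if n % 2 == 0:
        new[-1] = spec[-1]      # Nyquist bin must stay real
    return np.fft.irfft(new, n=n)

# Placeholder profile with nonlinear structure
profile = np.sin(0.3 * np.arange(256)) ** 3
surrogate = fourier_surrogate(profile, np.random.default_rng(0))
```

The nonlinearity test then compares a statistic (here, the Wayland translation error) between the original profile and an ensemble of such surrogates: a significant difference rejects the linear null hypothesis.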
A software tool for quality assurance of computed/digital radiography (CR/DR) systems
Show abstract
The recommended methods to test the performance of computed radiography (CR) systems have been established
by the American Association of Physicists in Medicine in Report No. 93, "Acceptance Testing and Quality Control
of Photostimulable Storage Phosphor Imaging Systems". The quality assurance tests are categorized by how
frequently they need to be performed. Quality assurance of CR systems is the responsibility of the facility that
performs the exam and is governed by the state in which the facility is located. For example, the New York State
Department of Health has established a guide listing the tests that a CR facility must perform for quality
assurance. This study aims at educating the reader about the new quality assurance requirements defined by
the state. It further demonstrates an easy-to-use software tool, henceforth referred to as the Digital Physicist,
developed to aid a radiologic facility in conforming to state guidelines and monitoring quality assurance of
CR/DR imaging systems. The Digital Physicist provides a vendor-independent procedure for quality assurance
of CR/DR systems and generates a PDF report with a brief description of the tests and the obtained results.
Validation of a method to convert an image to appear as if acquired using a different digital detector
Show abstract
A method to convert digital mammograms acquired on one system to appear as if acquired on another system is
presented. This method could be used to compare the clinical efficacy of different systems. The signal transfer
properties, modulation transfer function (MTF) and noise power spectrum (NPS), were measured for two detectors: a
computed radiography (CR) system and a digital radiography (DR) system. The contributions to the NPS from
electronic, quantum and structure sources were calculated by fitting a polynomial, at each spatial frequency, to the NPS
as a function of dose. The conversion process blurs the original image with the ratio of the MTFs in frequency space;
noise of the correct magnitude and spatial frequency content is then added to account for differences in detector
response and dose. The method was tested on images of a CDMAM test object acquired on the two systems at two dose
levels. The highest-dose images were converted to lower-dose images for the same detector, and images from the DR
system were then converted to appear as if acquired at a similar dose using CR. Contrast-detail curves from the
simulated CDMAM images closely matched those of real images.
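The core conversion step, blurring with the MTF ratio in frequency space and then adding noise with a prescribed power spectrum, can be sketched as below. This is a simplified illustration, not the paper's implementation: `mtf_src`, `mtf_dst` and `noise_ps` are hypothetical 2-D arrays sampled on the image's DFT grid, whereas the paper derives them from measured detector MTF and NPS data.

```python
import numpy as np

def convert_detector(img, mtf_src, mtf_dst, noise_ps, seed=None):
    """Blur img by the target/source MTF ratio in frequency space, then add
    colored noise whose power spectrum is noise_ps (all arrays share the
    image's DFT grid; this is an illustrative sketch)."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(np.asarray(img, dtype=float))
    # avoid division blow-up where the source MTF has fallen to ~zero
    ratio = np.where(mtf_src > 1e-6, mtf_dst / np.maximum(mtf_src, 1e-6), 0.0)
    blurred = np.real(np.fft.ifft2(F * ratio))
    # shape white Gaussian noise by the square root of the target NPS
    white = np.fft.fft2(rng.standard_normal(np.shape(img)))
    noise = np.real(np.fft.ifft2(white * np.sqrt(noise_ps)))
    return blurred + noise
```

With identical MTFs and a zero noise spectrum the function returns the input unchanged, which is a useful sanity check.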
Measuring the presampled MTF from a reduced number of flat-field images using the noise response (NR) method
Show abstract
We evaluate a new method for measuring the presampled modulation transfer function (MTF) using the noise power
spectrum (NPS) obtained from a few flat-field images acquired at one exposure level. The NPS is the sum of structure,
quantum, and additive instrumentation noise, which are proportional to exposure squared, exposure, and a constant,
respectively, with the spatial-frequency dependence of the quantum noise depending partly on the detector MTF.
Cascaded linear-systems theory was used to derive an exact and generic relationship that was used to isolate noise terms
and enable determination of the MTF directly from the noise response, thereby circumventing the need for precision test
objects (slit, edge, etc.) as required by standard techniques. Isolation of the quantum NPS by fitting the total NPS versus
exposure obtained using 30 flat-field images each at six or more different exposure levels with a linear regression
provides highly accurate MTFs. A subset of these images from indirect digital detectors was used to investigate the
accuracy of measuring the MTF from 30 or fewer flat-field images obtained at a single exposure level. Analyzing as few
as two images acquired at a single exposure resulted in no observable systematic error. Increasing the number of images
analyzed resulted in an increase in accuracy. Fifteen images provided comparable accuracy with the most rigorous slope
approach, with less than 5% variability, suggesting additional image acquisitions may be unnecessary. Reducing the
number of images acquired for the noise response method further simplifies and facilitates routine MTF measurements.
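The separation of noise sources described above, fitting the total NPS against exposure at each spatial frequency with a quadratic model, can be sketched as follows. This is a hedged illustration of the model named in the abstract (structure, quantum and electronic terms proportional to exposure squared, exposure, and a constant), not the authors' implementation.

```python
import numpy as np

def decompose_nps(exposures, nps_stack):
    """Fit NPS_total(E) = s*E**2 + q*E + e at each spatial frequency.

    exposures: shape (n_exposures,); nps_stack: shape (n_exposures, n_freq).
    Returns per-frequency structure (s), quantum (q) and electronic (e)
    coefficient arrays; the quantum term q carries the MTF-dependent shape."""
    E = np.asarray(exposures, dtype=float)
    A = np.stack([E**2, E, np.ones_like(E)], axis=1)   # design matrix
    coef, *_ = np.linalg.lstsq(A, np.asarray(nps_stack, dtype=float),
                               rcond=None)
    return coef[0], coef[1], coef[2]
```

Recovering known coefficients from synthetic data is a quick way to validate the fit before applying it to measured flat-field NPS curves.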
Poster Session: Detectors
CZT detector in multienergy x-ray imaging with different pixel sizes and pitches: Monte Carlo simulation studies
Show abstract
A photon-counting detector based on semiconductor materials is a very promising approach for x-ray imaging.
Cadmium zinc telluride (CZT) has a high atomic number, which results in high absorption coefficients for x-rays.
However, CZT detectors exhibit several problems with hole trapping and charge sharing. Charge sharing occurs due to
charge diffusion, characteristic x-ray escape, and scattered x-rays in the detector.
In this study, we evaluated the effects of x-ray interactions within a CZT detector using Monte Carlo simulations. To
demonstrate the effectiveness of the CZT detector in clinical applications, we report CNR improvement in K-edge
images and material decomposition using energy-selective windows.
The x-ray energy spectrum was acquired at 120 kVp tube voltage with 2 mm Al filtration and a 10 cm water phantom
added in the x-ray beam. Geant4 Application for Tomographic Emission (GATE) version 6.0 was used to model a CZT
crystal with an area of 10x10 mm2 and a thickness of 4 mm. Detector pixels with sizes of 0.09x0.09, 0.45x0.45, and
0.90x0.90 mm2 were simulated. For all pixel sizes, the simulated x-ray spectra were distorted towards the lower-energy
region, because characteristic x-rays add counts in the range of 20-40 keV. The magnitude of this deterioration is
substantial for small pixel sizes. However, we demonstrated that the spectral distortion does not greatly affect x-ray
imaging. The GATE simulation model and these results may be used as a basis for the development of energy-resolved
photon-counting x-ray detectors. We believe that the CZT detector may enhance the detectability of multi-energy x-ray
imaging.
Effect of x-ray incident direction and scintillator layer design on image quality of indirect-conversion flat-panel detector with GOS phosphor
Show abstract
In this study, we characterized the image quality of two types of indirect-conversion flat-panel detectors: an X-ray
incident-side photo-detection system (IS) and an X-ray penetration-side photo-detection system (PS). These detectors
consist of a Gd2O2S:Tb (GOS) scintillator coupled with a photodiode thin film transistor (PD-TFT) array on a glass
substrate. The detectors have different X-ray incident directions, glass substrates, and scintillators. We also characterized
the effects of layered scintillator structures on the image quality by using a single-layered scintillator containing large
phosphor grains and a double-layered scintillator consisting of a layer of large phosphor grains and a layer of small
phosphor grains. The IS system consistently demonstrated a higher MTF than the PS system for a scintillator of the same
thickness. Moreover, the IS system showed a higher DQE than the PS system when a thick scintillator was used. While
the double-layered scintillators were useful for improving the MTF in both systems, a thick single-layered scintillator
was preferable for obtaining a high DQE when the IS system was applied. These results indicate that an IS system can
efficiently utilize the light emitted from the phosphor at the far side of the PD without the occurrence of blurring. The
use of IS systems makes it possible to increase the thickness of the scintillator layer for improving the sensitivity without
reducing the MTF, which increases the DQE. The DQE of the IS system was 1.2 times that of the PS system, despite the
absorption of X-rays at the glass substrate before entering the phosphor.
Graphical user interface for a dual-module EMCCD x-ray detector array
Show abstract
A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench
(LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray
detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs)
each having a variable on-chip electron-multiplication gain of up to 2000x to reduce the effect of readout noise. To
enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two
EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are
under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient
image review. Images from the array are stitched into a 2kx1k pixel image that can be acquired and saved at a rate of 17
Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's
directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters
including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is
designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution,
high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and
interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses
and to perform more precise image-guided interventions.
CMOS image sensor based X-Ray detector noise characterization and its fixed pattern noise correction method
Show abstract
An X-Ray detector based on CMOS imaging sensors has recently been developed for dental practice applications. The
X-Ray detector has inherent fixed pattern noise, as well as temporal noise, due to the CMOS sensor. In order to
offer a quality image for dental diagnostics, the X-Ray image must be processed and corrected prior to display. The
X-Ray detector noise has been studied and characterized using three X-Ray devices. Both an X-Ray source and a green
visible LED light source are used to determine the magnitude of X-Ray photon shot noise. Two sets of images, a single
flood image and a 20-frame average, are studied to characterize the photo-response non-uniformity noise. The
investigation shows that the device has several noise sources, with fixed pattern noise and X-Ray photon shot noise
being dominant. Separating these noise components from each other helps to improve image quality by reducing or
even eliminating some of them.
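A standard way to remove fixed pattern noise of the kind discussed above is gain/offset (dark-frame and flood-field) correction. The sketch below illustrates that generic technique, not this paper's specific pipeline; in practice the dark and flood frames are averages of many acquisitions so their own temporal noise is not re-injected.

```python
import numpy as np

def correct_fixed_pattern(raw, dark, flood):
    """Classic gain/offset correction: subtract a dark (offset) frame and
    divide by a normalized flood-field (gain) map. raw, dark, flood are
    same-shape 2-D arrays (illustrative sketch)."""
    raw, dark, flood = (np.asarray(a, dtype=float) for a in (raw, dark, flood))
    gain = flood - dark
    gain = gain / gain.mean()            # normalize so a flat input stays flat
    return (raw - dark) / np.maximum(gain, 1e-6)
```

For a synthetic frame raw = dark + gain * signal, the correction returns the signal up to the global gain normalization.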
Selenium coated CMOS passive pixel array for medical imaging
Show abstract
Digital imaging systems for medical applications use amorphous silicon thin-film transistor (TFT) technology due to its
ability to be manufactured over large areas. However, TFT technology is far inferior to crystalline silicon CMOS
technology in terms of speed, stability, noise susceptibility, and feature size. This work investigates the feasibility of
integrating an imaging array fabricated in CMOS technology with an a-Se detector. The design of a CMOS passive
pixel sensor (PPS) array is presented, together with the integration of an 8×8 PPS array with a 75-micron-thick
stabilized amorphous selenium detector. A non-linear increase in the dark current of 200 pA, 500 pA and 2 nA is
observed at electric fields of 0.27, 0.67 and 1.33 V/micron, respectively, which shows a successful integration of the
selenium layer with the CMOS array. Results also show that the integrated selenium-CMOS PPS array has good
responsivity to optical light and X-rays, opening the door to further research on CMOS imaging architectures.
Demonstrating that PPS chips fabricated in CMOS technology can use a-Se as a detector is thus a first step on a
promising path of research. Though area may still prove challenging, larger CMOS wafers can be manufactured and
tiled to reach a size sufficient for certain diagnostic imaging applications, potentially including large-area applications
such as digital mammography.
CMOS digital intra-oral sensor for x-ray radiography
Show abstract
In this paper, we present a CMOS digital intra-oral sensor for x-ray radiography. The sensor system consists of a
custom CMOS imager, a custom scintillator/fiber-optic plate, camera timing and digital control electronics, and direct
USB communication. The CMOS imager contains 1700 x 1346 pixels with a pixel size of 19.5um x 19.5um and was
fabricated with a 0.18um CMOS imaging process. The sensor and CMOS imager design features chamfered corners for
patient comfort. All camera functions are integrated within the sensor housing, and a standard USB cable directly
connects the intra-oral sensor to the host computer. The sensor demonstrated a wide dynamic range from 5uGy to
1300uGy, high image quality with an SNR greater than 160 at a 400uGy dose, and a spatial resolution of more than
20 lp/mm.
Design and fabrication of single grain TFTs and lateral photodiodes for low dose x-ray detection
Show abstract
Design, fabrication and measurement results of single-grain (SG) lateral PIN photodiodes and SG thin-film transistors
(TFTs) are reported in this paper. The devices were developed for use in an indirect X-ray image sensor pixel design.
We controlled the position of 6 μm x 6 μm silicon grains by excimer-laser crystallization of an a-Si film. Lateral PIN
photodiode (PD) arrays were designed inside a single grain with intrinsic region lengths of 1 μm, 1.5 μm and 2 μm and
a width of 4 μm. The gate length and width of the fabricated TFTs are 1.5 μm and 4 μm, respectively. Devices were
fabricated using a-Si, SOI and crystalline silicon layers, and the electrical measurement results were compared. SG
photodiodes of 100 μm x 100 μm have dark and saturation currents on the order of 0.1 nA and 10 nA, resulting in a
light sensitivity of 200 under white-light exposure. NMOS and PMOS TFTs fabricated inside the grains have field-effect
mobilities of 526 cm2/Vs and 253 cm2/Vs, respectively.
Photon quantum shot noise limited array in amorphous silicon technology for protein crystallography applications
Show abstract
An array of ring voltage-controlled oscillators (RVCOs) aimed at photon quantum shot-noise-limited applications such
as protein crystallography is presented. The pixelated array consists of 24 by 21 RVCO pixels. Each RVCO pixel
converts x-ray-generated input charge into an output oscillation frequency. This architecture can be used in both direct
and indirect detection schemes; here, direct detection using a layer of amorphous selenium (a-Se) coupled with the
RVCO array is proposed. Theoretical and experimental results for an in-house-fabricated array of RVCOs in
amorphous silicon (a-Si) technology are presented. The requirements for the protein crystallography application are
listed, and the way this array addresses each of them is discussed in detail. The off-panel readout circuitry, designed
and implemented in-house, is also described. The off-panel readout circuits play an important role in imaging
applications using frequency-based pixels: they have to be optimized to reduce fixed pattern noise and fringing effects
in an imaging array containing many such RVCO pixels. Since the oscillation frequency of each pixel is in the range of
100 kHz, there is no antenna effect in the array; the antenna effect becomes an important issue in other technologies
such as polysilicon (poly-Si) and CMOS due to their higher oscillation frequencies (more than 100 MHz). Noise
estimations, stability simulations and measurements for randomly selected pixels in the fabricated RVCO array are
presented. The reported architecture is particularly promising for large-area photon quantum shot-noise-limited
applications, specifically protein crystallography. It can, however, also be used for low-dose fluoroscopy, dental
computed tomography (CT) and other large-area imaging applications limited by input-referred electronic noise, owing
to its very low input-referred electronic noise, high sensitivity and ease of fabrication in low-cost a-Si technology.
Study of gain phenomenon in lateral metal-semiconductor-metal detectors for indirect conversion medical imaging
Show abstract
Previously, metal-semiconductor-metal (MSM) lateral amorphous selenium (a-Se) detectors have been proposed for
indirect-detection medical imaging applications. These detectors have raised interest due to their high speed and
photogain. The gain measured from these devices was assumed to be photoconductive gain; however, its origin was not
fully understood. In addition, the possible presence of photocurrent multiplication gain was not investigated. For
integration-type applications, photocurrent multiplication gain is desirable since the total collected charge can be
greater than the total number of absorbed photons. In order to fully appreciate the value of MSM devices and their
benefit for different applications, whether counting or integration, the mechanisms responsible for the observed
response need to be investigated. In this paper, we systematically study, through experimental and theoretical means,
the nature of the photoresponse and its responsible mechanisms. This study also exposes possible means to increase the
performance of the device and the conditions under which it will be most beneficial.
Complete erasing of ghost images caused by deeply trapped electrons on computed radiography plates
Show abstract
Ghost images, i.e., latent images that are unerasable with visible light (LIunVL) and reappearing images, on computed
radiography (CR) plates were completely erased by simultaneously exposing the plates to filtered ultraviolet light and
visible light. Three different types of CR plates (Agfa, Kodak, and Fuji) were irradiated with 50 kV X-ray beams in the
dose range 8.1 mGy to 8.0 Gy, and then conventionally erased for 2 h with visible light. The remaining LIunVL could
be erased by repeated 6 h simultaneous exposures to filtered ultraviolet light and visible light. After the sixth round of
exposure, all the LIunVL in the three types of CR plates were erased to the same level as in an unirradiated plate, and
no latent images reappeared after storage at 0°C for 14 days. The absorption spectra of the deep centers were measured
using polychromatic ultraviolet light from a deep-ultraviolet lamp. The deep centers showed a dominant absorption
peak at around 324 nm for the Agfa and Kodak plates and at around 320 nm for the Fuji plate, in each case followed by
a few small peaks. After the CR plates were completely erased, these peaks were no longer observed.
Evaluation and comparison of high-resolution (HR) and high-light (HL) phosphors in the micro-angiographic fluoroscope (MAF) using generalized linear systems analyses (GMTF, GDQE) that include the effect of scatter, magnification, and detector characteristics
Show abstract
In this study, we evaluated the imaging characteristics of the high-resolution, high-sensitivity micro-angiographic
fluoroscope (MAF) with 35-micron pixel-pitch when used with different commercially-available 300 micron thick
phosphors: the high resolution (HR) and high light (HL) from Hamamatsu. The purpose of this evaluation was to see if
the HL phosphor with its higher screen efficiency could be replaced with the HR phosphor to achieve improved
resolution without an increase in noise resulting from the HR's decreased light-photon yield. We designated the detectors
MAF-HR and MAF-HL and compared them with a standard flat panel detector (FPD) (194 micron pixel pitch and 600
micron thick CsI(Tl)). For this comparison, we used the generalized linear-system metrics of GMTF, GNNPS and
GDQE which are more realistic measures of total system performance since they include the effect of scattered radiation,
focal spot distribution, and geometric un-sharpness. Magnifications (1.05-1.15) and scatter fractions (0.28 and 0.33)
characteristic of a standard head phantom were used. The MAF-HR performed significantly better than the MAF-HL at
high spatial frequencies. The ratio of GMTF and GDQE of the MAF-HR compared to the MAF-HL at 3(6) cycles/mm
was 1.45(2.42) and 1.23(2.89), respectively. Despite significant degradation by inclusion of scatter and object
magnification, both MAF-HR and MAF-HL provide superior performance over the FPD at higher spatial frequencies
with similar performance up to the FPD's Nyquist frequency of 2.5 cycles/mm. Both substantially higher resolution and
improved GDQE can be achieved with the MAF using the HR phosphor instead of the HL phosphor.
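One common form of the generalized MTF combines focal-spot and detector blur at the object plane through the geometric magnification. The sketch below illustrates that idea only; the full GMTF/GDQE formulation used in the study, which also folds in scatter fraction and noise, follows the generalized linear-systems literature and is not reproduced here. `mtf_det` and `mtf_focal` are callables standing in for measured curves (assumptions for illustration).

```python
import numpy as np

def gmtf(f, mtf_det, mtf_focal, m):
    """Generalized MTF at object-plane spatial frequency f for geometric
    magnification m: focal-spot blur enters scaled by (m-1)/m and detector
    blur by 1/m (one common form; illustrative sketch only)."""
    f = np.asarray(f, dtype=float)
    return mtf_focal(f * (m - 1.0) / m) * mtf_det(f / m)
```

At unit magnification the focal spot contributes no blur and the expression reduces to the detector MTF alone, which matches the intuition that geometric unsharpness only appears when the object is magnified.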
Poster Session: Novel Systems, Other
LBP based detection of intestinal motility in WCE images
Show abstract
In this research study, a system to support medical analysis of intestinal contractions by processing WCE images
is presented. Small intestine contractions are among the motility patterns that reveal many gastrointestinal
disorders, such as functional dyspepsia, paralytic ileus, irritable bowel syndrome, and bacterial overgrowth. The
images were obtained using the Wireless Capsule Endoscopy (WCE) technique, a patented, disposable video
color-imaging capsule. Manual annotation of contractions is a laborious task, since the recording device of the
capsule stores about 50,000 images and contractions might represent only 1% of the whole video. In this paper
we propose the use of Local Binary Patterns (LBP) combined with powerful texton statistics to find the frames
of the video related to contractions. We achieve a sensitivity of about 80% and a specificity of about 99%. The
high detection accuracy of the proposed system thus provides an indication that such intelligent schemes could
be used as a supplementary diagnostic tool in endoscopy.
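The LBP descriptor named above can be sketched as a basic 8-neighbor code map. This illustrates only the texture operator itself; the paper's full pipeline (texton statistics and the classifier that produces the reported sensitivity/specificity) is not reproduced.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbor Local Binary Pattern map of a 2-D grayscale image.

    Each interior pixel gets an 8-bit code: bit k is set when the k-th
    neighbor is >= the center pixel. Frame classification would then use
    histograms of these codes as texture features."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                      # interior (center) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

On a perfectly flat image every neighbor comparison succeeds, so every code is 255; textured regions produce varied code histograms.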
Temperature anomaly detection and estimation using microwave radiometry and anatomical information
Show abstract
Many medically significant conditions (e.g., ischemia, carcinoma and inflammation) involve localized anomalies in
physiological parameters such as the metabolic and blood perfusion rates. These in turn lead to deviations from normal
tissue temperature patterns. Microwave radiometry is a passive system for sensing the radiation that objects emit
naturally in the microwave frequency band. Since the emitted power depends on temperature, and since radiation at low
microwave frequencies can propagate through several centimeters of tissue, microwave radiometry has the potential to
provide valuable information about subcutaneous anomalies. The radiometric temperature measurement for a tissue
region can be modeled as the inner product of the temperature pattern and a weighting function that depends on tissue
properties and the radiometer's antenna. In the absence of knowledge of the weighting functions, it can be difficult to
extract specific information about tissue temperature patterns (or the underlying physiological parameters) from the
measurements. In this paper, we consider a scenario in which microwave radiometry works in conjunction with another
imaging modality (e.g., 3D-CT or MRI) that provides detailed anatomical information. This information is used along
with sensor properties in electromagnetic simulation software to generate weighting functions. It also is used in bio-heat
equations to generate nominal tissue temperature patterns. We then develop a hypothesis testing framework that makes
use of the weighting functions, nominal temperature patterns, and maximum likelihood estimates to detect anomalies.
Simulation results are presented to illustrate the proposed detection procedures. The design and performance of an S-band
(2-4 GHz) radiometer, and some of the challenges in using such a radiometer for temperature measurements deep
in tissue, are also discussed.
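The measurement model stated above, each radiometric reading as the inner product of the temperature pattern with a weighting function, can be written as a one-line discretization. The voxel grid and the normalization of the weight are assumptions for illustration; in the paper the weighting functions come from electromagnetic simulation of the antenna and tissue.

```python
import numpy as np

def radiometric_measurement(temperature, weight):
    """Inner product of a tissue temperature pattern with a (normalized)
    weighting function, both discretized on the same voxel grid."""
    w = np.asarray(weight, dtype=float)
    w = w / w.sum()                      # normalize so weights sum to one
    return float(np.sum(w * np.asarray(temperature, dtype=float)))
```

A quick consistency check: for a uniform temperature field the reading equals that temperature, regardless of the weighting function.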
Optimization of differential phase-contrast imaging setups using simulative approaches
André Ritter,
Peter Bartl,
Florian Bayer,
et al.
Show abstract
Differential phase-contrast imaging with X-ray tubes, based on Talbot interferometry, builds on conventional
X-ray imaging setups. Parameters that are optimized for conventional setups may not be optimal for differential
phase-contrast imaging; there is therefore a high potential for optimizing differential phase-contrast setups.
Quantities like visibility, contrast-to-noise ratio, and dose can be combined to form an objective function. For
differential phase-contrast imaging, such objective functions are generally not known analytically and are expected
to be non-linear. Optimization of differential phase-contrast setups is still possible, as the quantities needed to
form an objective function can be obtained by simulation. Additionally, setup parameters can be varied more
purposefully within a simulation than would be possible in an experimental setup. To take both particle and wave
contributions into account, a Monte-Carlo simulation framework and a wave-field simulation framework are used.
Numerical optimization procedures are an adequate approach to finding optimal setups for differential
phase-contrast imaging, with the objective function obtained by numerical simulation. Hence, different optimization
procedures are evaluated and compared. Results for an optimized phase grating and an optimized analyzer grating
are presented. The appropriate optimization procedure and the optimal setup depend on the intended application of
the setup and the constraints that the setup parameters have to obey.
SEM and microCT validation for en face OCT imagistic evaluation of endodontically treated human teeth
Show abstract
Successful root canal treatment is based on diagnosis, treatment planning, knowledge of tooth anatomy, endodontic
access cavity design, controlling the infection by thorough cleaning and shaping, and the methods and materials used in
root canal obturation. An endodontic obturation must be a complete, three-dimensional filling of the root canal system,
as close as possible to the cemento-dentinal junction, without massive overfilling or underfilling.
There are several known methods used to assess the quality of the endodontic sealing, but most are invasive. These lead
to the destruction of the samples, and often no conclusion can be drawn regarding the existence of any microleakage in
the investigated areas of interest. Using a time-domain en-face OCT system, we have recently demonstrated real-time,
thorough evaluation of the quality of root canal fillings.
The purpose of this in vitro study was to validate the en face OCT imagistic evaluation of endodontically treated human
teeth by using scanning electron microscopy (SEM) and micro-computed tomography (μCT).
SEM investigations evidenced the nonlinear aspect of the interface between the endodontic filling material and the root
canal walls, and material defects in some samples.
The μCT results also revealed some defects inside the root-canal filling and at the interfaces between the material and
the root canal walls.
The advantages of the OCT method are its non-invasiveness and high resolution. In addition, en face OCT
investigations permit visualization of the more complex stratified structure at the interface between the filling material
and the dental hard tissue.
Performance evaluation of a differential phase-contrast cone-beam (DPC-CBCT) system for soft tissue imaging
Show abstract
The differential phase-contrast (DPC) technique is promising as the next breakthrough in the field of X-ray CT
imaging. By utilizing the long-ignored X-ray phase information, DPC has the potential to provide projection images
with higher contrast in a CT scan without increasing the X-ray dose. While traditional absorption-based X-ray imaging
is not very efficient at differentiating soft tissues, DPC is promising as a new method to boost the quality of CT
reconstructions in terms of contrast-to-noise ratio (CNR) in soft tissue imaging. In order to validate and investigate the
use of the DPC technique in a cone-beam CT imaging scheme, a new bench-top micro-focus DPC-based cone-beam
computed tomography (DPC-CBCT) system has been designed and constructed in our lab for soft tissue imaging. The
DPC-CBCT system consists of a micro-focus X-ray tube (focal spot 8 μm), a high-resolution detector, a rotating
phantom holder and two gratings, i.e. a phase grating and an analyzer grating. The detector system has a phosphor
screen, an optical fiber coupling unit and a CMOS chip with an effective pixel pitch of 22.5 microns. The optical
elements are aligned to minimize unwanted moiré patterns, and system parameters, including tube voltage (or
equivalently X-ray spectrum), distances between gratings, source-to-object distance and object-to-detector distance, are
chosen to be practicable in a rotating system. The system is tested with two simple phantoms for performance
evaluation, and 3-D volumetric phase coefficients are reconstructed. The performance of the system is compared with
conventional absorption-based CT in terms of contrast-to-noise ratio (CNR) at an equal X-ray dose level.
X-ray tube-based phase CT: spectrum polychromatics and imaging performance
Show abstract
Owing to its advantages over conventional attenuation-based CT in differentiating low-atomic-number materials,
x-ray refraction-based phase contrast CT implemented with a grating interferometer (namely, grating-based differential
phase contrast CT) has drawn increasing attention recently. Through the Talbot effect, the phase variation of an object
is retrieved and reconstructed to characterize the object's refraction property. The Talbot effect is wavelength
dependent, and a quantitative investigation into the influence of x-ray source spectrum polychromaticity on the
imaging performance of grating-based differential phase contrast CT can provide guidelines for its architecture design
and performance optimization. In this work, we conduct a computer simulation study of x-ray grating-based
differential phase contrast CT imaging with both monochromatic and polychromatic x-ray sources. The preliminary
data show that the modulation transfer function (MTF) of a grating-based differential phase contrast CT with a
polychromatic source changes little in comparison to that with a monochromatic one. Furthermore, it is shown that
spectrum polychromaticity leads to a remarkable improvement in the contrast-to-noise ratio of a grating-based
differential phase contrast CT, which implies that a commercially available x-ray tube can be well suited to build a
differential phase contrast CT with a grating-based interferometer to image the refractive property of an object.
X-ray phase computed tomography for nanoparticulated imaging probes and therapeutics: preliminary feasibility study
Show abstract
With the scientific progress in cancer biology, pharmacology and biomedical engineering, the nano-biotechnology
based imaging probes and therapeutical agents (namely probes/agents) - a form of theranostics - are among the strategic
solutions bearing the hope for the cure of cancer. The key feature distinguishing the nanoparticulated probes/agents from
their conventional counterparts is their targeting capability. A large surface-to-volume ratio in nanoparticulated
probes/agents enables the accommodation of multiple targeting, imaging and therapeutic components to cope with the
intra- and inter-tumor heterogeneity. Most nanoparticulated probes/agents are synthesized with low-atomic-number
materials, and thus their x-ray attenuation is very similar to that of biological tissues. However, their microscopic
structures are very different, which may result in significant differences in their refractive properties. Recently,
investigations into x-ray grating-based differential phase contrast (DPC) CT have demonstrated its advantages in
differentiating low-atomic-number materials over the conventional attenuation-based CT. We believe that a synergy of x-ray grating-based DPC CT and
nanoparticulated imaging probes and therapeutic agents may play a significant role in extensive preclinical and clinical
applications, or even become a modality for molecular imaging. Hence, we propose to image the refractive property of
nanoparticulated imaging probes and therapeutic agents using x-ray grating-based DPC CT. In this work, we conduct a
preliminary feasibility study with a focus on characterizing the contrast-to-noise ratio (CNR) and contrast-detail behavior
of the x-ray grating-based DPC CT. The obtained data may be instructive to the architecture design and performance
optimization of the x-ray grating-based DPC CT for imaging biomarker-targeted imaging probes and therapeutic agents,
and even informative to the translation of preclinical research in theranostics into clinical applications.
A new technology for terahertz imaging in breast cancer margin determination
Show abstract
In this paper we describe a project for designing, developing and translating a THz imaging
device for monitoring margins from extracted tissue during surgical breast cancer conservation
procedures. In this application, the reflective and transmission properties of extracted tissue
are monitored, in near real-time using a fine-beam THz signal which is sensitive to the
presence of liquid and bound water content. In this way, it is intended that the extracted tissue
will be studied in the operating theatre to determine during surgery, whether or not the region
of malignant tissue has been fully excised from the patient. In the early stages of this project,
we are determining to what degree an existing THz system at the University of Massachusetts
(UMass) in Amherst is able to differentiate between breast carcinoma, normal fibroglandular
and adipose tissues. This is achieved through close collaboration with a surgical and
radiological team at the UMass-Worcester medical school and involves post-surgical
recovered tissues. As part of this work, we describe the system, the measurement
methodology, and the first results obtained to calibrate the imaging system.
Retaining axial-lateral orthogonality in steered ultrasound data to improve image quality in reconstructed lateral displacement data
Show abstract
Ultrasound elastography tracks tissue displacements under small levels of compression to obtain images of strain, a
mechanical property useful in the detection and characterization of pathology. Due to the nature of ultrasound
beamforming, only tissue displacements in the direction of beam propagation, referred to as 'axial', are measured with high
quality, although the ability to measure other components of tissue displacement is desired to more fully characterize the
mechanical behavior of tissue. Previous studies have used multiple one-dimensional (1D) angled axial displacements
tracked from steered ultrasound beams to reconstruct improved quality trans-axial displacements within the scan plane
('lateral'). We show that two-dimensional (2D) displacement tracking is not possible with unmodified electronically-steered
ultrasound data, and present a method of reshaping frames of steered ultrasound data to retain axial-lateral
orthogonality, which permits 2D displacement tracking. Simulated and experimental ultrasound data are used to compare
changes in image quality of lateral displacements reconstructed using 1D and 2D tracked steered axial and steered lateral
data. Reconstructed lateral displacement image quality generally improves with the use of 2D displacement tracking at
each steering angle, relative to axial tracking alone, particularly at high levels of compression. Due to the influence of
tracking noise, unsteered lateral displacements exhibit greater accuracy than axial-based reconstructions at high levels of
applied strain.
Simulation of ultrasound backscatter images from fish
Show abstract
The objective of this work is to investigate ultrasound (US) backscatter in the MHz range from fish to develop a realistic and
reliable simulation model. The long term objective of the work is to develop the needed signal processing for fish species
differentiation using US. In in-vitro experiments, a cod (Gadus morhua) was scanned with both a BK Medical ProFocus
2202 ultrasound scanner and a Toshiba Aquilion ONE computed tomography (CT) scanner. The US images of the fish
were compared with US images created using the ultrasound simulation program Field II. The center frequency of the
transducer is 10 MHz and the Full Width at Half Maximum (FWHM) at the focus point is 0.54 mm in the lateral direction.
The transducer model in Field II was calibrated using a wire phantom to validate the simulated point spread function. The
inputs to the simulation were the CT image data of the fish converted to simulated scatter maps. The positions of the point
scatterers were assumed to be uniformly distributed. The scatter amplitudes were generated with a new method based on
the segmented CT data in Hounsfield Units and backscatter data for the different types of tissues from the literature. The
simulated US images reproduce most of the important characteristics of the measured US image.
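The HU-to-scatterer conversion described above can be sketched as follows. This is a hedged illustration, not the authors' calibrated procedure: the tissue classes, HU thresholds, and relative backscatter amplitudes below are placeholder assumptions, and the literature-derived amplitude values of the paper are not reproduced here.

```python
import numpy as np

# Illustrative tissue classes and relative backscatter strengths (made-up
# values, standing in for the paper's literature-based amplitudes).
TISSUE_AMPLITUDE = {
    "background": 0.0,   # water/air around the fish: no scattering
    "soft":       1.0,   # soft tissue
    "bone":       5.0,   # strongly scattering structures
}

def classify_hu(hu):
    """Map a Hounsfield value to a tissue class (illustrative thresholds)."""
    if hu < -200:
        return "background"
    if hu < 300:
        return "soft"
    return "bone"

def make_scatterers(ct_slice, spacing_mm, n_scatterers, rng=None):
    """Uniformly distributed scatterer positions; amplitudes from segmented CT."""
    rng = np.random.default_rng(rng)
    ny, nx = ct_slice.shape
    # Positions uniform over the slice extent (y, x) in mm, as in the paper.
    pos = rng.uniform([0, 0], [ny * spacing_mm, nx * spacing_mm], (n_scatterers, 2))
    iy = np.clip((pos[:, 0] / spacing_mm).astype(int), 0, ny - 1)
    ix = np.clip((pos[:, 1] / spacing_mm).astype(int), 0, nx - 1)
    base = np.array([TISSUE_AMPLITUDE[classify_hu(h)] for h in ct_slice[iy, ix]])
    # Gaussian-weighted amplitudes give fully developed speckle statistics.
    amp = base * rng.standard_normal(n_scatterers)
    return pos, amp
```

The resulting position/amplitude lists are in the form Field II expects for its phantom definition.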
New method to test the gantry, collimator, and table rotation angles of a linear accelerator used in radiation therapy
Stéphane Beaumont,
Tarraf Torfeh,
Romain Latreille,
et al.
Show abstract
The precision of a medical LINear ACcelerator (LINAC) gantry rotation angle is crucial for the radiation therapy process,
especially in stereotactic radio surgery, given the expected precision of the treatment and in Image Guided Radiation Therapy
(IGRT) where the mechanical stability is disturbed due to the additional weight of the kV x-ray tube and detector.
We present in this paper an extension of the Winston and Lutz test, originally dedicated to controlling the size and position
of the LINAC isocenter, adapted here to test the gantry rotation angle with no additional portal images.
This new method uses a test object patented by QualiFormeD and is integrated in the QUALIMAGIQ software platform
developed to automatically analyze images acquired for quality control of medical devices.
Poster Session: Breast Imaging
Factors for conversion between human and automatic read-outs of CDMAM images
Show abstract
According to the European protocol for the quality control of the physical and technical aspects of mammography
screening (EPQCM) image quality of digital mammography devices has to be evaluated using the CDMAM
(contrast-detail mammography) phantom. The evaluation of image quality is accomplished by the determination
of threshold thicknesses of gold disks of different diameters (0.08 mm to 2 mm). The EPQCM requires this task
to be performed by qualified human observers, which has proved to be very time consuming. Therefore, a software
solution known as 'cdcom' was provided by the European Reference Organisation for Quality Assured Breast
Screening (EUREF). The problem with this program is that it yields threshold thicknesses that differ from the
results obtained by human readers. Factors for the conversion from automatic to human read-outs depend on the
diameter of the gold disk and have been estimated from large sets of both human and automatic read-outs.
However, the factors provided by various groups differ from each other and are purely phenomenological.
Our approach uses the Rose theory which gives a correlation between threshold contrast, diameter of the object
and number of incident photons. To estimate the five conversion factors between the diameters of 0.2 mm and
0.5 mm we exposed with five different current-time products which resulted in 25 equations with six unknowns
(5 factors and one constant). This optimization problem was then solved using the Excel built-in solver. The
theoretical conversion factors amounted to be 1.62, 1.75, 2.04, 2.20, 2.39 for the diameters of 0.2, 0.25, 0.31, 0.4,
and 0.5 mm. The corresponding phenomenological factors found in literature are 1.74, 1.78, 1.83, 1.88, and 1.93.
The applied method proves to be very robust and produces factors comparable to the phenomenological ones.
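The 25-equation, 6-unknown fit described above can be sketched as a nonlinear least-squares problem (the contrast-thickness relation C = 1 - exp(-mu*t) makes the system nonlinear, which is presumably why a solver was needed). This is a hedged reconstruction: the attenuation coefficient `MU`, the Rose constant, and the synthetic read-outs below are placeholders, not the paper's measured CDMAM data; SciPy stands in for the Excel solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Rose criterion: C_t * d * sqrt(N) = k, with human threshold thickness
# modeled as f_i times the automatic threshold. Five diameters x five
# current-time products -> 25 equations in 6 unknowns (f_1..f_5, k).
MU = 0.5                                          # per-um attenuation (made up)
d = np.array([0.20, 0.25, 0.31, 0.40, 0.50])      # disc diameters, mm
N = np.array([20.0, 32.0, 50.0, 80.0, 125.0])     # proportional to mAs

def residuals(p, t_auto):
    f, k = p[:5], p[5]
    # Threshold contrast of the human-equivalent thickness f_i * t_auto.
    c = 1.0 - np.exp(-MU * f[:, None] * t_auto)
    return (c * d[:, None] * np.sqrt(N)[None, :] - k).ravel()

# Synthetic automatic read-outs generated from the paper's published factors.
f_true = np.array([1.62, 1.75, 2.04, 2.20, 2.39])
k_true = 0.5
c_true = k_true / (d[:, None] * np.sqrt(N)[None, :])
t_auto = -np.log(1.0 - c_true) / (MU * f_true[:, None])

sol = least_squares(residuals, x0=np.r_[np.full(5, 2.0), 0.4], args=(t_auto,))
factors = sol.x[:5]                               # recovered conversion factors
```

On noiseless synthetic data the solver recovers the generating factors; with measured read-outs the residual norm quantifies how well the Rose model explains the data.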
Image noise sensitivity of dual-energy digital mammography for calcification imaging
Show abstract
Dual-energy digital mammography (DEDM) can suppress the contrast between adipose and glandular tissues and
generate dual-energy (DE) calcification signals. These signals are influenced by many factors, one of which is image
noise. In this paper, the sensitivity of the DE calcification signal to image noise was analyzed based
on the DEDM physical model. Image noise levels of two different commercially available digital mammography systems,
the GE Senographe Essential system and the GE Senographe DS system, were measured. The mean noise was about
1.04% for the Senographe Essential and 1.42% for the Senographe DS at 28 kVp/50 mAs, and 0.47% for the Senographe
Essential and 0.79% for the Senographe DS at 48 kVp/12.5 mAs. Evaluations were performed by comparing the RMS
(Root-Mean-Square) of calcification signal fluctuations in background regions and the CNR (Contrast-to-Noise Ratio) of
calcification signals in clusters when these two digital mammography systems were used. The results showed that image
noise had a serious impact on DEDM calcification signals. With the GE Senographe Essential system, calcification
signal fluctuations were equivalent to 200-300 μm, and when the calcification size is greater than 300 μm, the probability
of achieving CNR ≥ 3 is over 50%. If noise reduction techniques are used, the calcification threshold size for CNR ≥ 3 can be lowered.
Abnormal breast tissue imaging based on multi-energy x-ray
Show abstract
The dual- or multi-energy x-ray technique makes it possible to generate tissue-selective images by exploiting the
tissue-specific energy dependence of x-ray attenuation. An abnormal breast can be considered a mixture of adipose,
glandular, and abnormal tissues, but the three tissues cannot be selectively decomposed because the total attenuation of
a tissue is represented by only two attenuation basis functions in the diagnostic energy range. This paper presents a novel method to
selectively represent abnormal breast tissue, using polyenergetic multi-energy x-ray. We show that an abnormal tissue
can be revealed from the total thickness map, which is virtually constructed by assuming a healthy and compressed
breast. Specifically, regression analysis is first performed using the multi-energy images of the prepared calibration
phantom that consists of two basis materials. The total thickness map is then constructed by linearly combining thickness
maps of basis materials, where the optimal weights for combination are determined so that the uniformity of total breast
thickness is maximized. It is noted that the proposed method does not need accurate attenuation coefficients of breast
tissues. Simulation results show that the proposed method dramatically improves the detectability of mass that is
obscured by normal structures.
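The weight-selection step described above (choosing the combination so that the total-thickness map is as uniform as possible) admits a simple closed form when a single scalar weight is assumed. The sketch below is a hedged simplification with synthetic maps, not the paper's calibration-phantom regression.

```python
import numpy as np

# Writing the total as t2 + w*(t1 - t2), minimizing its per-pixel variance
# over w is a one-dimensional least-squares problem with a closed form.
def uniform_weight(t1, t2):
    """Return w minimizing Var(w*t1 + (1-w)*t2) over the two thickness maps."""
    dmap = t1 - t2
    d0 = dmap - dmap.mean()
    t0 = t2 - t2.mean()
    # d/dw Var(t2 + w*dmap) = 0  =>  w = -Cov(t2, dmap) / Var(dmap)
    return -np.sum(d0 * t0) / np.sum(d0 * d0)
```

When a combination exists that makes a healthy compressed breast exactly uniform in total thickness, the closed form recovers it; an abnormal tissue then appears as a residual deviation in the constructed map.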
High contrast soft tissue imaging based on multi-energy x-ray
Show abstract
Breast soft tissues have x-ray attenuations similar to that of mass tissue. Overlapping breast tissue structure often obscures masses
and microcalcifications, which are essential to the early detection of breast cancer. In this paper, we propose a new method to generate
a high-contrast mammogram with distinctive features of a breast cancer by using multiple images acquired with different x-ray
energy spectra. In experiments with a mammography simulation and real breast tissues, the proposed method
provided images with clearly visible mass structure and microcalcifications.
Detailed characterization of 2D and 3D scatter-to-primary ratios of various breast geometries using a dedicated CT mammotomography system
Show abstract
With a dedicated breast CT system using a quasi-monochromatic x-ray source and flat-panel digital detector, the 2D and
3D scatter to primary ratios (SPR) of various geometric phantoms having different densities were characterized in detail.
Projections were acquired using geometric and anthropomorphic breast phantoms. Each phantom was filled with 700ml
of 5 different water-methanol concentrations to simulate effective boundary densities of breast compositions from 100%
glandular (1.0g/cm3) to 100% fat (0.79g/cm3). Projections were acquired with and without a beam stop array. For each
projection, 2D scatter was determined by cubic spline interpolating the values behind the shadow of each beam stop
through the object. Scatter-corrected projections were obtained by subtracting the scatter, and the 2D SPRs were
obtained as the ratio of the scatter to the scatter-corrected projections. Additionally, the corrected and uncorrected data were each
iteratively reconstructed. The corrected and uncorrected 3D volumes were subsequently subtracted, and the 3D SPRs were obtained from
the ratio of the scatter volume-to-scatter-corrected (or primary) volume. Results show that the 2D SPR values peak in the
center of the volumes, and were overall highest for the simulated 100% glandular composition. Consequently, scatter
corrected reconstructions have visibly reduced cupping regardless of the phantom geometry, as well as more accurate
linear attenuation coefficients. The corresponding 3D SPRs are highest in the center and decrease radially. Not
surprisingly, both 2D and 3D SPRs depended on phantom geometry and object density, with geometry dominating
for 3D SPRs. Overall, these results indicate the need for scatter correction across the different geometries and breast
densities that will be encountered in 3D cone beam breast CT.
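The beam-stop interpolation described above can be sketched in one dimension: behind an ideal beam stop the primary is blocked, so the measured signal there is approximately scatter alone, and a cubic spline through those samples estimates scatter across the row. The stop positions and profiles below are illustrative; the paper operates on full 2D projections.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def scatter_and_spr(row, stop_cols):
    """Return (scatter estimate, scatter-corrected primary, SPR) for one row."""
    cols = np.arange(row.size)
    # Signal behind each beam stop ~ scatter only; interpolate between stops.
    spline = CubicSpline(stop_cols, row[stop_cols])
    scatter = spline(cols)
    primary = row - scatter                 # scatter-corrected projection
    # SPR = scatter / corrected projection; guard the (blocked) stop pixels.
    spr = np.divide(scatter, primary, out=np.zeros_like(scatter),
                    where=primary > 0)
    return scatter, primary, spr
```

For the 3D SPRs the same subtraction is applied between reconstructions of the corrected and uncorrected data rather than between projections.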
Characterization of image quality for 3D scatter-corrected breast CT images
Show abstract
The goal of this study was to characterize the image quality of our dedicated, quasi-monochromatic spectrum, cone
beam breast imaging system under scatter corrected and non-scatter corrected conditions for a variety of breast
compositions. CT projections were acquired of a breast phantom containing two concentric sets of acrylic spheres
that varied in size (1-8mm) based on their polar position. The breast phantom was filled with 3 different
concentrations of methanol and water, simulating a range of breast densities (0.79-1.0g/cc); acrylic yarn was
sometimes included to simulate connective tissue of a breast. For each phantom condition, 2D scatter was measured
for all projection angles. Scatter-corrected and uncorrected projections were then reconstructed with an iterative
ordered subsets convex algorithm. Reconstructed image quality was characterized using SNR and contrast analysis,
and followed by a human observer detection task for the spheres in the different concentric rings. Results show that
scatter correction effectively reduces the cupping artifact and improves image contrast and SNR. Results from the
observer study indicate that there was no statistical difference in the number or sizes of lesions observed in the
scatter versus non-scatter corrected images for all densities. Nonetheless, applying scatter correction for differing
breast conditions improves overall image quality.
Evaluation of image quality in computed radiography based mammography systems
Show abstract
Mammography is the most widely accepted procedure for the early detection of breast cancer, and Computed
Radiography (CR) is a cost-effective technology for digital mammography. We have demonstrated that CR
image quality is viable for digital mammography. The image quality of mammograms acquired
using Computed Radiography technology was evaluated using the Modulation Transfer Function (MTF), Noise
Power Spectrum (NPS) and Detective Quantum Efficiency (DQE). The measurements were made using a 28 kVp
beam (RQA M-II) using 2 mm of Al as a filter and a target/filter combination of Mo/Mo. The acquired image
bit depth was 16 bits and the pixel pitch for scanning was 50 microns. A Step-Wedge phantom (to measure
the Contrast-to-noise ratio (CNR)) and the CDMAM 3.4 Contrast Detail phantom were also used to assess
the image quality. The CNR values were observed at varying thickness of PMMA. The CDMAM 3.4 phantom
results were plotted and compared to the EUREF acceptable and achievable values. The effect on image quality
was measured using the physics metrics. A lower DQE was observed even with a higher MTF, possibly due to
a higher noise component arising from the way the scanner was configured. The CDMAM
phantom scores demonstrated a contrast-detail comparable to the EUREF values. A cost-effective CR machine
was optimized for high-resolution and high-contrast imaging.
Application of parameters for evaluating different technologies for digitizing film mammograms
Show abstract
Taking into account the requirements for processing digital mammograms, systems dealing with the optimization of image
acquisition need to be adequately evaluated. The processes for generating these images are varied and can be
grouped mainly into two categories: (1) films scanned by specialized digitizers; (2) images obtained from electronic
sensors coupled to digital converters (CR and DR systems). The two main types of scanners are those with
white-light-based detection and CCD sensors, and those with a scanning laser beam. Thus the current investigation aims to
perform quality evaluation of film digitizers, mainly addressed to mammography. In this analysis the following
parameters were studied: digitizers characteristic curves - relating the pixel value assigned to a region and the
corresponding optical density of the film on the same region; noise - obtained by the Wiener spectrum; and
reproducibility - evaluating whether a device used to capture a digital image remains reliable over subsequent scans. Six
different digitizers were investigated with the purpose of determining tools to enhance image quality based
on their characteristics. The results indicate that although the most sophisticated scanners have the best
characteristics among those evaluated, knowledge of scanner behavior makes it possible to develop procedures that
provide images of adequate quality for processing schemes.
Evaluation of the quality of image for various breast composition and exposure conditions in digital mammography
Show abstract
Breast density has a close relationship with breast cancer risk. The exposure parameters must be appropriately chosen for
each breast. However, the optimal exposure conditions for digital mammography remain uncertain in clinical practice. The exposure
parameters in digital mammography must be optimized with maximization of image quality and minimization of
radiation dose. We evaluated image quality under different exposure conditions to investigate the most advantageous
tube voltage. For different compressed breast phantom thicknesses and compositions, we measured the Wiener spectrum
(WS), noise-equivalent number of quanta (NEQ), and detective quantum efficiency (DQE). In this study, the
signal-to-noise ratios were derived from a perceived statistical decision theory model with the internal noise of the eye-brain
system (SNRi), devised and studied by Loo et al.1 and Ishida et al.2 These were calculated under a fixed average
glandular dose. The WS values were obtained with a fixed image contrast. For 4-cm-thick and 50% glandular breast
phantoms, the NEQ showed that high voltages gave a superior noise property of images, especially for thick breasts, but
the improvement in the NEQ by tube voltage was not so remarkable. On the other hand, the SNRi value with a Mo filter
was larger than that with a Rh filter. The SNRi increased when the tube voltage decreased. These results differed from those
of the WS and NEQ. In this study, the SNRi depended on the contrast of the signal. Accuracy should be high for an intense,
low-contrast object.
Design and validation of a mathematical breast phantom for contrast-enhanced digital mammography
Show abstract
In contrast-enhanced digital mammography (CEDM) an iodinated contrast agent is employed to increase lesion contrast
and to provide tissue functional information. Here, we present the details of a software phantom that can be used as a
tool for the simulation of CEDM images, and compare the degree of anatomic noise present in images simulated using
the phantom to that associated with breast parenchyma in clinical CEDM images. Such a phantom could be useful for
multiparametric investigations including characterization of CEDM imaging performance and system optimization. The
phantom has a realistic mammographic appearance based on a clustered lumpy background and models contrast agent
uptake according to breast tissue physiology. Fifty unique phantoms were generated and used to simulate regions of
interest (ROI) of pre-contrast images and logarithmically subtracted CEDM images using monoenergetic ray tracing.
Power law exponents, β, were used as a measure of anatomic noise and were determined using a linear least-squares fit
to log-log plots of the square of the modulus of radially averaged image power spectra versus spatial frequency. The
power spectra for ROI selected from regions of normal parenchyma in 10 pairs of clinical CEDM pre-contrast and
subtracted images were also measured for comparison with the simulated images. There was good agreement between
the measured β in the simulated CEDM images and the clinical images. The values of β were consistently lower for the
logarithmically subtracted CEDM images compared to the pre-contrast images, indicating that the subtraction process
reduced anatomical noise.
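The anatomic-noise measurement described above (β from a linear least-squares fit to the log-log radially averaged power spectrum) can be sketched as follows. The frequency-band limits are placeholder choices, not the study's fitting range.

```python
import numpy as np

def radial_power_spectrum(roi):
    """Radially averaged 2D power spectrum; index = radial frequency bin."""
    ps = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    ny, nx = roi.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    # Mean power in each integer-radius annulus.
    return np.bincount(r.ravel(), ps.ravel()) / np.bincount(r.ravel())

def fit_beta(roi, f_lo=2, f_hi=None):
    """Slope of log P(f) vs log f, returned as beta in P(f) ~ 1/f^beta."""
    p = radial_power_spectrum(roi)
    f_hi = f_hi or min(roi.shape) // 2
    f = np.arange(f_lo, f_hi)
    slope, _ = np.polyfit(np.log(f), np.log(p[f_lo:f_hi]), 1)
    return -slope
```

Applied to pre-contrast and logarithmically subtracted ROIs, a drop in the fitted β reflects the reduction in anatomical noise reported above.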
Automatic patient motion detection in digital breast tomosynthesis
Show abstract
Patient motion is frequently a problem in mammography, especially when the x-ray exposure is long, resulting in image
quality degradation. At present, patient motion can only be identified by inspecting the image subjectively after image
acquisition. As digital breast tomosynthesis (DBT) takes a longer time to complete data acquisition than conventional
mammography, there is more chance for patient motion to occur in DBT. Therefore it is important to understand the
potential motion problem in DBT and incorporate a design that minimizes it. In this paper we present an automatic method
to detect patient motion in DBT. The method is based on the understanding that features of the breast should
move along predictable trajectories in a time series of projection measurements; deviations from these trajectories are linked to patient
motion. Motion distance is estimated by analyzing skin lines and large calcifications (if present) in all projection images
and then a motion score is derived for a DBT scan. Effectiveness and robustness of this method will be demonstrated
with clinical data, together with discussions on different motion patterns observed clinically. The impacts of this work
could be far-reaching. It allows real-time detection and objective evaluation of patient motions, applicable to all breasts.
Patients with severe motion can be re-scanned immediately, before leaving the room. Data with moderate motion can go
through additional targeted image processing to minimize motion artifacts. It also enables a powerful tool to evaluate
and optimize different DBT designs to minimize the patient motion problem. Moreover, this method can be extended to
other imaging modalities, e.g. breast CT, to study patient motion.
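The trajectory-deviation idea above can be sketched minimally: a static feature should trace a smooth, nearly linear path across the projection series, so residuals from a fitted trajectory can be attributed to motion. The first-order fit and max-residual score below are illustrative simplifications, not the paper's scoring method.

```python
import numpy as np

def motion_score(positions_mm, angles_deg):
    """Maximum deviation (mm) of one tracked feature from its fitted path."""
    coeffs = np.polyfit(angles_deg, positions_mm, 1)   # predictable drift
    residuals = positions_mm - np.polyval(coeffs, angles_deg)
    return np.max(np.abs(residuals))
```

In practice the score would be aggregated over several skin-line points and calcifications before deciding whether a re-scan is needed.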
A human observer study for evaluation and optimization of reconstruction methods in breast tomosynthesis using clinical cases
Show abstract
In breast tomosynthesis (BT) a number of 2D projection images are acquired from different angles along a limited arc.
The imaged breast volume is reconstructed from the projection images, providing 3D information. The purpose of the
study was to investigate and optimize different reconstruction methods for BT in terms of image quality using human
observers viewing clinical cases. Sixty-six cases with suspected masses and calcifications were collected from 55
patients.
Segmentation of adipose and glandular tissue for breast tomosynthesis imaging using a 3D hidden-Markov model trained on breast MRIs
Show abstract
Breast tomosynthesis involves a restricted number of images acquired in an arc in conventional mammography
projection geometry. Despite its angular undersampling, tomosynthesis projections are reconstructed into a volume at a
dose comparable to mammography. Tomosynthesis thus provides depth information, which is especially beneficial to
patients with dense breasts. Because the device can be based on an existing FFDM unit, tomosynthesis may be used to
accurately assess breast tissue composition, which would greatly benefit high-risk patients with less access to costly
imaging modalities such as MRI. This study plans to extract quantitative 3D breast tissue density information using a
fully automatic probabilistic model trained on segmented MRIs. The MRI ground truth was obtained for 293 breasts by
iterative threshold-based fatty / glandular tissue segmentation. After training a 3D hidden Markov model (HMM) on 10
MR volumes, our model was validated by segmenting 214 of the 293 breasts. After the tomosynthesis value
optimization, the same trained HMM was tested to segment breast tomosynthesis volumes of subjects whose MRIs were
used for validation. Initial training / testing of the HMM on MRIs matched density to thresholding within 5% for 70/214
breasts and 10% for 127/214 breasts. HMM segmentation was qualitatively superior at the cranial/caudal end slices in
MRIs and quantitatively superior for most tested tomosynthesis volumes. Its robustness and ease of modification give the
HMM great promise and potential for expansion in this multi-modality study.
Stationary digital breast tomosynthesis with distributed field emission x-ray tube
Show abstract
Tomosynthesis requires projection images from different viewing angles. Using a distributed x-ray source, this can be
achieved without mechanical motion of the source, with the potential for faster image acquisition. A distributed x-ray
tube has been designed and manufactured specifically for breast tomosynthesis. The x-ray tube consists of 31 field
emission x-ray sources with an angular range of 30°. The total exposure is up to 100 mAs, with an energy range between 27
and 45 kVp. We discuss the source geometry and results from the characterization of the first prototype. The x-ray tube
uses field emission cathodes based on carbon nanotubes (CNT) as the electron source. Prior to the manufacturing of the
sealed x-ray tube, extensive testing of the field emission cathodes was performed to verify that the requirements of
commercial tomosynthesis systems in terms of emission current, focal spot size and tube lifetime are met.
The use of detectability indices as a means of automatic exposure control for a digital mammography system
Show abstract
This work examines the use of a detectability index to control an Automatic Exposure Control (AEC) system for an
amorphous-Selenium digital mammography detector. The default AEC mode for the system was evaluated using
homogeneous poly(methyl methacrylate) (PMMA) plates of thickness 20, 40, 60 and 70 mm to find the tube potential
and anode/filter settings selected by the system. Detectability index (d') using a non-prewhitened model observer with
eye filter (NPWE) was calculated for these beam qualities as a function of air kerma at the detector. AEC settings were
calculated that gave constant d' as a function of beam quality for a homogeneous background; a target d' was used that
ensured the system passed the achievable image quality criterion for the 0.1 mm diameter disc in the European
Guidelines. Threshold gold thickness was measured using the CDMAM test object as a function of beam quality for the
AEC mode, which held pixel value (PV) constant, and for the constant d' mode. Threshold gold thickness for the 0.1 mm
disc increased by a factor of 2.18 for the constant PV mode, while constant d' mode held threshold gold thickness
constant to within 7% and signal-difference-to-noise-ratio (SdNR) constant to within 5%. The constant d' settings
derived for homogeneous images were then applied to a phantom with a structured background. Threshold gold
thickness for the 0.13 mm disc increased by a factor of 1.90 for the constant PV mode, while the constant d' mode held
threshold gold thickness constant to within 38% for the 0.13 mm disc.
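The NPWE detectability calculation used above can be sketched numerically for a radially symmetric disc signal, with d'² = [∫ S²·MTF²·E²·2πf df]² / ∫ S²·MTF²·E⁴·NPS·2πf df. This is a hedged illustration: the Gaussian MTF, flat NPS, and eye-filter parameters below are generic placeholders, not the measured curves of the amorphous-selenium detector.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j1

def disc_spectrum(f, diameter_mm, contrast):
    """Radial Fourier transform of a uniform disc (f > 0, cycles/mm)."""
    arg = np.pi * f * diameter_mm
    return contrast * np.pi * (diameter_mm / 2.0) ** 2 * 2 * j1(arg) / arg

def dprime_npwe(f, S, mtf, nps, eye):
    """NPWE d' by numeric integration over radial frequency."""
    num = trapezoid(S ** 2 * mtf ** 2 * eye ** 2 * 2 * np.pi * f, f) ** 2
    den = trapezoid(S ** 2 * mtf ** 2 * eye ** 4 * nps * 2 * np.pi * f, f)
    return np.sqrt(num / den)

# Illustrative system curves on a radial frequency axis (cycles/mm).
f = np.linspace(0.05, 10.0, 500)
mtf = np.exp(-(f / 4.0) ** 2)              # placeholder detector MTF
nps = np.full_like(f, 1e-6)                # flat, dose-dependent noise power
eye = f ** 1.3 * np.exp(-0.35 * f ** 2)    # band-pass eye filter (generic form)
S = disc_spectrum(f, diameter_mm=0.1, contrast=0.05)
d_prime = dprime_npwe(f, S, mtf, nps, eye)
```

An AEC based on this index would search over beam quality and detector air kerma (which scale the contrast and the NPS) for settings holding d' at the target value.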
Investigating the potential for super-resolution in digital breast tomosynthesis
Show abstract
Digital breast tomosynthesis (DBT) is an emerging 3D x-ray imaging modality in which tomographic sections of the
breast are generated from a limited range of tube angles. Because non-normal x-ray incidence causes the image of an
object to be translated in sub-pixel increments with increasing projection angle, it is demonstrated in this work that DBT
is capable of super-resolution (i.e., sub-pixel resolution). The feasibility of super-resolution is shown with a commercial
DBT system using a bar pattern phantom. In addition, a framework for investigating super-resolution analytically is
proposed by calculating the reconstruction profile for a sine input whose frequency is greater than the alias frequency of
the detector. To study the frequency spectrum of the reconstruction, its continuous Fourier transform is also calculated.
It is shown that the central projection cannot properly resolve frequencies higher than the alias frequency of the detector.
Instead, the central projection represents a high frequency signal as if it were a lower frequency signal. The Fourier
transform of the central projection is maximized at this lower frequency and has considerable spectral leakage as
evidence of aliasing. By contrast, simple backprojection can be used to image high frequencies properly. The Fourier
transform of simple backprojection is correctly maximized at the input frequency. Adding filters to the simple
backprojection reconstruction smooths pixelation artifacts and reduces the spectral leakage found in the frequency
spectrum. In conclusion, this work demonstrates the feasibility of super-resolution in DBT experimentally and provides
a framework for characterizing its presence analytically.
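The mechanism described above can be illustrated with a toy 1D model: each projection translates the object's image by a known sub-pixel amount, so shift-and-add backprojection onto a finer grid can recover a frequency above the single-projection alias limit. The pixel pitch, shift set, and test frequency below are placeholder values, not the commercial system's geometry.

```python
import numpy as np

def project(signal_fn, pitch, shift, n_pix):
    """Area-sampled detector reading of the signal displaced by `shift`."""
    sub = 8                                  # sub-samples per pixel aperture
    x = (np.arange(n_pix * sub) + 0.5) / sub * pitch - shift
    return signal_fn(x).reshape(n_pix, sub).mean(axis=1)

def shift_and_add(projs, shifts, pitch, up):
    """Simple backprojection: accumulate projections on an `up`-x finer grid."""
    n_fine = projs[0].size * up
    acc = np.zeros(n_fine)
    hits = np.zeros(n_fine)
    for p, s in zip(projs, shifts):
        # Map coarse pixel centers back to object coordinates by undoing
        # this projection's known sub-pixel shift.
        centers = (np.arange(p.size) + 0.5) * pitch - s
        idx = np.clip(np.floor(centers * up / pitch).astype(int), 0, n_fine - 1)
        np.add.at(acc, idx, p)
        np.add.at(hits, idx, 1)
    return acc / np.maximum(hits, 1)
```

With a sine above the detector's alias frequency, a single projection peaks at the aliased (lower) frequency while the shift-and-add reconstruction peaks at the true input frequency, mirroring the Fourier analysis in the abstract.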
Poster Session: Applications
An approach of long-view tomosynthesis in peripheral arterial angiographic examinations
Show abstract
Tomosynthesis (TS) has been evaluated as a useful diagnostic imaging tool for the orthopedic market and for lung cancer
screening. Previously, we proposed Long-View Tomosynthesis (LVTS), which expands the reconstructed region of TS
to enable further clinical applications. The LVTS method consists of three steps. First, it acquires multiple images while the X-ray tube
and Flat Panel Detector (FPD) are moving in the same linear direction simultaneously at a constant speed. Second, each
image is divided into fixed length strips, and then the strips from different images having similar X-ray beam trajectory
angles are stitched together. Last, multi slice coronal images are reconstructed by utilizing the Filtered Back Projection
(FBP) technique from the long stitched images. The present LVTS method requires constant-speed acquisition
to stitch each strip precisely. To apply LVTS to peripheral angiographic
examinations, which are usually acquired at arbitrary, variable speeds to chase the contrast media in the blood vessel, the
method must be improved. We propose adding to the stitching algorithm a method for detecting the distance moved between
frames from anatomical structure, and a method for selecting pixel values containing contrast media. As a result, LVTS can extract new clinical information
like 3-D structure of superficial femoral arteries and the entire blood vessel from images already acquired by routine
bolus chasing techniques.
Accurate joint space quantification in knee osteoarthritis: a digital x-ray tomosynthesis phantom study
Tanzania S. Sewell,
Kelly L. Piacsek,
Beth A. Heckel,
et al.
Show abstract
The current imaging standard for diagnosis and monitoring of knee osteoarthritis (OA) is projection radiography.
However, radiographs may be insensitive to markers of early disease, such as osteophytes and joint space narrowing
(JSN). Relative to standard radiography, digital X-ray tomosynthesis (DTS) may provide improved visualization of the
markers of knee OA without the interference of superimposed anatomy. DTS utilizes a series of low-dose projection
images over an arc of ±20 degrees to reconstruct tomographic images parallel to the detector. We propose that DTS can
increase accuracy and precision in JSN quantification. The geometric accuracy of DTS was characterized by quantifying
joint space width (JSW) as a function of knee flexion and position using physical and anthropomorphic phantoms.
Using a commercially available digital X-ray system, projection and DTS images were acquired for a Lucite rod
phantom with known gaps at various source-object-distances, and angles of flexion. Gap width, representative of JSW,
was measured using a validated algorithm.
Over an object-to-detector-distance range of 5-21cm, a 3.0mm gap width was reproducibly measured in the DTS
images, independent of magnification. A simulated 0.50mm (±0.13) JSN was quantified accurately (95% CI
0.44-0.56mm) in the DTS images. Angling the rods to represent knee flexion, the minimum gap could be precisely determined
from the DTS images and was independent of flexion angle.
JSN quantification using DTS was insensitive to distance from patient barrier and flexion angle. Potential exists for
the optimization of DTS for accurate radiographic quantification of knee OA independent of patient positioning.
Image performance evaluation of a 3D surgical imaging platform
Show abstract
The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform
a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future
non-orthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in
terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition
(HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs,
respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line.
Measurement of the modulation transfer function revealed a limiting resolution (at the 10% level) of 1.0 mm⁻¹ in the axial
dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied
linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the
field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility
for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer.
Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
Feasibility study of low-dose intra-operative cone-beam CT for image-guided surgery
Show abstract
Cone-beam computed tomography (CBCT) has been increasingly used during surgical procedures for providing accurate three-dimensional anatomical information for intra-operative navigation and verification. High-quality CBCT images are in general obtained through reconstruction from projection data acquired at hundreds of view angles, which is associated with a non-negligible amount of radiation exposure to the patient. In this work, we have applied a novel image-reconstruction algorithm, the adaptive-steepest-descent-POCS (ASD-POCS) algorithm, to reconstruct CBCT images from projection data at a significantly reduced number of view angles. Preliminary results from experimental studies involving both simulated data and real data show that images of comparable quality to those presently available in clinical image-guidance systems can be obtained by use of the ASD-POCS algorithm from a fraction of the projection data that are currently used. The result implies potential value of the proposed reconstruction technique for low-dose intra-operative CBCT imaging applications.
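The ASD-POCS algorithm alternates a data-consistency (POCS) step with adaptive steepest descent on the image's total variation (TV). The following is a minimal sketch of the TV-descent half of that loop for a 2D image; the function names, the smoothing constant, and the normalized-step scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of the (smoothed) isotropic total variation of a 2D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    gx, gy = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field
    div = (np.diff(gx, axis=0, prepend=gx[:1, :])
           + np.diff(gy, axis=1, prepend=gy[:, :1]))
    return -div

def tv_descent(img, n_steps=20, alpha=0.2):
    """A few normalized steepest-descent steps on TV, as interleaved with POCS updates."""
    for _ in range(n_steps):
        g = tv_gradient(img)
        norm = np.linalg.norm(g)
        if norm == 0:
            break
        img = img - alpha * g / norm  # step of fixed Frobenius length alpha
    return img

def tv(x):
    """Anisotropic TV, used here only to check that the descent reduces TV."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

# Noisy piecewise-constant image: descent should lower its total variation
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
noisy = img + 0.1 * rng.standard_normal(img.shape)
smoothed = tv_descent(noisy.copy())
print(tv(smoothed) < tv(noisy))  # TV decreases
```

In the full algorithm this TV step is applied only as far as the data-consistency constraint allows, which is what makes sparse-view reconstruction possible from far fewer projections.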
Examination of the dental cone-beam CT equipped with flat-panel-detector (FPD)
Show abstract
In dentistry, computed tomography (CT) is essential for diagnosis. Recently, cone-beam CT has come into use. We
used an "Alphard 3030" cone-beam CT equipped with an FPD system. This system can obtain fluoroscopic and CT
images. Moreover, the Alphard has 4 exposure modes for CT, and each mode has a different field of view (FOV) and
voxel size. We examined the image quality of kinetic and CT images obtained using the cone-beam CT system. To
evaluate kinetic image quality, we calculated the Wiener spectrum (WS) and modulation transfer function (MTF). We
then analyzed the lag images and exposed a phantom. To evaluate CT image quality, we calculated WS and MTF at
various places in the FOV and examined the influence of extension of the cone beam X-ray on voxel size. Furthermore,
we compared the WS and MTF values of cone-beam CT to those of another CT system. Evaluation of the kinetic images
showed that cone-beam CT is sufficient for clinical diagnosis and provides better image quality than the other system
tested. However, during exposure of a CT image, the distance from the center influences image quality (especially MTF).
Further, differences in voxel size affect image quality. It is therefore necessary to carefully position the region of interest
and select an appropriate mode.
Four-dimensional volume-of-interest reconstruction for cone-beam computerized tomography based image-guided radiation therapy of the lung
Show abstract
In image-guided radiation therapy of moving lung lesions, four-dimensional cone-beam CT (4D-CBCT) can be used
to produce time-resolved images for tracking the target throughout the breathing cycle. The requirements in 4D-CBCT
are short scan time and image quality sufficient to localize the target. Short scans are desirable but result in
image-distorting streak artifacts in 4D-CBCT reconstruction, which may affect image-guidance. Motion-averaged
(also called conventional or 3D) CBCT reconstruction does not suffer from streak artifacts, but lacks the temporal
resolution to depict the tumor breathing motion. We define a new composite four-dimensional volume-of-interest
(4D-VOI) reconstruction which combines the features of pure 4D and motion-averaged reconstruction image sets. A
4D reconstruction is performed inside a VOI which contains the moving tumor, and the higher-quality motion-averaged
reconstruction is performed outside the VOI. The three image sets (motion-averaged 3D, 4D, and 4D-VOI)
are compared. The 3D reconstruction has very few streak artifacts but lacks the temporal resolution to depict
moving structures. On the other hand, the full-4D reconstruction without VOI processing is severely distorted by
streak artifacts. The 4D-VOI reconstruction has good temporal resolution in the volume of interest and low streak
artifact in most of the image.
4D cone beam CT phase sorting using high frequency optical surface measurement during image guided radiotherapy
Show abstract
In image guided radiotherapy (IGRT) two of the most promising recent developments are four dimensional
cone beam CT (4D CBCT) and dynamic optical metrology of patient surfaces. 4D CBCT is now becoming
commercially available and finds use in treatment planning and verification, and whilst optical monitoring
is a young technology, its ability to measure during treatment delivery without dose consequences has led
to its uptake in many institutes. In this paper, we demonstrate the use of dynamic patient surfaces,
simultaneously captured during CBCT acquisition using an optical sensor, to phase sort projection images
for 4D CBCT volume reconstruction.
The dual modality approach we describe means that in addition to 4D volumetric data, the system provides
correlated wide field measurements of the patient's skin surface with high spatial and temporal resolution.
As well as the value of such complementary data in verification and motion analysis studies, it introduces
flexibility into the acquisition of the signal required for phase sorting. The specific technique used may be
varied according to individual patient circumstances and the imaging target. We give details of three
different methods of obtaining a suitable signal from the optical surfaces: simply following the motion of
triangulation spots used to calibrate the surfaces' absolute height; monitoring the surface height in a single,
arbitrarily selected, camera pixel; and tracking, in three dimensions, the movement of a surface feature. In
addition to describing the system and methodology, we present initial results from a case study oesophageal
cancer patient.
Optimization of four-dimensional cone-beam computed tomography in image-guided radiation therapy of the lung
Show abstract
In image-guided radiation therapy of moving lung lesions,
four-dimensional CBCT (4D-CBCT) can produce several
images of the target through the breathing cycle with good temporal resolution. The requirements in 4D-CBCT are
short scan time and image quality sufficient to localize the target. Short scans are desirable but result in
image-distorting streak artifacts. We have optimized 4D-CBCT by determining the minimum scan time for adequate image
quality. We scanned 4 patients with long scan times (3.8 - 5.4 minutes) to produce high-quality oversampled data
sets. These serve as the gold standard for image quality assessment. Various shorter scan times were simulated via
removal of projection data from the long scans. The projection data were removed in such a way as to maintain an
accurate number of total breaths for the various simulated scan times. The amount of global streak artifact and the
tumor and bony anatomy shape are assessed for each image set. The original long scans display no major streak
artifacts. The moving structures show no sign of motion blurring and have high contrast boundaries. As the scan
time is reduced, streak artifacts increase. Images for 1.5 - 2 minute scan time have relatively little distortion of
boundaries. At 1 minute or less, the streak artifacts severely distort boundaries and may compromise localization.
Comparing image quality and radiation dose between new generation MDCT and CBCT systems
Show abstract
The use of Cone Beam Computed Tomography (CBCT) for in-room image guided interventions has gained more and
more popularity over the last decade. In this study, we compared a low dose and a standard dose Multi Detector
Computed Tomography (MDCT) protocol for abdominal imaging with a CBCT system in terms of image quality and
radiation dose. Both systems used in this study are latest generation, so both offer high radiation dose efficiency. To
determine the dose distribution of both systems, a Rando-Alderson-Phantom in combination with 41 thermoluminescence
dosimeters (TLDs) was used. The equivalent dose for the whole body was calculated according to ICRP. To
determine the image quality of the reconstructed slices, the Catphan600 phantom was used. In terms of quality we
determined the spatial resolution, contrast-to-noise ratio (CNR), and visual inspection. The dose could be reduced by
46.3% when using the low-dose MDCT protocol (120 kV, 50 mAs) compared to the CBCT system (89 kV, 153 mAs). CNR
and image noise are improved for the MDCT, in some cases the CNR up to 74.4%. However, the spatial resolution of the
CBCT system was superior, even after reconstructing the MDCT data with a small field-of-view and a relatively hard
filter. Visually, the MDCT reconstructions are of higher diagnostic quality. In conclusion, the MDCT provides better
dose efficiency in relation to image quality. Nevertheless, for applications such as chemoembolization, the CBCT system
remains more convenient because it can be used during interventions.
A comparison of methods for estimating the line spread function of a CT imaging system
Samir Abboud,
Kristina Lee,
Kaleb Vinehout,
et al.
Show abstract
A quantitative description of the deterministic properties of a CT system is necessary when evaluating image
quality. The most common such metric is the modulation transfer function (MTF), usually calculated from a
line spread function (LSF) or point spread function (PSF). Currently, there exist many test objects used to
measure the LSF or PSF. In this paper we report on a comparison of these measures using: a thin foil slit test
object, a Teflon cube edge test object, and a novel "negative" cube test object. Images were acquired using a
custom-built bench-top flat-panel-based cone-beam CT scanner and a cylindrical water-filled PMMA phantom
with the test objects embedded in the middle. From the 3-dimensional reconstructed volumes, we estimated the
LSF either directly or from the edge spread function. From these, a modulation transfer function
can be estimated, and the frequency dependent image transfer of each object can be reported.
Performance evaluation of a sub-millimeter spectrally resolved CT system on pediatric imaging tasks: a simulation
Show abstract
We are developing a photon-counting silicon strip detector with 0.4 x 0.5 mm2 detector elements for clinical
CT applications. Apart from the somewhat limited detection efficiency at higher kVp settings, the largest discrepancies from
ideal spectral behavior have been shown to be Compton interactions in the detector combined with electronic
noise. Using the framework of cascaded systems analysis, we reconstruct the 3D MTF and NPS of a silicon
strip detector using "optimal" projection based weighting, including the influence of scatter and charge sharing
inside the detector. We compare the reconstructed noise and signal characteristics with a reconstructed 3D
MTF and NPS of an ideal energy-integrating detector by calculating the detectability index for several clinically
relevant imaging tasks. This work demonstrates that, although the detection efficiency of the silicon detector
drops rapidly at the acceleration voltages encountered in clinical computed tomography practice and the
fraction of Compton interactions is high due to the low atomic number, silicon detectors can perform on par with ideal
energy-integrating detectors for routine imaging tasks containing low-frequency components. For imaging tasks
containing high-frequency components, silicon detectors can perform approximately 1.4 - 1.8 times better than
a fully ideal energy-integrating system with unity detection efficiency, no scatter or charge sharing inside the detector, and
1 x 1 mm2 square detector elements.
Characteristics of noise and resolution on image reconstruction in cone-beam computed tomography
Show abstract
Cone-beam computed tomography (CBCT) is a useful modality in diagnostic imaging due to its fast
volume coverage, lower radiation dose, easy hardware implementation, and higher spatial resolution. Recently, attention
is being paid to the relationship between noise and resolution in CBCT. In a CBCT system, image noise and spatial
resolution play important roles in image quality. However, little work has been done to evaluate the
relationship between image noise and spatial resolution in CBCT. In this study, we evaluated image noise and spatial
resolution as functions of the reconstruction filter, the number of projections, and the voxel size of the reconstructed
images. Simulated projection data of the Catphan 600 phantom were reconstructed using the FDK algorithm. To evaluate
image noise and spatial resolution, the coefficient of variation (COV) of the attenuation coefficient and the modulation
transfer function (MTF) in axial images were calculated, respectively. The filters used for reconstruction were Ram-Lak,
Shepp-Logan, Cosine, Hamming, and Hann. The numbers of projections were 161, 321, 481, and 642, acquired over a
360-degree scan, and voxel sizes of 0.10 mm, 0.15 mm, 0.20 mm, 0.25 mm, and 0.30 mm were used. The image noise
given by the Hann filter was the lowest and decreased with increasing number of projections and voxel size. The spatial
resolution given by the Ram-Lak filter was the highest; it increased with the number of projections and decreased with
increasing voxel size. The results show the relationship between image noise and spatial resolution in CBCT and
characterize the reconstruction factors governing the trade-off between the two. They can also provide reference
information on image noise and spatial resolution for adaptive imaging.
Investigation of the effect of varying scatter-to-primary ratios on nodule contrast in chest tomosynthesis
Show abstract
The primary aim of the present work was to analyze the effects of varying scatter-to-primary ratios on the appearance of
simulated nodules in chest tomosynthesis section images. Monte Carlo simulations of the chest tomosynthesis system
GE Definium 8000 VolumeRAD (GE Healthcare, Chalfont St. Giles, UK) were used to investigate the variation of
scatter-to-primary ratios between different angular projections. The simulations were based on a voxel phantom created
from CT images of an anthropomorphic chest phantom. An artificial nodule was inserted at 80 different positions in the
simulated phantom images, using five different approaches for the scatter-to-primary ratios in the insertion process. One
approach included individual determination of the scatter-to-primary ratio for each projection image and nodule location,
while the other four approaches used the mean value, the median value, and the zero-degree projection value of the
scatter-to-primary ratios at each nodule position, as well as a constant scatter-to-primary ratio of 0.5 for all nodule positions.
The results indicate that the scatter-to-primary ratios vary by up to a factor of 10 between the different angular
tomosynthesis projections (±15°). However, the error in the resulting nodule contrast introduced by not taking all
variations into account is in general smaller than 10%.
3D lesion insertion in digital breast tomosynthesis images
Show abstract
Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of
computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital
mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including
DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large
numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the
entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data.
To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the
resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical
trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is
agnostic to the DBT reconstruction algorithm employed.
A patient image-based technique to assess the image quality of clinical chest radiographs
Show abstract
Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of
factors such as the capture system DQE and MTF, the exposure technique, and the particular image processing method
and processing parameters. However, when assessing a clinical image, radiologists seldom refer to these factors, but
rather examine several specific regions of the image to see whether the image is suitable for diagnosis. In this work, we
developed a new strategy to learn and simulate radiologists' evaluation process on actual clinical chest images. Based on
this strategy, a preliminary study was conducted on 254 digital chest radiographs (38 AP without grids, 35 AP with 6:1
ratio grids, and 151 PA with 10:1 ratio grids). First, ten region-based perceptual qualities were identified through an
observer study. Each quality was characterized in terms of a physical quantity measured from the image, and as a first
step, the three physical quantities in the lung region were implemented algorithmically. A pilot observer study was
performed to verify the correlation between image perceptual qualities and physical quantitative qualities. The results
demonstrated that our region-based metrics have promising performance for grading perceptual properties of chest
radiographs.
A new iodinated liver phantom for the quantitative evaluation of advanced CT acquisition and reconstruction techniques
Show abstract
An iodinated liver phantom is needed for liver CT related studies, such as the quantification of lesion
contrast. Prior studies simulated iodinated hepatic lesions with tubes of iodine solution, which involved
complications associated with the setup, differences from actual lesion morphology, and susceptibility to
iodine sedimentation. To develop a dedicated liver phantom with anthropomorphic structures and solid
lesions, we designed a phantom with iodinated liver inserts and lesions of different sizes and contrasts.
The concentration of iodine in liver parenchyma was determined according to the HU measured from
clinical images. The concentrations in high and low contrast lesions were selected so as to provide
challenging but reasonable detection tasks. The application of the liver phantom was initially validated at
different doses and reconstruction settings.
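The HU-to-concentration step described above is commonly handled with a linear iodine enhancement model, HU ≈ HU_base + k·C. A minimal sketch; the 55 HU liver base value and the ~25 HU per mg I/mL slope (typical of ~120 kVp scans) are illustrative assumptions, not the phantom's actual calibration.

```python
def iodine_concentration(target_hu, base_hu=55.0, hu_per_mg_ml=25.0):
    """mg I/mL needed to raise a base material from base_hu to target_hu,
    assuming linear enhancement of hu_per_mg_ml per mg I/mL."""
    return (target_hu - base_hu) / hu_per_mg_ml

# e.g. a liver-parenchyma target of 105 HU over a 55 HU unenhanced base
print(iodine_concentration(105.0))  # 2.0 mg I/mL
```

Lesion concentrations are then offset from the parenchyma value to set the desired lesion-to-background contrast for each detection task.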