Deblurring of post-adaptive optics images
Current 10m-class telescopes (e.g., the Large Binocular Telescope and the Very Large Telescope), as well as the upcoming 40m-class telescopes (e.g., the European Extremely Large Telescope, the Thirty Meter Telescope, and the Giant Magellan Telescope), all require adaptive optics (AO) systems to compensate for the effects of atmospheric turbulence. Although single-conjugate AO systems can be used to flatten the wavefront from a guide star, in most cases the science target is not coincident with the guide star. Because of the stratified structure of the atmosphere, the light beams from the science target and from the guide star are therefore affected by different aberrations. As such, even a perfect correction for the guide star cannot be a perfect correction for the target. The resulting point spread function (PSF) is thus degraded and varies across the astronomical image.
With recently developed, more complex AO techniques (e.g., multi-conjugate AO), PSF uniformity can be improved. Some residual PSF variation across the field of view, however, is still possible. Depending on the degree of PSF stability required by the science targets, the spatial variation of the PSF in AO observations can therefore be a significant issue. It can, however, be addressed with image processing methods. The case of a space-variant PSF is computationally tractable only under the assumption that the PSF varies smoothly across the image domain. A sectioning approach has therefore been proposed,1, 2 in which the image is decomposed into sub-domains (or patches), within each of which the PSF is assumed to be approximately space invariant. An alternative interpolation approach has also been proposed,3–5 with which the discontinuity of the PSF from one sub-domain to the next can be suppressed.
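To make the sectioning assumption concrete, the following sketch (written in Python/NumPy purely for illustration; Patch itself is written in IDL) applies a space-variant blur by treating the PSF as constant within each sub-domain of a regular grid. The grid layout, the function name, and the structure of local_psfs are assumptions made for this example, not part of the Patch package.

```python
# Illustrative sketch of the sectioning assumption (not part of Patch):
# the PSF is treated as space invariant inside each sub-domain.
import numpy as np
from scipy.signal import fftconvolve

def sectioned_blur(image, local_psfs, grid=(4, 4)):
    """Blur `image` patch by patch, using the local PSF of each sub-domain.

    `local_psfs[i][j]` is the normalized PSF assigned to the sub-domain in
    row i, column j of a regular `grid` covering the image.
    """
    ny, nx = image.shape
    gy, gx = grid
    ys = np.linspace(0, ny, gy + 1, dtype=int)
    xs = np.linspace(0, nx, gx + 1, dtype=int)
    out = np.zeros_like(image, dtype=float)
    for i in range(gy):
        for j in range(gx):
            patch = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            # space-invariant convolution within the sub-domain
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = fftconvolve(
                patch, local_psfs[i][j], mode="same")
    return out
```

The hard edges between sub-domains in such a model are precisely where the PSF discontinuities arise that the interpolation approaches are designed to suppress.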
We have therefore developed a software package, known as ‘Patch,’ to deblur images that have been degraded by a spatially variable PSF.6, 7 Our package is written in the Interactive Data Language (IDL) and can be downloaded for free from the Internet.8 The method that we have implemented in our software is an improvement to the sectioning approach. We decompose the input image into partially overlapping sub-domains (the size of these overlapping regions depends on the extent of the PSF). We then reconstruct each sub-domain separately by means of a deconvolution method. This method includes boundary-effect corrections,9 so that artifacts (Gibbs oscillations) at the edges of the sub-domains are prevented. The reconstructed image is then obtained as a mosaic of the results.
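The following sketch, under the same illustrative assumptions as above, shows the overlapping-patch strategy just described: each sub-domain is extracted together with a margin, deconvolved on its own, and only its central core is copied into the output mosaic. The `deconvolve` argument stands for any space-invariant deconvolution routine; the boundary-effect corrections used in Patch are not reproduced here.

```python
# Illustrative sketch of the overlapping-patch reconstruction (not the
# authors' IDL implementation). `deconvolve(patch, psf)` is any
# space-invariant deconvolution routine, e.g. Richardson-Lucy.
import numpy as np

def patch_deconvolve(image, local_psfs, deconvolve, grid=(4, 4), margin=16):
    """Deconvolve `image` patch by patch and mosaic the central cores."""
    ny, nx = image.shape
    gy, gx = grid
    ys = np.linspace(0, ny, gy + 1, dtype=int)
    xs = np.linspace(0, nx, gx + 1, dtype=int)
    out = np.zeros_like(image, dtype=float)
    for i in range(gy):
        for j in range(gx):
            # enlarged, partially overlapping sub-domain
            y0, y1 = max(ys[i] - margin, 0), min(ys[i + 1] + margin, ny)
            x0, x1 = max(xs[j] - margin, 0), min(xs[j + 1] + margin, nx)
            rec = deconvolve(image[y0:y1, x0:x1], local_psfs[i][j])
            # keep only the central core of the local reconstruction
            out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = rec[
                ys[i] - y0:ys[i + 1] - y0, xs[j] - x0:xs[j + 1] - x0]
    return out
```

Here the margin plays the role of the overlapping region, whose size depends on the extent of the PSF.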
Our Patch graphical user interface (GUI) consists of three panels, one for each step of the reconstruction (input, deconvolution, and output). The main inputs for the software are the observed image, a set of local PSFs (defined on a regular grid and centered on each image sub-domain), and an estimate of the background. The local PSFs must, however, be estimated separately. The first panel (see Figure 1) shows the input image. In the illustrated case, the image is a simulated star cluster as observed by the Hubble Space Telescope before its Corrective Optics Space Telescope Axial Replacement (COSTAR) correction was performed.10 Each sub-domain of this image is enlarged to a suitable (and automatically computed) size, so that partially overlapping domains are used for each deconvolution. The user can set a larger value if specific image features need to be taken into account.
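The enlargement must cover the extent of the local PSF. One plausible way to compute such a margin automatically (an assumption made for illustration, not necessarily the rule implemented in Patch) is to take the half-width of the centered region that encloses most of the PSF energy:

```python
import numpy as np

def overlap_margin(psf, energy=0.99):
    """Half-width of the centered square containing `energy` of the PSF flux."""
    psf = psf / psf.sum()
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    for r in range(1, max(cy, cx) + 1):
        window = psf[max(cy - r, 0):cy + r + 1, max(cx - r, 0):cx + r + 1]
        if window.sum() >= energy:
            return r
    return max(cy, cx)
```

Taking the largest such value over the grid of local PSFs gives a single margin that is safe for every sub-domain.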
In the second panel of our GUI (see Figure 2), the deconvolution method can be set. Either the Richardson-Lucy11, 12 or the Scaled Gradient Projection (SGP)13 method can be chosen, both with corrections for boundary effects. In Figure 2, the SGP method has been selected and the algorithm will be stopped when the so-called data-fidelity function (i.e., the Kullback-Leibler distance between the model and the data) is approximately constant (given a tolerance of 10⁻⁵). Our stopping rules are based on a variety of criteria, details of which have been published previously.6 The results of the deconvolution step can then be visualized and saved in the final panel (see Figure 3). The residual map, which is based on the difference between the input image and the convolution (with the input PSFs) of the reconstructed object, can also be visualized and saved.
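As an illustration of this stopping rule, the sketch below runs a plain Richardson-Lucy iteration and stops when the relative change of the Kullback-Leibler data-fidelity term drops below the chosen tolerance. It omits the boundary-effect corrections used in Patch, and the small safeguard constants are assumptions added for numerical safety.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, background=0.0, tol=1e-5, max_iter=1000):
    """Richardson-Lucy deconvolution with a Kullback-Leibler stopping rule.

    Assumes nonnegative (photon-count-like) data and an odd-sized, centered PSF.
    """
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    x = np.full(image.shape, max(image.mean(), 1e-6))  # flat, positive start
    kl_prev = np.inf
    for _ in range(max_iter):
        model = np.clip(fftconvolve(x, psf, mode="same") + background,
                        1e-12, None)
        # generalized Kullback-Leibler divergence between data and model
        kl = np.sum(image * np.log((image + 1e-12) / model) + model - image)
        if abs(kl_prev - kl) <= tol * abs(kl):
            break  # data fidelity is approximately constant: stop
        kl_prev = kl
        x = x * fftconvolve(image / model, psf_mirror, mode="same")  # RL update
    return x
```

A routine of this kind could be passed as the `deconvolve` argument of the overlapping-patch sketch shown earlier.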
We have tested our methodology on a set of simulated images, containing point-like sources in a crowded stellar field as well as extended extragalactic sources.6, 7 For both of these scenarios, we simulated the images by assuming a strongly variable PSF across the field of view. In the case of the stellar objects, our tests show that the photometric and astrometric accuracy increases (and the number of artifacts simultaneously decreases) when the number of sub-domains increases. The color–magnitude diagrams (CMDs) for different reconstructions of the same input image, obtained by dividing it into different numbers of sub-domains, are shown in Figure 4. We simulated the observation in the J-band and in the K-band (with central wavelengths of 1.27 and 2.12 μm, respectively), and the color is defined as the difference between the J-band and K-band magnitudes. A clear improvement in the photometric accuracy (i.e., a narrower spread of the data) is visible across the sequence of CMDs. Furthermore, our extragalactic source simulations indicate that it is possible to retrieve the morphological properties of the extended object with a good level of precision.
We have developed a new method—and accompanying software—for the deblurring of post-AO images characterized by space-variant PSFs. We have tested our approach on both diffuse and stellar images. The results that we obtained are satisfactory and allow us to achieve good image reconstructions. We are now in the process of testing our software on real AO images, and we hope to publish a paper on our findings shortly.
This work has been partially supported by the National Institute for Astrophysics, under project TECNO-INAF 2010 (Exploiting the adaptive power: a dedicated free software to optimize and maximize the scientific output of images from present and future adaptive optics facilities).
University of Genoa
Genoa, Italy
Andrea La Camera has a PhD in computer science and is currently a postdoctoral researcher. His research activities are mainly within the field of inverse problems and are focused on astronomical image deconvolution.
National Institute for Astrophysics (INAF)
Laura Schreiber obtained her PhD in astronomy from the University of Bologna, Italy, in 2009. In her current work, she focuses on AO instrumentation and data processing.