Improving satellite spatial resolution

Exploiting spatial misregistration of a satellite hyperspectral sensor doubles spatial resolution.
20 March 2013
Shen-En Qian

Spatial resolution refers to the ability of a satellite sensor to distinguish the smallest details of a ground object and is also referred to as ground sample distance (GSD). Satellite data users prefer to receive images with high spatial resolution, but building such a satellite sensor is challenging and costly. For example, the concept study of the Canadian Hyperspectral Environment and Resource Observer mission concluded that, given the trade-off between spatial and spectral resolution required to maintain imaging sensitivity, the achievable GSD is 30m.1 Users, however, require 20m or even 10m GSD.

Image fusion is nominally an alternative way to increase spatial resolution. Multiple images of the same scene are collected and then fused to obtain a high spatial resolution image.2 For multispectral or hyperspectral sensors, spatial resolution can be enhanced by fusing their images with a high spatial resolution panchromatic (PAN) image acquired simultaneously by a PAN instrument onboard the same satellite. However, this fusion-based approach requires multiple images of the same scene, or a high-resolution PAN image, to be available. In practice, these images may not exist. Even when they do, fusing them to precisely increase the spatial resolution is a nontrivial task because of errors in geometric registration and radiometric normalization.3 (Radiometric normalization is a process that aims to remove, or calibrate, radiometric differences between satellite images acquired by different sensors with different instrument functions and under different environmental conditions, such as atmosphere and sun angle.)
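For context, the sketch below (not our keystone-based method, nor any particular mission's processing chain) illustrates the idea behind a simple component-substitution pan-sharpening fusion, assuming a co-registered low-resolution multispectral cube and a PAN image at twice the resolution; the function name, array shapes, and the mean-intensity model are illustrative assumptions.

```python
# Minimal sketch of component-substitution pan-sharpening (illustration only).
import numpy as np


def pan_sharpen(ms_lr, pan_hr, scale):
    """Fuse a low-res multispectral cube (bands, h, w) with a high-res PAN image
    (h*scale, w*scale) by injecting the PAN spatial detail into each band."""
    bands, h, w = ms_lr.shape
    # Upsample each band to the PAN grid (nearest-neighbour for simplicity).
    ms_up = ms_lr.repeat(scale, axis=1).repeat(scale, axis=2)
    # Synthetic intensity: the band mean stands in for the PAN spectral response.
    intensity = ms_up.mean(axis=0)
    # Inject the high-frequency detail that PAN has but the intensity image lacks.
    detail = pan_hr - intensity
    return ms_up + detail[None, :, :]


# Random arrays stand in for real imagery.
ms = np.random.rand(4, 50, 50)      # 4-band multispectral image
pan = np.random.rand(100, 100)      # PAN image at twice the resolution
fused = pan_sharpen(ms, pan, scale=2)
print(fused.shape)                  # (4, 100, 100)
```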

We have developed a novel technology to increase the spatial resolution of satellite images without any additional images.4 It exploits an intrinsic property of hyperspectral sensors, interband spatial misregistration (also referred to as 'keystone' distortion), as additional information to increase the spatial resolution of the images. Since multiple images of the same scene are no longer required, errors in geometric registration and radiometric normalization are irrelevant.

Keystone distortion, as seen in a camera or TV screen, results when targets on the ground are imaged in a detector array with a spatial shift that varies with spectral band number (i.e., wavelength). For example, ground pixels A, B, and C sensed in band M-2 of the detector array (see Figure 1) are shifted by k pixels compared to those sensed in band 3, due to keystone distortion. This keystone-induced spatial shift of the same ground pixels in different band images of a hyperspectral datacube carries additional information, similar to that carried by multiple satellite observations of the same scene.
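As a rough illustration of why this helps, the toy model below (an assumption for illustration only, not the sensor's actual keystone characterization) treats the keystone-induced shift as growing linearly with band index, so that different band images of the same scene are mutually offset by sub-pixel amounts.

```python
# Toy keystone model: the recorded position of a ground sample drifts with band index.
import numpy as np


def keystone_shift(band, n_bands, max_shift_pixels=1.0):
    """Spatial shift (in pixels) of a ground sample in a given band, assuming the
    shift grows linearly from 0 in band 0 to max_shift_pixels in the last band."""
    return max_shift_pixels * band / (n_bands - 1)


n_bands = 64
for band in (0, 16, 32, 48, 63):
    print(f"band {band:2d}: shift = {keystone_shift(band, n_bands):.3f} px")
```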


Figure 1. Top: Keystone distortion causes a spatial shift of ground sample pixels (A, B, and C), varying with spectral band number, when they are sensed in the detector array. Bottom: A hyperspectral sensor images one line of ground samples onto a 2D detector array, in which the spectrum is dispersed vertically (i.e., spectral bands 1 to M) and the spatial field (ground samples 1 to N in the cross-track line) is oriented along the rows of the detector array. A series of such 2D images forms a datacube after the satellite has flown over many ground lines. M-1, 2, 3: Spectral bands.

We have developed three methods for deriving sub-pixel shifted images from a single hyperspectral datacube by exploiting the sensor's keystone distortion. In addition, we have proposed two schemes to organize the derived images before they are integrated using iterative back-projection (IBP) to generate a single high-resolution (HR) image (see Figure 2).


Figure 2. Block diagram of generating the high-resolution image by exploiting the keystone distortion of a hyperspectral sensor. LR: Low resolution. IBP: Iterative back-projection. HR: High-resolution.

A band image in the datacube is first selected as the baseline image. The sub-pixel shifted images relative to the baseline are then derived from the datacube in one of three ways: by finding separate band images with the required shifts; by assembling pieces of columns, taken from all band images, that have the required shift to form a shifted image; or by searching for the pixel whose intensity is closest to that of the baseline pixel at the same location. The derived sub-pixel shifted images are organized either by resampling them to exact half-pixel shifts or by leaving the varying keystone-induced shifts as they are. The IBP algorithm then integrates the sub-pixel shifted images iteratively in two steps, projection and back-projection, until satisfactory results are obtained.2 The resulting HR image is used, much like the PAN image in conventional pan-sharpening, to increase the spatial resolution of the entire datacube.5,6
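To make the integration step concrete, here is a compact sketch of iterative back-projection for a single band, assuming the shifted low-resolution images have already been derived from the datacube and placed on a common grid; the box down-sampling model, the shift values, and the step size are illustrative assumptions rather than details of our implementation.

```python
# Sketch of iterative back-projection (IBP) super-resolution for one band.
import numpy as np
from scipy.ndimage import shift as subpixel_shift


def downsample(img, factor=2):
    """Simulate the sensor: average factor x factor blocks of the HR estimate."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))


def ibp(lr_images, shifts, factor=2, n_iter=20, step=0.5):
    """lr_images: observed low-res images; shifts: their (dy, dx) offsets in HR
    pixels relative to the baseline image (the first entry)."""
    hr = np.kron(lr_images[0], np.ones((factor, factor)))  # initial HR estimate
    for _ in range(n_iter):
        for lr, (dy, dx) in zip(lr_images, shifts):
            # Projection: predict what this shifted LR observation should look like.
            predicted = downsample(subpixel_shift(hr, (dy, dx)), factor)
            # Back-projection: spread the residual error back onto the HR grid.
            error = np.kron(lr - predicted, np.ones((factor, factor)))
            hr += step * subpixel_shift(error, (-dy, -dx))
    return hr


# Usage with synthetic data standing in for keystone-derived sub-pixel shifted images.
lr0 = np.random.rand(32, 32)
lr1 = np.random.rand(32, 32)
hr_image = ibp([lr0, lr1], shifts=[(0.0, 0.0), (0.0, 1.0)], factor=2)
print(hr_image.shape)  # (64, 64)
```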

Experimental results showed that our sensor-property-based technology can double the spatial resolution of hyperspectral images without using any additional images. Figure 3 shows an example of a hyperspectral image of a uranium mine site before and after increasing the spatial resolution using our keystone distortion-based technology. Edges of objects are sharper in the HR image than in the original image. A small region of interest (ROI) is selected and magnified by a factor of two to show the details. A small triangle outlined by the roads, which cannot be seen in the original image, can be identified in the upper part of the HR image. At the lower right corner of the ROI, there is a mining lay-down (storage) area that cannot be well identified in the original image. The outline and three rows of the lay-down area can be seen in the HR image.


Figure 3. Hyperspectral image (a) before and (b) after increasing spatial resolution using the keystone distortion-based technology.

In conclusion, our keystone-based technology is a feasible and cost-effective method to increase the spatial resolution of satellite hyperspectral sensors. Unlike image fusion, it requires no additional images. We used two separate metrics to assess the HR images we obtained: an image quality metric based on models of the human visual system,7 and remote sensing application algorithms. We found that the HR image contains more spatial information than the original image in terms of both the visual information fidelity metric7 and remote-sensing products.5 Experimental results confirmed that our technology preserves spectral information well while increasing spatial resolution. Our next step will be to apply this technology to future Canadian hyperspectral imagers onboard microsatellites.8

© Government of Canada 2013.


Shen-En Qian
Canadian Space Agency
St-Hubert, Canada

Shen-En Qian is a senior scientist and the technical authority for Canadian government contracts on the development of space technologies and satellite missions. He is an SPIE Fellow, holds nine patents, and is author or co-author of three books and more than 100 papers.


References:
1. A. Hollinger, Recent developments in the hyperspectral environment and resource observer (HERO) mission, Proc. IEEE Int'l Geosci. Rem. Sens. Symp., p. 1620-1623, 2006.
2. C. Pohl, J. L. Van Genderen, Multisensor image fusion in remote sensing: concepts, methods, and applications, Int'l J. Rem. Sens. 19(5), p. 823-854, 1998.
3. I. Zavorin, J. Le Moigne, Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery, IEEE Trans. Image Process. 14(6), p. 770-782, 2005.
4. S.-E. Qian, Method and system of increasing spatial resolution of multi-dimensional optical imagery using sensor's intrinsic keystone, Int'l Patent Appl. PCT/CA2011/050077, published August 2012 (WO/2012/106797).
5. S.-E. Qian, G. Chen, Enhancing spatial resolution of hyperspectral imagery using sensor's intrinsic keystone distortion, IEEE Trans. Geosci. Rem. Sens. 50(12), p. 5033-5048, 2012.
6. G. Chen, S.-E. Qian, J.-P. Ardouin, Superresolution of hyperspectral imagery using complex ridgelet transform, Int'l J. Wavel. Multires. Info. Process. 10(3), 2012.
7. H. R. Sheikh, A. C. Bovik, Image information and visual quality, IEEE Trans. Image Process. 15(2), p. 430-444, 2006.
8. S.-E. Qian, R. Girard, G. Seguin, Development of Canadian hyperspectral imager onboard micro-satellites, Proc. IEEE Int'l Conf. Geosci. Rem. Sens. Symp. 2013. (In press.)