Proceedings Volume 3871

Image and Signal Processing for Remote Sensing V


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 14 December 1999
Contents: 9 Sessions, 38 Papers, 0 Presentations
Conference: Remote Sensing 1999
Volume Number: 3871

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Calibration and Registration
  • Hyperspectral Data Analysis
  • Poster Session
  • Filtering, Detection, and Segmentation
  • Target Detection and Object Recognition
  • Image Classification
  • Neural Network and Symbolic Techniques
  • Wavelets
  • Data Fusion
  • Poster Session
Calibration and Registration
Atmospheric correction algorithm for hyperspectral imagery
Lee Curtis Sanders, Rolando V. Raqueno, John R. Schott
In December 1997, the U.S. Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). The results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrievals in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.
Automatic geocoding of high-value targets using structural image analysis and GIS data
Uwe Soergel, Ulrich Thoennessen
Geocoding based merely on navigation data and a sensor model is often not possible or not precise enough. In these cases, an improvement of the preregistration through image-based approaches is a solution. Due to the large amount of data in remote sensing, automatic geocoding methods are necessary. For geocoding purposes, appropriate tie points, which are present in both image and map, have to be detected and matched. The tie points are the basis of the transformation function. Assigning the tie points is a combinatorial problem depending on the number of tie points. This number can be reduced by using structural tie points such as corners or crossings of prominent extended targets (e.g., harbors, airfields). Additionally, the reliability of the tie points is improved. Our approach extracts structural tie points independently in the image and in the vector map by a model-based image analysis. The vector map is provided by a GIS using the ATKIS database. The model parameters are extracted from maps or collateral information about the scenario. The two sets of tie points are automatically matched with a Geometric Hashing algorithm. The algorithm was successfully applied to VIS, IR, and SAR data.
Hyperspectral Data Analysis
Algorithm for the estimation of oceanic chlorophyll concentration from hyperspectral data through purpose-oriented feature extraction
Sadao Fujimura, Senya Kiyasu
Remotely sensed data are used for global estimation of oceanic chlorophyll concentration, from which biomass productivity in the ocean is estimated. A conventional (Gordon's) method uses the ratio of green to blue bands as an indicator of the chlorophyll concentration. This method is not accurate, especially when phytoplankton is not dominant. We devise a method, based on our purpose-oriented feature extraction, that is much more accurate than the conventional method even when components other than chlorophyll are not negligible. The basic idea of our method is to fuse the dimensions of hyperspectral data to produce a value that describes the chlorophyll concentration almost independently of other components. We confirmed by simulation that our algorithm gives two to ten times more accurate results than the conventional method does. Another prominent feature is that our method is wholly systematic and widely applicable.
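As an illustrative aside (not from the paper), the conventional band-ratio approach can be sketched in a few lines; the coefficients `a` and `b` are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def chlorophyll_ratio_estimate(r_blue, r_green, a=1.0, b=-1.5):
    """Gordon-style band-ratio estimate C = a * (R_blue / R_green)**b.
    The coefficients a and b are illustrative; operational values are
    fitted to in-situ chlorophyll measurements."""
    ratio = np.asarray(r_blue, dtype=float) / np.asarray(r_green, dtype=float)
    return a * ratio ** b

# With b < 0, a lower blue/green reflectance ratio (more blue light
# absorbed by phytoplankton) maps to a higher chlorophyll estimate.
c = chlorophyll_ratio_estimate(0.02, 0.04)
```

The purpose-oriented feature extraction of the paper replaces this single ratio with a fusion of all spectral dimensions.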
Integration of spatial and spectral information in unsupervised classification for multispectral and hyperspectral data
Luis O. Jimenez-Rodriguez, Jorge Rivera-Medina
Unsupervised classification algorithms are techniques to extract information from remote sensing imagery by machine computation, without prior knowledge of labeled samples. Most current unsupervised algorithms use only the spectral response as information. The clustering algorithms that take the spatial information into consideration face a trade-off between being accurate but time-consuming and being fast but losing relevant details in the spatial mapping. This paper presents an unsupervised classification system developed to extract information from both multispectral and hyperspectral data, considering the spectral response, hyperdimensional data characteristics, and the spatial context of the pixel to be classified. The algorithm constructs local spatial neighborhoods in order to measure their degrees of homogeneity. It resembles the supervised version of the ECHO classifier. An advantage of this mechanism is that the mathematical developments for estimating the degrees of homogeneity enable implementations based on statistical pattern recognition. The clustering algorithm is fast, and its results have shown superiority over other known mechanisms in recognizing objects in multispectral and hyperspectral data.
Noise subspace projection approaches to determination of intrinsic dimensionality of hyperspectral imagery
Determination of Intrinsic Dimensionality (ID) for remotely sensed imagery has been a challenging problem. For multispectral imagery it may be solvable by Principal Components Analysis (PCA), because the small number of spectral bands implies that the ID is also small. However, the PCA method may not be effective when applied to hyperspectral images. This stems from the fact that a high-spectral-resolution hyperspectral sensor may also extract many unknown interfering signatures in addition to endmember signatures. So, determining the ID of hyperspectral imagery is more problematic than that of multispectral imagery. This paper presents a Neyman-Pearson detection theory-based eigen analysis for determination of ID for hyperspectral imagery, in particular a new approach referred to as the Noise Subspace Projection (NSP)-based eigen-thresholding method. It is derived from a noise-whitening process coupled with a Neyman-Pearson detector. The former estimates the noise covariance matrix, which is used to whiten the data sample correlation matrix, whereas the latter converts the problem of determining the ID into a Neyman-Pearson decision, with Receiver Operating Characteristics (ROC) analysis used as a thresholding technique to estimate the ID. In order to demonstrate the effectiveness of the proposed method, AVIRIS data are used for experiments.
Poster Session
Color space and wavelet transform for merging multispectral and panchromatic SPOT images
Youcef Chibani, Amrane Houacine
We propose in this paper the joint use of the color space and the wavelet transform to improve the spatial resolution of multispectral images. The principle consists in transforming the Red Green Blue (RGB) image components into independent IHS components. The I component is merged with the panchromatic (P) image in the wavelet domain via an appropriate model, and an inverse IHS transformation is then performed to produce new high-resolution multispectral images. For this purpose, the redundant wavelet transform may be used, since it ensures that significant details coming from P are well injected into the I component. Another advantage of this approach lies in rejecting the noise present in the two components. The merging of I and P considered here is based on local detail mean matching, which allows the high-resolution details of the P image to be adjusted to the low resolution of the I component. To evaluate the results of our merging method, two criteria are used: the correlation coefficient and the index deviation.
Integration of thematic vector data in the analysis of remotely sensed images for reconnaissance
The operation of high-resolution sensors can enhance the reconnaissance capability, but as a consequence it is accompanied by an increasing amount of data. Therefore, it is necessary to relieve the data down-link and the image analyst, e.g. by a screening process. This task filters out regions of interest (ROIs) and generates hypotheses of potential objects. In this paper, we describe the support of image analysis by thematic vector data from a GIS. First, the image is automatically or interactively geocoded based on matched objects in the image and map. Three different ways of using the GIS, depending on its information content, are presented. In the first approach, the desired ROIs are selected by a thematic query to the GIS and then extracted from the image. Inside these ROIs a subsequent detailed analysis can be performed. In the second case, in which objects are not integrated in the vector data or interesting objects are not maintained, a structural image analysis approach is used to detect extended objects such as airfields. The GIS is then supplemented by the result of the structural image analysis. In the third case, thematic vector data are used to extract training areas for classification and parameter settings for image segmentation.
Robust optimal fuzzy clustering algorithm applicable to multispectral and polarimetric synthetic aperture radar images
Salim Chitroub, Amrane Houacine, Boualem Sansal
In clustering algorithms, two problems arise. The first is cluster validation. The second is that clustering algorithms are similar to descent algorithms, which provide only a local optimization. In this paper, a robust optimal fuzzy clustering algorithm applicable to multispectral and polarimetric synthetic aperture radar (SAR) images is suggested. The idea of the proposed optimal fuzzy clustering algorithm is to build an objective function whose global minimum characterizes a good fuzzy partition of the training data set. To reach such a global minimum we use the simulated annealing (SA) algorithm. An adaptation of SA to the fuzzy clustering problem is then established. By a robust algorithm, we mean one that leads to classification results that are robust with respect to the estimated number of clusters. To find the number of clusters that leads to a robust classification, we compare two different classification results, finding the correspondence between their clusters without reference to the ground truth. We formulate such a comparison criterion as an optimization problem, which is solved by a new optimization technique based on correspondence analysis. The technique is inspired by the SA method. We demonstrate our methodology by classifying two different complex scenes using multispectral data provided by the SPOT satellite and SIR-C data.
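The simulated-annealing search at the heart of such an approach can be sketched generically; the objective and neighbor function below are toy stand-ins, not the paper's fuzzy-clustering objective:

```python
import math
import random

def anneal(objective, state, neighbor, t0=1.0, cooling=0.95, steps=200, seed=0):
    """Minimal simulated-annealing loop: accept worse states with
    probability exp(-delta/T), so the search can escape local minima
    that would trap a pure descent algorithm."""
    rng = random.Random(seed)
    best = current = state
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = objective(cand) - objective(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if objective(current) < objective(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

# Toy objective with two basins; the start point sits uphill of both.
f = lambda x: (x - 3) ** 2 * (x + 2) ** 2 + x
x = anneal(f, 5.0, lambda s, r: s + r.uniform(-1, 1))
```

In the paper the state would be a fuzzy partition and the objective its clustering cost; only the acceptance rule above is generic.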
Remotely sensed image processing with multistage inferences
Hiromichi Yamamoto, Kohzo Homma, Toshio Isobe, et al.
A system for analyzing remotely-sensed satellite images using knowledge bases driven by multi-stage inference engines has been developed. Sea-surface temperature analysis is thought to have great potential for effectively identifying the positions and shapes of oceanic conditions such as ocean fronts, eddies, currents, and so on. Knowledge and experience accumulated through conventional oceanic observation by ships and other methods are indispensable when extracting such oceanic conditions from remotely-sensed data, and the extraction process requires the efforts of human experts. This paper discusses some useful strategies for dealing with the problems of automatic extraction of oceanic conditions, including a mechanism for selecting individual algorithms and automatically constructing a sequence of image processing commands, a scheme for verifying consistency between knowledge rules, and a scheme for the intensive accumulation of knowledge information. In addition, this paper presents some experimental applications to remotely sensed ocean image data, which were performed highly efficiently. The resulting extracted ocean fronts and currents have been successfully verified against oceanographic surveys.
Implementation of real-time SAR systems with a high-performance digital signal processor
Helge Kloos, Jens Peter Wittenburg, Willm Hinrichs, et al.
Real-time Synthetic Aperture Radar (SAR) image synthesis is one of the major problems to be solved in the near future. To obtain a fully synthesized SAR image, the raw signal must be filtered with a two-dimensional function representing the system transfer function. These filtering operations are usually carried out by multiplication in the frequency domain. Therefore, the Fast Fourier Transform (FFT), used for transformation to and from the frequency domain, is the predominant algorithm in terms of processing power for SAR image synthesis. The presented HiPAR-DSP is a programmable architecture that is optimized for FFT-dominated applications such as SAR image processing. To provide the high processing power required for these tasks, the HiPAR-DSP has an array of 4 (HiPAR-DSP4) or 16 (HiPAR-DSP16) parallel processing units (datapaths), which is controlled by a single RISC controller. For data exchange between the processing units there is a shared memory, which allows concurrent access from all processing units in a single clock cycle. Thus the HiPAR-DSP16 performs a complex FFT with 1024 samples in 32 µs. For the implemented SAR processing task, range compression with 4096 complex samples per line, we achieve a real-time performance of nearly 1500 range lines/s.
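The frequency-domain matched filtering that dominates SAR image synthesis can be sketched with NumPy; this shows only the algorithmic structure (the timing figures above refer to the DSP hardware, and the chirp parameters below are illustrative):

```python
import numpy as np

def range_compress(raw_line, chirp):
    """Range compression as frequency-domain matched filtering:
    multiply the line's spectrum by the conjugate chirp spectrum,
    then transform back."""
    n = len(raw_line)
    H = np.conj(np.fft.fft(chirp, n))  # matched-filter transfer function
    return np.fft.ifft(np.fft.fft(raw_line) * H)

# Synthetic example: a delayed chirp echo compresses to a sharp peak
# at the target's range bin.
n, delay = 4096, 700
t = np.arange(256)
chirp = np.exp(1j * np.pi * 0.01 * t ** 2)  # illustrative linear FM pulse
raw = np.zeros(n, dtype=complex)
raw[delay:delay + 256] = chirp
out = range_compress(raw, chirp)
```

After compression, `np.argmax(np.abs(out))` recovers the delay of 700 samples.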
Hybrid pattern recognition method using evolutionary computing techniques applied to the exploitation of hyperspectral imagery and medical spectral data
Hyperspectral image sets are three-dimensional data volumes that are difficult to exploit by manual means because they are composed of multiple bands of image data that are not easily visualized or assessed. GTE Government Systems Corporation has developed a system that utilizes evolutionary computing techniques to automatically identify materials in terrain hyperspectral imagery. The system employs sophisticated signature preprocessing and a unique combination of non-parametric search algorithms guided by a model-based cost function to achieve rapid convergence and pattern recognition. The system is scalable and is capable of discriminating and identifying pertinent materials that comprise a specific object of interest in the terrain, and of estimating the percentage of materials present within a pixel of interest (spectral unmixing). The method has been applied and evaluated against real hyperspectral imagery data from the AVIRIS sensor. In addition, the process has been applied to remotely sensed infrared spectra collected at the microscopic level to assess the amounts of DNA, RNA, and protein present in human tissue samples as an aid to the early detection of cancer.
Improved stereo point matching in meteorological satellite images
Fabio Dell'Acqua, Paolo Gamba
In this paper a method for extracting objects from meteorological stereo satellite images and matching parts of them is presented. Great emphasis is put on the choice of the threshold value for object extraction. The matching method is based on a modal matching algorithm: the objects to be matched are sampled and ideally turned into elastic objects. A finite element method allows the modes of vibration of such an elastic body to be computed. By comparing the displacements due to vibrations at each point in both pictures, it is possible to match corresponding parts across the two different viewpoints.
Filtering, Detection, and Segmentation
Remote image segmentation based on color information
Joseph Fernandez, Joan Aranda, Antoni Grau
In this paper we propose a new color descriptor and segmentation algorithm for the analysis of aerial images. The main advantages of the proposed segmentation process are that the new color descriptor is related to element/object properties, it is stable, and the resulting segmentation contains a reduced number of regions. These features allow us to obtain a segmentation of aerial images that matches the terrain characteristics. The proposed color descriptor is the H/I (Hue/Intensity) space. It is derived from the HSI color space, taking advantage of the high discrimination power of the hue while solving the major problems of the HSI space: a high color resolution requires substantial computing resources, and the RGB-to-HSI transformation presents singularities. Region-based image segmentation is particularly appropriate for land-use applications, given that land cover is naturally built up from regions. We have developed a variation of the region-growing algorithm in order to reduce the processing time and to generate a small number of regions in the segmented image that are related to the main land areas.
Image processing of airborne scanning laser altimetry for some environmental applications
David C. Mason, David M. Cobby, Ian J. Davenport
Airborne scanning laser altimetry (LiDAR) is an important new data source for environmental applications, being able to map heights to high vertical and horizontal accuracy over large areas. The paper describes a range image segmentation system for data from a LiDAR measuring time of last significant return only. Each spot height represents the height of incidence of the narrow laser pulse with the ground, the top of the vegetation canopy or some point in between. The segmenter is aimed at two specific environmental applications, both of which require the underlying ground heights and the vegetation canopy heights to be estimated from the LiDAR height image. A method of estimating vegetation height in regions of short vegetation such as crops is presented. An advantage of segmentation is that it allows different topographic and vegetation height extraction algorithms to be used in regions of different cover type. Thus the method attempts to maintain ground height accuracy in regions of tall vegetation cover (e.g. forest areas) by reducing spatial resolution in these regions.
Fast method for unwrapping InSAR raw interferogram
Hiroshi Hanaizumi, Masaki Kagawa, Sadao Fujimura
A new method, the Multiple Phase Method (MPM), is proposed for fast unwrapping of InSAR raw interferograms. The unwrapping process is regarded as one of adding or subtracting (lifting) phase whenever a 2π phase jump is detected. The proposed method distinguishes the phase jump from noise in the raw interferogram by using the original interferogram and its π-phase-shifted counterpart. MPM removes the phase jump from the lifting operation by switching the difference data from the original interferogram to the phase-shifted one, or from the shifted one to the original, when a phase jump is detected. The phase jump is detected using a simple mask operation. The phase integration is carried out using a new index, 'confidence', derived from the coherence as a guide to select the integration path. MPM selects the integration path from higher-confidence regions to lower ones. As residues have locally minimal coherence, MPM does not pass through them. The proposed method was successfully applied to an actual pair of ERS-1 SAR single-look complex images.
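The lifting idea underlying phase unwrapping can be illustrated with a plain 1-D sketch; MPM itself additionally uses the π-shifted interferogram and a confidence-guided 2-D path, which this toy example omits:

```python
import numpy as np

def unwrap_1d(phase):
    """Add or subtract 2*pi whenever consecutive samples jump by more
    than pi -- the basic 'lifting' step of phase unwrapping."""
    out = np.array(phase, dtype=float)
    for i in range(1, len(out)):
        d = out[i] - out[i - 1]
        # Wrap the difference into (-pi, pi], then re-integrate.
        out[i] = out[i - 1] + (d + np.pi) % (2 * np.pi) - np.pi
    return out

# A smooth ramp spanning several fringes is recovered exactly,
# because each true step is smaller than pi.
true = np.linspace(0, 12, 50)
wrapped = (true + np.pi) % (2 * np.pi) - np.pi
```

Noise that mimics a jump is exactly what breaks this naive scheme, which motivates MPM's use of a second, phase-shifted interferogram.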
Improved method for isotropic edge orientation estimation: application for the detection of a Roman cadastral grid from multisource images
Sudha Kunduri, Henri Maitre, Michel Roux, et al.
An efficient approach to reduce rotational variance in edge orientation estimation is proposed in this paper. A theoretical analysis followed by experimental data is presented. The robustness of the new operator in the presence of noise is evaluated. The edge estimator is then used in a feature recognition application on satellite images and aerial photos.
Target Detection and Object Recognition
Beyond Gaussian statistical analysis for man-made object detection in hyperspectral images
Emerging hyperspectral imaging technology allows the acquisition of data 'cubes' which simultaneously have high-resolution spatial and spectral components. There is a wealth of information in this data and effective techniques for extracting and processing this information are vital. Previous work by ERIM on man-made object detection has demonstrated that there is a huge amount of discriminatory information in hyperspectral images. This work used the hypothesis that the spectral characteristics of natural backgrounds can be described by a multivariate Gaussian model. The Mahalanobis distance (derived from the covariance matrix) between the background and other objects in the spectral data is the key discriminant. Other work (by DERA and Pilkington Optronics Ltd) has confirmed these findings, but indicates that in order to obtain the lowest possible false alarm probability, a way of including higher order statistics is necessary. There are many ways in which this could be done ranging from neural networks to classical density estimation approaches. In this paper we report on a new method for extending the Gaussian approach to more complex spectral signatures. By using ideas from the theory of Support Vector Machines we are able to map the spectral data into a higher dimensional space. The coordinates of this space are derived from all possible multiplicative combinations of the original spectral line intensities, up to a given order d -- which is the main parameter of the method. The data in this higher dimensional space are then analyzed using a multivariate Gaussian approach. Thus when d equals 1 we recover the ERIM model -- in this case the mapping is the identity. In order for such an approach to be at all tractable we must solve the 'combinatorial explosion' problem implicit in this mapping for large numbers of spectral lines in the signature data.
In order to do this we note that in the final analysis of this approach it is only the inner (dot) products between vectors in the higher dimensional space that need to be computed. This can be done by efficient computations in the original data space. Thus the computational complexity of the problem is determined by the amount of data -- rather than the dimensionality of the mapping. The novel combination of non-linear mapping and high dimensional multivariate Gaussian analysis, only possible by using techniques from SVM theory, allows the practical application to hyperspectral imagery. We note that this approach also generates the non-linear Principal Components of the data, which have applications in their own right. In this paper we give a mathematical derivation of the method from first principles. The method is illustrated on a synthetic data set where complete control over the true statistics is possible. Results on this data show that the method is very powerful. It naturally extends the Gaussian approach to a variety of more complex probability distributions, including multi-modal and other manifestly non-Gaussian examples. Having shown the potential of this approach it is then applied to real hyperspectral trials data. The relative improvement in performance over the Gaussian approach is demonstrated for the real data.
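The baseline Gaussian model (the d = 1 case) amounts to Mahalanobis-distance anomaly scoring, which can be sketched as follows on synthetic data (not the trials data used in the paper):

```python
import numpy as np

def mahalanobis_scores(pixels):
    """Squared Mahalanobis distance of each pixel spectrum from the
    background mean, using the scene covariance -- the d = 1 Gaussian
    discriminant described above."""
    mu = pixels.mean(axis=0)
    inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, inv, d)

rng = np.random.default_rng(0)
background = rng.normal(0.3, 0.05, size=(999, 8))  # 8-band Gaussian clutter
target = np.full((1, 8), 0.9)                      # man-made outlier spectrum
scores = mahalanobis_scores(np.vstack([background, target]))
```

The injected target (row index 999) receives by far the largest score; the SVM-style mapping in the paper extends this to non-Gaussian backgrounds.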
Subpixel detection for hyperspectral images using projection pursuit
In this paper, we present a Projection Pursuit (PP) approach to subpixel target detection. Unlike most developed target detection algorithms, which require statistical models such as a linear mixture, the proposed PP projects a high-dimensional data set into a low-dimensional data space while retaining the desired information of interest. It utilizes a projection index to explore projections of interestingness. In applications of target detection in hyperspectral imagery, an interesting structure of an image scene is one caused by man-made targets in a large unknown background. If we assume that a large volume of image background pixels can be modeled by a Gaussian distribution via the central limit theorem, then targets can be viewed as anomalies in an image scene, because their sizes are relatively small compared to their surroundings. As a result, detecting small targets in an unknown image scene reduces to finding the outliers or deviations from a Gaussian distribution. It is known that skewness, defined by the normalized third moment of the sample distribution, measures the asymmetry of the distribution, and kurtosis, defined by the normalized fourth moment of the sample distribution, measures the flatness of the distribution. Both are susceptible to outliers. Since a Gaussian distribution is completely determined by its first two moments, its skewness and excess kurtosis are zero. So, using skewness and kurtosis as a basis for designing a projection index may be effective for target detection. In order to find an optimal projection index, an evolutionary algorithm is also developed. The hyperspectral image experiments show that the proposed PP method provides an effective means for subpixel target detection.
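A skewness/kurtosis projection index of the kind described can be sketched as follows; the particular combination skew² + kurt² is an illustrative choice, not necessarily the paper's index:

```python
import numpy as np

def projection_index(data, w):
    """Skewness^2 + excess-kurtosis^2 of the data projected onto w:
    near zero for a Gaussian projection, large when the projection
    exposes outliers (potential targets)."""
    y = data @ (w / np.linalg.norm(w))
    z = (y - y.mean()) / y.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0  # excess kurtosis, 0 for a Gaussian
    return skew ** 2 + kurt ** 2

rng = np.random.default_rng(1)
clean = rng.normal(size=(2000, 5))     # pure Gaussian background
contaminated = clean.copy()
contaminated[:5] += 8.0                # a few anomalous 'target' pixels
w = np.ones(5)                         # a direction that mixes in the anomalies
```

The index evaluated on the contaminated scene is far larger than on the clean one along the same direction, which is the signal projection pursuit maximizes.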
Automatic tracking of linear features on SPOT images using dynamic programming
Regis Bonnefon, Pierre Dherete, Jacky Desachy
Detection of geographic elements in images is important for adding new elements to geographic databases, which are sometimes old, so that some elements are not represented. Our goal is to look for linear features such as roads, rivers, or railways on SPOT images with a resolution of 10 meters. Several methods allow this detection to be realized, and they may be classified in three categories: (1) Detection operators: the best known is the Duda road operator, which determines the degree to which a pixel belongs to a linear feature using several 5 x 5 filters. Results are often unsatisfactory. There is also the Infinite Size Exponential Filter (ISEF), a derivative filter that allows edge, valley, or roof profiles to be found in the image. It can be used as additional information for other methods. (2) Structural tracking: from a starting point, an analysis in several directions is performed to determine the best next point (the features considered may be homogeneity of radiometry, contrast with the environment, ...). From this new point, and with an updated direction, the process goes on. The difficulty of these methods is the handling of occlusions (bridges, tunnels, dense vegetation, ...). (3) Dynamic programming: the F* algorithm and snakes are the best known. They allow a path with minimal cost to be found in a search window. Occlusions are not a problem, but two or more points near the searched linear feature must be known to define the window. The method described below is a mixture of structural tracking and dynamic programming (the F* algorithm).
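The dynamic-programming idea behind the F* algorithm can be illustrated with a minimal min-cost-path sketch (not the authors' implementation):

```python
import numpy as np

def min_cost_path(cost):
    """Cheapest top-to-bottom path through a cost image, each step
    moving to one of the 3 neighbors in the row below -- the core of
    F*-style linear-feature tracking in a search window."""
    h, w = cost.shape
    acc = cost.copy().astype(float)
    for i in range(1, h):  # accumulate minimal costs row by row
        for j in range(w):
            lo, hi = max(0, j - 1), min(w, j + 2)
            acc[i, j] += acc[i - 1, lo:hi].min()
    # Backtrack from the cheapest end point.
    path = [int(np.argmin(acc[-1]))]
    for i in range(h - 1, 0, -1):
        j = path[-1]
        lo = max(0, j - 1)
        path.append(lo + int(np.argmin(acc[i - 1, lo:min(w, j + 2)])))
    return path[::-1]

# A dark (low-cost) line at column 2 with a partial occlusion at row 2:
cost = np.full((5, 5), 9.0)
cost[:, 2] = 1.0
cost[2, 2] = 5.0  # occlusion is bridged because the global path stays cheapest
path = min_cost_path(cost)  # -> [2, 2, 2, 2, 2]
```

The occluded cell is crossed anyway, which is exactly why dynamic programming handles bridges and tunnels better than local tracking.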
Generalized constrained energy minimization approach to subpixel detection for multispectral imagery
JihMing Liu, ChunMu Wang, BinChang Chieu, et al.
Subpixel detection for multispectral imagery presents a challenging problem due to relatively low spectral resolution. This paper proposes a Generalized Constrained Energy Minimization (GCEM) approach to detecting objects in multispectral imagery at the subpixel level. GCEM is a combination of a dimensionality expansion (DE) approach, resulting from the generalized orthogonal subspace projection (GOSP) developed for multispectral image classification, and the CEM method developed for hyperspectral image classification. DE allows us to generate additional bands from the original multispectral images, while CEM is used for subpixel detection to extract objects embedded in multispectral images. CEM has been successfully applied to hyperspectral target detection and image classification. Its applicability to multispectral imagery has not been investigated. A potential limitation of CEM on multispectral imagery is the effectiveness of interference elimination, owing to the lack of sufficient dimensionality. DE is introduced to mitigate this problem. Experiments have shown that the proposed GCEM detects objects more effectively than CEM without dimensionality expansion and than GOSP.
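The CEM filter itself has the closed form w = R⁻¹d / (dᵀR⁻¹d), which passes the target signature d with unit gain while minimizing the average output energy over the scene. A minimal sketch on synthetic data (not the paper's experiments):

```python
import numpy as np

def cem_filter(pixels, d):
    """Constrained Energy Minimization filter: w = R^-1 d / (d^T R^-1 d),
    where R is the sample correlation matrix of the scene."""
    R = (pixels.T @ pixels) / pixels.shape[0]  # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d @ Rinv_d)

rng = np.random.default_rng(2)
pixels = rng.uniform(0.1, 0.5, size=(500, 6))  # 6-band synthetic scene
d = rng.uniform(0.1, 0.5, size=6)              # target signature
w = cem_filter(pixels, d)
```

By construction `d @ w` equals 1 (the unit-gain constraint); GCEM applies this same filter after the DE bands are appended.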
Image Classification
NDVI for sets of mixed pixels
In this paper, vegetation indices are defined to characterize the vegetation content of sets of pixels. Vegetation indices provide information about the presence or lack of vegetation on the ground, but do not provide knowledge of the class of vegetation. However, vegetation indices are widely used for the construction of vegetation maps. In this paper, several vegetation indices are introduced for sets of pixels, and their relationship with the fractional vegetation cover is examined with the help of simulated and real satellite data.
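For concreteness, the standard per-pixel NDVI on which such indices build is a one-line computation (the reflectance values below are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI = (NIR - red) / (NIR + red), in [-1, 1]; eps guards
    against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR and absorbs red:
veg = ndvi(0.45, 0.05)    # high NDVI, vegetated pixel
soil = ndvi(0.20, 0.18)   # near zero, bare soil / sparse cover
```

The paper's contribution is defining analogous indices for whole sets of mixed pixels rather than for single pixels.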
Independent component analysis for remote sensing study
Chi Hau Chen, Xiaohui Zhang
Recently there has been much interest in Independent Component Analysis (ICA) methods for source signal separation. ICA algorithms can be represented by a neural network architecture that decomposes a signal or image into components. The potential use of ICA in remote sensing studies is examined. For SAR imagery in particular, the use of ICA to enhance the images and to improve pixel classification is considered. It is shown that ICA-processed images generally have a lower contrast ratio (standard deviation to mean of an image), which implies a reduced speckle effect. The features extracted by using ICA are also quite effective for pixel classification. Five pattern classes are considered. Using the 9 original SAR images plus all 6 ATM images, the best overall percentage correct is 86.6%, which is the same as using 3 ICA and 6 ATM images. ICA is also shown to be better than PCA in classification on the same data set. Although the results presented are preliminary, ICA, through its de-mixing operations, is potentially a useful approach in remote sensing studies.
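The contrast ratio used here as a speckle indicator is simply the coefficient of variation; a minimal sketch with synthetic multiplicative speckle (illustrative, not the paper's SAR data):

```python
import numpy as np

def contrast_ratio(image):
    """Coefficient of variation (std / mean), a crude indicator of
    multiplicative speckle strength in SAR imagery."""
    image = np.asarray(image, dtype=float)
    return image.std() / image.mean()

rng = np.random.default_rng(3)
clean = np.full((64, 64), 100.0)
# Multiplicative gamma speckle (unit mean), the usual SAR intensity model:
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
noisy = clean + rng.normal(0.0, 1.0, clean.shape)  # mild additive noise only
```

The speckled image has a far higher contrast ratio than the mildly noisy one, so a drop in this ratio after ICA processing indicates speckle reduction.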
Feasibility study on the use of nonlinear spectral unmixing
S. Liangrocapart, Maria Petrou
The bidirectional reflectance model can be used to perform non-linear spectral unmixing of intimate mixtures. This paper investigates the properties of this model in terms of its stability to small errors in the measured variables.
Unsupervised retraining of a maximum-likelihood classifier for the analysis of multitemporal remote sensing images
Several applications of supervised classification of remote-sensing images involve the periodical mapping of a fixed set of land-cover classes over a specific geographical area. These applications require the availability of a training set (and hence of ground-truth information) for each new image analyzed. However, the collection of ground-truth information is a complex and expensive process that can be performed only in a few cases every time a new image is acquired. This represents a serious drawback of classical supervised classifiers. In order to overcome this drawback, an unsupervised retraining technique for supervised maximum-likelihood (ML) classifiers is proposed in this paper. The technique, which is based on the Expectation-Maximization (EM) algorithm, allows the statistical parameters of an already trained ML classifier to be updated so that a new image, for which a training set is not available, can be classified with acceptable accuracy. Experiments carried out on a multitemporal data set confirm the effectiveness of the proposed technique.
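The EM parameter update underlying such retraining can be sketched in one dimension as a generic Gaussian-mixture EM, starting from the old classifier's parameters (this is not the full classifier of the paper):

```python
import numpy as np

def em_retrain(x, means, variances, priors, iters=20):
    """EM sketch: update Gaussian class parameters on an unlabeled
    image so an already trained ML classifier tracks the new data."""
    means, variances, priors = map(np.array, (means, variances, priors))
    for _ in range(iters):
        # E-step: posterior probability of each class for each pixel.
        lik = np.stack([p * np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
                        for m, v, p in zip(means, variances, priors)])
        post = lik / lik.sum(axis=0)
        # M-step: re-estimate parameters from the soft assignments.
        nk = post.sum(axis=1)
        means = (post * x).sum(axis=1) / nk
        variances = (post * (x - means[:, None]) ** 2).sum(axis=1) / nk
        priors = nk / len(x)
    return means, variances, priors

rng = np.random.default_rng(4)
# 'New image': the two land-cover classes drifted from the old means 0 and 5.
x = np.concatenate([rng.normal(1.0, 0.5, 400), rng.normal(6.0, 0.5, 600)])
m, v, p = em_retrain(x, means=[0.0, 5.0], variances=[1.0, 1.0], priors=[0.5, 0.5])
```

Starting from the outdated parameters, EM converges near the new class means and the true class proportions without any new training samples.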
Neural Network and Symbolic Techniques
icon_mobile_dropdown
Neuro-fuzzy and soft computing in classification of remote sensing data
Jon Atli Benediktsson, Helgi Benediktsson
Hybrid intelligent systems are discussed. These systems combine neural networks, which recognize patterns and adapt themselves to cope with changing environments, and fuzzy inference systems, which incorporate human knowledge and perform inferencing and decision making. The integration of these complementary techniques, along with derivative-free optimization techniques based on genetic algorithms, results in a novel discipline called neuro-fuzzy and soft computing. These approaches will be discussed and applied in the classification of multisource remote sensing and geographic data. Both the rationale of the approaches and the results obtained by the methods will be compared to more traditional techniques.
Assessing the accuracy of soft thematic maps using fuzzy set-based error matrices
Elisabetta Binaghi, Pietro Alessandro Brivio, Pier Paolo Ghezzi, et al.
Within the soft classification context, the vagueness conveyed by the grades of membership in classes leads us to conceive classification statements as less exclusive than in conventional hard classification, and to compare them in the light of more relaxed, flexible conditions, which results in degrees of matching. This paper proposes a new evaluation method which uses fuzzy set theory to extend the applicability of the traditional error matrix method to the evaluation of soft classifiers. It is designed to cope with situations in which classification and/or reference data are expressed in multimembership form and the grades of membership represent different levels of approximation to intrinsically vague classes. To verify the applicability of the method, we conducted a remote sensing study on a highly complex real scene of the Venice lagoon (Italy). Alternative evaluation procedures, such as the traditional confusion matrix and standard errors of estimate, have been developed for this application in order to demonstrate the value and the advantages of the proposed measures as compared with other approaches.
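A fuzzy error matrix of this kind can be sketched as follows, using the fuzzy-intersection (min) operator as the degree of matching; the membership values below are hypothetical, not taken from the Venice lagoon study:

```python
import numpy as np

# Hypothetical soft outputs: rows = pixels, columns = classes;
# memberships lie in [0, 1] and need not be crisp.
classified = np.array([[0.8, 0.2, 0.0],
                       [0.1, 0.6, 0.3],
                       [0.0, 0.3, 0.7]])
reference  = np.array([[1.0, 0.0, 0.0],
                       [0.0, 0.7, 0.3],
                       [0.0, 0.2, 0.8]])

# Fuzzy error matrix: cell (i, j) accumulates over all pixels the
# degree of matching between reference class i and assigned class j,
# measured with the fuzzy-intersection (min) operator.
K = classified.shape[1]
M = np.zeros((K, K))
for r, c in zip(reference, classified):
    M += np.minimum(r[:, None], c[None, :])

# One possible normalization: matched membership on the diagonal over
# the total reference membership (other normalizations exist).
accuracy = np.trace(M) / reference.sum()
```

With hard (0/1) memberships, min reduces to exact agreement and M collapses to the traditional error matrix, which is the sense in which this construction extends it.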
Neural techniques for SAR intensity and coherence data classification
Palma N. Blonda, Giuseppe Satalino, Janusz Wasowski, et al.
In recent years it has been shown that the combined analysis of SAR intensity and interferometric correlation images is a valuable tool in classification tasks where traditional techniques, such as crisp thresholding schemes and classical maximum-likelihood classifiers, have been employed. In this work, developed in the framework of the ESA AO3-320 project 'Application of ERS data to landslide activity monitoring in southern Apennines, Italy,' our goal is to investigate: (1) the usefulness of SAR interferometric correlation information in mapping areas with diffuse erosional activity, including landslides; and (2) the effectiveness of soft computing techniques in the combined analysis of SAR intensity and interferometric correlation images. Two neural classifiers are selected from the literature. The first classifier is a one-stage error-driven Multilayer Perceptron (MLP), and the second is a Two-Stage Hybrid (TSH) learning system, consisting of an unsupervised data-driven first stage followed by a supervised error-driven second stage. The TSH unsupervised first stage is implemented as either: (1) the on-line learning, dynamic-sizing, dynamic-linking Fully Self-Organizing Simplified Adaptive Resonance Theory (FOSART) clustering model; (2) the batch-learning, static-sizing, no-linking Fuzzy Learning Vector Quantization (FLVQ) algorithm; or (3) the on-line learning, static-sizing, static-linking Self-Organizing Map (SOM). The input data set consists of three SAR ERS-1/ERS-2 tandem pair images depicting an area featuring slope instability phenomena in the Campanian Apennines of Southern Italy. From each tandem pair, four pixel-based features are extracted: the backscattering mean intensity, the interferometric coherence, the backscattering intensity texture, and the backscattering intensity change. Our classification task is focused on the discrimination of land cover types useful for hazard evaluation, i.e., evaluation of areas affected by erosion.
Classification results show that the erosion class can be discriminated from other land cover classes when SAR mean intensity images are combined with coherence and texture information. In addition, our results demonstrate that soft computing techniques provide useful tools for the combined analysis of SAR intensity and coherence images. In particular, the TSH classifier employing the FOSART clustering algorithm shows: (1) an overall accuracy comparable with that of the other classification schemes under testing; (2) a training cost significantly lower than that of MLP and lower than that of TSH employing either FLVQ or SOM as its first stage; and (3) a capability of discriminating the erosion class superior to that of the other classification schemes under testing.
Uncertainty management in neural classifiers of remotely sensed data
Elisabetta Binaghi, Paolo Madella, Ignazio Gallo, et al.
This paper presents a novel neural model based on back-propagation for fuzzy Dempster-Shafer (FDS) classifiers. The salient aspect of the approach is the integration, within a neuro-fuzzy system, of knowledge structures and inferences for evidential reasoning based on Dempster-Shafer theory. In this context the learning task may be formulated as the search for the most adequate 'ingredients' of the fuzzy and Dempster-Shafer frameworks, such as the fuzzy aggregation operators for fusing data from different sources, and the focal elements and basic probability assignments for describing the contributions of evidence in the inference scheme. The new neural model establishes a complete correspondence between connectionist elements and fuzzy and Dempster-Shafer ingredients, ensuring both a high level of interpretability and transparency and high performance in classification. A network-to-rule translation procedure is provided for extracting fuzzy Dempster-Shafer classification rules from the structure of the trained network. To evaluate its performance in real domains where conditions of lack of specificity in the data are prevalent, the proposed model has been applied to a multisource remote sensing classification problem. The numerical results are shown here and compared with those obtained by symbolic FDS and pure neuro-fuzzy classification procedures.
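The evidence-combination step at the heart of such classifiers can be sketched as follows: a generic implementation of Dempster's rule of combination over basic probability assignments, not the paper's neural formulation (class names and mass values are hypothetical):

```python
import itertools

def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.
    Focal elements are frozensets over the frame of discernment;
    conflicting mass is redistributed by normalization."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in itertools.product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources of evidence over the classes {urban, water}; the second
# assignment keeps some mass on the whole frame (ignorance).
m1 = {frozenset({'urban'}): 0.6, frozenset({'urban', 'water'}): 0.4}
m2 = {frozenset({'water'}): 0.5, frozenset({'urban', 'water'}): 0.5}
m = dempster_combine(m1, m2)
```

In an FDS classifier the masses themselves would come from fuzzy aggregation of multisource measurements; here they are fixed by hand to keep the sketch self-contained.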
Fuzzy classification of pixels using neural networks
Imaging spectrometers acquire images in many narrow spectral bands. Because of the limited spatial resolution, the measured spectrum of a pixel is often a composition of a number of basic spectra. The purpose of fuzzy classification is to determine the presence and abundance of the basic spectra in a measured spectrum. Previous work demonstrated that a neural network could perform fuzzy classification. In this paper we study a more realistic situation of 10 basic spectra using 12-band airborne data and of 16 basic spectra using 6-band LANDSAT data. Available for this study were images and sets of pixels which had been classified by inspection on the ground. For the LANDSAT case the set was not very pure and not very large; a method to purify and to expand data sets using image processing methods was therefore developed. Mixed-pixel training and testing sets were generated from each original and generated set using a linear mixture model, where a mixed pixel could have a contribution from up to three classes. For each of the training sets, a one-hidden-layer backpropagation neural network was trained to perform the fuzzy classification. Testing the networks showed that they performed up to 20% better than the AnaML method developed earlier, which is a combination of two classical methods.
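The mixed-pixel generation step can be sketched with a linear mixture model like the one described: up to three classes contribute, with fractions summing to one. The endmember spectra below are random stand-ins, not the study's ground-truth spectra.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pure-class spectra (endmembers): 4 classes x 6 bands
endmembers = rng.uniform(0.05, 0.9, size=(4, 6))

def make_mixed_pixel(endmembers, max_classes=3, rng=rng):
    """Draw one synthetic mixed pixel from a linear mixture model:
    up to `max_classes` classes with fractions summing to one.
    Returns (spectrum, fraction vector); the fraction vector is the
    fuzzy-classification target for network training."""
    n_classes = endmembers.shape[0]
    k = rng.integers(1, max_classes + 1)          # number of classes present
    chosen = rng.choice(n_classes, size=k, replace=False)
    fractions = rng.dirichlet(np.ones(k))         # random fractions, sum to 1
    target = np.zeros(n_classes)
    target[chosen] = fractions
    return target @ endmembers, target

spectrum, target = make_mixed_pixel(endmembers)
```

Repeating the draw builds training and testing sets in which each input spectrum is paired with its known abundance vector.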
Wavelets
icon_mobile_dropdown
Enhancing hyperspectral data throughput utilizing wavelet-based fingerprints
Lori Mann Bruce, Jiang Li
Multiresolution decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, we investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (1) the computational expense of the new method is compared with that of the current method, and (2) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
Tracking radar echoes by multiscale correlation: a nowcasting weather radar application
An algorithm for storm tracking through weather radar data is presented. It relies on the cross-correlation principle, as in TREC (Tracking Radar Echoes by Correlation) and derived algorithms. The basic idea is to subdivide the radar maps in Cartesian format into a grid of square boxes and to exploit the so-called local translation hypothesis. The motion vector is estimated as the space shift such that corresponding boxes at different times exhibit the maximum correlation coefficient. The discussed technique adopts a multiscale, multiresolution, partially overlapped box grid which adapts to the radar reflectivity pattern. Multiresolution decomposition is performed through 2D wavelet-based filtering. Correlation coefficients are calculated taking into account unreliable data (e.g., due to ground clutter or beam shielding) in order to avoid strong undesired motion estimation biases caused by such stationary features. Data are gathered with a C-band multipolarimetric Doppler weather radar. Results show that the technique overcomes some problems highlighted by researchers in previous related studies. Comparison with radial velocity maps shows good correlation values; although they may vary depending on the specific event and on the orographic complexity of the considered area, estimated motion fields are consistent with the shift of the pattern determined through simple visual inspection.
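The core TREC step, estimating a box's motion vector as the shift that maximizes the correlation coefficient between scans, can be sketched as follows (single box, synthetic reflectivity field, no multiscale grid or data masking):

```python
import numpy as np

def trec_motion(box_t0, map_t1, pos, max_shift):
    """Estimate one box's motion vector as the (dy, dx) shift that
    maximizes the correlation coefficient between the box at time t0
    and the correspondingly shifted box at time t1 (local translation
    hypothesis)."""
    h, w = box_t0.shape
    y0, x0 = pos
    best, best_shift = -2.0, (0, 0)
    a = box_t0 - box_t0.mean()
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = map_t1[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
            b = cand - cand.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom == 0:
                continue                      # flat candidate box
            r = (a * b).sum() / denom
            if r > best:
                best, best_shift = r, (dy, dx)
    return best_shift, best

# Synthetic reflectivity field advected by (2, 3) pixels between scans
rng = np.random.default_rng(7)
field = rng.random((40, 40))
map_t0 = field
map_t1 = np.roll(np.roll(field, 2, axis=0), 3, axis=1)

box = map_t0[10:18, 10:18]
shift, r = trec_motion(box, map_t1, (10, 10), max_shift=4)
```

The full algorithm repeats this over the adaptive, partially overlapped box grid and down-weights or excludes pixels flagged as clutter or shielded, which the sketch omits.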
Speckle reduction and enhancement of SAR images using multiwavelets and adaptive thresholding
Speckle reduction and enhancement of synthetic aperture radar (SAR) images with multiple wavelets (multiwavelets) are proposed and investigated. Multiwavelet transformations are useful for speckle reduction through their subband images: speckle reduction is obtained by thresholding the subband coefficients of the digitized SAR images. A nonlinear speckle reduction method based on adaptive sigmoid thresholding of the multiwavelet coefficients of logarithmically transformed SAR image data is investigated. The proposed methods show great promise for speckle removal and hence provide good detection performance for SAR-based recognition.
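A minimal sketch of the processing chain, assuming a plain single-level 2-D Haar transform in place of the paper's multiwavelets and a hypothetical sigmoid gate: log-transform to make speckle additive, threshold the detail subbands, invert, exponentiate.

```python
import numpy as np

def sigmoid_threshold(coeffs, T, slope=10.0):
    """Sigmoid-shaped shrinkage of detail coefficients: coefficients
    well below T (likely speckle) are suppressed, those well above it
    (likely structure) pass through nearly unchanged."""
    gate = 1.0 / (1.0 + np.exp(-slope * (np.abs(coeffs) - T)))
    return coeffs * gate

def haar2d(img):
    # One level of a 2-D Haar transform (stand-in for multiwavelets)
    a = (img[0::2] + img[1::2]) / 2
    d = (img[0::2] - img[1::2]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

rng = np.random.default_rng(3)
clean = np.outer(np.linspace(1, 4, 64), np.linspace(1, 4, 64))
sar = clean * rng.gamma(4.0, 1 / 4.0, clean.shape)   # multiplicative speckle

log_img = np.log(sar)                    # speckle becomes additive
ll, lh, hl, hh = haar2d(log_img)
T = np.median(np.abs(hh)) / 0.6745       # robust noise-level estimate
den = ihaar2d(ll, *(sigmoid_threshold(c, T) for c in (lh, hl, hh)))
despeckled = np.exp(den)                 # back to intensity domain
```

Only the detail subbands are gated; the approximation subband, which carries the scene structure, is kept intact.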
Wavelet and pyramid techniques for multisensor data fusion: a performance comparison varying with scale ratios
The goal of this paper is to provide a quantitative performance evaluation of multiresolution schemes capable of carrying out feature-based fusion of data collected by multispectral and panchromatic imaging sensors having different spectral and ground resolutions. To this aim, a set of quantitative parameters has recently been proposed. Both visual quality, regarded as contrast, presence of fine details, and absence of impairments and artifacts (e.g., blur, ringing), and spectral fidelity (i.e., preservation of spectral signatures) are considered and embodied in the measurements. Of the three methods compared, respectively based on highpass filtering (HPF), the wavelet transform (WT), and the generalized Laplacian pyramid (GLP), the latter two are far more efficient than the former, thus establishing the advantages for data fusion of a formally multiresolution analysis.
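One widely used spectral-fidelity measurement of the kind the paper relies on is the mean spectral angle between original and fused pixels; the sketch below uses synthetic images and is not the paper's exact parameter set:

```python
import numpy as np

def mean_spectral_angle(original, fused):
    """Mean spectral angle (radians) between corresponding pixel
    vectors of the original multispectral image and the fused product;
    0 means the spectral signatures are perfectly preserved."""
    num = (original * fused).sum(axis=-1)
    den = np.linalg.norm(original, axis=-1) * np.linalg.norm(fused, axis=-1)
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()

rng = np.random.default_rng(5)
ms = rng.uniform(0.1, 1.0, size=(32, 32, 4))      # stand-in MS image

# A pure radiometric scaling leaves spectral signatures unchanged;
# additive band-wise noise distorts them.
fused_good = ms * 1.05
fused_bad = ms + rng.normal(0.0, 0.2, ms.shape)

sam_good = mean_spectral_angle(ms, fused_good)
sam_bad = mean_spectral_angle(ms, fused_bad)
```

Because the spectral angle is invariant to per-pixel scaling, it isolates signature distortion from legitimate contrast enhancement, which is exactly the separation between visual quality and spectral fidelity the evaluation needs.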
Data Fusion
icon_mobile_dropdown
Advanced techniques for fusion of information in remote sensing: an overview
Maria Petrou, A. Stassopoulou
Traditional techniques for fusing information in remote sensing and related disciplines rely on the application of expert rules. These rules are often applied to data held in the layers of a GIS, which are spatially superimposed to yield conclusions based on the fulfillment of certain conditions. Modern techniques for the fusion of information try to take into consideration the uncertainty of each source of information. They are divided into distributed and centralized systems according to whether conclusions reached by different classifiers relying on different sources of information are combined, or all data from all available sources of information are used together by a single inference mechanism. In terms of the central inference mechanism used, these techniques fall into six categories, namely rule-based systems, fuzzy systems, Dempster-Shafer systems, Pearl's inference networks, other probabilistic approaches, and neural networks. All these approaches are discussed and compared.
Integration of GIS and remote sensing image analysis techniques
Paul C. Smits, Alessandro Annoni, Silvana G. Dellepiane
Classical image analysis techniques have proven to be powerful tools in various remote-sensing image interpretation problems. However, applied to large images their usefulness is limited, as the spatial complexity of the classes used in land-cover databases often exceeds the identification capability of the methods. Moreover, atmospheric and soil conditions introduce a substantial within-class variability. Land-cover/land-use databases can contain 40 or more different categories, which cannot all be derived directly from the image data. The robust integration of GIS and remote sensing image interpretation techniques is important, but is feasible only when both possibilities and limitations are considered. In this paper, the design and implementation of a tool for the updating of land-cover polygons by remote-sensing imagery are described. After a preliminary analysis of the neighboring polygons (i.e., background) around a polygon to update (i.e., object), the best feature is selected out of a set of more than 30 features based on its ability to separate object from background. This best feature is used in a successive image-labeling step. The labeling step adopted in this paper is based on a fuzzy intensity connectedness measure.
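The best-feature selection step can be sketched with Fisher's separability criterion, one plausible separability score (the paper does not specify this exact measure, and the feature values below are synthetic):

```python
import numpy as np

def best_feature(obj_pixels, bg_pixels):
    """Pick, out of a candidate feature set, the feature that best
    separates the polygon to update (object) from its neighboring
    polygons (background), scored with Fisher's criterion:
    between-class distance over within-class scatter."""
    mo, mb = obj_pixels.mean(axis=0), bg_pixels.mean(axis=0)
    vo, vb = obj_pixels.var(axis=0), bg_pixels.var(axis=0)
    score = (mo - mb) ** 2 / (vo + vb + 1e-12)
    return int(np.argmax(score)), score

rng = np.random.default_rng(11)
# Three hypothetical features; only feature 1 separates the classes
obj = np.column_stack([rng.normal(0.0, 1.0, 500),
                       rng.normal(5.0, 1.0, 500),
                       rng.normal(0.0, 1.0, 500)])
bg = np.column_stack([rng.normal(0.0, 1.0, 500),
                      rng.normal(0.0, 1.0, 500),
                      rng.normal(0.2, 1.0, 500)])
idx, score = best_feature(obj, bg)
```

In the tool described, the winning feature then drives the fuzzy connectedness-based labeling step; the scoring here simply formalizes "ability to separate object from background."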
Multisensor data fusion for automated scene interpretation
Olaf Hellwich, Christian Wiedemann
An approach to the combined extraction of linear as well as two-dimensional objects from multisensor data based on a feature- and object-level fusion of the results is proposed. The data sources are DAIS hyperspectral data, AES-1 SAR data, and high-resolution panchromatic digital orthoimages. Rural test areas consisting of a road network, agricultural fields, and small villages were investigated. The scene interpretation is based on a conceptual model consisting of a semantic net for each of the sensors and a semantic net of the real world objects. The sensor nets and the object net are combined into one network by means of a geometry and material level of network nodes. Road networks are extracted from the panchromatic orthoimage and from selected hyperspectral bands. Based on the knowledge that roads compose networks the extraction results are combined. Two-dimensional, i.e. areal, objects are extracted from hyperspectral data after a principal component transformation. The SAR data is segmented using image intensity and interferometric elevation. The classifications of the hyperspectral and SAR data are combined with the extracted road network using rule- and segment-based methods. In the outlook, comments are given on the trade-off between the improvement of the results using the new method and the increasing costs for data acquisition.
New cellular automata applications on multitemporal and spatial analysis of NOAA-AVHRR images
Jose Andres Moreno-Ruiz, Cesar Carmona Moreno, Manuel-Francisco Cruz-Martinez, et al.
Since Ulam and von Neumann conceived cellular automata (CA) in the 1940s, they have been applied to the study of general phenomenological aspects of the world as a way of understanding the behavior of complex systems. The basic idea has been not to try to describe a complex system from 'above,' but to simulate it through the interaction of cells following simple rules. In this paper we propose to use the cellular automata paradigm for remote sensing applications in multispectral, multitemporal, and spatial data analysis. In a first approach, we are designing two new applications, which are being implemented at the Space Applications Institute (SAI) in the Global Vegetation Monitoring (GVM) unit, in order to process daily NOAA-AVHRR data for detecting burnt surfaces and also land use/cover changes at the global scale. The results obtained (burn scar maps and land use/cover change maps) at the global scale from daily 8-km NOAA-AVHRR GAC images are presented.
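A toy illustration of the CA paradigm applied to burnt-surface mapping, with a hypothetical update rule of our own (not the SAI applications' actual rules): a cell is flagged burnt when its own spectral signal shows a drop and at least one of its eight neighbors is already flagged, so detections grow outward from high-confidence seeds while isolated noisy drops are ignored.

```python
import numpy as np

def ca_step(burnt, drop):
    """One cellular-automaton update on boolean grids: a cell becomes
    'burnt' when its own signal shows a drop AND at least one of its
    8 neighbors is already burnt (spatial consistency rule)."""
    padded = np.pad(burnt, 1)
    neigh = sum(padded[1 + dy:1 + dy + burnt.shape[0],
                       1 + dx:1 + dx + burnt.shape[1]]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    return burnt | (drop & (neigh > 0))

# Synthetic vegetation-index drop map: a 5x5 burnt patch, plus one
# high-confidence seed detection inside it
drop = np.zeros((12, 12), dtype=bool)
drop[3:8, 3:8] = True
burnt = np.zeros_like(drop)
burnt[5, 5] = True

for _ in range(6):
    burnt = ca_step(burnt, drop)
```

After a few iterations the flagged region has grown to cover exactly the patch of consistent signal drops, which is the "simple local rules produce the global map" behavior the CA paradigm trades on.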
Poster Session
icon_mobile_dropdown
Genetic algorithm for accomplishing feature extraction of hyperspectral data using texture information
An algorithm to project a high-dimensional space (hyperspectral space) onto one with few dimensions is studied, so that most of the information needed for an unsupervised classification is kept in the process. The algorithm consists of two parts. First, since experience shows that bands that are close in the spectrum carry redundant information, groups of adjacent bands are taken and a genetic algorithm is applied in order to obtain the best representative feature for each group, in the sense of maximizing the separability among clusters. The second part consists of applying the genetic algorithm again, but this time including context information in the process. The results are compared with those of the usual methods of feature selection and extraction.
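The first part, a GA choosing one representative band per group of adjacent bands so as to maximize cluster separability, can be sketched on synthetic data. The fitness function below (a between-cluster over within-cluster scatter ratio) and all GA settings are assumptions for illustration, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "hyperspectral" data: 20 bands in 4 adjacent groups of 5;
# within each group only the middle band separates the 3 clusters.
n_per, n_bands = 60, 20
groups = [range(g, g + 5) for g in range(0, n_bands, 5)]
labels = np.repeat([0, 1, 2], n_per)
X = rng.normal(0.0, 1.0, (labels.size, n_bands))
for g in groups:
    X[:, list(g)[2]] += labels * 3.0     # informative band of this group

def fitness(chromosome):
    """Separability of the selected bands: between-cluster scatter
    over within-cluster scatter (larger is better)."""
    sel = X[:, chromosome]
    overall = sel.mean(axis=0)
    between = within = 0.0
    for k in range(3):
        cls = sel[labels == k]
        between += ((cls.mean(axis=0) - overall) ** 2).sum()
        within += cls.var(axis=0).sum()
    return between / within

def ga(pop_size=30, n_gen=40):
    # One gene per group: which band of that group represents it
    pop = [np.array([rng.choice(list(g)) for g in groups])
           for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = np.array([fitness(c) for c in pop])
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]   # elitism
        children = []
        while len(children) < pop_size - len(parents):
            a = parents[rng.integers(len(parents))]
            b = parents[rng.integers(len(parents))]
            cut = rng.integers(1, len(groups))              # crossover
            child = np.concatenate([a[:cut], b[cut:]])
            if rng.random() < 0.3:                          # mutation
                i = rng.integers(len(groups))
                child[i] = rng.choice(list(groups[i]))
            children.append(child)
        pop = parents + children
    scores = np.array([fitness(c) for c in pop])
    return pop[int(np.argmax(scores))]

best = ga()
```

The second part of the paper's algorithm would rerun the same search with context (texture) information added to the fitness evaluation; the chromosome and GA machinery stay the same.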
An algorithm to project a high dimensional space (hyperspectral space) to one with few dimensions is studied, therefore most of the information for an unsupervised classification is kept in the process. The algorithm consists of two parts: first, since the experience shows that bands that are close in the spectrum have redundant information, groups of adjacent bands are taken and a genetic algorithm is applied in order to obtain the best representative feature for each group, in the sense of maximizing the separability among clusters. The second part consists in applying the genetic algorithm again, but this time context information is included in the process. The results are compared with the usual methods of feature selection and extraction.