Proceedings Volume 2315

Image and Signal Processing for Remote Sensing

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 30 December 1994
Contents: 17 Sessions, 80 Papers, 0 Presentations
Conference: Satellite Remote Sensing 1994
Volume Number: 2315

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Texture
  • Neural Networks
  • Stereoscopic Image Analysis
  • Image Segmentation
  • Fuzzy Techniques
  • Multisource Techniques
  • Knowledge-Based Image Interpretation
  • Image Segmentation
  • Multisensor Techniques
  • Multitemporal Techniques and Change Detection
  • Data Fusion
  • Statistical Pattern Recognition Techniques I
  • Object Recognition I
  • Statistical Pattern Recognition Techniques II
  • Object Recognition II
  • SAR Image Enhancement
  • Image Enhancement
  • Registration Techniques
Texture
Neural networks and cloud classification
Patrick Walder, Iain MacLaren, Carol Reid
The development of an efficient and accurate automated cloud classification method for use on satellite images will be of great benefit to operational meteorology and climate studies. We have examined the possible use of neural networks as a classification tool for spectral and textural data extracted from Meteosat images. A large number of back-propagation neural network configurations were run and many were found to be highly effective, outperforming more traditional statistical classifiers. A Kohonen-type competitive learning network was also tried, but was found to be considerably less successful on this data set. Some suggestions are made for future development based on the experience gained in this project.
Multisource SAR image texture classification using an artificial neural network model
Philippe Mainguenaud, Robert Jeansoulin
SAR image classification differs fundamentally from optical image classification: the information comes from the electromagnetic properties of the soil. We characterise each kind of soil by its texture, a notion that depends on the noise present in the pixel information. The rules that distinguish the different kinds of soil are not known in advance, so we use a neural network to build them from example data. After several trials, we selected a variogram encoding to describe the local fluctuations of pixel values within an analysis window. A single source of information has its limits, and another source must be found to separate the closest texture classes. We study different processing architectures to adapt the methodology to the difficulties encountered in separating classes. Image filtering strongly changes the textural information found in each class, and we expect to increase the number of information sources by applying different kinds of filters to the raw image. We analyse the raw image (full resolution) with 5x5 and 11x11 analysis windows, and the resolution-degraded image (by a factor of 2) with a 5x5 analysis window. To improve the results, we develop noise-reducing filters and construct decorrelated data. We search for the best network architecture and choose a hierarchical organisation of learning. Finally, we test the approach on the whole image.
Texture classification using principal-component analysis techniques
Xiaoou Tang, William Kenneth Stewart
We use a traditional principal component analysis approach, i.e. the Karhunen-Loeve transform (KLT), to evaluate texture features in three feature spaces. The first is the spatial space, with feature vectors formed by raster-scanning the rows of the texture image into long vectors. The second is a transformation of the image, such as the DFT. The basis of the third feature space is formed by traditional feature vectors, whose components are the feature values extracted from commonly used algorithms such as the spatial gray level dependence method (SGLDM), the gray level run length method (GLRLM), and the power spectral method (PSM). We apply the algorithms to sidescan sonar image classification and give a performance comparison of the three approaches.
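As an illustration of the first feature space described above, the following minimal sketch raster-scans small texture patches and projects them onto a KLT (principal-component) basis; the patch size, component count, and random data are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def klt_features(patches, n_components=8):
    """KLT of raster-scanned texture patches.

    patches: array of shape (n_patches, h, w); each patch is raster-scanned
    into a long vector (the 'spatial space' above).  Returns projections
    onto the leading eigenvectors of the sample covariance matrix.
    """
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=0)                       # remove the mean patch
    cov = np.cov(X, rowvar=False)             # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)          # eigen-decomposition (ascending)
    basis = vecs[:, ::-1][:, :n_components]   # leading KLT basis vectors
    return X @ basis                          # KLT feature vectors

# Hypothetical usage: 16x16 patches cut from a sonar image.
patches = np.random.rand(200, 16, 16)
print(klt_features(patches).shape)            # (200, 8)
```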
Neural Networks
Ship-traffic control by means of neural networks applied to radar image sequences
Alessandro Mecocci, Giuliano Benelli, Andrea Garzelli
In this paper an Automatic Target Recognition (ATR) system for ship-traffic control in the access area of a seaport is presented. The system employs digital image processing techniques applied to X-band real aperture radar images. Problems due to target signature variability, aspect-angle dependency, and noise are considered. The prow orientation is also estimated, which provides useful information for drift-angle computation and automatic collision avoidance. First the radar sequence is segmented to locate the ships, then each contour is analysed to compute prow orientations. The same processing is repeated for all the images in the sequence and the resulting data are linked together to give the trajectory of each ship. Supervised neural networks have been used to obtain robust segmentation and accurate ship-heading location. An adaptive version of the alpha-beta filter gives accurate trajectory estimates. To validate the system, a simulator has been used to produce image sequences of ships of known dimensions, positions, and headings. The errors introduced by the processing system remain below the uncertainty of the sensor, and the prow orientation is always recovered with negligible error in the image plane, showing an extremely precise behaviour of the prow-detection algorithm.
Feature extraction and pattern classification for remotely sensed data analysis by a modular neural system
Palma N. Blonda, Vincenza la Forgia, Guido Pasquariello, et al.
In this paper a modular neural network architecture is proposed for the classification of remotely sensed data. The learning task of the supervised multilayer perceptron (MLP) classifier has been made more efficient by pre-processing the input with an unsupervised feature-discovery neural module. Two classification experiments have been carried out to cope with two situations that are common in real remote sensing applications: the availability of complex data, such as high-dimensional and multisource data, and, conversely, an imperfect low-dimensional data set with a limited number of samples. In the first experiment, on a multitemporal data set, the Linear Propagation Network (LPN) has been introduced to evaluate the effectiveness of a neural data compression stage before classification. In the second experiment, on a poor data set, the Kohonen Self-Organising Feature Map (SOM) network has been introduced for clustering data before labelling. The paper also illustrates the criterion for selecting an optimal number of cluster centres to be used as the number of nodes in the output SOM layer. The results of the two experiments confirm that modular learning performs better than non-modular learning in both quality and speed.
Dynamic learning neural network with optimal multipolarization approach to classification of terrain covers from polarimetric SAR data
Kun Shan Chen, W. L. Kao, A. Faouzi
A learning algorithm for neural networks is proposed in this paper for the classification of fully polarimetric SAR imagery. Based on a polynomial basis-function expansion, the multilayer perceptron network was modified such that the functional form at the output layer is linearized while the hidden layers remain nonlinear. The weighting functions in each layer are cascaded to form a long vector through which the outputs and inputs are related. This modification allows us to apply the dynamic Kalman filtering technique to adjust the network weights in the sense of recursive minimum least-squares error. The new network has features such as very fast learning and built-in optimization of the weighting function. The fast learning rate stems from the fact that the weight updating does not follow the back-propagation scheme, which usually takes a lengthy time to finish the learning process. The efficiency of the proposed network in classifying polarimetric SAR imagery is illustrated. For comparison, the commonly used backpropagation (BP) network and a recently developed fast-learning (FL) network were also tested. In particular, the optimal polarizations for best discriminating the terrain covers from polarimetric data were found with the Kalman-filtered neural network, where the only necessary inputs are the linearly polarized channel data (hh, vv, vh or hv). Excellent performance of the proposed algorithm is obtained in terms of learning speed and classification accuracy.
Stereoscopic Image Analysis
Using stereo matching and perceptual grouping to detect buildings in aerial images
Tuan Dang, Henri Maitre, Olivier Jamet, et al.
We present an approach for the detection of buildings in aerial images using binocular and monocular information. A cooperative method is developed in which both stereo matching and perceptual grouping techniques are used to ensure a reliable detection of building structures in a contour map. Indeed, when geometric grouping is used alone, a hard combinatorial search problem must be solved. We therefore propose to use the disparity map obtained by stereo matching to filter out irrelevant contours. This approach allows us to reduce the search space for the geometric grouping process, which uses the remaining contours to detect buildings according to the principles of perceptual organization. In this work, we only consider the class of buildings which can be modeled as a combination of rectangular structures. The detected buildings are then fed back to the stereo process to generate a new disparity map in which building forms are better preserved. In our stereo matching algorithm, we use a dynamic programming technique to estimate the disparity at each point before computing it using both parametric and non-parametric correlation. This strategy allows us to speed up the correlation process and reduce the risk of false matching.
Correlation algorithm with adaptive window for aerial image in stereo vision
Jean-Luc Lotti, Gerard Giraudon
Binocular stereo vision processes estimate 3D surfaces using a pair of images taken from different points of view. 3D surface characteristics are estimated by matching 2D image areas or features corresponding to the projections of the same 3D points. The most classic area-based methods use cross-correlation with a fixed window size, but this technique presents a major drawback: the computation of depth is generally prone to errors close to surface discontinuities. In this paper, we present our current work on aerial stereo images of urban areas. A correlation-based algorithm, with an adaptive window size constrained by an edge map extracted from the images, is presented. This technique is not used to refine an initial disparity map, but to create one with well-located discontinuities. The algorithm follows three steps: first, window sizes are computed for each pixel; then a disparity map is created; finally, map completion is performed and a final dense disparity map with subpixel precision is produced by Kanade correction. Experimental results on real aerial images are presented.
Image Segmentation
Finding thresholds for image segmentation
Theo E. Schouten, Maurice S. Klein Gebbinck, Ron P.H.M. Schoenmakers, et al.
Segmentation methods for images often have cost functions which evaluate the (dis)similarity between pixels or segments. Thresholds on cost values are then used to decide whether or not to grow, join or split segments. The results for a given image critically depend on the selection of the threshold values. In remote sensing, too low a threshold will split up regions of constant ground cover and too high a threshold will join adjacent regions of different ground cover. Optimal thresholds can be determined using different classes of methods: generating cost-value distributions from the original image; obtaining statistical distributions from segmented images; or comparing a 'true' segmentation with the results of segmentation over a range of thresholds. A so-called 'true' segmentation can be derived from human expert segmentations or from maps obtained by ground surveys or segmentation of higher-resolution images. Artificial images can also be generated, with the advantage that the segmentation is known to sub-pixel level. Several methods for threshold determination are described for a hybrid segmentation method developed by us. Measures are described for the comparison of two segmentations. Results are evaluated using several (parts of) LANDSAT images and artificially generated images.
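A minimal sketch of the third class of methods mentioned above: segmenting at a range of candidate thresholds and keeping the one that agrees best with a reference ('true') segmentation. The agreement measure and the `segment(image, t)` callback are illustrative assumptions, not the measures defined in the paper.

```python
import numpy as np

def segmentation_agreement(seg, ref):
    """Fraction of 4-neighbour pixel pairs on which the two label images
    agree about 'same segment' vs 'different segment'."""
    same_seg_h, same_ref_h = seg[:, :-1] == seg[:, 1:], ref[:, :-1] == ref[:, 1:]
    same_seg_v, same_ref_v = seg[:-1, :] == seg[1:, :], ref[:-1, :] == ref[1:, :]
    agree = (same_seg_h == same_ref_h).sum() + (same_seg_v == same_ref_v).sum()
    return agree / (same_ref_h.size + same_ref_v.size)

def best_threshold(image, reference, segment, thresholds):
    """Run the segmenter for each candidate threshold and keep the value
    whose result agrees best with the reference segmentation."""
    return max(thresholds,
               key=lambda t: segmentation_agreement(segment(image, t), reference))
```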
Integrating map knowledge in satellite image analysis
Shan Yu, Gerard Giraudon, Marc Berthod
With the rapid scientific and technical advances in remote sensing, digital image processing has become an important tool for the quantitative and statistical analysis of remotely sensed images. These images most often contain complex natural scenes. Robust interpretation of such images requires the use of different sources of information about the scenes under consideration. This paper presents our work on analyzing remotely sensed images by integrating multispectral data, map knowledge and contextual information. An overview of the approach is first described. Then the use of map knowledge to improve the effectiveness and robustness of urban area detection is explained in more detail.
Results of a hybrid segmentation method
Ron P.H.M. Schoenmakers, Graeme G. Wilkinson, Theo E. Schouten
A hybrid segmentation method has been developed integrating two segmentation methods, edge detection and region growing, in order to overcome the weaknesses of either method. The segmentation method involves the following steps: (i) filtering, (ii) edge detection and following, (iii) edge fragment linking, and (iv) region growing. In (ii) edge detection is carried out; the resulting edge magnitude values are thresholded, and a thinning operation is performed on the thresholded values in order to create one-pixel-thick edges. In (iii) the resulting edge fragments are linked together where possible by detecting one-pixel-wide gaps between edge fragments. By connecting the edge fragments, closed polygons are formed, dividing the image into a set of sub-images; edge fragments not belonging to a closed polygon are pruned. In (iv) region growing is carried out within every polygon, and regions are not allowed to grow outside the polygons. The region-growing method used is best merge, which in each merging scan over the image merges the pair with the lowest cost value. Context rules are defined for merging the remaining isolated pixels. Results of the segmentation method are shown for the classification of a non-segmented Landsat-TM scene and its segmented counterpart by an artificial neural network. Moreover, the use of the segmentation for filtering SAR imagery is indicated.
Fuzzy Techniques
Fuzzy integrals as a generalized class of order filters
Michel Grabisch
We introduce a new, large class of filters similar to order filters, namely fuzzy integrals. Fuzzy integrals are a generalization of the Lebesgue integral, and are defined with respect to a nonadditive measure. Fuzzy integrals can also be viewed as function-processing filters, and appear to include a large class of known filters. We show that fuzzy integrals include as particular cases all linear filters and all order filters, and thus rank filters, the median, erosion, and dilation, whose noise-filtering properties are well known. In a second part, we give more insight into properties related to morphological filters, in an attempt to generalize results known for rank filters. Considering a gray-level image as a fuzzy set, we extend the usual definition of dual filters with respect to complementation to any function-processing filter. Based on this, we show that the dual of a fuzzy integral filter is the fuzzy integral filter with respect to the dual measure.
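A minimal sketch of the idea that fuzzy integrals contain both linear and order filters as particular cases, using the Choquet integral with a symmetric fuzzy measure over a filter window; the choice of measure functions is an illustrative assumption.

```python
import numpy as np

def choquet_symmetric(window, g):
    """Choquet integral of the window values w.r.t. the symmetric fuzzy
    measure mu(A) = g(|A| / n).  g(t) = t gives the arithmetic mean
    (a linear filter); a 0/1 step function gives an order statistic
    such as the median (an order filter)."""
    x = np.sort(np.asarray(window, dtype=float))      # x_(1) <= ... <= x_(n)
    n = len(x)
    mu = np.array([g((n - i) / n) for i in range(n + 1)])
    return float(np.dot(x, mu[:n] - mu[1:]))          # sum x_(i) [mu(A_i) - mu(A_{i+1})]

w = [3, 7, 2, 9, 5, 5, 1, 8, 4]                       # a 3x3 neighbourhood, flattened
mean_filter   = choquet_symmetric(w, lambda t: t)                           # 4.888...
median_filter = choquet_symmetric(w, lambda t: 1.0 if t >= 5 / 9 else 0.0)  # 5.0
```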
Fuzzy segmentation and structural knowledge for satellite image interpretation
Laurent Wendling, Mustapha Zehana
In this paper we present a segmentation method using fuzzy set theory applied to remote sensing image interpretation. We have developed a new fuzzy segmentation system in order to take into account complex spatial knowledge (elongated shapes, compact areas, features based on surfaces, perimeters, ...) involving topological attributes, as well as the relative position of the searched areas, in certainty-factor images. A C.F. image represents the degree to which each pixel belongs to a given class and is assumed to have been obtained by a previous classification (involving simple contextual knowledge). To improve this previous classification, we introduce structural rules which allow us to work with region characteristics. These structural characteristics are obtained by using a fuzzy segmentation technique. The proposed system uses a random-set representation (convex combination of sets) for each fuzzy region (a weighted set of connected pixels) automatically extracted from a C.F. image. The main interest of this method is to split a C.F. image into fuzzy regions. A fuzzy region is defined by a set of concentric crisp regions, and for each of them topological attributes are computed to provide the value of the final attribute of the fuzzy region.
Multisource Techniques
Information combination operators for data fusion: a comparative review with classification
Isabelle Bloch
In most data fusion systems, the information extracted from each image or sensor (either numerical or symbolic) is represented as a degree of belief in an event with real values, generally in [0,1], thus taking into account the imprecise, uncertain and incomplete nature of the information. The combination of such degrees of belief is performed through numerical fusion operators. A very large variety of such operators has been proposed in the literature. We propose a classification of these operators, drawn from different data fusion theories (probabilities, fuzzy sets, possibilities, Dempster-Shafer, etc.), with respect to their behaviour. Three classes are thus defined: context-independent constant-behaviour operators, which have the same behaviour (compromise, disjunction or conjunction) whatever the values to be combined; context-independent variable-behaviour operators, whose behaviour depends on the combined values; and context-dependent operators, where the result also depends on some global knowledge such as conflict or reliability of the sources. This classification provides a guide for choosing an operator in a given problem. This choice can then be refined from the desired properties of the operators, from their decisiveness, and by examining how they deal with conflicting situations. These aspects are illustrated on simple examples in multispectral satellite imaging. In particular, we stress the interest of the third class for classification problems.
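A minimal sketch of the three behaviour classes described above, for two belief degrees in [0, 1]; the particular operators, and especially the conflict measure used in the context-dependent example, are illustrative assumptions rather than the operators reviewed in the paper.

```python
def conjunctive(a, b):
    """Severe behaviour: both sources must support the event."""
    return min(a, b)

def disjunctive(a, b):
    """Indulgent behaviour: one supporting source is enough."""
    return max(a, b)

def compromise(a, b):
    """Cautious behaviour: trade-off between the two degrees."""
    return 0.5 * (a + b)

def context_dependent(a, b):
    """Adaptive behaviour: conjunctive when the sources agree, drifting
    towards disjunctive as the conflict between them grows."""
    conflict = abs(a - b)                  # crude conflict measure in [0, 1]
    return (1 - conflict) * min(a, b) + conflict * max(a, b)
```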
Crop yield prediction using a CMAC neural network
George Simpson
This paper presents the results of a short study to investigate the use of a fast cerebellar model articulation controller (CMAC) neural network for crop yield prediction. It goes on to explore the possibility of combining crop classification and yield prediction into a single network component, suitable for large-scale crop management. In the first part of the work, a small feasibility study of crop classification performance was carried out in two steps. First, prediction performance was evaluated using only monthly agro-met data (soil moisture, temperature, sunshine). Then the improvement in prediction performance after incorporating remote sensing data (Landsat TM) was measured. The standard error was 5% when TM data were included, versus 6% when TM was ignored. The CMAC neural network applied in this study has previously been successfully applied in two similar domains: real-time cloud classification on Meteosat data and mineral identification with airborne visible and infrared imaging spectrometer (AVIRIS) data. Two features peculiar to this classifier are that it handles mixtures naturally and that it is capable of returning a `don't know' response. It is natural therefore to consider the possibility of performing crop classification and yield prediction in a single step. We find that it is feasible to perform weekly combined classification and yield predictions for all of Europe, on a 1 km grid, for 100 different crops, using a cluster of five workstations.
Multisources approach for satellite image interpretation
Ludovic Roux
In this paper, we present a multi-source information-fusion method for satellite image classification. The main characteristics of this method are the use of possibility theory to handle the uncertainty connected with pixel classification, and the ability to mix numeric sources (the satellite image spectral bands) and symbolic sources (expert knowledge about the best localisation of classes and out-of-image data, for example). Moreover, this information-fusion method is computationally inexpensive and has linear complexity. First we introduce briefly the possibility theory and the conjunctive fusion method used here. Then we apply this fusion method to a satellite image classification problem. The classes are defined by their spectral response on the one hand, and by the description of their best geographical context on the other hand. We compute the possibility distribution for the numeric sources and for the symbolic sources. Finally, the fusion handles the possibility measures coming from the numeric sources and from the symbolic sources.
Classification of multispectral imagery using wavelet transform and dynamic learning neural network
H. C. Chen, Yu-Chang Tzeng
A recently developed dynamic learning neural network (DL) has been successfully applied to multispectral imagery classification and parameter inversion. For multispectral imagery classification, it is noise and edges, such as streets in urban areas and ridges in mountain areas, that result in misclassified or unclassified pixels and reduce the classification rate. From the image-spectrum point of view, noise and edges are the high-frequency components of an image. Therefore, edge detection and noise reduction can be performed by extracting the high-frequency parts of an image in order to improve the classification rate. Although both noise and edges are high-frequency components, edges carry useful information while noise should be removed; thus edges and noise must be separated when the high-frequency parts are extracted. Conventional edge detection or noise reduction methods cannot distinguish edges from noise. A new approach, the wavelet transform, is selected to fulfil this requirement. The edge detection and noise reduction pre-processing using the wavelet transform and the image classification using the dynamic learning neural network are presented in this paper. The experimental results indicate that this approach does improve the classification rate.
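A minimal sketch of separating the high-frequency content of an image with a one-level Haar wavelet decomposition, then keeping strong detail coefficients (edges) while suppressing weak ones (noise). The Haar basis and the magnitude-threshold rule are illustrative assumptions, not the wavelet or separation criterion used in the paper.

```python
import numpy as np

def haar_split(img):
    """One-level 2D Haar decomposition (image sides assumed even).
    Returns the low-pass approximation and the three high-frequency
    detail sub-bands, where edges and noise concentrate."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0     # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0     # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def keep_edges(details, t):
    """Detail coefficients above a magnitude threshold are treated as
    edges and kept; the rest are treated as noise and zeroed."""
    return [np.where(np.abs(d) > t, d, 0.0) for d in details]
```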
Knowledge-Based Image Interpretation
Neural classification guided by background knowledge
Jerzy J. Korczak, Denis Blamont, F. Hammadi
The problem discussed in this paper concerns the elaboration, in a very complex landscape, of a cartographic map, using remote sensing data and partial ground-truth knowledge. Maps are created by a neural classification process, regarded as a sequence of dependent self-organizing phases. Background knowledge is proposed to guide the classification process. The aim is to explore how background knowledge can be integrated into a neural network classifier and support the classification process. Class descriptions obtained by the method are substantially better than those obtained by the classical backpropagation algorithm. The resulting maps are at least as good as the maps generated by classical supervised algorithms.
Theory of knowledge-based image analysis with applications to SAR data of agriculture
Nanno J. Mulder
The requirements for a theory of image analysis imply predictability of RS image measurements. RS images are predicted from a combined model of objects in three dimensions with samples taken at short time intervals. Image analysis is the inverse of image synthesis or image prediction. Inversion of the model of image synthesis requires additional knowledge about objects, processes and sensing. The role of knowledge is mainly to constrain the search effort in a problem space of hypotheses and parameters. The method of image analysis reported here is hypothesis-driven, in contrast to data-driven methods of image interpretation, image processing or data fusion. In reaction to a failed search for suitable GIS theories and structures, an alternative is reported to the classical integration of 2.5-dimensional GIS and RS with data-driven image processing. The required theories and structures are taken from the domain of physical modelling. Knowledge about 3D objects and about processes is represented in physical models which may have a probabilistic component. Given a model for the sensors, the atmosphere, and radiation-matter interaction, and a set of hypotheses and parameters about objects and their state, hypotheses are evaluated and parameters are estimated. Hypothesis-based analysis means the comparison of hypotheses in the model domain with evidence coming from the RS measurement domain or feature domain. A specific problem addressed here is the estimation of geometric parameters of objects in microwave images. The treatment of prior probabilities appears to be critical. The relationship between the statistics of the radiometric and geometric parameter estimators was investigated and results are reported. After the introduction of the basic concepts of geometric and radiometric parameter estimation, a case of agricultural land-use classification is given. The case introduces the problem of converting classical vector data to parameterised geometric decision functions.
Image Segmentation
Refining region estimates for post-processing image classification
Paul L. Rosin
This paper describes a method for post-processing classified images to enable generalisation to be performed whilst maintaining or improving the accuracy of region boundaries. This is achieved by performing region growing, and incorporates both spatial context and spectral information. In contrast, few classifiers use any spatial context, and many post-processing techniques, such as iterative majority filtering, discard all spectral information. If class models are available these can also be included in the region growing process; otherwise, the algorithm operates in a data-driven mode, and locally estimates models for each region.
Thematic image segmentation by a concept formation algorithm
Jerzy J. Korczak, Denis Blamont, Alain Ketterlin
Unsupervised empirical machine learning algorithms aim at discovering useful concepts in a stream of unclassified data. Since image segmentation is a particular instance of the problem addressed by these methods, one of these algorithms has been employed to automatically segment remote-sensing images. The region under study is the Nepalese Himalayas. Because of large variations in altitude, the effects of lighting conditions are multiplied, and the image becomes a very complex object. The behavior of the clustering algorithm is studied on such data. Because of the hierarchical organization of the resulting classes, the segmentation produced may be interpreted in a variety of thematic mappings, depending on the desired level of detail. Experimental results show the influence of lighting conditions, but also demonstrate very good accuracy on sectors of the image where the lighting is almost homogeneous.
Error analysis for finding Deriche's optimum filters and 3D range images segmentation
Mourad Djebali, Mahmound Melkemi, D. Vandorpe
The analysis of three-dimensional (3D) scenes from range images requires a robust and efficient methodology to recover exact and useful information. Different approaches to surface segmentation in range data have been proposed; the most interesting are the segmentation methods based on function approximation and local neighborhood properties such as curvatures. Our scheme classifies each range pixel, by the signs of the Gaussian (K) and mean (H) curvatures, into three fundamental surface types: convex, concave, and planar. Since the derivation of H and K involves the calculation of partial derivatives of the image, the KH-map is susceptible to noise. Therefore, we propose the use of Deriche's optimum filters, which give more precise results. The performance of these optimum filters depends strongly on the choice of the value of the parameter α. Using a matrix representation of Deriche's recursive systems, we also propose a theoretical error analysis which allows the determination of the best range of α values.
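A minimal sketch of the KH-sign labelling described above, classifying each range pixel into planar, convex or concave from the signs of the mean and Gaussian curvature maps; the tolerance `eps` and the sign convention (which depends on the surface-normal orientation) are illustrative assumptions.

```python
import numpy as np

def kh_surface_type(K, H, eps=1e-6):
    """Label each range pixel from the Gaussian (K) and mean (H) curvatures.
    Returns 0 = planar, 1 = convex, 2 = concave."""
    label = np.zeros(K.shape, dtype=np.uint8)
    planar = (np.abs(K) < eps) & (np.abs(H) < eps)
    label[~planar & (H < 0)] = 1    # convex under this normal convention
    label[~planar & (H > 0)] = 2    # concave
    return label
```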
Improvement of 1-look SAR image segmentations with mathematical morphology
Alejandro C. Frery, Ana Lucia Bezerra Candeias
Synthetic aperture radar (SAR) images are an important source of information. This kind of imaging is little affected by adverse atmospheric conditions, such as rain, clouds and fog, since it operates at frequencies other than the visible. Also, since the sensor is active and carries its own source of illumination, it can operate by night. The problem that arises with the use of this technology is a signal-dependent noise, called speckle. This kind of noise is common to all imaging devices that use coherent illumination, such as laser and microwaves. One of the most useful techniques for image analysis is segmentation. Using statistical modelling, two multiclass segmentation techniques for 1-look and linear-detection SAR images are derived: maximum likelihood and Iterated Conditional Modes (ICM), both assuming multiplicative Rayleigh models for the data. Although ICM segmentation yields significantly better results than maximum likelihood segmentation, the 1-look linear-detection case is noisy enough to deserve some improvement. Mathematical morphology, a nonlinear approach to signal processing, is then used as a refinement technique in order to extract information.
Multisensor Techniques
Geologic mapping of the Hekla volcano (Iceland) using integrated data sets from optic and radar sensors
Tobias Wever, Gerhard Loercher
During the MAC-Europe campaign in June/July 1991 different airborne data sets (AIRSAR, TMS and AVIRIS) were collected over Iceland. One test site is situated around the Hekla volcano in South Iceland. This area is characterised by a sequence of lava flows of different ages together with tuffs and ashes. This case study is intended to demonstrate the potential of MAC-Europe data for geological mapping. The optical and SAR data were analysed separately to establish the strengths of the different sensors. An approach was then developed to produce an image representing the advantages of the respective sensors in a single presentation. The synergetic approach clearly improves the separation of geological units by combining two completely different data sets, thanks to the use of spectral bands in the visible and infrared region on the one hand and in the microwave region on the other. Besides the petrographic information extracted from the optical data using spectral signatures, the combination includes physical information such as the roughness and dielectric properties of a target. The geologic setting of the test area is characterised by a very uniform petrography, hence the spectral signatures show only small variations. For this reason, the differentiation of geological units using optical data alone is limited. The additional use of SAR data adds the new dimension of surface roughness, which clearly improves the discrimination. This additional parameter provides new information about the state of weathering, age and sequence of the different lava flows. The NASA/JPL AIRSAR system is very suitable for this kind of investigation owing to its multifrequency and polarimetric capabilities. The three SAR frequencies (C-, L- and P-band) enable the detection of a broad range of roughness differences. These results can be enhanced by exploiting the full scattering matrix of the polarimetric AIRSAR data.
Three-dimensional signal analysis of remotely sensed data
The paper presents a method for the simultaneous analysis of a collection of satellite images derived from different sources. The images are multispectral, multisensor, multitemporal and synthetically generated images. All these images must have the same dimension and the same resolution, and they must refer to the same geographical area. The images are organized in a parallel structure that forms a 3-D block of data. We analyze this 3-D block of data using the 3-D sliding window Fourier transform (SWFT) applied on volumes of size 8 x 8 x 8. The reasons for using this strategy are: (1) the SWFT is a technique which leads to good results in 1-D signal processing, such as speech signals; (2) measurements of the receptive fields of simple cells in the visual cortex have shown them to be like Gaussian-modulated sinusoids; (3) the transform along the third dimension performs the fusion of the different types of data included in the original multimodal image. After the computation of the 3-D transformed images we use a clustering procedure in order to reduce the dimensionality of the transformed data. To achieve great flexibility in the selection of the significant images, a slightly modified k-means algorithm was used.
Multitemporal Techniques and Change Detection
Determining uncertainties and their propagation in classified remotely sensed image-based dynamic change detection
Wenzhong Shi, Manfred Ehlers
This paper provides an approach to determine uncertainties and their propagation in remotely sensed image-based dynamic change detection. In this approach, the uncertainties of the image classified with the maximum likelihood method are first determined for each date; the probability vectors generated during maximum likelihood classification are used as the uncertainty indicators. The second problem is to determine uncertainty propagation when multiple images are compared to detect changes of land cover; the problem is defined by formulating it in mathematical language to facilitate the subsequent analyses. Two techniques are used to determine the propagation of uncertainties in the comparison of two classified images: one is based on the product rule in probability theory and the other on the certainty factor (CF) model with a probabilistic interpretation. The third problem is to represent the uncertainties so as to communicate them to the users. Two forms of results are presented in the paper: (a) statistics tables and (b) 3D plus colour figures.
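A minimal sketch of the two propagation rules named above, applied to the per-date confidence of a single pixel; the product rule assumes independent classifications, and the CF combination shown is the standard MYCIN-style rule, used here only as a stand-in for the paper's own CF formulation.

```python
def product_rule(p1, p2):
    """Probability that the pixel is correctly labelled on both dates,
    assuming the two classifications are independent."""
    return p1 * p2

def combine_cf(cf1, cf2):
    """Standard certainty-factor combination of two factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 - cf1 * cf2
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 + cf1 * cf2
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

print(product_rule(0.9, 0.8))   # 0.72
print(combine_cf(0.9, 0.8))     # 0.98
```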
Method for change analysis with weight of significance using multitemporal multispectral images
Hiroshi Hanaizumi, Shinji Chino, Sadao Fujimura
A new method is proposed for change analysis with weight of significance between two multi-temporal multi-spectral images. This method gives us areas which indicate the assigned temporal change, for example, from vegetation to bare soil. Image data are projected onto a feature space in which the assigned change is emphasized, and temporal changes between two images are detected with suppression of irrelevant changes. The validity of the method is confirmed by numerical simulation. The method is successfully applied to actual multi-temporal and multi-spectral images.
Parcel-based change detection
Paul L. Rosin
Various methods for automatic change detection in multi-temporal LANDSAT-TM images are described. In contrast to most previous work in change detection, which has operated at a pixel level, we operate at a parcel level (with a minimum size of 25 ha). This makes it easier to employ structural measures (e.g. based on edges, corners, and texture) as well as correlation methods, since these approaches cannot be calculated at each pixel independently. A neural network is trained to combine the different change measures in an appropriate manner.
Updating cartographic models by Spot images interpretation
Marie-Lise Duplaquet, Eliane Cubero-Castan
This paper presents the first results of a study devoted to the updating of cartographic models with multispectral Spot images. With the current proliferation of GIS, this updating problem will be of prime importance in the coming years. We first present the general issue and the tested application, which is limited to surface entities, a small number of model classes, and registered images. The proposed methodology is inspired by previous research on image interpretation: the first step consists of segmenting the multispectral image into regions, and the second step is a classification of these regions. We can improve the results of these two automatic steps by using the cartographic model, which remains mostly correct. The last step of the methodology is a probabilistic analysis of the changes between the model and the interpreted new image. The most probable changes are then proposed to the photo-interpreter as updating candidates. We show results for each step and describe possible improvements. The tests were done with extracts of the IGN BD-Carto, i.e. cartographic models with a few land-use classes: water, forest, urban... A validation procedure is currently being undertaken by photo-interpreters, and their remarks will orient our future work.
Data Fusion
Urban aerial image understanding using symbolic data
Henri Moissinac, Henri Maitre, Isabelle Bloch
An image interpretation method using symbolic data is presented. This method is adapted to urban scene analysis of aerial images, where the use of a priori knowledge is very helpful since the landscape is quite complex. A structure based on graphs is introduced to manage the knowledge learned about the observed scene. This structure reflects the hierarchical levels among the available data. The key point of this structure is the management of uncertainty. For the image interpretation stage, since such images can be too complicated for a single algorithm, we introduce a method for combining several algorithms that work competitively toward the same goal; each one provides a different solution to the question studied. Then, based on conventional data fusion decision methods, the most reliable interpretation is selected for each object, taking advantage of the most efficient algorithm for the data. This approach is tested on an urban aerial image where several algorithms for road network extraction are combined with the help of symbolic data provided by a simple geographic map.
Classification of multisource imagery based on a Markov random field model
In this paper, a general model for multisource classification of remotely sensed data based on Markov random fields (MRF) is proposed. A specific model for fusion of optical images, synthetic aperture radar (SAR) images, and GIS (geographic information systems) ground cover data is presented in detail and tested. The MRF model exploits spatial class dependency context between neighboring pixels in an image, and temporal class dependency context between the different images. The performance of the specific model is investigated by fusing Landsat TM images, multitemporal ERS-1 SAR images, and GIS ground-cover maps for land-use classification. The MRF model performs significantly better than a simpler reference fusion model it is compared to.
Land cover mapping using combined Landsat TM imagery and textural features from ERS-1 synthetic aperture radar imagery
Ioannis Kanellopoulos, Graeme G. Wilkinson, Claudio Chiuderi
Texture features computed from unfiltered ERS-1 SAR imagery have been used as additional features alongside Landsat TM radiances to map Mediterranean land cover. The texture features were normalized to reduce the impact of speckle noise. The classification procedure was carried out with a multilayer perceptron neural network. The results show that the addition of contrast, angular second moment, entropy, and inverse difference moment features from SAR, in addition to TM channels, can give overall accuracy improvement in land cover classification of 2 - 3%. While overall this is not very significant, for particular classes the use of texture leads to greater improvements in accuracy which could be useful in mapping applications. The results of the use of the SAR texture measures were compared using a number of different accuracy measures derived from individual confusion matrices.
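A minimal sketch of the four co-occurrence texture features named above (contrast, angular second moment, entropy, inverse difference moment), computed from a grey-level co-occurrence matrix; the quantisation level and displacement are illustrative assumptions, and the normalisation used in the paper is not reproduced.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=32):
    """Grey-level co-occurrence matrix for one displacement (dx, dy),
    after quantising the image to `levels` grey levels."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)    # accumulate co-occurrences
    return P / P.sum()

def texture_features(P):
    """Contrast, ASM, entropy and IDM of a normalised co-occurrence matrix."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)
    asm = np.sum(P ** 2)                                  # angular second moment
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    idm = np.sum(P / (1.0 + (i - j) ** 2))                # inverse difference moment
    return contrast, asm, entropy, idm
```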
Analysis and enhancement of multitemporal SAR data
Jerome Bruniquel, Armand Lopes
SAR images are subject to degradation by speckle. One possible approach to reducing speckle consists of using the fading diversity of several channels. The stability and repeat cycle of satellites now make it possible to obtain temporal series of SAR images, which can then be used to improve radiometric resolution. This study presents several algorithms allowing us to filter SLC or multi-look data. The temporal diversity can also be used during the creation of interferograms with an MMSE vectorial filter. All these algorithms were tested on ERS-1 SAR images, and numerical results and filtered images are also presented in this paper.
Kinematic modeling of scanner trajectories
Satellites are free-moving rigid bodies subject to various external forces which make them deviate from their predetermined positional and rotational trajectories. Since many remote sensing imaging devices use the linear pushbroom scanning model, trajectory deviation during the image scanning period causes geometric distortion in the imagery. Unless actual satellite trajectory during imaging is modeled, accurate rectification of imagery is impossible. A means of recovering the trajectory from known satellite motion is presented here. Rotational motion is usually sensed by gyroscopes which measure angular velocity. Translational motion can be determined in several ways including telemetry analysis and linear accelerometers. In more recent satellites GPS receivers may be used to determine motion data. We show how to interpolate and subsequently integrate angular velocity to yield a rotational trajectory. The screw, implemented as a dual-number quaternion, is shown to be a suitable parameterization of motion to model the trajectory as a kinematic chain. This representation is useful for image geometry analysis and hence for correction of image distortion. Applications of this parameterization to scanned image resampling and rectification are mentioned.
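As a loose illustration of recovering a rotational trajectory from sensed angular velocity, as mentioned in this abstract, the sketch below integrates gyro samples into a sequence of unit quaternions; the paper itself models the full rigid-body motion with a screw (dual-number quaternion), which this rotational-only example does not reproduce.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def integrate_attitude(omega_samples, dt, q0=(1.0, 0.0, 0.0, 0.0)):
    """First-order integration of body-frame angular velocity (rad/s)
    into a rotational trajectory of unit quaternions, one per epoch."""
    q = np.array(q0, dtype=float)
    trajectory = [q.copy()]
    for w in omega_samples:                       # w = (wx, wy, wz)
        q = q + 0.5 * quat_mul(q, np.array([0.0, *w])) * dt
        q /= np.linalg.norm(q)                    # renormalise to unit length
        trajectory.append(q.copy())
    return np.array(trajectory)
```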
Concurrent computation of Zernike coefficients used in a phase diversity algorithm for optical aberration correction
This paper describes a method to compute the optical transfer function, in terms of Zernike polynomials, one coefficient at a time using a neural network and gradient descent. Neural networks, which are a class of self-tutored non-linear transfer functions, are shown to be appropriate for this problem as a closed-form solution does not exist. A neural network provides an approximation to the optical transfer function computed from examples using gradient descent methods. The orthogonality of the Zernike polynomials allows image wavefront aberrations to be described as an orthonormal set of coefficients. Atmospheric and system distortion of astronomical observations can introduce an unknown phase error into the observed image. This phase distortion can be described by a set of coefficients of the Zernike polynomials. This orthogonality is shown to contribute to the simplicity of the neural network method of computation. Two paradigms are used to determine the coefficient description of the wavefront error to provide to a compensation system. The first uses a phase-diverse image as input to a feedforward backpropagation network for generation of a single coefficient. The second method requires the transfer function to be computed in the Fourier domain. Architecture requirements are investigated and reported together with a saliency determination of each input to the network, to optimize computation and system requirements.
Implementation and experiences of a nationwide automatic satellite image registration system
Mikael Holm, Eija Parmes, Arto Vuorela
A system for automatic ground control point measurement and rectification of satellite images to a nationwide reference database has been developed at VTT. The method is based on feature-based matching. The reference database consists of about two hundred thousand features covering the whole of Finland. The features are islands and lakes extracted from the nationwide Land Use Classification, produced from Landsat TM images by the National Land Survey of Finland. Lakes and islands are extracted from the satellite image to be rectified, and their attributes are compared to those in the reference database. Using feature-based matching and robust estimation, a few hundred ground control points of subpixel accuracy are selected for the rectification. Images of different resolution can be measured automatically using this system. It has been tested with SPOT, Landsat TM and NOAA AVHRR imagery. The search for control points takes only a few minutes per satellite image. The accuracy of the result has proved to be at least as good as when measuring the control points manually. The method is analysed by means of the parallaxes between the reference features and the rectified images.
Neural network processing of FMCW Doppler radar
Satoshi Fujii
Ranges and velocities of targets can be obtained at the same time by using a frequency modulated continuous wave (FMCW) Doppler radar. The two-dimensional Fourier transform is conventionally used for the two-dimensional (range and Doppler) spectral analysis of the received signal. The range resolution is determined by the bandwidth over which the FMCW signal is swept, and the Doppler resolution is roughly equal to the inverse of the coherent integration time (CIT). In this paper, we propose a spectral analysis method for FMCW Doppler radar using a Hopfield neural network, which can yield high-resolution spectra. In this method, the spectral analysis is reformulated as the minimisation of the difference between the covariance matrices calculated from the observed data and their theoretical values. This minimisation problem is mapped onto the energy function of the Hopfield neural network, and the network is then allowed to converge to the minimum of that energy function. The high-resolution spectra result from the non-linear function of the neurons and the global connectivity of the network. The performance of this method is evaluated by computer simulations and experimental results. In Doppler resolution especially, the proposed method is demonstrated to be at least three times better than the conventional two-dimensional Fourier transform method. This high-resolution result contributes to shortening the CIT and can improve the time resolution of the FMCW Doppler radar system.
Chip-type optoelectronic processor for synthetic aperture radar system image formation
Nickolay N. Evtikhiev, Vadim A. Dolgy, Serguei A. Shestak, et al.
The concept of a multichannel correlator as the picture-forming device of a SAR system, developed in our previous work, has been realised in the form of a sandwich-type optoelectronic chip containing three layers of 2D optically connected optoelectronic devices: an LED array with 64 strip-shaped elements, a fixed or adjustable mask, and a 288x232-pixel CCD photomatrix, i.e. a 64-channel, 288-step correlator. Each channel of the processor forms one range-resolution channel, thus making it possible to obtain at the output of the device one line of the synthesised SAR picture. In order to increase the number of range channels, several chips connected in an appropriate architecture should be used. The SAR picture processor architecture and experimental results are discussed.
Computation accuracy of optoelectronic array image processor for SAR system
Nickolay N. Evtikhiev, Sergey B. Odinokov, Alex V. Petrov
An optoelectronic array image processor based on an optical matrix-vector multiplier (OMVM) can be used for real-time processing of large-format SAR images. To analyse its practical performance, a mathematical model of the signal transformation in the OMVM was developed which takes into account the effect of different disturbing factors (errors and noise of the OMVM elements, threshold discrimination, ADC non-linearity, and the point spread function of the OMVM optics). The theoretical results obtained agree closely with recent experimental data.
Detecting nonlinearities in microwave return from the sea surface
Karsten Heia, Torbjorn Eltoft
In this paper we discuss the properties of two different methods for extracting information about non-linear mechanisms influencing the backscatter of microwaves from the ocean surface. In the first of these methods, denoted the Multi Frequency Technique (MFT), several frequencies, distributed in a narrow band around a carrier frequency, are transmitted simultaneously, and the non-linearities are detected as secondary peaks in their mutual cross-product spectrum. This technique has been extensively discussed in earlier papers as a proper method for extracting sea surface information (Alpers, W., and K. Hasselmann, 1978). The second method uses only the transmitted frequency, and the non-linear effects are detected as secondary (or over-harmonic) frequency peaks in the bispectrum of the backscattered signal. This method has been successfully applied to studies of non-linear wave-wave interactions in experimental plasma physics, but has to the authors' knowledge not been used in studies of microwave scattering from the sea surface. We refer to this method as the Bispectral Analysis Technique (BAT). In order to correctly interpret the different signatures observed in microwave remote sensing of the ocean surface, it is important to fully understand how various physical phenomena influence the backscattered signal. The data used in this work are generated by a numerical simulator. Based on the Holliday scattering model and a theoretical description of the power spectrum of the surface elevation, we are able to study in detail how various physical and geometrical conditions influence the backscattered signal. Specifically, we address the problem of detecting non-linear features induced by non-linear hydrodynamical phenomena (Stokes-type gravity waves) or non-linear modulation mechanisms (tilt and hydrodynamic modulation), using the MFT and BAT. We show in this paper that both methods are capable of detecting non-linear features, but that the performance depends heavily on the size of the footprint relative to the wavelength of the waves in question. The MFT can only be used when the size of the footprint is much greater than the wavelength of the waves, while the BAT applies to the opposite case.
Use of morphological filters for SAR data filtering
Marie-Catherine Mouchot, Eric Pelet, Thomas Alfoldi
The identification and classification of targets in synthetic aperture radar images has been difficult due to the presence of multiplicative noise from the coherent energy source. This manifests itself in the image as the familiar 'speckle'. A variety of solutions to this problem have been proposed by others, all with different degrees of success. In many cases, users have resorted to the simple and partly effective solution of using a median filter. Mathematical morphology is a technique which is finding some selective uses in the realm of remote sensing image analysis, especially outside of France (where it was first introduced by Matheron). We have previously investigated its applicability to define and detect target 'shapes' in satellite images. Our current work evaluates the success of using various morphological filters to reduce or remove speckle-based noise in SAR images. A number of target types on land (forested area) and water (open ocean and ice pack) were tested; they all present specific features that have to be preserved after filtering. In this work, we report on the results as compared to more conventional filtering approaches, evaluated with respect to the criteria of noise reduction and feature preservation.
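A minimal sketch of a morphological speckle filter of the kind evaluated above (a grey-scale opening followed by a closing with a small flat structuring element), alongside the median filter used as a baseline; the structuring-element size and the opening-closing composition are illustrative assumptions, not the specific filters reported in the paper.

```python
import numpy as np
from scipy import ndimage

def morphological_despeckle(img, size=3):
    """Opening suppresses bright speckle spikes, closing fills dark ones."""
    opened = ndimage.grey_opening(img, size=(size, size))
    return ndimage.grey_closing(opened, size=(size, size))

def median_despeckle(img, size=3):
    """Conventional median filter, for comparison."""
    return ndimage.median_filter(img, size=size)
```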
SAR segmentation by a two-scale contextual classifier
Markku Similae
In this paper the SAR image segmentation problem, which is here identified with the classification task, is discussed in the following situation: the intensity level gives only vague information about the ground-truth class, although it is not totally uninformative. We assume, however, that it is possible to extract meaningful textural features from the image; for example, it is often natural to assume that the ground-truth classes have a different dependence structure, which in turn implies that one meaningful feature is the autocorrelation. The segmentation problem is formulated as a posterior distribution maximization. Under these conditions the informative value of the intensity is low, so we restrict the configuration space over which the maximization takes place by conditioning on the textural features. The textural features segment the image crudely; this segmentation is called the larger-scale classification. In many cases it is possible to refine this segmentation at a smaller, pixelwise scale using the intensity values. The uncertainty relating to the larger-scale segmentation is passed to the pixelwise classifier as a distribution over the classes. This distribution is then combined with the spatial prior. The final step in the segmentation is the use of the ICM algorithm of Besag to achieve the desired classification map. This approach to segmentation is motivated by the problems confronted in sea ice/open water classification from ERS-1 SAR images; hence all the examples are from this application.
Estimation of sea surface velocities from space using neural networks
Stephane Cote, A. R. L. Tatnall
An automatic technique for estimating sea surface velocities is presented. The technique is based on pattern matching of features from successive satellite images of a common region. The patterns are matched in parallel using a Hopfield neural network. A cost function is defined to represent the constraints of the matching problem, and mapped onto a Hopfield net for minimisation. This method makes it possible to track deformable objects and to recursively match parts of these objects; it therefore gives very precise information on the object's movement and deformation. The method has been tested on Meteosat visible images. Displacement vectors are obtained by tracking clouds on successive images. The method is shown to be faster than cross-correlation based methods, and to give denser and more precise displacement vectors along cloud edges. An extension of the method to include the estimation of sea-surface velocities from sea surface temperature images is described. Future developments for automatic cloud tracking and sea-surface velocity estimation are outlined in the conclusion.
Land use classification of ERS-1 images with an artificial neural network
Sebastian Carl, Roland Kraft
Agricultural crop monitoring by remote sensing techniques benefits from the availability of weather-independent data such as ERS-1.SAR.PRI scenes. Traditional signature-based agricultural land-use classification methods can only be performed on heavily filtered radar data because of the apparent speckle noise. However, the speckle contains textural information on the illuminated area. Thus a processing pipeline for a texture-based classification approach to unfiltered ERS-1.SAR.PRI data was developed using two different kinds of neural networks. A Kohonen map is used to visualize the textural features and to define proper training areas for a subsequent supervised classification with a backpropagation net. The pipeline was applied to mono- and multitemporal ERS-1 scenes of a site near Prenzlau in Brandenburg, Germany. First results are encouraging and demonstrate the possibility of discriminating several land-use types. The reclassification error of the training samples was less than 5%. The overall classification results correspond quite well to the ground truth data. The main advantage of the processing pipeline is that both signature and texture features of the unfiltered image are used to distinguish between different classes.
Statistical Pattern Recognition Techniques I
Classification of remote sensing images with the aid of Gibbs distribution
D. Zhang, Luc J. Van Gool, Andre J. Oosterlinck
In a maximum likelihood classification (MLC) of a satellite image it is explicitly assumed that the spectral properties of one pixel are independent of the properties of all other pixels. As a result, the MLC is unable to distinguish pixels which come from different land-cover classes but have the same spectral properties, and the result is usually a snow-like map. On the other hand, remote sensing data often appear in the form of distinct parcels, and all the pixels in a specific parcel are assumed to come from a single land-cover class. There must therefore exist some spatial continuity between adjacent pixels. This property is of great importance and should be taken into account in the process of land-cover classification. This paper proposes to make use of the theory of Markov random fields (MRF) and the Gibbs distribution to impose this spatial continuity, either by combining the Gibbs distribution with the conventional multinormal distribution in a joint model or by using the Gibbs distribution separately as a postprocessing procedure applied to the MLC. While the two models may result in different classifications, experiments show that very significant improvement can be achieved with at least one of them.
Filtering remote sensing data in the spatial and feature domains
Freddy Fierens, Paul L. Rosin
We present a comparative study of the effects of applying pre-processing and post-processing to remote sensing data both in the spatial image domain and the feature domain. We use a neural network for classification since it is not biased by a priori assumptions about the distributions of the spectral values of the classes. Spatial smoothing was applied both as pre- and post-processing steps. Pre-processing involved smoothing the image spectral values by means of anisotropic diffusion, whereas iterative majority filtering was applied as a post-processing step to improve spatial coherence by reclassifying pixels. While it is common practice to filter the image before classification (smoothing) or after classification (iterative majority filtering), it is less obvious what happens if pre-processing is applied to the training or image data in feature space. To minimize the effect of noisy training pixels we applied a k-nearest neighbor filtering algorithm to the training data. This involved reclassifying each training pixel by the majority class of the set of k closest training pixels (in terms of Euclidean distance) in feature space. The procedure eliminates isolated training pixels and tends to produce more compact class clusters. The effects of all spatial and spectral filtering methods were validated by applying them to three different test cases.
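A minimal sketch of the k-nearest-neighbour relabelling of training pixels described above: each training sample is reassigned to the majority class of its k nearest neighbours in feature space (Euclidean distance). Variable names are illustrative and labels are assumed to be non-negative integers.

```python
import numpy as np

def knn_filter_training(X, y, k=5):
    """X: (N, B) feature vectors of training pixels; y: (N,) integer class labels."""
    N = X.shape[0]
    y_new = y.copy()
    for i in range(N):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the pixel itself
        nn = np.argsort(d)[:k]             # indices of the k closest samples
        votes = np.bincount(y[nn])
        y_new[i] = np.argmax(votes)        # majority class of the neighbours
    return y_new
```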
Object Recognition II
Definition of multisource prior probabilities for maximum likelihood classification of remotely sensed data
Fabio Maselli, Claudio Conese, A. Rodolfi, et al.
This paper presents an application of a methodology for the probabilistic integration of ancillary information into maximum likelihood classifications of remotely sensed data. The methodology is based on the definition of modified prior probabilities from the spectral and ancillary data sets avoiding most of the problems connected with the common uses of priors. A case study was considered concerning two rugged areas in Central Italy covered by 11 main land-use categories. Bitemporal Landsat TM scenes and the three information layers of a Digital Elevation Model (elevation, slope, aspect) were used as spectral and ancillary data. The results show that the integration of the ancillary information was fundamental for the discrimination of some classes which were practically indistinguishable only on the basis of the spectral data. The possible utilisation of the procedure within Land Information Systems is also discussed.
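The following sketch illustrates the general idea of maximum likelihood classification with per-pixel prior probabilities derived from ancillary layers (e.g. a DEM), in the spirit described above; the class statistics and the prior field are assumed given, and all names are hypothetical rather than the authors' implementation.

```python
import numpy as np

def mlc_with_priors(pixels, means, covs, priors):
    """pixels: (N, B) spectral vectors; means: (K, B); covs: (K, B, B);
    priors: (N, K) per-pixel prior probabilities from the ancillary data."""
    N, B = pixels.shape
    K = means.shape[0]
    g = np.empty((N, K))
    for k in range(K):
        inv = np.linalg.inv(covs[k])
        _, logdet = np.linalg.slogdet(covs[k])
        d = pixels - means[k]
        # Gaussian log-likelihood (up to a constant) plus the log prior.
        g[:, k] = (-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
                   - 0.5 * logdet + np.log(priors[:, k] + 1e-12))
    return np.argmax(g, axis=1)
```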
Statistical Pattern Recognition Techniques I
Studying the behavior of neural and statistical classifiers by interaction in feature space
Freddy Fierens, Graeme G. Wilkinson, Ioannis Kanellopoulos
Unsupervised as well as supervised classification of multi-spectral remote sensing data can be done with statistical as well as neural network classifiers. Since classifiers are often approached as black boxes, it is not clear why one particular classifier performs better for a certain problem than another. In order to gain some insight into the actual training and classification processes, we implemented a software tool to study these processes in n-dimensional feature space. This tool allows visualization of the data points in feature space, of the individually classified clusters, and of the decision boundaries of the classifier. Image sequences are used to visualize higher-dimensional feature spaces as well as dynamic processes such as the training of neural networks or the effect of this training on an image to be classified. The visualization approach was further extended by allowing interaction with the decision boundaries. Feedback of this interaction is provided by a direct link between the decision boundaries and the classified land-use image. Pushing or pulling a decision boundary is directly reflected by changes to the corresponding classified image. Finally, we give an example of a combined classification scheme where visualization is used to validate the approach.
Mixed pixel classification in remote sensing
P. Bosdogianni, Maria Petrou, Josef Kittler
In this paper we present a novel method for mixed pixel classification where the classification of groups of pixels is achieved taking into consideration the higher order moments of the distributions of the pure and the mixed classes. The method is demonstrated using simulated data and is also applied to real Landsat TM data for which ground data are available.
Object Recognition I
Road detection from aerial images: a cooperation between local and global methods
Sylvain Airault, Renaud Ruskone, Olivier Jamet
We present in this paper a road detection method based on the cooperation between a road-following algorithm and a global method consisting of a whole-image segmentation followed by a characterization of regions (using shape and texture criteria). The paper gives an overview of the approach, a description of the techniques used, and a presentation of the means by which the two methods are made to cooperate. Both methods are based on much the same road model (roads are elongated objects with parallel edges and a homogeneous texture) and both try to extract the same kind of roads: the aim of the approach is to improve detection reliability over a large part of the network rather than to achieve exhaustive extraction.
Application of computational geometry to the analysis of directional wave spectra as measured by HF radar
Frederick E. Isaac, Lucy R. Wyatt
Directional ocean wave spectra are extremely important to oceanographers in that they provide full quantitative information on the wave systems present on the ocean surface. The use of HF radar as a remote sensing tool for the direct measurement of directional ocean spectra has, in recent years, been shown to be extremely effective and reliable. It possesses a number of characteristics that are important for remote sensing, namely excellent offshore range (typically 150 - 200 km) and excellent frequency, spatial, and temporal resolution. It is the frequency resolution that is furnishing scientists with a quality of directional ocean spectra that is unrivalled. In analysing the measured spectra, it is useful to consider the ocean surface as being made up of a number of combined wave systems, for example a mixed wind and swell wave system. These systems manifest themselves as separate modes in the directional spectra and may be analysed separately in order to parameterize them individually. Work at Sheffield has concentrated on how this analysis may be performed automatically. This involves the segmentation of the spectra into the individual modes followed by parameter extraction. Results are presented showing how computational geometry, in particular the Voronoi diagram, and recent results from mathematical morphology are being employed to provide a system capable of performing such analysis.
Model-based approach to automatically locating tree crowns in high spatial resolution images
Richard J. Pollock
Individual tree crowns were automatically located in aerial electro-optical sensor images of a forested scene in Ontario, Canada. The images have pixels with a ground dimension of 36 cm. A model of the appearance of individual tree crowns in a single channel of the image data was used to generate a set of templates. The model has parameters for size, shape, and projected one-sided leaf area density distribution, which are evaluated according to the expected range of values for the scene. Knowledge of the scene illumination and sensing geometry is also incorporated into the model. Tree crown locations were derived from the locations of plausible linear relationships between the templates and the sensed images, as determined with a weighted least-squares straight line regression analysis. The weighting was used to accommodate crown margin irregularity. The procedure successfully located 57% of the trees in a 548 member ground-surveyed sample representing a wide range of species, sizes, and growing situations. Success was defined by a one-to-one spatial relationship between computed crown extents and reference locations. The success rate increased to 73% when only 224 trees that had almost no physical contact with their neighbors were considered. The commission error rate was estimated to be 23%.
Statistical Pattern Recognition Techniques II
SAR imagery classification: the fractal approach
Antonio Iodice, Maurizio Migliaccio, Daniele Riccio
The operational and applicative contribution of Synthetic Aperture Radar (SAR) imagery depends heavily on the ability to extract desired features. In this paper we consider this problem from a new perspective based on the fractal approach. The fractal approach to SAR imagery classification seems to be a very promising tool but still needs further clarification. Different procedures are discussed and compared here, showing the limits and potential of this intriguing mathematical tool.
Geometric parameter estimation for agriculture fields
Liu Lu, Fang Luo, Nanno J. Mulder
Geometric parameter estimation is a common task in remote sensing image processing. Quite often it is necessary to determine the area and location of a certain kind of crop in a remotely sensed image. If we take a parcel of a uniform crop type in a remote sensing image as an object, the task is to determine the orientation, location, and scale of the object. In this paper, we propose a model-based method for parameter estimation in which the radiometric distribution obtained by radar simulation is used as a global feature of an object to characterize its spectral properties. A cost function is defined as a quantitative evaluation of a hypothesis of the object parameters in terms of feature fit; the minimum cost corresponds to the best object parameters, with the fewest misclassified pixels. The feature matching is completed through cost minimization. Experiments show that this method is quite efficient, especially in cases of poor signal-to-noise ratio.
Network of modified 1-NN and fuzzy k-NN classifiers in application to remote sensing image recognition
Adam Jozwik, Sebastiano Bruno Serpico, Fabio Roli
A parallel network of modified 1-NN classifiers and fuzzy k-NN classifiers is proposed. All the component classifiers decide between two classes only. They operate as follows. For each class i a certain area Ai is constructed. If the classified point lies outside every area Ai, the classification is refused. When it belongs to only one of the areas Ai, the classification is performed by the 1-NN rule. Points that lie in an overlapping region of several areas Ai are classified by the fuzzy k-NN rule with hard (non-fuzzy) output. Two feature selection sessions are recommended: one to minimise the size of the overlapping areas, another to minimise the error rate of the fuzzy k-NN rule. The aim of this work is to create a classifier that is nearly as fast as the 1-NN rule and whose performance is as good as that of the fuzzy k-NN rule. The effectiveness of the proposed approach was verified on a real data set containing 5 classes, 15 features and 2440 objects.
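A sketch of the decision logic described above. How the per-class areas Ai are actually constructed is not specified here, so as a placeholder each Ai is taken to be the set of points within a radius of that class's training samples; this choice, the radii, and the distance-weighted hard output of the fuzzy k-NN step are all assumptions for illustration.

```python
import numpy as np

def classify(x, X, y, radii, k=5):
    """x: (B,) feature vector; X, y: training data; radii: per-class radius (dict or array)."""
    classes = np.unique(y)
    d = np.linalg.norm(X - x, axis=1)
    inside = [c for c in classes if d[y == c].min() <= radii[c]]
    if len(inside) == 0:
        return None                        # classification refused
    if len(inside) == 1:
        return y[np.argmin(d)]             # plain 1-NN rule
    # Overlapping areas: fuzzy k-NN with hard (non-fuzzy) output,
    # here approximated by a distance-weighted vote over the k nearest samples.
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-12)
    scores = {c: w[y[nn] == c].sum() for c in inside}
    return max(scores, key=scores.get)
```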
Feature selection for remote-sensing data classification
A great number of parameters can be derived from the original bands of multispectral remotely sensed images. In particular, for classification purposes it is important to select which of these parameters allow the classes of interest to be well separated in the feature space; both classification accuracy and computational efficiency rely on the set of features used. Unfortunately, as spectral responses are strongly influenced by various environmental factors (e.g., atmospheric interference and non-homogeneous sunshine distribution), the derived parameters depend not only on the considered classes but also on the peculiar characteristics of the analyzed images. Even if many studies have been carried out both to identify more stable parameters and to correct images, the problem is still open. It cannot be solved a priori on the basis of the ground classes alone; an ad-hoc selection is required for each image to be classified. In the literature, several feature-selection criteria have been proposed. In this paper, a critical review of different techniques for feature selection in remote-sensing classification problems is presented. To preserve the physical meaning of the selected features, only criteria that do not transform the feature space are considered. Most of these criteria were originally defined to evaluate the separability between pairs of classes. A formal extension of these techniques, based on statistical theory, to handle multiclass cases as well is considered and compared with traditional heuristic extensions. Finally, with the aim of giving a good approximation of the Bayes error probability, a new feature-selection criterion is proposed. Preliminary tests carried out on a multispectral data set demonstrate its potential.
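One widely used pairwise separability measure that such criteria build on is the Bhattacharyya distance between two classes under a Gaussian assumption, often reported as the Jeffries-Matusita distance. The sketch below is a generic illustration of this kind of criterion, not the new criterion proposed in the paper.

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class models."""
    covm = 0.5 * (cov1 + cov2)
    d = mu2 - mu1
    term1 = 0.125 * d @ np.linalg.inv(covm) @ d
    _, logdet_m = np.linalg.slogdet(covm)
    _, logdet_1 = np.linalg.slogdet(cov1)
    _, logdet_2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet_m - 0.5 * (logdet_1 + logdet_2))
    return term1 + term2

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """JM distance, bounded in [0, 2]; larger means better separability."""
    return 2.0 * (1.0 - np.exp(-bhattacharyya(mu1, cov1, mu2, cov2)))
```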
Object Recognition II
Comparison between MDA and two edge detectors for SAR image analysis
Carlo S. Regazzoni
A comparison is performed among three edge detectors operating on Synthetic Aperture Radar (SAR) images: the Canny filter, the Ratio detector and Multilevel Deterministic Annealing with a speckle noise model. The first method represents classical edge extractors based on a regularised estimation of the derivatives of the image function. The Ratio detector is an example of the Constant False Alarm Rate methods used for edge detection in speckle noise, which rely on a detection criterion independent of the average gray level of the decision region. The method termed Multilevel Deterministic Annealing is based on a non-linear statistical criterion which takes into account not only local properties of the image function but also its global characteristics. The three detectors are evaluated on the basis of false-alarm and detection errors computed from the a priori known output of a synthetic scene. A qualitative evaluation on real images is also provided. It is shown that Multilevel Deterministic Annealing provides better results despite a higher computational load.
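A minimal sketch of a ratio edge detector of the kind referred to above: the means of two half-windows on either side of each pixel are compared, and the edge strength is derived from their ratio, which is independent of the local mean level (hence suited to speckle). The window size, the use of only horizontal and vertical orientations, and the half-window approximation by shifted box means are simplifications.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ratio_edge(img, half=3):
    """Return an edge-strength map in [0, 1]; img is a SAR intensity image."""
    strength = np.zeros(img.shape, dtype=float)
    mean = uniform_filter(img.astype(float), size=2 * half + 1)
    for axis in (0, 1):                                  # vertical and horizontal edges
        # Means of the two adjacent half-windows, via shifted local means.
        m1 = np.roll(mean, half + 1, axis=axis)
        m2 = np.roll(mean, -(half + 1), axis=axis)
        r = np.minimum(m1, m2) / (np.maximum(m1, m2) + 1e-12)
        strength = np.maximum(strength, 1.0 - r)         # low ratio -> strong edge
    return strength
```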
Iterative algorithm for computing the shape of a finite set of points
Mahmound Melkemi, D. Vandorpe
A new method is introduced whereby the shape of a finite set of points in the plane is computed. The shape is approximated by a sequence of sets named k-connected hulls. We present an efficient algorithm for constructing the shape of a set of points. This algorithm is based on the relationship between the k-connected hull sets and the Voronoi diagram.
Airport detection using a simple model, multisource images, and altimetric information
Alain Michel
A fuzzy set approach has been developed to integrate heterogeneous data such as SAR and optronic radiometry, old cartography and a Digital Elevation Model in a multi-criteria decision process, in order to detect airports. We also use a simple airport model with well-known specifications (runway length, admissible longitudinal slope, orientation constraints between runway and taxiways, ...). This automatic algorithm works in three steps: runway detection on the ERS-1 image, taxiway detection on the SPOT image, and building detection on both images; it has been successfully tested on different locations. The scanned area covers more than 300 square miles for each image.
Remote sensing image segmentation by Deriche's filter and neural network
Raphael K. Koffi, Basel Solaiman, Marie-Catherine Mouchot
An image segmentation method for remote sensing data using hybrid techniques is proposed. An edge-detection approach to segmentation is considered in our study. Our aim is to integrate the segmentation results into further processing, namely classification. Satellite images of the land are often corrupted by noise. On one hand, optimal edge detectors ensure good noise immunity; on the other hand, the multi-layer perceptron (MLP) neural network has been found to be well suited to classification. We therefore propose to combine these two techniques to improve the segmentation process. Remote sensing satellites provide several images of the same area, coded differently according to spectral band. In order to take both spectral and spatial information into account, the neighborhood relations of pixels and the different bands are considered during the classification performed by the neural network. Samples which constitute the training set for the MLP are selected from the third, fourth and fifth bands and represent edge and non-edge patterns. Each sample vector is composed of the value of the current pixel in the local maxima image (enhancement image obtained with Deriche's filter) and its 8 nearest neighbors. The proposed method provides satisfactory results for our application and compares well with other similar methods.
New merging method of multispectral and panchromatic SPOT images for vegetation mapping
Bruno Garguet-Duport, Jacky Girel, Jean-Marc Chassery, et al.
The French satellite SPOT carries two high-resolution sensors. The first supplies 10-m panchromatic data (0.51 - 0.73 µm); the second supplies 20-m multispectral data (XS1 in the green band 0.50 - 0.59 µm, XS2 in the red band 0.61 - 0.69 µm, XS3 in the near-infrared band 0.79 - 0.90 µm). Merging of multispectral and panchromatic data is becoming a standard procedure for studying land cover, and particularly for vegetation mapping, which needs both good spatial resolution and good spectral resolution. The merging methods usually applied alter the radiometric values of the original XS data to a greater or lesser extent; the original information is modified and, consequently, photointerpretation procedures become awkward or even impossible, so a new merging method has been developed. Using the panchromatic image, this method simulates 10-m images while conserving the spectral properties of the original 20-m XS images. The method uses two tools from the signal processing field with solid mathematical foundations: multiresolution analysis and the wavelet transform. The results obtained with this merging method are very promising. Analyses and thematic processing (i.e. production of thematic colour composite images) carried out on these high-resolution data have shown a significant improvement of the imagery and a great potential for vegetation mapping of floodplains.
Vortex segmentation on satellite oceanographic images
Jean-Paul Berroir, Sonia Bouzidi, Isabelle L. Herlin, et al.
Sea surface temperature images from infrared sensors (AVHRR) allow the visualization of oceanographic activity at the meso-scale (100 km), such as temperature fronts and vortices. These phenomena are, by their very nature, deformable and deserve specific study. We focus our study on the segmentation of vortices and present three different methods to do this, using hybrid hyperquadric functions, a Markovian framework, and geometrical modeling.
Multispectral edge detection in remote sensing images
J. Vandeneede, D. Zhang, Patrick Wambacq, et al.
The research work reported in this paper deals with edge information derived from the individual bands of multispectral satellite images. The combination of this edge information in a meaningful way should lead to an edge image that is more complete than those extracted from any single band. For the extraction of edges from an individual band, three commonly used approximations of the first derivative of a function f(x,y) were tested. Gradient-based edge detectors were selected because methods based on vector algebra can be used to combine them. Furthermore, single-pixel-wide edges can be obtained using both the magnitude and orientation information. As a first method to combine edges from individual planes, the algebraic vector sum is evaluated. A second method consists of computing new edge components from the rms values of the corresponding single-band components. A third way to combine edges is to take as magnitude the largest magnitude found in any of the bands; the corresponding orientation value is used as the multispectral edge orientation. The last method implements the only correct formulation of the multispectral gradient according to Di Zenzo, which is based on the tensor gradient associated with a vector field. As shown in the paper, the last two methods produce acceptable and comparable edge combinations.
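The Di Zenzo tensor (vector-field) gradient mentioned above combines the per-band x/y derivatives into a 2x2 structure matrix whose largest eigenvalue and associated eigenvector give the multispectral edge magnitude and orientation. The sketch below uses Sobel derivatives for simplicity; this is a generic implementation, not the paper's code.

```python
import numpy as np
from scipy.ndimage import sobel

def di_zenzo_gradient(bands):
    """bands: (B, H, W) multispectral image. Returns (magnitude, orientation)."""
    gx = np.stack([sobel(b.astype(float), axis=1) for b in bands])
    gy = np.stack([sobel(b.astype(float), axis=0) for b in bands])
    gxx = (gx * gx).sum(axis=0)
    gyy = (gy * gy).sum(axis=0)
    gxy = (gx * gy).sum(axis=0)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)       # edge orientation
    # Largest eigenvalue of the structure matrix = squared edge magnitude.
    lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam), theta
```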
Rule extraction based on neural networks for satellite image interpretation
Laurent Mascarilla
In the framework of an image interpretation system for automatic cartography, based on remote sensing image classification improved by photo-interpreter knowledge, we propose a system using neural networks to produce fuzzy production rules. These rules are intended to describe the vegetation context of each class relative to our image data (generally a G.I.S.), as a human expert would do. In the system, the expert only gives samples of the classes concerned via a G.U.I. (Graphic User Interface) connected to a G.I.S. In a first stage, a Kohonen neural network is used to find clusters and membership functions, and then to compute a first set of fuzzy 'IF-THEN' rules with certainty factors. The human expert then updates these rules, and the given samples, according to his own experience. Once satisfactory and discriminating classification rules are found, a second kind of neural network using back propagation is used to tune the final set of rules. At the same time, it produces neural nets able to give, for each pixel and each class, the degree to which the favourable context is realised relative to the knowledge inferred from the samples.
Finding the structure of a satellite image
Philippe Marthon, Bruno Paci, Eliane Cubero-Castan
This is a method for the analysis of a satellite image (SPOT data) by radiometry and geometry. The method consists of three steps. First of all, image outlines are obtained by applying the watershed algorithm to the gradient modulus of the image. Because of the memory size required to analyse a whole image, we must break it down into a set of small images; this technique introduces some extra outlines, which can, however, be detected and removed easily. Then, we extract the vertex graph: each node represents a pixel of the outline with at least three neighbouring pixels, and an arc between two nodes indicates the presence of an outline between two closely related pixels. Finally, the region tree is computed. The tree structure allows the coding of sub-regions, and each region is described by a polygonal approximation of its boundary. This method has been successfully tested for the extraction of the agricultural area of King's Lynn (UK).
AI-based technique for tracking chains of discontinuous symbols and its application to the analysis of topographic maps
Alessandro Mecocci, Massimiliano Lilla
Automatic digitization of topographic maps is a very important task nowadays. Among the different elements of a topographic map, discontinuous lines represent important information. Generally they are difficult to track because they show very large gaps and abrupt direction changes. In this paper an architecture that automates the digitization of discontinuous lines (dot-dot lines, dash-dot-dash lines, dash-asterisk lines, etc.) is presented. The tracking process must detect the elementary symbols and then concatenate these symbols into a significant chain that represents the line. The proposed architecture is composed of a common kernel, based on a suitable modification of the A* algorithm, that starts different auxiliary processes depending on the particular line to be tracked. Three auxiliary processes are considered: search strategy generation (SSG), which is responsible for the strategy used to scan the image pixels; low-level symbol detection (LSD), which decides whether a certain image region around the pixel selected by the SSG is an elementary symbol; and cost evaluation (CE), which gives the quality of each symbol with respect to the global course of the line. The whole system has been tested on a 1:50,000 map furnished by the Istituto Geografico Militare Italiano (IGMI). The results were very good for different types of discontinuous lines. Over the whole map (i.e. about 80 Mbytes of digitized data) 95% of the elementary symbols of the lines were correctly chained. The operator time required to correct misclassifications is a small fraction of the time needed to manually digitize the discontinuous lines.
Agricultural field recognition by the method of a constrained optimization
Fang Luo, Liu Lu, Nanno J. Mulder
In this paper, a model-based method to recognize agricultural fields is presented and demonstrated. First, the recognition task is formulated as a cost minimization problem. The approach is implemented through a hypothesis-correction-improvement process which starts with an initial hypothesis of the object feature, compares it with the measured one, and then calculates a cost to control the improvement process. In this method, the cost is a function of the object shape parameters, and its value indicates the difference between the predicted and measured features of an object. To drive the cost to its minimum, the method uses geometrical and topological properties of objects to constrain the optimization procedure; this prior knowledge helps the method reach the global minimum instead of a local one. Object recognition experiments performed on high-noise (SAR) images and comparisons between different search strategies are given.
Spatial land cover classification with the aid of neural network
Dony Kushardono, Kiyonari Fukue, Haruhisa Shimoda, et al.
A land cover classification method using a neural network was applied for the purpose of utilizing spatial information, expressed as the two-dimensional array of a co-occurrence matrix. The adopted neural network has a three-layer feed-forward architecture with a back-propagation learning algorithm. In this study, three kinds of neural network classification models are proposed. The first and second models classify each band image in a first stage, then make the final decision based on the first-stage result; at the decision stage, an arithmetic decision algorithm and a second neural network are used by the first and second models, respectively. The third model is a single-stage classifier that enters all band information into the neural network for learning and classification at the same time. To evaluate the proposed models, land cover classification using them and the conventional pixel-wise maximum likelihood method was conducted with Landsat TM and SPOT HRV data. The third model showed the best performance, with accuracies about 4% to 6% higher than those of the first and second models and about 17% to 27% higher than that of the maximum likelihood classification. Finally, the best-performing neural network model was applied to multitemporal remote sensing data classification, with successful results.
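A minimal sketch of the grey-level co-occurrence matrix that serves as the two-dimensional spatial input described above, computed here for a single window and a single displacement. The quantisation level and the offset are illustrative choices, not the values used in the paper.

```python
import numpy as np

def cooccurrence_matrix(window, levels=16, dx=1, dy=0):
    """window: 2-D array of grey values; returns a (levels, levels) GLCM."""
    q = np.floor(window.astype(float) / max(window.max(), 1) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1   # count grey-level pairs at the offset
    glcm += glcm.T                                  # make symmetric
    return glcm / glcm.sum()                        # normalise to co-occurrence frequencies
```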
SAR Image Enhancement
Frequency domain convolution for SCANSAR
Guy Cantraine, Didier Dendal
Starting from basic signal expressions, the rigorous formulation of frequency-domain convolution is demonstrated, in general and impulse terms, including antenna patterns and squint angle. The major differences with conventional algorithms are discussed and the theoretical concepts clarified. In a second part, the philosophy of advanced SAR algorithms is compared with that of a SCANSAR observation (several subswaths). It is proved that a general impulse response can always be written as the product of three factors, i.e., a phasor, an antenna coefficient, and a migration expression, and that the details of antenna effects can be ignored in the usual SAR system, but not the range migration (the situation is reversed in a SCANSAR reconstruction scheme). In a next step, some possible inverse filter kernels (the matched filter, the true inverse filter, ...) for general SAR or SCANSAR mode reconstructions are compared. By adopting a noise-corrupted model of the data, we obtain the corresponding Wiener filter, the major interest of which is to avoid all risk of divergence. Afterwards, the notion of a 'class of filters' is introduced and summarized by a parametric formulation. Lastly, the homogeneity of the reconstruction with a noncyclic fast Fourier transform deconvolution is studied by comparing peak responses according to the burst location. The more homogeneous sensitivity of the Wiener filter, with a steeper fall-off when the target begins to move outside the antenna pattern, is confirmed. A linear optimal merging of adjacent looks (in azimuth) minimizing the rms noise is also presented, as well as considerations about squint ambiguity.
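For illustration, the following sketch builds the three frequency-domain reconstruction kernels compared above for a generic 1-D azimuth reference function: the matched filter, the true inverse filter, and the Wiener filter obtained from a noise-corrupted data model. The constant noise-to-signal ratio `nsr` and the regularisation threshold are assumptions; the full SCANSAR geometry (antenna pattern, migration, burst timing) is not modelled here.

```python
import numpy as np

def reconstruction_kernels(h, nsr=0.01):
    """h: sampled impulse response (reference function).
    Returns matched, inverse and Wiener kernels in the frequency domain."""
    H = np.fft.fft(h)
    matched = np.conj(H)                          # matched filter
    inverse = np.zeros_like(H)
    mask = np.abs(H) > 1e-8
    inverse[mask] = 1.0 / H[mask]                 # true inverse (zeros suppressed)
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener: no risk of divergence
    return matched, inverse, wiener

def apply_kernel(data, kernel):
    """Frequency-domain convolution of one azimuth line with a chosen kernel."""
    return np.fft.ifft(np.fft.fft(data) * kernel)
```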
Noise filtering of interferometric SAR images
Jong-Sen Lee, Thomas L. Ainsworth, Mitchell R. Grunes, et al.
An interferometric SAR with two antennas aligned in the along-track direction is capable of mapping the ocean current field, while a cross-track arrangement can be used for topographic mapping. Like amplitude SAR images, interferometric phase images are susceptible to speckle and other noise sources due to decorrelation by thermal noise, spatial baseline, etc. (Zebker et al., 1992). The noise effect is more pronounced in space-borne multi-pass interferometry for topographic mapping because of the long baseline and temporal decorrelation. However, unlike amplitude SAR images, in which noise is characterized by a multiplicative noise model, the noise in the phase image has the properties of additive noise, and its standard deviation depends on the correlation coefficient between the two complex SAR images. In this paper, two additive-noise filtering algorithms, i.e. the sigma filter and the local statistics filter, are adapted to the statistical characteristics of the phase image (after phase unwrapping). The basic idea is to apply more filtering in areas with a lower correlation coefficient (high noise level), and less filtering in areas with a higher correlation coefficient. An ideal noise filter for the phase image should be able to smooth the noise while retaining the spatial resolution and radiometric information.
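A minimal sketch of a local-statistics (Lee-type) filter for an additive-noise phase image, with the filtering strength driven by a per-pixel noise variance that would in practice be derived from the correlation coefficient between the two complex SAR images. The coherence-to-variance mapping is omitted here and `noise_var` is passed in directly as an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_phase_filter(phase, noise_var, size=7):
    """phase: unwrapped phase image; noise_var: per-pixel additive noise variance."""
    phase = phase.astype(float)
    mean = uniform_filter(phase, size)
    var = uniform_filter(phase ** 2, size) - mean ** 2
    # Gain -> 0 where the local signal is dominated by noise (strong smoothing),
    # gain -> 1 where the local variance greatly exceeds the noise (little smoothing).
    gain = np.clip((var - noise_var) / (var + 1e-12), 0.0, 1.0)
    return mean + gain * (phase - mean)
```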
Filtering of radar images for geological structural mapping
Catherine Mering, Jean-Francois Parrot
Geological structural features may be described on satellite images as thin lines or contours. The extraction of these features is usually performed by enhancement techniques such as filtering and edge detection. However, since radar images are characterized by the presence of speckle noise, the classical enhancement techniques do not provide good results in this case. We present here two methods of filtering based on local transformations of the gray-tone function. They provide gray-tone images in which the speckle is removed while the continuity and sharpness of thin structures are preserved. The evaluation of the methods is done by analyzing the binary images resulting from an automatic thresholding of the filtered images according to two criteria: homogeneity and connectivity.
Physical parameter effects on radar backscatter using principal component analysis
Hean Teik Chuah, K. B. Teh
This paper contains a sensitivity analysis of the effects of physical parameters on radar backscatter coefficients from a vegetation canopy using the method of principal component analysis. A Monte Carlo forward scattering model is used to generate the necessary data set for such analysis. The vegetation canopy is modeled as a layer of randomly distributed circular disks bounded below by a Kirchhoff rough surface. Data reduction is accomplished by the statistical principal component analysis technique in which only three principal components are found to be sufficient, containing 97% of the information in the original set. The first principal component can be interpreted as volume-volume backscatter, while the second and the third as surface backscatter and surface-volume backscatter, respectively. From the correlation matrix obtained, the sensitivity of radar backscatter due to various physical parameters is investigated. These include wave frequency, moisture content, scatterer's size, volume fraction, ground permittivity and surface roughness.
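A generic sketch of the principal component analysis step described above: the simulated backscatter data are standardised, the correlation matrix is decomposed, and the leading components and their explained variance are reported. Array names and the number of retained components are illustrative.

```python
import numpy as np

def pca(data, n_components=3):
    """data: (N, P) matrix of N simulated cases and P backscatter variables."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)    # standardise each variable
    corr = np.corrcoef(z, rowvar=False)                  # correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1]                     # sort by explained variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    scores = z @ eigvec[:, :n_components]                # principal component scores
    explained = eigval[:n_components] / eigval.sum()
    return scores, eigvec[:, :n_components], explained
```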
Image Enhancement
Role of positivity for error reduction in images
In this paper, the role positivity plays in error reduction in images is analyzed both theoretically and with computer simulations for the case of wide-sense-stationary Fourier-domain noise. It is shown that positivity behaves as a signal-dependent support constraint. As a result, the mechanism by which positivity results in noise reduction in images is by correlating measured Fourier spectra. An iterative linear algorithm is employed to enforce the positivity constraint in order to facilitate an image-domain variance analysis as a function of the number of iterations of the algorithm. Noise reduction can occur only in the asymmetric part of the positivity-enforced support constraint when positivity is applied, just as noise reduction occurs only in the asymmetric part of the true support constraint when support is applied. Unlike for support, noise decreases in the image domain in a mean-square sense as the signal-to-noise ratio of the image decreases. However, it is shown that this image-domain noise decrease does not noticeably improve identification of image features.
Filtering and edge detection of remote sensing images by Hermite integration
Jun Shen, Wei Shen
Image smoothing and edge detection by means of Gaussian filters are much used in remote sensing image processing. In the present paper, we propose a Hermite integration method to realize Gaussian filters and their derivatives using orthogonal polynomial theory and interpolation. We first analyze 1-D cases and show that the output of a Gaussian filter can be calculated as a weighted sum of the input signal sampled at the positions corresponding to the Hermite polynomial roots, which gives much better algebraic precision and lower complexity than the classical mask convolution method. The digital implementation is then presented. The Hermite integration method is then generalized to the calculation of Gaussian-filtered derivatives and to multidimensional cases, such as 2-D image processing in remote sensing. Our method shows the following advantages: (1) better algebraic precision; (2) constant and reduced computational complexity, independent of the filter window size; (3) processing completely in parallel; (4) the possibility of detecting edges with subpixel precision. The method is implemented and tested on artificial data and real remote sensing images and is compared with the classical mask convolution method; the experimental results are reported as well.
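A sketch of Gaussian filtering by Gauss-Hermite quadrature in the spirit of the approach above: the filter output is a weighted sum of the signal sampled (here by linear interpolation) at the Hermite polynomial roots, scaled by the filter width. The number of nodes and the interpolation scheme are assumptions, not the paper's exact digital implementation.

```python
import numpy as np

def gaussian_filter_hermite(signal, sigma, n_nodes=7):
    """1-D Gaussian smoothing of `signal` with standard deviation `sigma`."""
    # Roots and weights of the Hermite polynomial for the weight function exp(-u^2).
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    x = np.arange(len(signal), dtype=float)
    out = np.zeros_like(x)
    for u, w in zip(nodes, weights):
        # Sample the signal at x - sqrt(2)*sigma*u via linear interpolation.
        out += w * np.interp(x - np.sqrt(2.0) * sigma * u, x, signal)
    return out / np.sqrt(np.pi)          # normalisation so a constant signal is preserved
```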
Radial predicting filters to recover clear-column infrared radiance fields from satellite
Valerio Tramutoli, Carmine Serio
In order to produce temperature or water vapor profiles from infrared radiance measurements, the detection of possible cloud contamination within the field of view is required. In most retrieval schemes a correction phase follows, so that the inversion algorithm operates on clear-column infrared radiances. In the present paper we describe an objective filtering scheme aimed at processing the radiances, for each infrared measuring channel, to produce a field of cloud-cleared values with sufficiently well defined statistical properties and error structures. Basically the method uses clear measurements only and treats cloudy data as unmeasured or missing data. Synthetic values of clear-column radiances for HIRS/2 channels 4, 7, 13 and 8 are used as a test field. The results presented are retrievals of clear radiance fields from cloudy data sets, each consisting of the test field with instrumental noise added and a cloud mask defining whether each individual field of view is clear or not. Radiances defined as cloudy are consistently treated as missing data. The cloud masks used for the present exercise are obtained from the processing of real data with a very high cloud content, in order to understand the behavior and quality of the algorithm in situations as close as possible to the worst real cases.
Gram-Schmidt orthogonalization technique for atmospheric and sun glint correction of Landsat imagery
Charles L. Walker, Maria T. Kalcic
A technique for correcting for haze and sunglint in Landsat Thematic Mapper imagery of coastal regions has been developed and demonstrated using Gram-Schmidt orthogonalization of the band covariance matrix. This procedure is an adaptation of Wiener filtering and noise-cancellation stochastic signal processing. Using a covariance matrix constructed from an over-water portion of the image containing haze and sunglint pixels, a transfer function between infrared (IR) bands (e.g. TM 5) and visible bands (e.g. TM 2) is derived. This transfer function is then applied to the entire image and the visible-band contribution predicted by the IR is subtracted from the measured visible signal, pixel by pixel. A comparison between images of the same scene with and without haze indicates that the procedure allows the observation of underwater features not previously visible.
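A simplified sketch of the band-to-band correction described above: a linear transfer function from an IR band (e.g. TM 5) to a visible band (e.g. TM 2) is estimated over a water-only subset containing haze and sunglint, and the IR-predicted contribution is then subtracted pixel by pixel over the whole image. A single least-squares regression stands in here for the Gram-Schmidt orthogonalisation of the band covariance matrix; all names are hypothetical.

```python
import numpy as np

def haze_correct(visible, infrared, water_mask):
    """visible, infrared: 2-D band images; water_mask: boolean over-water region."""
    ir = infrared[water_mask].ravel().astype(float)
    vis = visible[water_mask].ravel().astype(float)
    # Least-squares fit of vis = a * ir + b over the over-water training region.
    a, b = np.polyfit(ir, vis, 1)
    # Subtract only the IR-correlated variation, preserving the mean visible level.
    corrected = visible.astype(float) - a * (infrared.astype(float) - ir.mean())
    return corrected
```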
Registration Techniques
High-precision geometric correction of airborne remote sensing revisited: the multiquadric interpolation
Manfred Ehlers, David N. Fogel
For the geographic analysis of multispectral scanner data from aircraft and their integration into spatial databases and geographic information systems (GIS), geometric registration/rectification of the scanner imagery is required as a first step. Usually, one has to rely on global mapping functions such as the polynomial equations provided by most commercial image processing systems. These techniques have been proven to be very effective and accurate for satellite images. However, there are a number of shortcomings when this method is applied to aircraft data. We see the multiquadric interpolation method as a promising alternative. The multiquadric function was first developed for the interpolation of irregular surfaces. It can be modified, however, to be used for image correction of remotely sensed data. In this form, it is particularly suited for the rectification of remote sensing images of large scale and locally varying geometric distortions. The multiquadric interpolation method yields a perfect fit at the control points (CPs) used; because of this, it is necessary to withhold independent test points that can be used for accuracy assessment. Within the registration/rectification process, all CPs contribute to the geometric warping of any given pixel in the image, but their effects are weighted inversely to the distances between the CPs and the current pixel location. The paper presents the multiquadric interpolation technique and demonstrates its successful application to airborne scanner data.
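A minimal sketch of multiquadric interpolation of control-point displacements: the x- and y-shifts observed at the CPs are fitted exactly (perfect fit at the CPs) by a multiquadric radial basis expansion and evaluated at arbitrary pixel locations. The smoothing parameter `c` and the simple displacement-field formulation are illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def multiquadric_warp(cp_src, cp_dst, pixels, c=1.0):
    """cp_src, cp_dst: (M, 2) control-point coordinates in source/target space;
    pixels: (N, 2) pixel coordinates to be warped. Returns (N, 2) warped coordinates."""
    def basis(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.sqrt(d2 + c ** 2)                 # multiquadric kernel
    shifts = cp_dst - cp_src                        # displacement observed at each CP
    A = basis(cp_src, cp_src)
    coeff = np.linalg.solve(A, shifts)              # exact fit at the control points
    return pixels + basis(pixels, cp_src) @ coeff   # interpolated displacement field
```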
Rigorous geometric processing of airborne and spaceborne data
Thierry Toutin
This article presents a method for generating ortho-images using a digital elevation model and a few ground control points. The method has been integrated to take into account the geometric distortions of the full viewing geometry (sensor-platform-Earth), and unified to process images from different sensors (VIR and SAR) on various platforms (airborne and spaceborne). Results from eight types of images show an absolute accuracy of 1/3 of a pixel for VIR satellite images and one to two pixels for the other images (SAR and airborne), and a relative accuracy of one pixel between the images. Mosaicking of these ortho-images with the road network overlaid confirms the relative and absolute accuracies.
Optimal extrinsic stereo image matching based on local invariant properties
Xianghui Xu, Zhimin Tan
One of the key steps in automatic digital elevation model production from aerial photos or remotely sensed satellite images is the introduction of a robust matching algorithm. In this paper a feature-based matching method is presented. The characteristics of this algorithm are: first, the inputs for the first matching procedure are the radiometric- and geometric-noise-invariant properties of image patches; second, the local matching results are used in an extrinsic optimal matching procedure (presently only along the scan line).
Registration of images with affine geometric distortion by means of moment invariants
Jan Flusser, Stanislav Saic, Tomas Suk
This paper deals with the registration of images with affine geometric distortion. It describes a new method for automatic control point selection and matching. First, reference and sensed images are segmented and closed-boundary regions are extracted. Each region is represented by a set of affine-invariant moment-based features. Correspondence between the regions is then established by a two-stage matching algorithm that works both in the feature space and in the image space. Centers of gravity of corresponding regions are used as control points. A practical use of the proposed method is demonstrated by registration of SPOT and Landsat TM images. It is shown that our method can produce subpixel registration accuracy.
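As an illustration of the final registration step implied above, once corresponding region centroids are available as control points, the affine transform between the sensed and reference images can be estimated by linear least squares. This is a generic sketch with hypothetical names, not the paper's two-stage matching itself.

```python
import numpy as np

def estimate_affine(src, dst):
    """src, dst: (M, 2) matched control points (M >= 3).
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T"""
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                     # homogeneous source coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # least-squares affine fit
    return A.T                                     # 2x3 affine matrix

def apply_affine(A, pts):
    """Apply the 2x3 affine matrix A to an (N, 2) array of points."""
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ A.T
```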