- Front Matter: Volume 8892
- Pansharpening, Super-Resolution and Interpolation
- Image Restoration and Segmentation
- Image Registration and Object Recognition
- Hyperspectral Image Processing
- Unmixing and Classification in Hyperspectral Images
- Image Classification
- Data Mining and Data Fusion
- Change Detection and Multitemporal Analysis
- Data Processing Applications
- SAR Data Processing I: Joint Session
- SAR Data Processing II: Joint Session
- Poster Session
Front Matter: Volume 8892
This PDF file contains the front matter associated with SPIE Proceedings Volume 8892 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Pansharpening, Super-Resolution and Interpolation
Pansharpening of hyperspectral images: a critical analysis of requirements and assessment on simulated PRISMA data
In this paper, pansharpening methods suitable for the spatial enhancement of hyperspectral images are critically discussed
and assessed on simulated data from the upcoming PRISMA mission, featuring a panchromatic camera mounted on the
hyperspectral payload. Since most fusion techniques are limited to the fusion of multispectral (MS) images with panchromatic
(Pan) images, the focus shall be on the extension of such methods towards hyperspectral (HS) images. In particular,
the impact of the bandwidth of Pan on fusion performance will be analyzed and discussed.
A new super resolution method based on combined sparse representations for remote sensing imagery
While developing high resolution payloads, it is also necessary to make full use of existing spaceborne/airborne payloads
through super resolution (SR). SR is a technique for restoring a high spatial resolution image from a series of low
resolution images of the same scene captured at different times over a short period. Common SR methods, however, may
fail to overcome the irregular local warps and transformations in low resolution remote sensing images caused by platform
vibration and air turbulence. It is also difficult to choose a generalized prior on remote sensing images for Maximum a
Posteriori based SR methods. In this paper, irregular local warps and transformations within low resolution remote sensing
images are corrected by incorporating an elastic registration method. Moreover, a combined sparse representation is
proposed for the remote sensing SR problem. Experimental results show that the new method constructs a much better high
resolution image than other common methods. This method is promising for real applications of restoring high resolution
images from current low resolution on-orbit payloads.
Linear spectral unmixing-based method including extended nonnegative matrix factorization for pan-sharpening multispectral remote sensing images
This paper presents a new fusion approach for pan-sharpening multispectral remote sensing images. This approach,
related to Linear Spectral Unmixing (LSU) techniques, includes Extended Nonnegative Matrix Factorization (ExNMF)
for combining low spatial resolution multispectral and high spatial resolution panchromatic data. ExNMF is applied to
different real multispectral and panchromatic data sets with different spatial resolutions and numbers of spectral
bands. The quality of the pan-sharpened multispectral images is evaluated by the joint spectral and spatial Quality with No
Reference (QNR) index. Obtained results show that our proposed method outperforms the Principal Component Analysis
(PCA) and Gram-Schmidt (GS)-based standard literature methods.
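The ExNMF algorithm itself is not spelled out in the abstract; as a rough sketch of the underlying idea, a plain nonnegative matrix factorization with Lee-Seung multiplicative updates (function names and parameters here are illustrative, not the paper's) might look like:

```python
import numpy as np

def nmf(V, r, iters=300, seed=0):
    """Plain NMF: factor a nonnegative matrix V (m x n) into W (m x r) @ H (r x n)
    using Lee-Seung multiplicative updates. A hypothetical stand-in for the
    paper's ExNMF, which adds pan-sharpening-specific extensions."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3   # nonnegative initialization
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12                     # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update one factor...
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # ...then the other
    return W, H
```

In a linear-spectral-unmixing view, V would hold the observed pixel spectra, W the endmember spectra and H the abundances; the multiplicative updates keep both factors nonnegative throughout.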
On non-uniform sampling for remote sensing optical images: the METEOSAT third generation rectification case study
The METEOSAT Third Generation (MTG) Programme will provide the geostationary platforms for operational
meteorological data acquisitions over Europe in 2018-2030. The Flexible Combined Imager (FCI) instrument is one of
the MTG imager instruments and has a heritage from SEVIRI flown on the current METEOSAT Second Generation
(MSG) satellites. It is a radiometer providing measurements in 16 spectral bands with a full Earth coverage every 10
minutes. For the Level 2 processing of FCI datasets the measurements have to be re-sampled on a constant reference grid
in a geostationary projection – this process is referred to as rectification.
The use of a three-axis stabilised platform and the scanning scheme applied to the FCI make rectification in MTG more
challenging than in the MSG/SEVIRI case. Classical interpolation formulas assume a uniform sampling spacing of the
measurements. However, non-uniform sampling may occur in the FCI sampling acquisition due to platform dynamics,
micro-vibrations, thermo-elastic focal plane and optical distortions. In such a case, classical methods can cause
significant rectification errors, and interpolation algorithms that can cope with non-uniform sampling are required.
This paper analyses the effect of non-uniform sampling in the FCI rectification process and aims to select and assess
suitable resampling algorithms for the FCI L1 processing chain. Several techniques tailored to non-uniform resampling
have been implemented. Performances of both uniform and non-uniform interpolation algorithms have been evaluated
and compared using simulated FCI-like data samples. The analysis has been done for a nominal and a worst-case sample
acquisition scenario. The presentation will show the results of our simulations with respect to the MTG requirements.
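As a minimal 1-D illustration of the problem (the FCI processing chain and its resampling kernels are far more involved; all numbers below are invented), interpolation that accounts for the true, jittered sample positions can recover a smooth signal on a uniform reference grid:

```python
import numpy as np

# Hypothetical 1-D rectification: a smooth signal is sampled at jittered
# (non-uniform) positions and must be resampled onto a uniform grid.
rng = np.random.default_rng(0)
x_nonuniform = np.sort(rng.uniform(0.0, 2.0 * np.pi, 500))  # acquisition positions
samples = np.sin(x_nonuniform)                              # measured values

x_grid = np.linspace(0.3, 2.0 * np.pi - 0.3, 100)           # uniform reference grid
rectified = np.interp(x_grid, x_nonuniform, samples)        # position-aware interp

# A classical uniform-sampling formula would instead pretend the samples lie
# on an equispaced grid, introducing exactly the rectification error the
# paper analyzes.
max_err = np.max(np.abs(rectified - np.sin(x_grid)))
```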
Image Restoration and Segmentation
Hyperspectral image restoration using wavelets
In this paper a new hyperspectral image restoration method based on wavelets and sparse regularization is proposed, called Wavelet Based Sparse Restoration (WBSR). The hyperspectral signal is restored using penalized least squares with an ℓ1 penalty. The Iterative Soft Thresholding (IST) algorithm is used to solve the convex optimization problem. It is shown that WBSR not only improves denoising results, both visually and in terms of Signal to Noise Ratio (SNR), but also increases classification accuracies.
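The ℓ1-penalized least-squares criterion can be minimized with iterative soft thresholding; a generic sketch (the wavelet transform and the paper's exact parameters are omitted, so `A` here is just an arbitrary matrix):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters=200):
    """Iterative Soft Thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1.
    In WBSR, A would embed the (inverse) wavelet transform; here it is a
    generic matrix for illustration."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic term, then shrinkage
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```

For an orthonormal `A` (as with orthogonal wavelets) the iteration converges in one step to the soft-thresholded coefficients, which is the classical wavelet-shrinkage denoiser.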
Evaluation of a segmentation algorithm designed for an FPGA implementation
The present work has to be seen in the context of real-time on-board image evaluation of optical satellite
data. With on board image evaluation more useful data can be acquired, the time to get requested information
can be decreased and new real-time applications become possible. Because of its relatively high processing power
combined with low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen
as an adequate hardware platform for image processing tasks. One fundamental part for image evaluation is
image segmentation. It is a basic tool to extract spatial image information which is very important for many
applications such as object detection. Therefore a special segmentation algorithm using the advantages of
FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation
evaluation is a difficult task. The most common way for evaluating the performance of a segmentation method
is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach
does not meet our needs. The evaluation process has to provide a reasonable quality assessment and should
be objective, easy to interpret and simple to execute. To meet these requirements, a so-called Segmentation
Accuracy Equality norm (SA EQ) was created, which measures the difference between two segmentation results. It
can be shown that this norm is suitable as a first quality measure. Due to its objectivity and simplicity the
algorithm has been tested on a specially chosen synthetic test model. In this work the most important results of
the quality assessment will be presented.
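The SA EQ norm itself is not given in the abstract; a deliberately simple analogue, the fraction of pixels on which two segmentations disagree, illustrates the kind of objective, easy-to-interpret measure meant here (this is an invented stand-in, not the paper's definition):

```python
import numpy as np

def seg_difference(a, b):
    """Fraction of pixels on which two binary segmentation masks disagree.
    A hypothetical, minimal stand-in for the SA EQ norm described above."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    if a.shape != b.shape:
        raise ValueError("segmentations must have the same shape")
    return float(np.mean(a != b))
```

A value of 0 means the two results are identical; 1 means they disagree everywhere, so the measure is trivially objective and simple to execute.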
Comparison of an L1-regression-based and a RANSAC-based planar segmentation procedure for urban terrain data with many outliers
Jian Luo,
Zhibin Deng,
Dimitri Bulatov,
et al.
For urban terrain data with many outliers, we compare an ℓ1-regression-based and a RANSAC-based planar
segmentation procedure. The procedure consists of 1) calculating the normal at each of the points using ℓ1
regression or RANSAC, 2) clustering the normals thus generated using DBSCAN or fuzzy c-means, 3) within
each cluster, identifying segments (roofs, walls, ground) by DBSCAN-based-subclustering of the 3D points that
correspond to each cluster of normals and 4) fitting the subclusters by the same method as that used in Step 1
(ℓ1 regression or RANSAC). Domain decomposition is used to handle data sets that are too large for processing
as a whole. Computational results for a point cloud of a building complex in Bonnland, Germany obtained
from a depth map of seven UAV-images are presented. The ℓ1-regression-based procedure is slightly over 25%
faster than the RANSAC-based procedure and produces better dominant roof segments. However, the roof
polygonalizations and cutlines based on these dominant segments are roughly equal in accuracy for the two
procedures. For a set of artificial data, ℓ1 regression is much more accurate and much faster than RANSAC. We
outline the complete building reconstruction procedure into which the ℓ1-regression-based and RANSAC-based
segmentation procedures will be integrated in the future.
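A minimal sketch of the RANSAC side of Step 1, plane fitting to a 3D point neighbourhood (the tolerance and iteration count below are invented, not the paper's settings):

```python
import numpy as np

def ransac_plane(pts, iters=300, tol=0.05, seed=0):
    """Fit a plane n.x = d to 3-D points by RANSAC.
    Returns the unit normal, offset, and inlier mask of the best model."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_inl = None, None, np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)       # normal of the sampled triple
        norm = np.linalg.norm(n)
        if norm < 1e-12:                     # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ p0
        inl = np.abs(pts @ n - d) < tol      # consensus set
        if inl.sum() > best_inl.sum():
            best_n, best_d, best_inl = n, d, inl
    return best_n, best_d, best_inl
```

The ℓ1-regression alternative would instead minimize the sum of absolute point-to-plane residuals, which is also robust to outliers but needs no random sampling.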
Automatic urban road extraction on DSM data based on fuzzy ART, region growing, morphological operations and radon transform
In recent years, automatic urban road extraction, as part of Intelligent Transportation research, has attracted researchers because of the central role urban areas play within modern transportation systems. In this work, we propose a new combination of fuzzy ART clustering, Region growing, Morphological Operations and Radon transform (ARMOR) for automatic extraction of urban road networks from a digital surface model (DSM). The DSM data, which are based on surface elevation, avoid the serious building-shadow problem present in aerial photo imagery. Because of the elevation difference between roads and buildings, thresholding yields a fast initial road extraction; the threshold values are obtained from fuzzy ART clustering of the geometrical points in the histogram. The initial road is then expanded using region growing. Although most road regions are extracted at this point, the result still contains many non-road areas and the edges are rough. The region is smoothed quickly with a morphological closing operation. Furthermore, we filter road lines with an opening operation using a line-shaped structuring element, whose orientation is obtained from the Radon transform. Finally, the road network is constructed with B-splines from the extracted road skeleton. Experimental results show that the proposed method runs faster and achieves accuracy about 10% higher than the best of the compared methods.
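The closing and opening steps can be illustrated with standard tools (the toy mask below is invented; the pipeline operates on thresholded DSM data):

```python
import numpy as np
from scipy import ndimage

# Toy binary road mask: a stripe with a speckle hole inside it.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 1:6] = True        # rough "road" region
mask[3, 3] = False           # noise hole to be removed

# Morphological closing (dilation followed by erosion) fills small holes
# and smooths the region boundary, as in the ARMOR pipeline.
smoothed = ndimage.binary_closing(mask, structure=np.ones((3, 3), dtype=bool))
```

The subsequent line filtering step would use `ndimage.binary_opening` with a line-shaped structuring element oriented along the dominant Radon-transform peak.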
Soil surface roughness modeling: limit of global characterization in remote sensing
Many scientists use a global characterization of bare soil surface random roughness. Surface roughness is often
characterized by statistical parameters deduced from its autocorrelation function. Assuming an autocorrelation model and
a Gaussian height distribution, some authors have developed algorithms for numerical generation of soil surfaces that
have the same statistical properties. This approach is widespread and does not take into account morphological aspects of
the soil surface micro-topography. A detailed surface roughness analysis, however, reveals that the micro-topography is
structured by holes, aggregates and clods. In the present study, we clearly show that when describing surface roughness
as a whole, some information related to morphological aspects is lost. Two Digital Elevation Models (DEMs) of the same
natural seedbed surface were recorded by stereo photogrammetry. After estimating global parameters of these natural
surfaces, we generated numerical surfaces with the same average characteristics by linear filtering. Big aggregates and
clods were then captured by a contour-based approach. We show that the two-dimensional autocorrelation functions of
generated surfaces and of the two agricultural surfaces are close together. Nevertheless, the number and shape of
segmented object contours change from generated surfaces to the natural surfaces. Generated surfaces show fewer and
bigger segmented objects than in the natural case. Moreover, the shape of some segmented objects is unrealistic in
comparison to real clods, which have to be convex and of low circularity.
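The autocorrelation function at the heart of this global characterization can be computed via the Wiener-Khinchin relation; a minimal sketch (circular boundary conditions are assumed here for simplicity, whereas real DEM analysis would detrend and window the field):

```python
import numpy as np

def autocorrelation_2d(z):
    """Normalized 2-D autocorrelation of a height field via FFT
    (Wiener-Khinchin), with the mean removed first."""
    z = z - z.mean()
    F = np.fft.fft2(z)
    acf = np.fft.ifft2(np.abs(F) ** 2).real
    return acf / acf.flat[0]     # normalize so the zero-lag value is 1
```

Statistical roughness parameters such as the correlation length are then read off this function, which is exactly the information a purely global description keeps, and the contour-based clod analysis goes beyond.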
Image Registration and Object Recognition
Dense registration of CHRIS-Proba and Ikonos images using multi-dimensional mutual information maximization
We investigate the potential of multidimensional mutual information for the registration of multi-spectral remote
sensing images. We devise a gradient flow algorithm which iteratively maximizes the multidimensional
mutual information with respect to a differentiable displacement map, accounting for partial derivatives of the
multivariate joint distribution and the multivariate marginal of the float image with respect to each variable of
the mutual information derivative. The resulting terms are shown to weight the band-specific gradients of the
warp image, and we additionally propose to compute them with a method based on the k-nearest neighbours. We
apply our method to the registration of Ikonos and CHRIS-Proba images over the region of Baabdat, Lebanon,
for the purpose of cedar pine detection. A comparison between the (crossed) single-band and multi-band registration
results shows that using the multidimensional mutual information brings a significant gain in positional
accuracy and is suitable for multispectral remote sensing image registration.
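Single-band mutual information, the quantity generalized here, can be estimated from a joint histogram (the multidimensional, kNN-based version in the paper is considerably more involved; this is just the textbook estimator):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between
    two images of equal size."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                    # joint distribution
    px = p.sum(axis=1, keepdims=True)          # marginal of a
    py = p.sum(axis=0, keepdims=True)          # marginal of b
    nz = p > 0                                 # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

Registration then amounts to searching over displacement maps for the warp that maximizes this quantity between the float and reference images.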
A self-adaptive image registration method: from local learning to overall processing
Image registration is a fundamental and crucial step in remote sensing image analysis. However, it is known that image
registration methods are application-dependent: the type and content of remote sensing images affect the choice of image
registration method. Previously, image registration required experts to manually choose the registration elements.
This paper presents a self-adaptive image registration method that automatically chooses the registration elements
most appropriate for the remote sensing images under processing. The proposed method first chooses several
local regions to represent the whole image, and different registration elements are then tested on these local
regions. The local registration results are evaluated, and the whole image is registered with the registration
elements learned from the local registrations. The registration chain is fully automatic, making it a self-adaptive
registration method. The proposed method is demonstrated on several real remote sensing image pairs, and the results
confirm its feasibility and superiority.
Automated search for livestock enclosures of rectangular shape in remotely sensed imagery
Igor Zingman,
Dietmar Saupe,
Karsten Lambers
We introduce an approach for the detection of approximately rectangular structures in gray scale images. Our
research is motivated by the Silvretta Historica project that aims at automated detection of remains of livestock
enclosures in remotely sensed images of alpine regions. The approach allows detection of enclosures with linear
sides of various sizes and proportions. It is robust to incomplete or fragmented rectangles and tolerates deviations
from a perfect rectangular shape. Morphological operators are used to extract linear features. They are grouped
into parameterized linear segments by means of a local Hough transform. To identify appropriate configurations
of linear segments we define convexity and angle constraints. Configurations meeting these constraints are rated
by a proposed rectangularity measure that discards overly fragmented configurations and configurations with
more than one side completely missing. The search for appropriate configurations is efficiently performed on a
graph. Its nodes represent linear segments and edges encode the above constraints. We tested our approach
on a set of aerial and GeoEye-1 satellite images of 0.5m resolution that contain ruined livestock enclosures of
approximately rectangular shape. The approach showed encouraging results in finding configurations of linear
segments originating from the objects of our interest.
A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection
Recently, earthquake damage assessment using satellite images has become a very popular research direction. In particular, with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. To overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally time consuming, especially for large earthquake-affected areas, because of the size of VHR images. Therefore, in order to produce a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Not only spectral but also textural information was used during classification. For the textural information, second order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture input-output relationships in high-dimensional systems for many problems in science and engineering, and is designed to improve the efficiency of deducing high dimensional behaviors. The method is formed by a particular organization of low dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
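The Haralick features mentioned above are derived from a gray level co-occurrence matrix; a minimal sketch for one displacement and the contrast feature (window handling and the remaining Haralick statistics are omitted):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray level co-occurrence matrix of a quantized image
    for a single displacement (dx, dy)."""
    h, w = img.shape
    i = img[0:h - dy, 0:w - dx].ravel()     # reference pixels
    j = img[dy:h, dx:w].ravel()             # displaced neighbours
    g = np.zeros((levels, levels))
    np.add.at(g, (i, j), 1)                 # accumulate co-occurrences
    return g / g.sum()

def haralick_contrast(p):
    """Haralick contrast: sum_ij (i - j)^2 * p(i, j)."""
    idx = np.arange(p.shape[0])
    return float(np.sum((idx[:, None] - idx[None, :]) ** 2 * p))
```

Flat regions yield zero contrast, while rubble-like high-frequency texture yields large values, which is why such features help separate damaged from intact roofs.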
Hyperspectral Image Processing
Preprocessing of hyperspectral images: a comparative study of destriping algorithms for EO1-hyperion
In this study, data from the EO-1 Hyperion instrument were used. Apart from atmospheric influences or topographic
effects, the data are a good choice for demonstrating the preprocessing steps that target sensor-internal
sources of error. These include diffuse sensor noise, the striping effect, the smile effect, the keystone effect
and the spatial misalignments between the detector arrays. For this research paper, the authors focus on the striping effect
by comparing and evaluating different algorithms, methods and configurations to correct striping errors. The correction
of striping effects becomes necessary due to the imprecise calibration of the detector array. This inaccuracy affects
especially the first 12 visible and near infrared (VNIR) bands and also a large number of bands in the short wave
infrared (SWIR) array. Altogether, six destriping techniques were tested on a Hyperion dataset covering a test
site in Central Europe. For the final evaluation, various analyses across all Hyperion channels were performed. The
results show that some correction methods have almost no effect on the striping in the images. Other methods may
eliminate the striping, but analyses show that these algorithms also alter pixel values in adjacent areas which originally
had not been disturbed by the striping effect.
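One classical destriping family used in such comparisons is column moment matching; a rough sketch (the abstract does not name the six methods, so this is only a representative example):

```python
import numpy as np

def destripe_moment_matching(band):
    """Moment-matching destriping: rescale each detector column so its
    mean and standard deviation match the global statistics of the band.
    One classical approach among several; not necessarily the paper's best."""
    col_mean = band.mean(axis=0)
    col_std = band.std(axis=0)
    col_std = np.where(col_std == 0, 1.0, col_std)   # guard flat columns
    return (band - col_mean) / col_std * band.std() + band.mean()
```

Note that this global rescaling also alters columns that were never striped, which is exactly the side effect the evaluation in the paper warns about.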
Wavelet based hyperspectral image restoration using spatial and spectral penalties
In this paper a penalized least squares cost function with a new spatial-spectral penalty is proposed for hyperspectral
image restoration. The new penalty combines a Group LASSO (GLASSO) and a First Order
Roughness Penalty (FORP) in the wavelet domain. The restoration criterion is solved using the Alternating
Direction Method of Multipliers (ADMM). The results are compared with other restoration methods; the
proposed method outperforms them on a simulated noisy data set in terms of Signal to Noise Ratio (SNR) and
visually outperforms them on a real degraded data set.
Hyperspectral image segmentation using a cooperative nonparametric approach
In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is
presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are
evaluated and fused to obtain the final segmentation result. Two unsupervised nonparametric segmentation methods are
used in parallel cooperation, namely the Fuzzy C-means (FCM) method, and the Linde-Buzo-Gray (LBG) algorithm, to
segment each band of the image. The originality of the approach lies firstly in its local adaptation to the type of
regions in an image (textured, non-textured), and secondly in the introduction of several levels of evaluation and
validation of intermediate segmentation results before obtaining the final partitioning of the image. To manage
similar or conflicting results produced by the two classification methods, we gradually introduced various assessment
steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the
spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature
extraction step up to the final classification results. The approach was first evaluated on a large number of
monocomponent images constructed from the Brodatz album. It was then evaluated on two real applications, using
respectively a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image
for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate
(CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR)
is over 99%.
VST-based lossy compression of hyperspectral data for new generation sensors
This paper addresses lossy compression of hyperspectral images acquired by new-generation sensors for which the signal-dependent
component of the noise prevails over the signal-independent component. First, for sub-band
(component-wise) compression, it is shown that there can exist an optimal operation point (OOP) for which the MSE between the
compressed and noise-free image is minimal, i.e., the maximal noise filtering effect is observed. This OOP can be observed for
two approaches to lossy compression: the first presumes direct application of a coder to the original data, and the
second approach deals with applying direct and inverse variance stabilizing transform (VST). Second, it is demonstrated
that the second approach is preferable since it usually provides slightly smaller MSE and slightly larger compression ratio
(CR) in OOP. One more advantage of the second approach is that the coder parameter that controls CR can be set fixed for
all sub-band images. Moreover, CR can be considerably (approximately twice) increased if sub-band images after VST are
grouped and lossy compression is applied to the first sub-band image in a group and to “difference” images obtained for this
group. The proposed approach is tested on Hyperion hyperspectral images and shown to provide a CR of about 15 for data
compression in the neighborhood of OOP.
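A standard example of such a variance stabilizing transform is the Anscombe transform for Poisson-like, signal-dependent noise (the paper's VST is tuned to its specific sensor noise model and may differ):

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: maps Poisson-distributed data to data with
    approximately unit variance, so one fixed coder setting works
    across all sub-band images."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse of the Anscombe transform (the simplest choice;
    unbiased inverses exist but are more elaborate)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

After the forward transform, the noise level no longer depends on the signal intensity, which is what allows a single coder parameter for all sub-bands in the second approach.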
Unmixing and Classification in Hyperspectral Images
Estimating the number of endmembers in hyperspectral imagery using hierarchical agglomerate clustering
Jee-Cheng Wu,
Heng-Yang Wu,
Gwo-Chyang Tsuei
A classical spectral un-mixing of a hyperspectral image involves identifying the unique signatures of the endmembers (i.e.
pure materials) and estimating the proportions of endmembers for each pixel by inversion. The key to successful spectral
un-mixing is determining the number of endmembers and their corresponding spectral signatures. Currently, eigenvalue-based
estimation of the number of endmembers in hyperspectral images is widely used. However, eigenvalue-based
methods have difficulty separating signal sources such as anomalies.
In this paper, a two-stage process is proposed to estimate the endmember numbers. At the preprocessing stage, the
spectral dimensions are reduced using principal component analysis and the spatial dimensions are reduced using convex
hull computation based on the reduced spectral bands. At the hierarchical agglomerate clustering stage, a pixel vector is
found by applying orthogonal subspace projection (OSP), and pixel vectors are clustered hierarchically using the spectral
angle mapper (SAM). If the number of pixel vectors in a cluster is greater than a predefined number, the found pixel
vector is set as an endmember; otherwise, anomalous vectors are found. The proposed method was tested on both
synthetic and real images for estimating the number of endmembers. The results demonstrate that the proposed method
estimates a more reasonable and precise number of endmembers than the eigenvalue-based methods.
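The spectral angle mapper used in the clustering stage reduces to the angle between two spectra, which is insensitive to illumination scaling:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper distance (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))   # clip guards round-off
```

Because scaling a spectrum leaves the angle unchanged, pixels of the same material under different illumination cluster together, which is why SAM is the natural metric for agglomerating candidate endmembers.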
Boundary constraints for singular value decomposition of spectral data
Singular value decomposition (SVD) and principal component analysis enjoy a broad range of applications, including
rank estimation, noise reduction, classification and compression. The resulting singular vectors form orthogonal basis
sets for subspace projection techniques. The procedures are applicable to general data matrices. Spectral matrices
belong to a special class known as non-negative matrices. A key property of non-negative matrices is that their
columns/rows form non-negative cones, with any non-negative linear combination of the columns/rows belonging to the
cone. This special property has been implicitly used in popular rank estimation techniques known as virtual dimension
(VD) and hyperspectral signal identification by minimum error (HySime). Data sets of spectra reside in non-negative
orthants. The subspace spanned by an SVD of a set of spectra includes all orthants. However, SVD projections can be
constrained to the non-negative orthants. In this paper two types of singular vector projection constraints are identified,
one that confines the projection to lie within the cone formed by the spectral data set, and a second that only restricts
projections to the non-negative orthant. The former is referred to here as the inner constraint set, the latter the outer
constraint set. The outer constraint set forms a broader cone since it includes projections outside the cone formed by the
data array. The two cones form boundaries for the cones formed by non-negative matrix factorizations (NNF).
Ambiguities in the NNF lead to a variety of possible sets of left and right non-negative vectors and their cones. The
paper presents the constraint set approach and illustrates it with applications to spectral classification.
Extraction of spatial features in hyperspectral images based on the analysis of differential attribute profiles
The new generation of hyperspectral sensors can provide images with high spectral and spatial resolution. Recent improvements in mathematical morphology have produced new techniques, such as Attribute Profiles (APs) and Extended Attribute Profiles (EAPs), that can effectively model the spatial information in remote sensing images. The main drawbacks of these techniques are the selection of the optimal range of values for the family of criteria adopted at each filter step, and the high dimensionality of the profiles, which results in a very large number of features and can therefore trigger the Hughes phenomenon. In this work, we focus on the dimensionality issue, which leads to high intrinsic information redundancy, proposing a novel strategy for extracting spatial information from hyperspectral images based on the analysis of Differential Attribute Profiles (DAPs). A DAP is generated by computing the derivative of the AP; it shows at each level the residual between two adjacent levels of the AP. By analyzing the multilevel behavior of the DAP, it is possible to extract geometrical features corresponding to the structures within the scene at different scales. Our proposed approach consists of two steps: 1) a homogeneity measurement is used to identify the level L at which a given pixel belongs to a region with a physical meaning; 2) the geometrical information of the extracted regions is fused into a single map according to the level L previously identified. The process is repeated for different attributes, building a reduced EAP whose dimensionality is much lower than that of the original EAP. Experiments carried out on the hyperspectral data set of the Pavia University area show the effectiveness of the proposed method in extracting spatial features related to the physical structures present in the scene, achieving higher classification accuracy than the results reported in the state-of-the-art literature.
Affinity propagation for large size hyperspectral image classification
Affinity propagation (AP) is now among the most used methods of unsupervised classification. However, it has two
major disadvantages. On the one hand, the algorithm implicitly controls the number of classes through a preference
parameter, usually initialized as the median value of the similarity matrix, which often leads to over-clustering. On the
other hand, when partitioning large hyperspectral images, its quadratic computational complexity seriously
hampers its application. To solve these two problems, we propose a method which reduces the number of
individuals to be classified before applying AP and concisely estimates the number of classes. To
reduce the number of pixels, a pre-classification step that automatically aggregates highly similar pixels is
introduced. The hyperspectral image is divided into blocks, and the reduction step is then applied independently within
each block. This step requires less memory storage since the full similarity matrix need not be calculated. AP
is then applied to the new set of pixels, made up of the representatives of each previously formed
cluster together with the non-aggregated individuals. To estimate the number of classes, we introduce a dichotomic method that assesses
classification results using a criterion based on inter-class variance. The application of this method to various test images
has shown that the AP results are stable and independent of the choice of block size. The proposed approach was
successfully used to partition large real datasets (multispectral and hyperspectral images).
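The block-wise pixel reduction step can be illustrated as follows; the greedy aggregation rule and the distance threshold are assumptions for illustration, not the authors' exact pre-classification procedure:

```python
import numpy as np

def reduce_block(pixels, threshold):
    """Greedily aggregate highly similar pixels within one block.

    pixels : (N, B) array of spectra. A pixel within `threshold`
    (Euclidean distance) of an existing representative is merged
    into it; otherwise it starts a new representative.
    Returns the (M, B) representatives, M <= N.
    """
    reps = []
    for p in pixels:
        if not any(np.linalg.norm(p - r) <= threshold for r in reps):
            reps.append(p)
    return np.array(reps)

def blockwise_reduction(image, block, threshold):
    """Apply reduce_block independently to each block of an (H, W, B) cube,
    so the full similarity matrix is never formed."""
    h, w, b = image.shape
    out = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block].reshape(-1, b)
            out.append(reduce_block(patch, threshold))
    return np.vstack(out)

# Toy cube with two spectrally distinct halves
cube = np.ones((4, 4, 3))
cube[2:, :, :] = 2.0
reps = blockwise_reduction(cube, block=2, threshold=0.5)
print(len(reps))  # 4 representatives instead of 16 pixels
```

AP would then run on the much smaller set of representatives plus any non-aggregated pixels.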
Improving the efficiency of MESMA through geometric unmixing principles
Spectral Mixture Analysis is a widely used image analysis tool with many applications. Yet, one of the major issues with
this technique remains the lack of ability to properly account for the spectral variability of endmembers or ground cover
components that occur throughout an image scene. Endmember variability is most often addressed using iterative
mixture cycles (e.g. MESMA) in which different endmember combination models are compared for each pixel. The
model with the best fit is assigned to the pixel. The drawback of MESMA is its computational burden, which often
hampers operational use. To address this issue, we propose a new geometry-based methodology to
more efficiently evaluate different endmember combinations in MESMA. This geometric unmixing methodology has a
two-fold benefit. First of all, geometric unmixing allows a fast and fully constrained unmixing, which was previously
unfeasible in MESMA due to the long processing times of the available fully constrained unmixing methods. Secondly,
whereas the traditional MESMA explores all different endmember combinations separately, and selects the most
appropriate combination as a final step, our approach selects the best endmember combination prior to unmixing, as such
increasing the computational efficiency of MESMA. To do so, we built upon the equivalence between the reconstruction
error in least-squares unmixing and spectral angle minimization in geometric unmixing. With the inclusion of the
proposed endmember combination selection technique, the computation time decreased by a factor between 5 and 8.5,
depending on the size and organization of the libraries. The spectral angle can as such be used as a proxy for model fit,
enabling the selection of the proper endmember combination from large spectral libraries prior to unmixing.
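The spectral angle used as a proxy for model fit can be sketched as follows (a generic implementation, not the authors' code): the best endmember combination is the one whose reconstruction minimizes the angle to the observed pixel, selected before any unmixing is run.

```python
import numpy as np

def spectral_angle(x, y):
    """Spectral angle (radians) between two spectra; smaller angles
    indicate a better model fit."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def best_model(pixel, reconstructions):
    """Pick the candidate reconstruction with the smallest spectral
    angle to the observed pixel spectrum."""
    angles = [spectral_angle(pixel, r) for r in reconstructions]
    return int(np.argmin(angles))

pixel = np.array([1.0, 2.0, 3.0])
candidates = [np.array([0.0, 1.0, 0.0]), np.array([2.0, 4.0, 6.0])]
print(best_model(pixel, candidates))  # 1: the parallel spectrum, angle 0
```

Because the angle is invariant to scaling, it ranks candidate endmember combinations without solving the constrained least-squares problem for each one, which is the source of the reported speed-up.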
Image Classification
Hyperspectral image classification using a spectral-spatial sparse coding model
We present a sparse coding based spectral-spatial classification model for hyperspectral image (HSI) datasets. The
proposed method consists of an efficient sparse coding step in which the l1/lq regularized multi-class logistic regression technique is used to achieve a compact representation of hyperspectral image pixels for land cover
classification. We applied the proposed algorithm to an HSI dataset collected at the Kennedy Space Center and compared
it to a recently proposed method, the Gaussian process maximum likelihood (GP-ML) classifier. Experimental
results show that the proposed method achieves significantly better performance than the GP-ML classifier when
training data is limited, while its compact pixel representation leads to more efficient HSI classification systems.
Classification of hyperspectral images with binary fractional order Darwinian PSO and random forests
A new binary optimization method inspired by Fractional-Order Darwinian Particle Swarm Optimization is proposed
and used within a novel spectral-spatial classification framework for the selection of the most effective group of bands. In the proposed
approach, first, the raw data set (spectral data only) and the morphological profiles of the first effective principal
components are integrated into a stacked vector. The output of this step is then taken as the input of the new
optimization method. The Random Forest classifier is used as the fitness function on the cross-validation samples, with the
overall classification accuracy evaluating each group of bands. Finally, the selected bands are classified and the
output provides the final classification map. Experimental results confirm that the new
approach works better than classification using all the raw bands, the whole morphological profile, or the combination of
the raw bands and the morphological profile.
Smoothing parameter estimation framework for Markov random field by using contextual and spectral information
Markov random field (MRF) is currently the most common method to find the optimal solution for the classification of
image data incorporating contextual visual information. The labeling for a site in MRF is dependent on smoothing
parameters. Therefore, this paper deals with the development of a new robust two-step method to determine the
smoothing parameter which balances spatial and spectral energies for the purpose of maximizing the classification
accuracy. Multispectral images obtained by WorldView-2 satellite were employed in this research.
In the first step, a support vector machine (SVM) was used to provide a vector of multi-class probability and a class label
for each pixel. Then, the summation of the maximum probability of each pixel and its 8 neighbors is calculated for a
dynamic block and this value is assigned to the central pixels of each block. The blocks of each class are sorted and an
equal proportion of blocks of each class with the highest probability are selected. Then, the class codes and spectral
information of the selected blocks are extracted from the classified map and multispectral image, respectively. This
information is used to calculate class label co-occurrence matrices of the blocks (CLCMB), class label co-occurrence
matrix (CLCM) and class separability indices. Finally, different smoothing parameters are calculated, and the results
show that the estimated smoothing parameter can produce a more accurate map.
Road extraction from satellite images by self-supervised classification and perceptual grouping
A fully automatic method that can extract road networks using the spectral and structural features of roads is
proposed. First, Anti-parallel Centerline Extraction (ACE) is used to obtain road seed points. Then, the road seeds are
improved with perceptual grouping method and the road class is determined with Maximum Likelihood Estimation
(MLE) by modeling the seed points with Gaussian Mixture. The morphological operations (opening, closing and
thinning) are performed for improving classification results and determining the road topology roughly. Finally,
perceptual grouping is performed to remove non-road line segments and to fill gaps in the topology. The
proposed algorithm is tested on 1 m resolution IKONOS images and obtains better results than previous
algorithms.
Extraction and refinement of building faces in 3D point clouds
Melanie Pohl,
Jochen Meidow,
Dimitri Bulatov
In this paper, we present an approach to generate a 3D model of an urban scene out of sensor data. The first milestone
on that way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings and their
outlines. This has already been accomplished within our previous work. Now, we propose a four-step algorithm to model
the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects,
like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof
structures precisely, we split up complex building blocks into parts. Two different approaches are used: To act on the
assumption of underlying 2D ground polygons, we use geometric methods to divide them into sub-polygons. Without
polygons, we use morphological operations and segmentation methods. In the third step, extraction of dominant planes
takes place, by using either RANSAC or J-linkage algorithm. They operate on point clouds of sufficient confidence within
the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the
previously determined plane parameters using geometric relations of the building faces. Due to noise, these expected
properties of roofs and walls are not fulfilled. Hence, we enforce them as hard constraints and use the previously extracted
plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets,
including noisy data from depth maps and data computed by laser scanning.
Data Mining and Data Fusion
Multisource oil spill detection
In this paper we discuss how multisource data (wind, ocean current, optical, bathymetric, automatic identification system (AIS)) may be used to improve oil spill detection in SAR images, with emphasis on the use of automatic oil spill detection algorithms. We focus particularly on AIS, optical, and bathymetric data. For the AIS data we propose an algorithm for integrating AIS ship tracks into automatic oil spill detection in order to improve the confidence estimate of a potential oil spill. We demonstrate the use of ancillary data on a set of SAR images. Regarding the use of optical data, we did not observe a clear correspondence between high chlorophyll values (estimated from products derived from optical data) and observed slicks in the SAR image. Bathymetric data was shown to be a good data source for removing false detections caused by, e.g., sand banks at low tide. For the AIS data we observed that a polluter could be identified for some dark slicks; however, a precise oil drift model is needed to identify the polluter with high certainty.
Recurrent neural networks for automatic clustering of multispectral satellite images
In the present work we applied a recently developed procedure for multidimensional data clustering to multispectral
satellite images. The core of our approach lies in projecting the multidimensional image onto a two-dimensional space.
For this purpose we used an extensively investigated family of recurrent artificial neural networks (RNNs) called the Echo State
Network (ESN). An ESN incorporates a randomly generated recurrent reservoir with sigmoid nonlinearities at the neuron
outputs. A procedure called Intrinsic Plasticity (IP), aimed at maximizing the entropy of the reservoir output, was applied
to adapt the reservoir steady states to the multidimensional input data. Next we consider all possible combinations
of the steady states of each pair of neurons in the reservoir as two-dimensional projections of the original
multidimensional data. These low dimensional projections were subjected to subtractive clustering in order to determine
the number and position of data clusters. Two approaches to choosing a proper projection among all possible
neuron pairs were investigated. The first is based on calculating the two-dimensional density distribution of
each projection, determining the number of its local maxima, and choosing the projections with the largest number of
maxima. The second applies clustering to all projections and chooses those with the maximum number of clusters.
Multispectral data from the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) instrument are used in this work. The
obtained number and position of clusters for a multi-spectral image of a mountain region in Bulgaria are compared with the
regional landscape classification.
Joint processing of Landsat ETM+ and ALOS-PALSAR data for species richness and forest biodiversity monitoring
Optical remote sensing data is commonly used for estimating biophysical characteristics of forest like tree biodiversity
and species richness. Recent advances in radar remote sensing technology raise significant interest to take advantage of
the complementary nature of optical and radar data. This paper proposes an approach combining Landsat ETM+ and
Advanced Land-Observing Satellite- Phased Array L-band Synthetic Aperture Radar (ALOS/PALSAR) data for forest
biodiversity and species richness monitoring. Inventory data from one part of the Hyrcanian forest in the north of Iran
are used as field data. Visible and infrared ETM+ bands, indices and texture information, as well as HH and HV backscattering,
polarimetric features (alpha angle, entropy and anisotropy) and SAR texture are extracted. We use a multiple linear
regression model to find the components that best describe the biodiversity indices in the study area. We show how tree
biodiversity is related to information derived from ETM+ and ALOS/PALSAR data at the 95% confidence level (R² = 0.63,
root mean square error (RMSE) = 1.70). Also, the effects of each component on the variation of biodiversity are shown.
ETM+ reflectance, polarimetric features and texture from ALOS/PALSAR can describe approximately 63% of
biodiversity. The results of multi source monitoring of tree biodiversity and species richness are promising and worth
further investigation.
Data mining and model adaptation for the land use and land cover classification of a Worldview 2 image
Forest fragmentation studies have increased over the last three decades. Land use and land cover (LULC) maps are
important tools for this analysis, as are other remote sensing techniques. Object-oriented analysis classifies the
image according to patterns such as texture, color, shape, and context. However, there are many attributes to be analyzed, and
data mining tools helped us to learn about them and to choose the best ones. The aim of this paper is therefore to
describe data mining techniques and results for a heterogeneous area, the municipality of Silva Jardim, Rio de Janeiro,
Brazil. The municipality has forest, urban areas, pastures, water bodies, agriculture and also some shadows as objects to
be represented. A WorldView-2 satellite image from 2010 was used, and the LULC classification was processed using the
values that the data mining software provided according to the J48 method. This classification was then
analyzed, and verification was made by means of the confusion matrix, making it possible to evaluate the accuracy (58.89%). The
best results were for the classes “water” and “forest”, which have more homogeneous reflectance. Because of that, the model
was adapted in order to create a model for the most homogeneous classes. As a result, 2 new classes were created;
some values and attributes were changed, and others added. In the end, the accuracy was 89.33%. It is important to
highlight that this is not a conclusive paper; there are still many steps to develop for highly heterogeneous surfaces.
Application of genetic programming and Landsat multi-date imagery for urban growth monitoring
Monitoring of earth surface changes from space using multi-date satellite imagery has always been a main concern for
researchers in the field of remotely sensed image processing. Several techniques have thus been proposed to save
technicians from interpreting and digitizing hundreds of areas by hand.
Exploiting simple, easy-to-memorize and often comprehensible mathematical models such as band ratios and indices
is one of the most widely used techniques in remote sensing for the extraction of particular land-cover/land-use classes such as urban
and vegetation areas. The results of these models generally only require the definition of an adequate threshold, or the use of
simple unsupervised classification algorithms, to discriminate between the class of interest and the background.
In our work a genetic programming based approach has been adopted to evolve a simple mathematical expression to
extract urban areas from image series. The model is built from a single image by using a basic set of operators between
spectral bands and maximizing a fitness function based on the M-statistic criterion.
The model was constructed from a Landsat 5 TM image acquired in 2006, using training samples extracted with the
help of a QuickBird high spatial resolution satellite image acquired the same day as the Landsat image over the city of
Oran, Algeria. The model has been tested to extract urban areas from a multi-date series of Landsat TM imagery.
Change Detection and Multitemporal Analysis
Detection of damage to building side-walls in the 2011 Tohoku, Japan earthquake using high-resolution TerraSAR-X images
Building damage such as side-wall damage or mid-story collapse is often overlooked in vertical optical images. Hence, in
order to observe such building damage modes, high-resolution SAR images are introduced, exploiting the side-looking
nature of SAR. In the 2011 Tohoku, Japan, earthquake, a large number of buildings collapsed or were severely damaged
by the repeated tsunami waves. One important effect of a tsunami on buildings is that the damage is concentrated in their
side-walls and lower stories. Thus, this paper proposes a method to detect this kind of damage from changes in the layover
areas of SAR intensity images. Multi-temporal TerraSAR-X images covering the Sendai-Shiogama Port were employed
to detect building damage due to the tsunamis caused by the earthquake. The backscattering coefficients in the layover area
of each building were extracted, and the average value per layover area was calculated. The average value
was seen to decrease in the post-event image due to reduced backscatter from the building side-walls. This example
demonstrates the usefulness of high-resolution SAR intensity images for detecting severe damage to building side-walls
based on changes of the backscattering coefficient in layover areas.
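The layover-based change measure can be sketched as follows; the 3 dB decision threshold and the function names are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

def mean_layover_db(intensity, mask):
    """Mean backscatter in dB over a building's layover mask."""
    return 10.0 * np.log10(intensity[mask].mean())

def sidewall_damage_flag(pre, post, mask, drop_db=3.0):
    """Flag side-wall damage when the mean layover backscatter drops
    by more than `drop_db` dB from the pre- to the post-event image.
    The 3 dB default is illustrative, not taken from the paper."""
    return mean_layover_db(pre, mask) - mean_layover_db(post, mask) > drop_db
```

A per-building layover mask would come from the building footprint, its height, and the SAR incidence angle; the flag then captures the reduced backscatter from damaged side-walls.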
Connectivity constraint-based sequential pattern extraction from Satellite Image Time Series (SITS)
The temporal evolution of pixel values in Satellite Image Time Series (SITS) is considered a criterion for the
characterization, discrimination and identification of terrestrial objects and phenomena. Due to the exponential growth
of the number of sequences with specialization, Sequential Data Mining (SDM) techniques need to be applied. The huge search
and solution spaces imply the use of constraints according to the user’s knowledge, interest and expectations. The spatial
aspect of the data was taken into account by introducing connectivity measures that characterize the tendency
of pixels to form objects. These measures can highlight stratifications in the data structure, can be useful for shape
recognition, and offer a basis for post-processing operations similar to those of mathematical morphology (dilation,
erosion, etc.). The conjunction of the corresponding Connectivity Constraints (CC) with the Support Constraint (SC) leads to
the extraction of Grouped Frequent Sequential Patterns (GFSP), a concept with proven capability for the preliminary
description and localization of terrestrial events. This work focuses on the efficient extraction from SITS of evolutions that
fulfill SC and CC. Different types of extraction using anti-monotone constraints are analyzed. Experiments performed
on two interferometric SITS illustrate the potential of the approach to find interesting evolution patterns.
Fusion of satellite and aerial images for identification and modeling of nature types
In this paper we propose a framework for fusion of very high resolution (VHR) optical aerial images, satellite images (optical or SAR) and other ancillary data (e.g. a digital elevation model) for identification and modeling of nature types typically present in mountain vegetation in Arctic alpine areas. The data fusion methodology consists of three steps. (i) Segmentation of VHR aerial photo into spectrally homogeneous regions (polygons). (ii) Estimation of complementary information for each polygon using geo-referenced data from other sources. (iii) Analysis of the constructed feature vectors. We also demonstrated the strength of satellite data by qualitatively evaluating the potential for creating high resolution snow cover maps. These maps may be used to describe important environmental variables. Using a set of data consisting of an aerial photo, two SPOT 5 images and a Radarsat-2 quad-pol image, we demonstrated the potential of the data fusion methodology by an example where the polygon-derived features were analysed using PCA.
A robust nonlinear scale space change detection approach for SAR images
In this paper, we propose a change detection approach based on nonlinear scale space analysis of change images
for robust detection of various changes incurred by natural phenomena and/or human activities in Synthetic
Aperture Radar (SAR) images using Maximally Stable Extremal Regions (MSERs). To achieve this, a variant
of the log-ratio image of multitemporal images is calculated which is followed by Feature Preserving Despeckling
(FPD) to generate nonlinear scale space images exhibiting different trade-offs in terms of speckle reduction
and shape detail preservation. MSERs of each scale space image are found and then combined through a
decision level fusion strategy, namely “selective scale fusion” (SSF), where contrast and boundary curvature of
each MSER are considered. The performance of the proposed method is evaluated using real multitemporal
high resolution TerraSAR-X images and synthetically generated multitemporal images composed of shapes with
several orientations, sizes, and backscatter amplitude levels representing a variety of possible signatures of change.
One of the main outcomes of this approach is that objects with different sizes and levels of contrast
with their surroundings appear as stable regions in different scale space images; the fusion of results across
scale space images therefore yields good overall performance.
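The log-ratio change image on which the scale space analysis operates can be sketched as follows (a standard formulation; the paper uses a variant of it):

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-10):
    """Absolute log-ratio change image for a co-registered multitemporal
    SAR intensity pair; large values indicate changed backscatter."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

# Unchanged pixels give ~0; a pixel whose intensity quadruples gives ln(4).
a = np.ones((4, 4))
b = a.copy()
b[0, 0] = 4.0
change = log_ratio(a, b)
```

The ratio form suits the multiplicative speckle model of SAR, which is why despeckling (here, FPD) and scale space analysis are applied to this image rather than to a plain difference.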
Data Processing Applications
Investigating vegetation spectral reflectance for detecting hydrocarbon pipeline leaks from multispectral data
The aim of this paper is to analyse spectral reflectance data from Landsat TM of vegetation that has been exposed to
hydrocarbon contamination from oil spills from pipelines. The study is undertaken in an area of mangrove and swamp
vegetation where the detection of an oil spill is traditionally difficult to make. We used a database of oil spill records to
help identify candidate sites for spectral analysis. Extracted vegetation spectra were compared between polluted and
non-polluted sites, and supervised (neural network) classification was carried out to map hydrocarbon (HC) contaminated
sites from the sample areas. Initial results show that polluted sites are characterised by high reflectance in the visible
(VIS) 0.4μm - 0.7μm, and a lower reflectance in the near-infrared (NIR) 0.7μm - 1.1μm. This suggests that the
vegetation is in a stressed state. Samples taken from pixels surrounding polluted sites show similar spectral reflectance
values to that of polluted sites suggesting possible migration of HC to the wider environment. Further work will focus on
increasing the sample size and investigating the impact of an oil spill on a wider buffer zone around the spill site.
The development of a remote sensing system with real-time automated horizon tracking for distance estimation at sea
Abdulquadir L. Baruwa,
Adrian N. Evans,
Roy Wyatt
This paper outlines the development of a robust, automated, real-time horizon detection and tracking system for the
purpose of distance estimation at sea. The distance information facilitates visual monitoring of the sea
surface in remote sensing or monitoring applications. The specific application of the system to marine mammal
mitigation is used to trial and demonstrate its effectiveness. Results from the trials are presented and
show the effectiveness of the system.
On board processing procedures for the Solar Orbiter METIS coronagraph
Solar Orbiter is an ESA space mission devoted to improving our knowledge of the still not fully understood physical
mechanisms underlying the behaviour of our star. The mission has a peculiar trajectory that
will bring the spacecraft as close to the Sun as 0.28 AU, offering the opportunity to observe our star from closer than ever
before. METIS, one of the instruments selected for the Solar Orbiter payload, is a coronagraph that will
investigate the inner part of the heliosphere, performing imaging in the visible band and in the hydrogen Lyman α line at
121.6 nm. METIS will operate two detectors simultaneously, an intensified APS for the UV channel and
an APS for visible light, together with a Liquid Crystal Variable Retarder (LCVR) plate for broadband visible polarimetry.
They will be operated by the centralised management unit of the instrument, the METIS Processing and Power
Unit. This payload subsystem hosts a microprocessor whose application software implements all the
functionality needed to fully control the instrument subsystems, along with its own processing capabilities. Both sensors will be
read out at high rate, and the acquired data shall undergo preliminary on-board processing to maximize the
scientific return and to provide the information necessary to validate the results on the ground. Since Solar Orbiter is a deep-space
mission, some METIS procedures have been designed to give the instrument efficient autonomous
behaviour when an immediate reaction is required, as for transient events or the occurrence of safety hazard
conditions. METIS will implement an on-board algorithm for the automatic detection of such events in order to
react promptly and autonomously adapt the observing procedure.
A spectral water index based on visual bands
Essa Basaeed,
Harish Bhaskar,
Mohammed Al-Mualla
Land-water segmentation is an important preprocessing step in a number of remote sensing applications such
as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI)
is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery data
from DubaiSat-1. NOWI exploits the spectral characteristics of water content (using visible bands) and uses a
non-linear normalization procedure that places strong emphasis on small changes in lower brightness values
whilst guaranteeing that the segmentation process remains image-independent. The NOWI representation is
validated through systematic experiments, evaluated using robust metrics, and compared against various supervised
classification algorithms. Analysis has indicated that NOWI has the advantages that it: a) is a pixel-based
method that requires no global knowledge of the scene under investigation, b) can be easily implemented in
parallel processing, c) is image-independent and requires no training, d) works in different environmental conditions,
e) provides high accuracy and efficiency, and f) works directly on the input image without any form of
pre-processing.
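The exact NOWI formulation is not given in the abstract; the following sketch only illustrates the two ingredients it combines, a normalized difference of visible bands and a non-linear stretch that expands small changes at low brightness. Both function bodies are illustrative assumptions, not the published index.

```python
import numpy as np

def normalized_index(band_a, band_b):
    """Normalized difference of two visible bands, bounded in [-1, 1]."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    return (a - b) / (a + b + 1e-10)

def nonlinear_stretch(x):
    """Logarithmic stretch mapping [-1, 1] onto [-1, 1] while expanding
    small magnitudes (illustrative only; not the actual NOWI normalization)."""
    return np.sign(x) * np.log1p(np.abs(x) * 9.0) / np.log(10.0)
```

Applying a fixed threshold to the stretched index would then give the image-independent, training-free land-water segmentation the abstract describes.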
SAR Data Processing I: Joint Session
Labeled co-occurrence matrix for the detection of built-up areas in high-resolution SAR images
The characterization of urban environments in synthetic aperture radar (SAR) images is becoming increasingly
challenging with increasing spatial resolution. In SAR images with a geometrical resolution of a few meters
(e.g. 3 m), urban scenes are, roughly speaking, characterized by three main types of backscattering: low intensity, medium
intensity, and high intensity, which correspond to different land-cover types. Based on the observations of the behavior
of the backscattering, in this paper we propose the labeled co-occurrence matrix (LCM) technique to detect and extract
built-up areas. Two textural features, autocorrelation and entropy, are derived from LCM. The image classification is
based on a similarity classifier defined in the general Lukasiewicz structure. Experiments have been carried out on
TerraSAR-X images acquired over Nanjing (China) and Barcelona (Spain), respectively. The obtained classification
accuracies point out the effectiveness of the proposed technique in identifying and detecting built-up areas compared
with the traditional grey level co-occurrence matrix (GLCM) texture features.
Robust tie points selection for InSAR image coregistration
Image coregistration is an important step in SAR interferometry which is a well known method for DEM generation and
surface displacement monitoring. A practical and widely used automatic coregistration algorithm is based on selecting a
number of tie points in the master image and looking for the correspondence of each point in the slave image using
correlation technique. The characteristics of these points, their number and their distribution have a great impact on the
reliability of the estimated transformation. In this work, we present a method for the automatic selection of suitable tie points
that are well distributed over the common area without decreasing the desired number of tie points. First we select
candidate points using the Harris operator. From these candidates we then select tie points according to their cornerness
measure (highest first). Once a tie point is selected, its correspondence is searched for in the slave image; if the
maximum of the similarity measure is below a given threshold, or lies on the border of the search window, the point is
discarded and we proceed to the next Harris point. Otherwise, the cornerness of the remaining candidate Harris points is
multiplied by a spatially radially increasing function centered at the selected point, disadvantaging points within a
neighborhood whose radius is determined from the size of the common area and the desired number of points. This is
repeated until the desired number of points is selected. Results for an ERS-1/2 tandem pair are presented and discussed.
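The selection loop with radial suppression can be sketched as follows (an illustrative implementation; the linear penalty function and its parameters are assumptions, not the authors' exact choice):

```python
import numpy as np

def select_tie_points(points, scores, n_points, radius):
    """Greedy tie-point selection with radial suppression.

    points : (N, 2) candidate coordinates (e.g. Harris corners)
    scores : (N,) cornerness measures
    After each selection, the remaining scores are damped by a factor
    that grows from 0 at the selected point to 1 at distance `radius`,
    discouraging clustered selections.
    """
    points = np.asarray(points, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    chosen = []
    for _ in range(min(n_points, len(points))):
        i = int(np.argmax(scores))
        chosen.append(i)
        d = np.linalg.norm(points - points[i], axis=1)
        scores *= np.clip(d / radius, 0.0, 1.0)  # zero at the pick itself
    return chosen

# Two clustered strong corners and one weaker distant corner:
chosen = select_tie_points([(0, 0), (1, 0), (10, 10)],
                           [1.0, 0.9, 0.5], n_points=2, radius=5.0)
print(chosen)  # [0, 2] -- the distant point wins over the suppressed neighbor
```

The radially increasing factor is what spreads the tie points over the common area instead of letting the strongest corners cluster in one texture-rich region.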
A new heterogeneity scale to improve anisotropic diffusion based speckle filters in SAR images
Rohit K. Chatterjee,
Avijit Kar
The noise-like characteristic of SAR images known as speckle is the most critical impediment to the
automatic segmentation and classification of targets. Over the last few decades many adaptive speckle filters have been
proposed. Most of these classical filters are single-stage, i.e. their repeated use blurs image features
(e.g. edges or textures) and generates artefacts, yet iterative application is required to achieve the desired amount of
smoothing. To retain structure in adaptive filtering, the key component is a precise estimate of scene heterogeneity;
the failure of traditional filters to retain features upon iteration is due to their failure to measure scene heterogeneity
optimally. Perona and Malik proposed an Anisotropic Diffusion (AD) equation which iteratively diffuses gray
values while preserving edges. Black et al. developed a robust statistical interpretation of AD and opened a broader
context for choosing between alternative diffusion functions. Building on Black's work, we earlier proposed a robust
speckle reducing anisotropic diffusion (ROSRAD) filter together with a heterogeneity scale based on Otsu's thresholding
algorithm. That scale is optimal but not robust. This paper extends our work: we propose a different
heterogeneity scale which is robust and performs better for the speckle noise distribution.
SAR Data Processing II: Joint Session
A semi-automatic approach for estimating bedrock and surface layers from multichannel coherent radar depth sounder imagery
Jerome E. Mitchell,
David J. Crandall,
Geoffrey C. Fox,
et al.
The dynamic responses of the polar ice sheets in Greenland and Antarctica can have substantial impacts on sea level rise. Understanding the mass balance requires accurate assessments of the bedrock and surface layers, but identifying each layer in ground-penetrating radar imagery must typically be performed by time-consuming hand selection. We have developed an approach for semi-automatically estimating bedrock and surface layers from radar depth sounder imagery acquired over Antarctica. Our solution utilizes an active contours method ("level sets"), which identifies surface and bedrock boundaries by evolving initial estimates of a layer's position and depth until a gradient-based cost function is minimized. We evaluated the proposed semi-automatic method on 20 images with respect to hand-labeled ground truth. Compared to an existing automatic technique, our approach reduced labeling error by factors of 5 and 3.5 for tracing bedrock and surface layers, respectively.
Poster Session
Comparison on accuracy of image matching between lossy JPEG compression and lossy JPEG 2000 compression
Ryuji Matsuoka,
Mitsuo Sone,
Noboru Sudo,
et al.
This paper reports an experiment conducted to compare lossy JPEG compression and lossy JPEG 2000 compression
with respect to the accuracy of image matching. The experiment used 54 color images of diverse textures and tones,
on the assumption that image matching utilizes a pair of images reconstructed from image data lossily compressed in
an ordinary digital camera. Lossy JPEG compression was executed with a set of compression parameters used by a
Canon EOS 20D digital camera, while lossy JPEG 2000 compression was executed so that the file size of each piece of
JPEG 2000 compressed image data equals that of the corresponding JPEG compressed image data. Moreover, we
prepared another set of JPEG and JPEG 2000 compressed image data, lossily compressed at the compression ratio
expected for an ordinary color image compressed in the EOS 20D. When the file sizes of the JPEG 2000 and JPEG
compressed image data are the same, the experiment results clearly show that JPEG 2000 compression is superior to
JPEG compression in image quality measured on pixel values in RGB color space. However, the results do not
necessarily indicate that JPEG 2000 compression provides more accurate matching results than JPEG compression.
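Pixel-value image quality of the kind compared here is commonly summarized as PSNR over the RGB channels. A minimal sketch (the function name and the 8-bit peak value are illustrative assumptions, not the paper's exact measure):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and its
    lossy reconstruction, computed over all RGB pixel values."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

A higher PSNR between the original and the decompressed image indicates better pixel-level fidelity; the experiment's point is that this does not automatically translate into better matching accuracy.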
Multi-source remote-sensing image matching based on epipolar line and least squares
In remote sensing image applications, image matching is a key technology whose quality directly affects the quality of
subsequent results. This paper presents an improved SIFT feature matching method for multi-source remote-sensing
image registration based on GPU computing, epipolar-line constraints, and least squares, with the aim of balancing
accuracy and efficiency. The method first performs tonal balancing, then extracts SIFT features using GPU computing,
and then matches feature points with an epipolar-line and least-squares matching method combined with RANSAC.
Finally, it analyzes the error sources of SIFT mismatches and develops an improved strategy to reduce them. The
experimental results show that the method can effectively improve the efficiency and precision of SIFT feature matching.
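The RANSAC step used to reject SIFT mismatches can be illustrated with a pure-numpy sketch that fits a 2-D affine transform to putative correspondences. This is the generic outlier-rejection technique, not the authors' GPU/epipolar pipeline; the function name, the affine model, and the parameter defaults are assumptions:

```python
import numpy as np

def ransac_affine(src, dst, n_iter=500, tol=2.0, seed=0):
    """Fit a 2-D affine transform (3x2 matrix A, homogeneous source
    coords) to noisy correspondences with RANSAC; return the refit
    transform and the inlier mask."""
    rng = np.random.default_rng(seed)
    n = len(src)
    src_h = np.hstack([src, np.ones((n, 1))])  # homogeneous coordinates
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(n, 3, replace=False)  # minimal sample for affine
        A, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        err = np.linalg.norm(src_h @ A - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the model on all inliers of the best hypothesis
    A, *_ = np.linalg.lstsq(src_h[best_inliers], dst[best_inliers], rcond=None)
    return A, best_inliers
```

Matches flagged as outliers are the "SIFT mismatches" that the abstract's strategy aims to reduce before the final least-squares refinement.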
Bartlett algorithm modification for energy spectrum assessment of optical radiation
Arseny Zhdanov,
Oleg Moskaletz
This research is conducted in order to develop a spectral device for instant remote sensing. The device should process
radiation from sources with which direct contact is either impossible or undesirable. In the proposed spectral device,
optical radiation is guided out of the unfavorable environment via a piece of optical fiber with high negative dispersion.
The dispersion properties of the optical fiber allow using it as a key element of the proposed device.
In this paper, an assessment of the energy spectrum of optical radiation that is stationary in time is proposed using a
modified Bartlett algorithm. The original Bartlett algorithm is based on calculating power sample spectra over a series of
consecutive samples with duty cycle Q = 1. Our modification is a significant increase in the duty cycle (Q >> 1) of the
stationary process samples. The proposed algorithm is to be realized by calculating complex sample spectra using a piece
of optical fiber as the dispersive system. Because dispersion is used to acquire the sample optical spectra, the time
function of a single sample spectrum has a much longer duration than the original sample; this is the reason to modify
the Bartlett algorithm.
A significant increase of the duty cycle Q means that the separately analyzed radiation samples are uncorrelated in time.
This allows considering the set of separate samples as an ensemble of N stochastic process realizations. The temporal
stationarity condition allows acquiring a large number of sample spectra from a single realization.
In this paper we perform a mathematical analysis of optical radiation processing using the modified Bartlett algorithm. A
complex sample spectrum is described as the convolution of the window spectrum function with the complex spectral
function of the optical radiation. The power spectrum is the product of the complex spectrum and its complex conjugate.
The spectral slot of the modified Bartlett algorithm matches that of the original Bartlett algorithm.
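The original Bartlett estimator (duty cycle Q = 1) that the modification departs from is simply an average of segment periodograms. A minimal numpy sketch for a sampled signal; the function name is an assumption, and back-to-back segments stand in for the paper's widely spaced (Q >> 1) optical samples:

```python
import numpy as np

def bartlett_psd(x, n_seg):
    """Bartlett's method: split x into n_seg non-overlapping segments,
    compute each segment's periodogram, and average them. Averaging
    reduces the variance of the power-spectrum estimate at the cost of
    frequency resolution."""
    seg_len = len(x) // n_seg
    segs = np.reshape(x[:n_seg * seg_len], (n_seg, seg_len))
    periodograms = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len
    return periodograms.mean(axis=0)
```

In the optical realization described above, each "segment" is a fiber-dispersed sample whose complex spectrum is acquired physically rather than by FFT, but the averaging principle is the same.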
MIMO radar arrays with minimum redundancy: a design method
A. J. Kirschner,
U. Siart,
J. Guetlein,
et al.
Coherent multiple-input multiple-output (MIMO) radar systems with co-located antennas form monostatic virtual
arrays by discrete convolution of a bistatic setup of transmitters and receivers. Thereby, a trade-off exists between
maximum array dimension, element spacing, and hardware effort. For estimating the direction of arrival, the
covariance matrix of the array element signals plays an important role. Here, minimum redundancy arrays aim at a
hardware reduction, with signal reconstruction exploiting the Toeplitz characteristics of the covariance matrix.
However, the discrete spatial convolution complicates finding an optimal antenna setup with minimum redundancy,
so combinatorial effort is the consequence. This paper presents a simplified algorithm for finding MIMO array setups
of maximum dimension with minimum redundancy.
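The virtual-array construction and the redundancy being minimized can be illustrated in a few lines. The function names are ours, positions are in unit (e.g. half-wavelength) spacings, and this enumerative redundancy count only sketches the combinatorial problem the paper's algorithm simplifies:

```python
import numpy as np
from itertools import combinations

def virtual_array(tx, rx):
    """Element positions of the monostatic virtual array: every
    transmit/receive pair contributes an element at the sum of its
    positions (the discrete convolution of the two apertures)."""
    return np.sort(np.unique([t + r for t in tx for r in rx]))

def redundancy(positions):
    """Number of repeated pairwise spacings. A minimum-redundancy
    array keeps this as low as possible for a given aperture, since
    each distinct spacing fills one lag of the Toeplitz covariance."""
    diffs = [b - a for a, b in combinations(sorted(positions), 2)]
    return len(diffs) - len(set(diffs))
```

For example, two transmitters at {0, 1} and two receivers at {0, 2} synthesize a filled 4-element virtual array from only 4 physical antennas; the combinatorial search is over which physical placements yield the largest low-redundancy virtual aperture.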
The remote sensing image retrieval based on multi-feature
With the rapid development of remote sensing technology and the successful launch of a variety of earth observation
satellites, the volume of image datasets is growing exponentially in many application areas. Content-based remote
sensing image retrieval (CBRSIR), an efficient means of managing and utilizing the information in image databases
from the viewpoint of comprehending image content, is applied to remote sensing image retrieval. However, a single
kind of feature often cannot express the image content exactly. Therefore, a multi-feature retrieval model based on
three color features and four texture features is proposed in this paper. The experimental results show that the
multi-feature model yields better retrieval results than models based on any single feature.
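The idea of combining color and texture cues into one descriptor can be sketched as follows. This is a generic illustration with deliberately simple features (per-channel color histograms and a gradient-magnitude histogram), not the paper's three color and four texture features; all names, bin counts, and weights are assumptions:

```python
import numpy as np

def color_hist(img, bins=8):
    # per-channel intensity histograms over [0, 256), L1-normalised
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def texture_hist(img, bins=8):
    # histogram of gradient magnitudes of the gray image as a crude
    # texture cue; fixed range keeps histograms comparable across images
    gray = img.mean(axis=2)
    gx, gy = np.gradient(gray)
    h, _ = np.histogram(np.hypot(gx, gy), bins=bins, range=(0, 64))
    return h / h.sum()

def multi_feature(img, w_color=0.5, w_texture=0.5):
    # weighted concatenation; retrieval ranks by Euclidean distance
    return np.concatenate([w_color * color_hist(img),
                           w_texture * texture_hist(img)])
```

Retrieval then ranks database images by `np.linalg.norm(multi_feature(query) - multi_feature(candidate))`; the weights trade off how much each cue influences the ranking.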
Visual appearance of wind turbine tower at long range measured using imaging system
Wind turbine towers affect the visual appearance of the landscape, for example in the touristic woodland of Dalecarlia,
and the fear is that the visual impact will be too negative for the important tourist trade. The landscape analysis
developed by municipalities around Lake Siljan limited the expansion of wind power due to the strong visual impression
of wind turbine towers. In order to facilitate the assessment of the visual impact of towers, a view from Tällberg over the
ring of heights on the other side of Lake Siljan has been photographed every ten minutes for a year (34,727 images, about
65% of the possible number during a year). Four towers are visible in the photos; three of them were used in
the assessment of visual impression. This contribution presents a method to assess the visibility of wind turbine towers
from photographs, describing the measuring situation (location and equipment) as well as the analytical method and
results of the analysis.
The towers were visible in about 48% of the analyzed daytime images taken with the equipment used. During
the summer (winter) months the towers were apparent in 49% (46%) of the images. At least one red warning light was
visible on the towers in about 66% of the night images.
One conclusion of this work is that the method of assessing visibility within digital photographs and translating it into
the equivalent of a normal eye can only provide an upper limit for the visibility of an object.
Pixel response non-uniformity correction for multi-TDICCD camera based on FPGA
A non-uniformity correction algorithm is proposed and implemented on a Field-Programmable Gate Array (FPGA) hardware platform to solve the pixel response non-uniformity (PRNU) problem of a multi Time Delay and Integration Charge Coupled Device (TDICCD) camera. The sources of non-uniformity are introduced and a synthetic correction algorithm is presented, in which a two-point correction is applied within a single channel, a gain-averaging correction among multiple channels, and a scene-adaptive correction among multiple TDICCDs. The correction algorithm is then designed. Finally, after analyzing the FPGA's fixed-point processing capability, the correction algorithm is optimized and implemented on the FPGA. Testing results indicate that the non-uniformity can be decreased from 8.27% to 0.51% for a three-TDICCD camera's images with the proposed correction algorithm, showing that the algorithm offers high real-time performance, is readily realizable in engineering practice, and satisfies the system requirements.
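The single-channel two-point correction at the core of such a scheme can be sketched in floating point (the FPGA version would use fixed-point arithmetic, and the multi-channel and scene-adaptive stages are omitted; the function and parameter names are illustrative):

```python
import numpy as np

def two_point_nuc(raw, dark, bright, lo=0.0, hi=1.0):
    """Two-point non-uniformity correction.

    dark and bright are per-pixel responses to uniform low and high
    illumination; lo and hi are the target values they should map to.
    Each pixel gets its own gain and offset so that both calibration
    frames come out flat, removing fixed-pattern gain/offset variation.
    """
    gain = (hi - lo) / (bright - dark)
    return lo + gain * (raw - dark)
```

With per-pixel gains g and offsets o (raw = g * scene + o), this maps every pixel back to the true scene value, which is why a uniform scene comes out uniform after correction.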