Proceedings Volume 9988

Electro-Optical Remote Sensing X


Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 28 December 2016
Contents: 8 Sessions, 31 Papers, 20 Presentations
Conference: SPIE Security + Defence 2016
Volume Number: 9988

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9988
  • Active Sensing I
  • Passive Sensing
  • Active Sensing II
  • Sensor Development
  • Infrared Sensing
  • Signal Processing
  • Poster Session
Front Matter: Volume 9988
This PDF file contains the front matter associated with SPIE Proceedings Volume 9988 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Active Sensing I
Transient imaging for real-time tracking around a corner
Non-line-of-sight imaging is a fascinating emerging area of research and is expected to have an impact in numerous application fields, including civilian and military sensing. In future scenarios, human perception and situational awareness could be extended by sensing shapes and movement around a corner.

Rather than seeing through obstacles directly, non-line-of-sight imaging relies on analyzing indirect reflections of light that traveled around the obstacle. In previous work, transient imaging was established as the key mechanism for extracting useful information from such reflections.

So far, a number of different approaches based on transient imaging have been proposed, with back projection being the most prominent one. Different hardware setups were used for the acquisition of the required data; however, all of them have severe drawbacks such as limited image quality, long capture times, or very high cost. In this paper we propose the analysis of synthetic transient renderings to gain more insight into transient light transport. With this simulated data, we are no longer bound to the imperfect data of real systems and gain more flexibility and control over the analysis.

In the second part, we use the insights of our analysis to formulate a novel reconstruction algorithm. It uses an adapted light simulation to formulate an inverse problem, which is solved in an analysis-by-synthesis fashion. Through rigorous optimization of the reconstruction, it then becomes possible to track known objects outside the line of sight in real time. Due to the forward formulation of the light transport, the algorithm is easily expandable to more general scenarios or different hardware setups. We therefore expect it to become a viable alternative to the classic back projection approach in the future.
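As an illustration of the analysis-by-synthesis idea, the sketch below fits an object position to a measured transient histogram under heavy simplifying assumptions: render_transient is a hypothetical single-bounce forward model (not the authors' simulator), and all setup parameters are placeholders.

```python
# Minimal analysis-by-synthesis sketch (illustrative only, not the authors' code).
import numpy as np
from scipy.optimize import minimize

def render_transient(p, setup):
    """Toy forward model: transient histogram for a single scattering point at p."""
    t = np.arange(setup["n_bins"]) * setup["bin_width"]           # time axis [s]
    d = (np.linalg.norm(p - setup["laser_spot"])                  # wall -> object
         + np.linalg.norm(p - setup["observed_patch"]))           # object -> wall
    tof = d / 3e8                                                 # time of flight
    return np.exp(-0.5 * ((t - tof) / setup["pulse_sigma"]) ** 2) / max(d, 1e-3) ** 4

def track_pose(measurement, p_init, setup):
    """Estimate the object position by minimizing the residual to the measurement."""
    cost = lambda p: np.sum((render_transient(p, setup) - measurement) ** 2)
    return minimize(cost, p_init, method="Nelder-Mead").x
```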
Automated object detection and tracking with a flash LiDAR system
Marcus Hammer, Marcus Hebel, Michael Arens
The detection of objects or persons is a common task in the fields of environment surveillance, object observation, and threat defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and therefore by the data distortion, of most LiDAR systems. The paper presents a solution for real-time data acquisition with a flash LiDAR sensor and synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor's pan-tilt head. As a result, the attention is always focused on the object, independent of the object's behavior. Even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view. The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. It is easy to replace the detection part by any other object detection algorithm, and thus it is easy to track nearly any object, for example a car, a boat, or a UAV at various distances.
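A schematic of the closed sensing loop described above might look as follows; FlashLidar-style interfaces, the pan-tilt unit object, and detect_person are hypothetical stand-ins for the actual hardware and the elementary person detector, not the authors' implementation.

```python
# Hypothetical control loop: keep a detected object centered by steering the pan-tilt head.
import numpy as np

def direction_to_angles(p):
    """Convert a 3D point in the sensor frame to pan/tilt angles in degrees."""
    pan = np.degrees(np.arctan2(p[1], p[0]))
    tilt = np.degrees(np.arctan2(p[2], np.hypot(p[0], p[1])))
    return pan, tilt

def track_object(lidar, ptu, detect_person):
    while True:
        raw = lidar.acquire_frame()          # synchronous raw data acquisition
        cloud = lidar.to_point_cloud(raw)    # range image -> 3D point cloud
        detection = detect_person(cloud)     # elementary person detector (replaceable)
        if detection is None:
            continue                         # keep scanning until the object reappears
        pan, tilt = direction_to_angles(detection.centroid)
        ptu.move_to(pan, tilt)               # next best view: re-center on the object
```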
Sensing and reconstruction of arbitrary light-in-flight paths by a relativistic imaging approach
Martin Laurenzis, Jonathan Klein, Emmanuel Bacher, et al.
Transient light imaging is an emerging technology and interesting sensing approach for fundamental multidisciplinary research ranging from computer science to remote sensing. Recent developments in sensor technologies and computational imaging has made this emerging sensing approach a candidate for next generation sensor systems with rapidly increasing maturity but still relay on laboratory technology demonstrations. At ISL, transient light sensing is investigated by time correlated single photon counting (TCSPC). An eye-safe shortwave infrared (SWIR) TCSPC setup, consisting of an avalanche photodiode array and a pulsed fiber laser source, is used to investigate sparsely scattered light while propagating through air. Fundamental investigation of light in light are carried out with the aim to reconstruct the propagation path of arbitrary light paths. Light pulses are observed in light at various propagation angles and distances. As demonstrated, arbitrary light paths can be distinguished due to a relativistic effect leading to a distortion of temporal signatures. A novel method analyzing the time difference of arrival (TDOA) is carried out to determine the propagation angle and distance with respect to this relativistic effect. Based on our results, the performance of future laser warning receivers can be improved by the use of single photon counting imaging devices. They can detect laser light even when the laser does not directly hit the sensor or is passing at a certain distance.
Penetration of pyrotechnic effects with SWIR laser gated viewing in comparison to VIS and thermal IR bands
In this paper, the potential capability of short-wavelength infrared laser gated-viewing for penetrating the pyrotechnic effects smoke and light/heat has been investigated by evaluating data from conducted field trials. The potential of thermal infrared cameras for this purpose has also been considered and the results have been compared to conventional visible cameras as benchmark. The application area is the use in soccer stadiums where pyrotechnics are illegally burned in dense crowds of people obstructing visibility of stadium safety staff and police forces into the involved section of the stadium. Quantitative analyses have been carried out to identify sensor performances. Further, qualitative image comparisons have been presented to give impressions of image quality during the disruptive effects of burning pyrotechnics.
Passive Sensing
Standoff midwave infrared hyperspectral imaging of ship plumes
Marc-André Gagnon, Jean-Philippe Gagnon, Pierre Tremblay, et al.
Characterization of ship plumes is very challenging due to the great variety of ships, fuel, and fuel grades, as well as the extent of a gas plume. In this work, imaging of ship plumes from an operating ferry boat was carried out using standoff midwave (3-5 μm) infrared hyperspectral imaging. Quantitative chemical imaging of combustion gases was achieved by fitting a radiative transfer model. Combustion efficiency maps and mass flow rates are presented for carbon monoxide (CO) and carbon dioxide (CO2). The results illustrate how valuable information about the combustion process of a ship engine can be successfully obtained using passive hyperspectral remote sensing imaging.
Expanding the dimensions of hyperspectral imagery to improve target detection
On-going research to improve hyperspectral target detection generally focuses on statistical detector performance, reduction of background or environmental contributions to at-sensor radiance, dimension reduction and many other mathematical or physical techniques. These efforts are all aimed at improving target identification in a single scene or data cube. This focus on single scene performance is driven directly by the airborne collection concept of operations (CONOPS) of a single pass per target location. Today's pushbroom and whiskbroom sensors easily achieve single passes and single collects over a target location. If multiple passes are flown for multiple collects on the same location, the time scale for revisit is several minutes.

Emerging gimbaled hyperspectral imagers have the capability to collect multiple scans over the same target location in a time scale of seconds. The ability to scan the same location from slightly different collection geometries below the time scale of significant solar and atmospheric change forces us to reexamine the methods for target detection via the fundamental radiance equation. By expanding the radiance equation in the spatial and temporal dimensions, data from multiple hyperspectral images is used simultaneously for determining at-sensor radiance and surface leaving radiance with the ultimate goal of improving target detection.

This research reexamines the fundamental radiance equation for multiple scan collection geometries expanding both the spatial and temporal domains. In addition, our assumptions for determining at-sensor radiance are revisited in light of the increased dimensionality. The expanded radiance equation is then applied to data collected by a gimbaled long wave infrared hyperspectral imager. Initial results and future work are discussed.
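For reference, one generic single-view form of the at-sensor radiance equation in the emissive (LWIR) domain is given below; the notation is illustrative and not necessarily the authors' expanded multi-view formulation.

```latex
% Generic emissive-band at-sensor radiance (single collection geometry):
L_s(\lambda) \;=\; \tau(\lambda)\,\bigl[\,\varepsilon(\lambda)\,B(\lambda,T)
  \;+\; \bigl(1-\varepsilon(\lambda)\bigr)\,L_d(\lambda)\,\bigr] \;+\; L_u(\lambda)
% with \tau the ground-to-sensor transmission, \varepsilon the surface emissivity,
% B(\lambda,T) the Planck blackbody radiance, L_d the downwelled sky radiance,
% and L_u the upwelled path radiance; the paper expands such terms over
% multiple view geometries and acquisition times.
```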
Image enhancement and color constancy for a vehicle-mounted change detection system
Marco Tektonidis, David Monnin
Vehicle-mounted change detection systems make it possible to improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, it is necessary to compensate for color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
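A minimal sketch of the two building blocks named above, a single-scale center/surround Retinex step and a Gray World gain, is shown below; it assumes an RGB image as a NumPy array and is not the authors' SII-based implementation.

```python
# Illustrative single-scale Retinex + Gray World combination (not the authors' SII code).
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_gray_world(img, sigma=30.0, eps=1.0):
    """img: H x W x 3 RGB image. Returns an 8-bit enhanced image."""
    img = img.astype(np.float64) + eps
    gray_mean = img.mean()
    out = np.empty_like(img)
    for c in range(3):
        surround = gaussian_filter(img[..., c], sigma)            # center/surround estimate
        retinex = np.log(img[..., c]) - np.log(surround + eps)    # local contrast enhancement
        gain = gray_mean / img[..., c].mean()                     # Gray World color balance
        out[..., c] = gain * retinex
    out = (out - out.min()) / (np.ptp(out) + 1e-12)               # rescale for display
    return (255.0 * out).astype(np.uint8)
```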
Detection of object vibrations from high speed infrared images
Remote detection of vibrational features of an object is important for many short-range civil applications, but it is also of interest for long-range applications in the defense and security area. The well-established laser Doppler vibrometry technique is widely used as a high-sensitivity, non-contact method. The development of camera technology in recent years has made image-based methods reliable passive alternatives for vibration and dynamic measurements. Very sensitive applications have been demonstrated using high-speed cameras in the visual spectral range. However, for long-range applications, where turbulence becomes a limiting factor, image acquisition in the short- to mid-wave IR region would be desirable, as atmospheric effects are weaker at longer wavelengths.

In this paper, we experimentally investigate vibration detection from short- and mid-wave IR image sequences using a high-speed imaging technique. Experiments on the extraction of vibration signatures under strong local turbulence conditions are presented.
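As a sketch of how such a vibration signature can be extracted from an image sequence, the snippet below averages a small region on the object per frame and inspects its power spectrum; the frame rate and region of interest are assumptions.

```python
# Illustrative per-region vibration spectrum from a high-speed IR sequence.
import numpy as np

def vibration_spectrum(frames, roi, fs):
    """frames: (N, H, W) array; roi: (y0, y1, x0, x1); fs: frame rate in Hz."""
    y0, y1, x0, x1 = roi
    signal = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))   # mean intensity per frame
    signal = signal - signal.mean()                      # remove the DC component
    power = np.abs(np.fft.rfft(signal)) ** 2             # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, power                                  # peaks mark dominant vibrations
```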
Reconstruction method of compressed sensing for remote sensing images cooperating with energy compensation
Remote sensing features are varied and complicated, no dictionary comprehensively covers them for reconstruction, and the reconstruction precision is therefore not guaranteed. To address these problems, a novel reconstruction method using multiple compressed sensing data sets and based on energy compensation is proposed in this paper. The multiple measured data and multiple coding matrices compose the reconstruction equation, which is solved locally through the Orthogonal Matching Pursuit (OMP) algorithm to obtain the initial reconstruction image. Further assuming that the pixels of a local image patch share the same compensation gray value, a mathematical model for the compensation value is constructed by minimizing the error between the multiple estimated measured values and the actual measured values. After solving the minimization, the compensation values are added to the initial reconstruction image, yielding the final energy-compensated image. The experiments show that the energy compensation method is superior to reconstruction without compensation and that our method is better suited to remote sensing features.
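The sketch below illustrates the two stages named in the abstract, an OMP reconstruction followed by a scalar per-patch compensation value obtained by least squares; the variable names and the simplified single-measurement setting are our assumptions, not the authors' multi-measurement formulation.

```python
# Illustrative OMP reconstruction plus a scalar energy-compensation step.
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse coefficient vector x from y ~= Phi @ x."""
    residual, support = y.astype(np.float64).copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))    # best-matching atom
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # refit on the support
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

def energy_compensation(Phi, y, x0):
    """Add the single gray-value offset c (shared by all pixels of the patch)
    that minimizes || Phi @ (x0 + c * 1) - y ||^2."""
    a = Phi @ np.ones(Phi.shape[1])
    c = a @ (y - Phi @ x0) / (a @ a)
    return x0 + c
```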
Active Sensing II
Optical and acoustical UAV detection
Recent world events have highlighted that the proliferation of UAVs is bringing with it a new and rapidly increasing threat for national defense and security agencies. Whilst many of the reported UAV incidents seem to indicate that there was no terrorist intent behind them, it is not unreasonable to assume that it may not be long before UAV platforms are regularly employed by terrorists or other criminal organizations. The flight characteristics of many of these mini- and micro-platforms present challenges for current systems, which have been optimized over time to defend against traditional air-breathing airborne platforms. Many programs to identify cost-effective measures for detection, classification, tracking and neutralization have begun in the recent past. In this paper, ISL shows how the performance of a UAV detection and tracking concept based on acousto-optical technology can be considerably increased through active imaging.
Eye safe lidar and passive EO sensing for cloud monitoring
Ove Steinvall, Ove Gustafsson, Folke Berglund
This study has investigated the use of an eye safe lidar (1550 nm) with equivalent performance of an ordinary military laser range finder for cloud monitoring. The aim was to combine lidar data with camera images in the visible, short wave infrared (SWIR) and infrared (IR) to better estimate the cloud density and cloud coverage.

The measurements were concentrated on low clouds, mostly of the cumulus type. We found that these clouds, between 0 and 2 km, often showed a layered structure and often had a limited optical density, probably allowing for observation through the cloud. This information is hard to obtain from a passive EO sensor alone. This was supported both by the simulation of the lidar response from thin clouds and by inverting the measured lidar waveform.

The comparison between the camera image intensities and the integrated range-corrected lidar signals showed both negative and positive correlations. The highest positive correlation was obtained by comparing the lidar signal with the cloud temperature as derived from the FLIR camera. However, there were many cases where one or two of the camera intensities correlated negatively with the lidar signal. We could, for example, observe that under certain conditions a cloud that was dark in the SWIR appeared white in the visible camera, and vice versa. Examples of lidar and image data are presented and analyzed.
Rapid 2-axis scanning lidar prototype
Daryl Hartsell, Paul E. LaRocque, Jeffrey Tripp
The rapid 2-axis scanning lidar prototype was developed to demonstrate high-precision single-pixel linear-mode lidar performance. The lidar system integrates components from various commercial products, allowing for future customization and performance enhancements. The intent of the prototype scanner is to demonstrate current state-of-the-art high-speed linear scanning technologies.

The system consists of two pieces: the sensor head and the control unit. The sensor head can be installed up to 4 m from the control box and houses the lidar scanning components and a small RGB camera. The control unit houses the power supplies and ranging electronics necessary for operating the electronics housed inside the sensor head.

This paper will discuss the benefits of a 2-axis scanning linear-mode lidar system, such as range performance and a user-selectable FOV. Other features include real-time processing of 3D image frames consisting of up to 200,000 points per frame.
Effect of optical turbulence along a downward slant path on probability of laser hazard
The importance of the optical turbulence effect along a downward slant path on the probability of exceeding the maximum permissible exposure (MPE) level of a laser is discussed.

Optical turbulence is generated by fluctuations (variations) in the refractive index of the atmosphere, which are caused in turn by changes in atmospheric temperature and humidity. The refractive-index structure parameter, Cn^2, is the single most important parameter in the description of turbulence effects on the propagation of electromagnetic radiation. In the boundary layer, the lowest part of the atmosphere, where the ground directly influences the atmosphere, Cn^2 in Sweden varies between about 10^-17 and 10^-12 m^-2/3, see Bergström et al. [5]. Along a horizontal path, Cn^2 is often assumed to be constant. The variation of Cn^2 along a slant path is described by the Tatarski model as a function of height to the power of -4/3 or -2/3, depending on day or night conditions.

The hazard of laser eye damage is calculated for a long downward slant path. The probability of exceeding the maximum permissible exposure (MPE) level is given as a function of distance, in comparison with the nominal ocular hazard distance (NOHD), for adopted levels of turbulence. Furthermore, calculations are carried out for a laser pointer or a designator laser from high altitude and long distance down to a ground target. The example used shows that there is a 10% risk of exceeding the MPE at a distance 2 km beyond the NOHD (48 km in this example) due to a turbulence level of 5·10^-15 m^-2/3 at ground height. The influence of turbulence on the NOHD of a laser beam along a horizontal path has been shown before by Zilberman et al. [4].
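For reference, the Tatarski-type height scaling referred to above can be written as follows (one common form; h0 is a reference height near the ground).

```latex
% Height dependence of the refractive-index structure parameter (Tatarski model):
C_n^2(h) \;=\; C_n^2(h_0)\,\left(\frac{h}{h_0}\right)^{-4/3} \quad\text{(daytime)},
\qquad
C_n^2(h) \;=\; C_n^2(h_0)\,\left(\frac{h}{h_0}\right)^{-2/3} \quad\text{(nighttime)}
```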
Sensor Development
Multiband optics for imaging systems (Conference Presentation)
There is a strong desire to reduce the size and weight of single- and multiband IR imaging systems in Intelligence, Surveillance and Reconnaissance (ISR) operations on hand-held, helmet-mounted, or airborne platforms. NRL is developing new IR glasses that expand the glass map and provide compact solutions for multispectral imaging systems. These glasses were specifically designed to have comparable glass molding temperatures and thermal properties to enable lamination and co-molding of the optics, which leads to a reduction in the number of air-glass interfaces (lower Fresnel reflection losses). Our multispectral optics designs using these new materials demonstrate reduced size, reduced complexity, and improved performance. This presentation will cover the new optical materials and multispectral designs, as well as the fabrication and characterization of new optics. Additionally, graded index (GRIN) optics offer further potential for both weight savings and increased performance but have so far been limited to the visible and NIR bands (wavelengths shorter than about 0.9 µm). NRL is developing a capability to extend GRIN optics to longer wavelengths in the infrared by exploiting diffused IR-transmitting chalcogenide glasses. These IR-GRIN lenses are compatible with all IR wavebands (SWIR, MWIR and LWIR) and can be used alongside conventional materials. The IR-GRIN lens technology, design space, and anti-reflection considerations will be presented in this talk.
Noncontact thermoacoustic detection of targets embedded in dispersive media
Kevin C. Boyle, Hao Nan, Butrus T. Khuri-Yakub, et al.
A microwave-induced thermoacoustic detection system for embedded targets in lossy media is presented. The system achieves reliable detection of 5 cm × 5 cm × 2 cm targets embedded in a large Agarose sample at a 20 cm acoustic standoff. Repeated measurements across different target and sample configurations confirm the system’s ability to distinguish between a target signal and a baseline control signal generated by the package without embedded targets. Post-processing techniques including filtering and baseline signal characterization further improve detection performance.
Research of dynamic goniometer method for direction measurements
Yu. V. Filatov, E. D. Bokhman, P. A. Ivanov, et al.
The report presents the results of experimental research on an angle measurement system intended for measuring angles between the normals of mirrors that define directions in space. The dynamic mode of system operation is defined by the continuous rotation of a platform carrying an autocollimating null-indicator. The angle measurements are provided by a holographic optical encoder.
Optimization design and evaluation specifications analysis for the optical remote system with a high spatial resolution
Ningjuan Ruan, Jinping He, Zhaojun Liu, et al.
For high-spatial-resolution optical remote sensing imaging systems, the performance of the sampling imaging system is traditionally designed and evaluated according to the system SNR and the system MTF at the Nyquist frequency. On the basis of information theory, this paper proposes an optimization design and evaluation specification based on the full remote sensing imaging chain: information density. It combines various imaging quality parameters, such as MTF, SNR, and sideband aliasing, and includes the influence of the scene, atmosphere, remote sensor, and satellite platform in the in-orbit imaging chain on imaging quality. System designs and experiments under different resolutions were also conducted. The experimental results showed that information density can be used to evaluate the performance of a sampling imaging system and to guide the optimization design of a high-spatial-resolution optical remote sensing system.
Infrared Sensing
High sensitivity InAs photodiodes for mid-infrared detection
Jo Shien Ng, Xinxin Zhou, Akeel Auckloo, et al.
Sensitive detection of mid-infrared light (2 to 5 μm wavelengths) is crucial to a wide range of applications. Many of the applications require high-sensitivity photodiodes, or even avalanche photodiodes (APDs), with the latter generally accepted as more desirable to provide higher sensitivity when the optical signal is very weak. Using the semiconductor InAs, whose bandgap is 0.35 eV at room temperature (corresponding to a cut-off wavelength of 3.5 μm), Sheffield has developed high-sensitivity APDs for mid-infrared detection for one such application, satellite-based greenhouse gases monitoring at 2.0 μm wavelength. With responsivity of 1.36 A/W at unity gain at 2.0 μm wavelength (84 % quantum efficiency), increasing to 13.6 A/W (avalanche gain of 10) at -10V, our InAs APDs meet most of the key requirements from the greenhouse gas monitoring application, when cooled to 180 K. In the past few years, efforts were also made to develop planar InAs APDs, which are expected to offer greater robustness and manufacturability than mesa APDs previously employed. Planar InAs photodiodes are reported with reasonable responsivity (0.45 A/W for 1550 nm wavelength) and planar InAs APDs exhibited avalanche gain as high as 330 at 200 K. These developments indicate that InAs photodiodes and APDs are maturing, gradually realising their potential indicated by early demonstrations which were first reported nearly a decade ago.
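The quoted responsivity values are consistent with the standard relation between quantum efficiency, wavelength, and avalanche gain, written below for reference.

```latex
% Responsivity of a photodiode with quantum efficiency \eta at wavelength \lambda
% and avalanche gain M:
R \;=\; M\,\frac{\eta\,q\,\lambda}{h\,c}
  \;\approx\; M\,\frac{\eta\,\lambda\,[\mu\mathrm{m}]}{1.24}\ \mathrm{A/W}
% e.g. \eta = 0.84, \lambda = 2.0\,\mu\mathrm{m}: R \approx 1.36 A/W at M = 1
% and R \approx 13.6 A/W at M = 10, matching the values quoted above.
```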
Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding
Andrea Zingoni, Marco Diani, Giovanni Corsini
We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (down to a single pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like search and rescue, defence, and disaster monitoring.
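A minimal sketch of the two ingredients named in the abstract, a recursive per-pixel background/noise estimate and a noise-adaptive double threshold with hysteresis, is given below; the parameter values and the hysteresis linking are illustrative choices, not the authors' exact procedure.

```python
# Illustrative per-pixel background estimation with noise-adaptive double thresholding.
import numpy as np
from scipy.ndimage import binary_dilation

def detect_moving_objects(frames, alpha=0.02, k_low=3.0, k_high=5.0):
    """frames: iterable of 2D arrays. Yields a boolean detection map per frame."""
    it = iter(frames)
    bg = next(it).astype(np.float64)           # initial background estimate
    var = np.ones_like(bg)                     # initial noise variance estimate
    for frame in it:
        diff = frame.astype(np.float64) - bg
        sigma = np.sqrt(var)
        strong = np.abs(diff) > k_high * sigma                     # confident detections
        weak = np.abs(diff) > k_low * sigma                        # candidate pixels
        detection = weak & binary_dilation(strong, iterations=2)   # hysteresis linking
        keep = ~detection                      # do not let moving objects bias the model
        bg[keep] += alpha * diff[keep]
        var[keep] = (1 - alpha) * var[keep] + alpha * diff[keep] ** 2
        yield detection
```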
Use of multivariate analysis to minimize collecting of infrared images and classify detected objects
An infrared image contains spatial and radiative information about objects in a scene. Two challenges are to classify pixels in a cluttered environment and to detect partly obscured or buried objects like mines and IEDs. Infrared image sequences provide additional temporal information, which can be utilized for more robust object detection and improved classification of object pixels. A manual evaluation of multi-dimensional data is generally time consuming and inefficient, and therefore various algorithms are used. With a principal component analysis (PCA), most of the information is retained in a new, reduced system with fewer dimensions. The principal component coefficients (loadings) are here used both for classifying detected object pixels and for reducing the number of images in the analysis by computing score vectors. For the datasets studied, the number of required images can be reduced significantly without loss of detection and classification ability. This allows for sparser sampling and scanning of larger areas when using a UAV, for example.
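A compact sketch of such a PCA decomposition of an IR image sequence with plain NumPy is given below; the naming of the outputs (per-image weights versus per-pixel component maps) and the mean-removal convention are ours.

```python
# Illustrative PCA of an IR image sequence via SVD (temporal mean removed per pixel).
import numpy as np

def pca_decompose(frames, n_components=3):
    """frames: (N, H, W) image sequence."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(np.float64)
    X -= X.mean(axis=0)                                  # remove the temporal mean per pixel
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    image_weights = U[:, :n_components] * S[:n_components]          # contribution of each image
    component_maps = Vt[:n_components].reshape(n_components, h, w)  # spatial pattern per component
    explained = S[:n_components] ** 2 / np.sum(S ** 2)              # retained variance fraction
    return image_weights, component_maps, explained
```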
Generating object proposals for improved object detection in aerial images
Lars W. Sommer, Tobias Schuchert, Jürgen Beyerer
Screening of aerial images covering large areas is important for many applications such as surveillance, tracing, or rescue tasks. To reduce the workload of image analysts, automatic detection of candidate objects is required. In general, object detection is performed by applying classifiers or a cascade of classifiers within a sliding window algorithm. However, the huge number of windows to classify, especially in the case of multiple object scales, makes these approaches computationally expensive. To overcome this challenge, we reduce the number of candidate windows by generating so-called object proposals. Object proposals are a set of candidate regions in an image that are likely to contain an object. We apply the Selective Search approach that has been broadly used as a proposals method for detectors like R-CNN or Fast R-CNN. Here, a set of small regions is generated by initial segmentation, followed by hierarchical grouping of the initial regions to generate proposals at different scales. To reduce the computational costs of the original approach, which consists of 80 combinations of segmentation settings and grouping strategies, we only apply the most appropriate combination. To find it, we analyze the impact of varying segmentation settings, different merging strategies, and various colour spaces by calculating the recall with regard to the number of object proposals and the intersection over union between generated proposals and ground truth annotations. As aerial images differ considerably from datasets that are typically used for exploring object proposal methods, in particular in object size and the image fraction occupied by an object, we further adapt the Selective Search algorithm to aerial images by replacing the random order of generated proposals with a weighted order based on the object proposal size, and we integrate a termination criterion for the merging strategies. Finally, the adapted approach is compared to the original Selective Search algorithm and to baseline approaches like sliding window on the publicly available DLR 3K Munich Vehicle Aerial Image Dataset to show how the number of candidate windows to classify can be clearly reduced.
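A size-weighted ordering of this kind could, for instance, rank proposals by how close their area is to an expected vehicle footprint; the scoring function below is our assumption, not the authors' exact weighting.

```python
# Illustrative size-weighted ordering of object proposals for aerial imagery.
import numpy as np

def order_proposals(boxes, expected_area):
    """boxes: (N, 4) array of [x0, y0, x1, y1]; expected_area in pixels."""
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    weights = np.exp(-np.abs(np.log(np.maximum(areas, 1) / expected_area)))  # 1.0 at expected size
    return boxes[np.argsort(-weights)]                                       # most plausible sizes first
```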
Signal Processing
Compressed sensing for super-resolution spatial and temporal laser detection and ranging
Martin Laurenzis, Stephane Schertzer, Frank Christnacher
In the past decades, laser-aided electro-optical sensing has reached high maturity, and several commercial systems are available on the market for various but specific applications. These systems can be used for detection, i.e. imaging, as well as ranging. They cover laser scanning devices like LiDAR and staring full-frame imaging systems like laser gated viewing or LADAR. The sensing capabilities of these systems are limited by physical parameters (like FPA array size, temporal bandwidth, scanning rate, sampling rate) and are adapted to specific applications. Changing a system parameter, such as increasing the spatial resolution, implies setting up a new sensing device at high development cost, or purchasing and installing a completely new sensor unit. Computational imaging approaches can help to set up sensor devices with flexible or adaptable sensing capabilities. In particular, compressed sensing is an emerging computational method and a promising candidate for realizing super-resolution sensing, with the possibility of adapting its performance to various sensing tasks. With compressed sensing it is possible to gain higher spatial and/or temporal resolution; the sensing capabilities then no longer depend only on the physical performance of the device but also on the computational effort, and can be adapted to the application. In this paper, we demonstrate and discuss laser-aided imaging using CS for super-resolution spatio-temporal imaging and ranging.
Multi-spectral texture analysis for IED detection
The use of Improvised Explosive Devices (IEDs) has increased significantly around the world and is a globally widespread phenomenon. Although measures can be taken to anticipate and prevent the opponent's ability to deploy IEDs, detection of IEDs will always be a central activity. There is a wide range of different sensors that are useful, but simple means, such as a pair of binoculars, can also be crucial to detect IEDs in time.

Disturbed earth (disturbed soil), such as freshly dug areas, dumps of clay on top of smooth sand, or depressions in the ground, could be an indication of a buried IED. This paper briefly describes how a field trial was set up to provide a realistic data set on a road section containing areas with disturbed soil due to buried IEDs. The road section was imaged using a forward-looking land-based sensor platform consisting of visual imaging sensors together with long-, mid-, and shortwave infrared imaging sensors.

The paper investigates the presence of discriminatory information in surface texture by comparing areas of disturbed and undisturbed soil. The investigation is conducted for the different wavelength bands available. To extract features that describe texture, image processing tools such as 'Histogram of Oriented Gradients', 'Local Binary Patterns', 'Lacunarity', 'Gabor Filtering' and 'Co-Occurrence' are used. It is found that texture as characterized here may provide discriminatory information to detect disturbed soil, but the signatures we found are weak and cannot be used alone in, e.g., a detector system.
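As an example of one of the listed descriptors, the snippet below compares two soil patches through their Local Binary Pattern histograms using scikit-image; the parameters and the chi-square comparison are illustrative choices.

```python
# Illustrative LBP texture comparison between two image patches (scikit-image).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1):
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def texture_distance(patch_a, patch_b):
    """Chi-square distance between LBP histograms (larger = more dissimilar texture)."""
    ha, hb = lbp_histogram(patch_a), lbp_histogram(patch_b)
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-12))
```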
Deep subspace mapping in hyperspectral imaging
We propose a novel deep learning approach using autoencoders to map spectral bands to a space of lower dimensionality while preserving the information that makes it possible to discriminate different materials. Deep learning is a relatively new pattern recognition approach which has given promising results in many applications. In deep learning, a hierarchical feature representation with increasing levels of abstraction is learned. The autoencoder is an important unsupervised technique frequently used in deep learning for extracting important properties of the data. The learned latent representation is a non-linear mapping of the original data which potentially preserves the discrimination capacity.
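A minimal autoencoder sketch in the spirit of the abstract, mapping per-pixel spectra to a low-dimensional latent code with Keras; the layer sizes, latent dimension, and training settings are illustrative assumptions, not the authors' architecture.

```python
# Illustrative spectral autoencoder (Keras); not the authors' architecture.
import tensorflow as tf

def build_autoencoder(n_bands, latent_dim=10):
    inputs = tf.keras.Input(shape=(n_bands,))
    x = tf.keras.layers.Dense(64, activation="relu")(inputs)
    latent = tf.keras.layers.Dense(latent_dim, activation="relu", name="latent")(x)
    x = tf.keras.layers.Dense(64, activation="relu")(latent)
    outputs = tf.keras.layers.Dense(n_bands, activation="linear")(x)
    autoencoder = tf.keras.Model(inputs, outputs)
    encoder = tf.keras.Model(inputs, latent)          # reusable non-linear mapping
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

# Usage sketch: autoencoder.fit(spectra, spectra, epochs=50, batch_size=256)
#               codes = encoder.predict(spectra)      # reduced representation per pixel
```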
Evaluating automatic registration of UAV imagery using multi-temporal ortho images
Günter Saur, Wolfgang Krüger
Accurate geo-registration of acquired imagery is an important task when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. As an example, change detection needs accurately geo-registered images for selecting and comparing co-located images taken at different points in time. One challenge of using small UAVs lies in their unstable flight behavior and the use of low-weight cameras. Thus, there is a need to stabilize and register the UAV imagery by image processing methods, since direct approaches based only on positional information from a GPS and on attitude and acceleration measured by an inertial measurement unit (IMU) are not accurate enough. In order to improve this direct geo-registration (or "pre-registration"), image matching techniques are applied to align the UAV imagery to geo-registered reference images. The main challenge consists in matching images taken by different sensors at different times of day and seasons. In this paper, we present evaluation methods for measuring the performance of image registration algorithms w.r.t. multi-temporal input data. They are based on augmenting a set of aligned image pairs with synthetic pre-registrations to form an evaluation data set including truth transformations. The evaluation characteristics are based on quantiles of transformation residuals at certain control points. For a test site, video frames of a UAV mission and several ortho images from a period of 12 years were collected, and synthetic pre-registrations corresponding to real flight parameters and registration errors were computed. Two algorithms, A1 and A2, based on extracting key-points with a floating point descriptor (A1) and a binary descriptor (A2), were applied to the evaluation data set. As a result of the evaluation, algorithm A1 turned out to perform better than A2. Using affine or Helmert transformation types, both algorithms perform better than in the projective case. Furthermore, the evaluation classifies the ortho images w.r.t. their degree of difficulty, and even for the most unfavorable ortho image, the evaluation characteristics yield better results than those attached to the default pre-registration. Finally, the proposed evaluation methods have proven to deliver valuable results even for input data with a high degree of difficulty.
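The residual-quantile characteristic can be illustrated as below: apply the estimated and the truth transformation to a set of control points and report quantiles of the point-wise residuals; homogeneous 3x3 transforms and the quantile levels are our assumptions.

```python
# Illustrative residual-quantile evaluation for a registration result.
import numpy as np

def residual_quantiles(H_est, H_true, control_points, q=(0.5, 0.9, 0.95)):
    """H_est, H_true: 3x3 homogeneous transforms; control_points: (N, 2) pixels."""
    pts = np.hstack([control_points, np.ones((len(control_points), 1))])
    def apply(H):
        p = pts @ H.T
        return p[:, :2] / p[:, 2:3]                      # back to inhomogeneous coordinates
    residuals = np.linalg.norm(apply(H_est) - apply(H_true), axis=1)
    return np.quantile(residuals, q)                     # e.g. median and tail residuals [px]
```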
Importance of using field spectroscopy to support the satellite remote sensing for underground structures intended for security reasons in the eastern Mediterranean region
George Melillos, Kyriacos Themistocleous, George Papadavid, et al.
Underground structures can affect their surrounding landscapes in different ways, such as soil moisture content, soil composition, and vegetation vigor. Vegetation vigor is often observed on the ground as a crop mark, a phenomenon which can be used as a proxy to denote the presence of underground, non-visible structures. This paper presents the results obtained from field spectroradiometric campaigns at 'buried' underground structures in Cyprus. An SVC-HR1024 field spectroradiometer was used, and in-band reflectances were calculated for the Landsat 5 TM medium-spatial-resolution satellite sensor. A number of vegetation indices such as NDVI, SR, and EVI were obtained, while a 'smart index' was developed aiming at the detection of underground military structures by using existing vegetation indices or other in-band algorithms. In this study, test areas were identified, analyzed, and modeled. The areas have been analyzed and tested in different scenarios, including: (a) the 'natural state' of the underground structure; (b) different types of crop over the underground structure and imported soil; (c) different types of non-natural material over the underground structure. A reference target in the nearby area was selected as a baseline. Controllable meteorological and environmental parameters were acquired and monitored.
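For reference, the standard definitions of the vegetation indices named above are given below, in the usual red/NIR/blue reflectance notation.

```latex
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}},
\qquad
\mathrm{SR} = \frac{\rho_{\mathrm{NIR}}}{\rho_{\mathrm{red}}},
\qquad
\mathrm{EVI} = 2.5\,\frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}
  {\rho_{\mathrm{NIR}} + 6\,\rho_{\mathrm{red}} - 7.5\,\rho_{\mathrm{blue}} + 1}
```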
Aerial vehicles collision avoidance using monocular vision
Oleg Balashov, Vadim Muraviev, Valery Strotov
In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region of interest (ROI) selection, contour segmentation, object matching, and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, a system of equations relating object coordinates in space to the observed image is solved. The solution of the system gives the current position and speed of the detected object in space. Using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles at distances of several kilometers under regular daylight conditions.
Range intensity coding under triangular and trapezoidal correlation algorithms for 3D super-resolution range gated imaging
Xinwei Wang, Liang Sun, Pingshun Lei, et al.
Three-dimensional super-resolution range-gated imaging (3D SRGI) is a new technique for high-resolution 3D sensing. Up to now, 3D SRGI has been developed with two range-intensity correlation algorithms, including trapezoidal algorithm and triangular algorithm. To obtain high depth-to-resolution ratio of 3D imaging, coding method was developed for 3D SRGI based on the trapezoidal algorithm in 2011. In this paper, we propose the range-intensity coding based on the triangular algorithm and the hybrid range-intensity coding based on the triangular and trapezoidal algorithms. The theoretical models to predict the maximum coding bin number are developed for different coding methods. In the models, the maximum coding bin number is 7 for three coding gate images under the triangular algorithm, and the maximum is extended to 16 under the hybrid algorithm. The coding examples of 7 bins and 16 bins mentioned above are also given in this paper. The comparison among the three coding methods is performed by the depth-to-resolution ratio defined as the ratio between the 3D imaging depth and the product of the range resolution and raw gate image number, and the hybrid coding method has the highest depth-to-resolution ratio. Higher depth-to-resolution ratio means better 3D imaging capability of 3D SRGI.
Poster Session
Remote sensing for oil products on water surface via fluorescence induced by UV filaments
E. S. Sunchugasheva, A. A. Ionin, D. V. Mokrousova, et al.
Remote monitoring of water pollution, namely thin films of oil or oil products on the water surface, can be carried out by laser fluorimetry. The fluorescence of pollutants during their interaction with ultrashort UV laser pulses was experimentally studied in this paper. The laser pulse power was varied over a wide range of values, including the filamentation regime. We compared fluorescence stimulated by femtosecond UV laser pulses at two central wavelengths (248 and 372 nm) for the detection of crude oil and the following oil products: oil VM-5, oil 5W-40, and the solvent WhiteSpirit. It was shown that shorter UV wavelengths are more suitable for fluorescence excitation. The spatial resolution of the fluorescence localization was no worse than 30 cm. We also discuss, both experimentally and numerically, techniques for delivering high-intensity emission to the remote target, such as post-filamentation channels and the multifilamentation beam propagation regime.
An efficient visual saliency analysis model for region-of-interest extraction in high-spatial-resolution remote sensing images
Lin Wang, Shiyi Wang, Libao Zhang
Accurate region of interest (ROI) extraction is a hot topic in remote sensing image analysis. In this paper, we propose a novel ROI extraction method based on multi-scale hybrid visual saliency analysis (MHVSA) that can be divided into two sub-models: the frequency feature analysis (FFA) model and the multi-scale region aggregation (MRA) model. In the FFA sub-model, we utilize human visual sensitivity and the Fourier transform to produce the local saliency map. In the MRA sub-model, saliency maps at various scales are generated by aggregating regions. A tree-structured graphical model is suggested to fuse the saliency maps into one global saliency map. We obtain two binary masks by segmenting the local and global saliency maps and perform a logical AND operation on the two masks to acquire the final mask. Experimental results reveal that the MHVSA model provides more accurate extraction results.
Structured output tracking guided by keypoint matching
Zhiwen Fang, Zhiguo Cao, Yang Xiao
Current keypoint-based trackers are widely used in object tracking systems because of their robustness against scale changes, rotation, and so on. However, when these methods are applied to tracking a 3D target in forward-looking image sequences, the tracked point usually shifts away from the correct position as time increases. In this paper, to overcome the drift of the tracked point, structured output tracking is used to track the target point together with its surrounding information, based on Haar-like features. First, a local patch is cropped around the tracked point in the last frame to extract Haar-like features. Second, using a structured output SVM framework, a prediction function is learned within a larger radius to directly estimate the patch transformation between frames. Finally, during tracking, the prediction function is applied to search for the best location in a new frame. In order to achieve robust tracking in real time, keypoint matching is adopted to coarsely locate the search region in the whole image before applying the structured output tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art keypoint-based trackers.