Proceedings Volume 9250

Electro-Optical Remote Sensing, Photonic Technologies, and Applications VIII; and Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing II

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 31 October 2014
Contents: 11 Sessions, 33 Papers, 0 Presentations
Conference: SPIE Security + Defence 2014
Volume Number: 9250

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9250
  • New Devices and Technology
  • Lidar/Ladar Sensing I
  • Lidar/Ladar Sensing II
  • Lidar/Ladar Sensing III
  • Passive Sensing and Processing I
  • Passive Sensing and Processing II
  • Poster Session
  • Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing I
  • Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing II
  • Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing III
Front Matter: Volume 9250
Front Matter: Volume 9250
This PDF file contains the front matter associated with SPIE Proceedings Volume 9250, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
New Devices and Technology
Statistical analysis of dark count rate in Geiger-mode APD FPAs
We present a temporal statistical analysis of the array-level dark count behavior of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays that distinguishes between Poissonian intrinsic dark count rate and non-Poissonian crosstalk counts by considering “inter-arrival” times between successive counts from the entire array. For 32 x 32 format sensors with 100 μm pixel pitch, we show the reduction of crosstalk for smaller active area sizes within the pixel. We also compare the inter-arrival time behavior for arrays with narrow band (900 - 1100 nm) and broad band (900 - 1600 nm) spectral response. We then consider a similar analysis of larger format 128 x 32 arrays. As a complement to the temporal analysis, we describe the results of a spatial analysis of crosstalk events. Finally, we propose a simple model for the impact of crosstalk events on the Poissonian statistics of intrinsic dark counts that provides a qualitative explanation for the results of the inter-arrival time analysis for arrays with varying degrees of crosstalk.
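As a rough illustration of the inter-arrival-time idea (not the authors' implementation), the sketch below pools event timestamps from the whole array, fits the exponential tail of the inter-arrival distribution to estimate the Poissonian intrinsic dark count rate, and reports the excess of short intervals that crosstalk would produce. The function name, the tail cut-off and the synthetic data are illustrative assumptions.

```python
# Illustrative sketch only: array-level inter-arrival-time analysis for
# separating Poissonian dark counts from a short-interval crosstalk excess.
import numpy as np

def interarrival_analysis(timestamps_s, tail_start_s=1e-6):
    t = np.sort(np.asarray(timestamps_s, dtype=float))
    dt = np.diff(t)                                  # inter-arrival times
    # Poisson process: P(dt > x) = exp(-rate*x); by memorylessness the tail
    # beyond tail_start_s has mean tail_start_s + 1/rate.
    tail = dt[dt > tail_start_s]
    if tail.size == 0:
        return float("nan"), 0.0
    rate_hz = 1.0 / (tail.mean() - tail_start_s)
    # Compare observed vs expected number of short intervals.
    expected_short = (1.0 - np.exp(-rate_hz * tail_start_s)) * dt.size
    observed_short = np.count_nonzero(dt <= tail_start_s)
    return rate_hz, observed_short - expected_short  # rate and crosstalk excess

# Synthetic Poissonian events at ~1e5 counts/s; the excess should be near zero.
events = np.cumsum(np.random.exponential(1e-5, size=10000))
print(interarrival_analysis(events))
```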
Design and performance analysis of multilayer nested grazing incidence optics
Fuchang Zuo, Loulou Deng, Zhiwu Mei, et al.
We have developed X-ray grazing incidence optics with a single mirror. Although such optics can be used for ground-based demonstration and testing to verify the feasibility of an X-ray detection system, they cannot meet the requirements of X-ray pulsar navigation because of their small effective area and large mass. There is therefore an urgent need for multilayer nested grazing incidence optics, in which multiple mirror layers form a coaxial and confocal system that maximizes the use of space and increases the effective area. In this paper, aiming at the future demands of X-ray pulsar navigation, we optimize and analyze nested X-ray grazing incidence optics: the recurrence relations between the mirror layers are derived, reasonable initial structural parameters and a stray light reduction method are given, and the theoretical effective collection area is calculated. The initial structure and the stray-light-eliminating structure are designed. An optical-mechanical-thermal numerical model was established using optical analysis software and finite element software for stray light analysis, focusing performance analysis, tolerance analysis, and mechanical analysis, providing evidence and guidance for the fabrication and alignment of nested X-ray grazing incidence optics.
Image reconstruction and optimization using a terahertz scanned imaging system
Due to the limited number of array detection architectures in the millimeter wave to terahertz region of the electromagnetic spectrum, imaging schemes with scan architectures are typically employed. In these configurations the interplay between the frequencies used to illuminate the scene and the optics used plays an important role in the quality of the formed image. Using a multiplied Schottky-diode based terahertz transceiver operating at 340 GHz in a stand-off detection scheme, the effect on image quality of a metal target was assessed based on the scanning speed of the galvanometer mirrors as well as the optical system that was constructed. Background effects such as leakage on the receiver were minimized by conditioning the signal at the output of the transceiver. The image of the target was then simulated based on known parameters of the optical system, and the measured images were compared to the simulation. Using an image quality index based on a χ² metric, the simulated and measured images were found to be in good agreement, with a value of χ² = 0.14. The measurements shown here will aid the future development of larger stand-off imaging systems operating in the terahertz frequency range.
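The exact form of the χ² image quality index is not given in the abstract; the following sketch shows one plausible normalised variant for comparing a measured image against a simulation, purely as an illustration.

```python
# Hypothetical chi-square-style agreement metric between measured and simulated
# images; identical images give 0, larger values mean poorer agreement.
import numpy as np

def chi2_index(measured, simulated, eps=1e-12):
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    m = m / (m.sum() + eps)          # compare normalised distributions,
    s = s / (s.sum() + eps)          # not absolute signal power
    return float(np.sum((m - s) ** 2 / (s + eps)))
```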
Non-contact measurement of an object’s angular position by means of laser goniometer
The report presents results of analysis and experimental research on a laser goniometer operating in the mode of non-contact measurement of an object's angular position. An important feature of this mode is an extremely large measurement range combined with high accuracy. With a typical resolution of about 0.1 arcsec, the laser goniometer in this mode has an essential advantage over photo-electric autocollimators with their rather small measuring range. The obtained results confirm that the laser dynamic goniometer, used in the mode of non-contact measurement of an object's angular position, can achieve an angle measurement range of up to 15-20 deg and an accuracy for constant angles on the level of 0.05-0.1 arcsec. The error for angles changing in time has additional components on the level of 0.2 arcsec, connected with the influence of optical polygon face unflatness and the difficulty of applying statistical averaging to the measurement results.
Lidar/Ladar Sensing I
Long-range 3D single-photon imaging lidar system
Agata M. Pawlikowska, Roger M. Pilkington, Karen J. Gordon, et al.
We describe a re-configurable scanning lidar system which can accommodate either a single element detector operating in a scanning mode or a 32 x 32 array detector operating in a non-scanning mode. The system uses a time-of-flight approach in conjunction with the single-photon counting technique to produce 3D images of non-cooperative targets at ranges of greater than one kilometre. Results of data acquired with a single-element detector in a scanning mode at 2.9 km and 4.6 km are reported. The field of view (FoV) was illuminated through a transmitter in a bi-static mode using 125 kHz repetition rate laser pulses at a wavelength of 1550 nm with an average optical power of 0.5 W.
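The time-of-flight principle behind such photon-counting measurements can be summarised in a few lines: arrival times are histogrammed relative to the laser trigger and the peak bin is converted to range. This is a generic illustration, not the system's processing chain.

```python
# Generic time-correlated single-photon counting range estimate (illustrative).
import numpy as np

C = 299_792_458.0                                   # speed of light, m/s

def range_from_photon_times(arrival_times_s, bin_width_s=1e-9):
    t = np.asarray(arrival_times_s, dtype=float)
    edges = np.arange(0.0, t.max() + bin_width_s, bin_width_s)
    hist, edges = np.histogram(t, bins=edges)
    t_peak = edges[np.argmax(hist)] + 0.5 * bin_width_s
    return 0.5 * C * t_peak                         # two-way delay -> one-way range

# A return clustered near 19.3 us corresponds to a target at roughly 2.9 km.
```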
Range resolution improvement of phase coded lidar system utilizing detector characteristics for short codes acquirement
Long Wu, Tianyuan Qiao, Jun Zhang, et al.
The conventional phase coded lidar systems require the collection of every returned laser pulse and are restricted in range resolution by sampling frequency and subpulse width. A phase coded lidar system with high range resolution is proposed with the accumulated m-sequence acquisition method by utilizing detector characteristics for signal detection. The detector accumulates kN-1 or kN+1 bits of the emitted laser sequence to deduce the a single bit of the sequence. The indoor experiment achieved 2 us resolution with the sampling period of 28 and 32 us by employing a 15-bit m-sequence. This method achieves the acquisition of m-sequence with narrow subpulse width whereas the sampling frequency is kept low. The experiment results showed an approach to implement the phase coded imaging lidar into practical application.
Lidar on small UAV for 3D mapping
Small UAVs (Unmanned Aerial Vehicles) are currently in an explosive technical development phase. The performance of UAV-system components such as inertial navigation sensors, propulsion, control processors and algorithms is gradually improving. Simultaneously, lidar technologies are continuously developing in terms of reliability, accuracy, and speed of data collection, storage and processing. The lidar development towards miniature systems with high data rates has, together with recent UAV development, great potential for new three-dimensional (3D) mapping capabilities. Compared to lidar mapping from manned full-size aircraft, a small unmanned aircraft can be cost efficient over small areas and more flexible to deploy. An advantage of high resolution lidar compared to 3D mapping from passive (multi angle) photogrammetry is the ability to penetrate through vegetation and detect partially obscured targets. Another advantage is the ability to obtain 3D data over the whole survey area, without the limited performance of passive photogrammetry in low contrast areas. The purpose of our work is to demonstrate 3D lidar mapping capability from a small multirotor UAV. We present the first experimental results and the mechanical and electrical integration of the Velodyne HDL-32E lidar on a six-rotor aircraft with a total weight of 7 kg. The rotating lidar is mounted at an angle of 20 degrees from the horizontal plane, giving a vertical field-of-view of 10-50 degrees below the horizon in the aircraft's forward direction. For absolute positioning of the 3D data, accurate positioning and orientation of the lidar sensor is of high importance. We evaluate the lidar data position accuracy based both on inertial navigation system (INS) data alone and on INS data combined with lidar data. The INS sensors consist of accelerometers, gyroscopes, GPS, magnetometers, and a pressure sensor for altimetry. The lidar range resolution and accuracy are documented, as well as the capability for target surface reflectivity estimation based on measurements on calibration standards. Initial results of the general mapping capability, including detection through partly obscured environments, are demonstrated through field data collection and analysis.
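As a simplified illustration of the geo-referencing step, the sketch below rotates a point measured in the sensor frame by the INS attitude and translates it by the platform position. Lever-arm and boresight calibration terms, which a real system must include, are omitted, and all names are illustrative.

```python
# Simplified geo-referencing of a lidar return using INS pose (illustration).
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Body-to-world rotation from roll/pitch/yaw (radians), ZYX convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(point_sensor, rpy_rad, position_world):
    """Map a lidar point from the sensor frame to world coordinates."""
    R = rotation_from_rpy(*rpy_rad)
    return R @ np.asarray(point_sensor, dtype=float) + np.asarray(position_world, dtype=float)

# 30 m return straight ahead of a platform heading along +y at 40 m altitude:
print(georeference([30, 0, 0], (0.0, 0.0, np.pi / 2), [100.0, 200.0, 40.0]))
```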
Passive and active EO sensing close to the sea surface
Ove Steinvall, Rolf Persson, Folke Berglund, et al.
The present paper investigates the use of an eye-safe laser rangefinder at 1.5 μm and TV/IR imaging to obtain information on atmospheric properties over various paths close to the sea surface. On one day, active/passive imaging NIR and SWIR systems were also used. The paper describes the experimental equipment and the results from measurements of atmospheric backscatter as well as TV and IR images of test targets along a 1.8 km path close to the Baltic Sea. The site also contained a weather station and a scintillometer for logging weather and turbulence parameters. Results correlating the lidar attenuation with the imaging performance are given and compared with models.
Synthetic aperture ladar concept for infrastructure monitoring
Simon Turbide, Linda Marchese, Marc Terroux, et al.
Long range surveillance of infrastructure is a critical need in numerous security applications, both civilian and military. Synthetic aperture radar (SAR) continues to provide high resolution radar images in all weather conditions from remote distances. As well, Interferometric SAR (InSAR) and Differential Interferometric SAR (D-InSAR) have become powerful tools adding high resolution elevation and change detection measurements. State-of-the-art SAR systems based on dual-use satellites are capable of providing ground resolutions of one meter, while their airborne counterparts obtain resolutions of 10 cm. D-InSAR products based on these systems could produce cm-scale vertical resolution image products. Deformation monitoring of railways, roads, buildings, cellular antennas, and power structures (i.e., power lines, wind turbines, dams, or nuclear plants) would benefit from improved resolution, both in the ground plane and in the vertical direction. The ultimate limitation to the achievable resolution of any imaging system is its wavelength, and state-of-the-art SAR systems are approaching this limit. The natural extension to improve resolution is thus to decrease the wavelength, i.e. to design a synthetic aperture system in a different wavelength regime. One such system offering the potential for vastly improved resolution is Synthetic Aperture Ladar (SAL), which operates at infrared wavelengths, ten thousand times smaller than radar wavelengths. This paper presents a laboratory demonstration of scaled-down infrastructure deformation monitoring with an Interferometric Synthetic Aperture Ladar (IFSAL) system operating at 1.5 μm. Results show sub-millimeter precision on the deformation applied to the target.
Lidar/Ladar Sensing II
3DLASEM: simulation of three-dimensional flash Lidar for ocean imaging
Imaging flash LIDAR (LIght Detection and Ranging) is an effective method for airborne searches of the ocean surface and subsurface volume. The performance of ocean LIDAR depends strongly on the sea surface (e.g., waves, whitecaps, and flotsam), water turbidity, and the characteristics of the objects of interest. Cost-effective design of the LIDAR system and processing algorithms requires a modeling capability that can deal with the physics of light propagation through the air-water interface, into the ocean, and back to the LIDAR receiver. 3DLASE-M is a physics-based LIDAR simulator that yields high-fidelity images for three-dimensional algorithm development and performance predictions.
Underwater laser imaging experiments in the Baltic Sea
Underwater laser imaging is a useful tool for high resolution mapping and identification of threats in coastal and also turbid waters of harbors and ports. In the recent past, the French-German Research Institute of Saint-Louis (ISL) and the German Naval Research Department (WTD71-FWG) have performed different measurements in the Baltic Sea in the field of submarine laser imaging with the aim to evaluate the performance of laser gated viewing (LGV) and underwater laser scanning (ULS). Different scenarios were tested with respect to varying environmental conditions. Working near a harbor or on the open sea under sunny and calm or windy and rainy weather conditions, the measured turbidity, i.e. the attenuation coefficient of the water column, ranges from 0.4 m⁻¹ to 3 m⁻¹. The experiments and imaging results are discussed with respect to 2D and 3D image processing under the given environmental conditions.
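To give a feel for the attenuation values quoted above, a simple two-way exponential loss model already shows how quickly the return collapses between c = 0.4 m⁻¹ and c = 3 m⁻¹ water; this back-of-the-envelope sketch is not the imaging model used in the experiments.

```python
# Two-way exponential attenuation of a laser return in water (illustration).
import numpy as np

def relative_return(range_m, attenuation_per_m):
    return np.exp(-2.0 * attenuation_per_m * range_m)

for c in (0.4, 3.0):                        # clear vs turbid values from the text
    print(c, relative_return(5.0, c))       # target at 5 m: ~2e-2 vs ~1e-13
```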
Processing of airborne lidar bathymetry data for detailed sea floor mapping
Airborne bathymetric lidar has proven to be a valuable sensor for rapid and accurate sounding of shallow water areas. With advanced processing of the lidar data, detailed mapping of the sea floor with various objects and vegetation is possible. This mapping capability has a wide range of applications including detection of mine-like objects, mapping marine natural resources, and fish spawning areas, as well as supporting the fulfillment of national and international environmental monitoring directives. Although data sets collected by subsea systems give a high degree of credibility they can benefit from a combination with lidar for surveying and monitoring larger areas. With lidar-based sea floor maps containing information of substrate and attached vegetation, the field investigations become more efficient. Field data collection can be directed into selected areas and even focused to identification of specific targets detected in the lidar map. The purpose of this work is to describe the performance for detection and classification of sea floor objects and vegetation, for the lidar seeing through the water column. With both experimental and simulated data we examine the lidar signal characteristics depending on bottom depth, substrate type, and vegetation. The experimental evaluation is based on lidar data from field documented sites, where field data were taken from underwater video recordings. To be able to accurately extract the information from the received lidar signal, it is necessary to account for the air-water interface and the water medium. The information content is hidden in the lidar depth data, also referred to as point data, and also in the shape of the received lidar waveform. The returned lidar signal is affected by environmental factors such as bottom depth and water turbidity, as well as lidar system factors such as laser beam footprint size and sounding density.
3D laser gated viewing from a moving submarine platform
F. Christnacher, M. Laurenzis, D. Monnin, et al.
Range-gated active imaging is a prominent technique for night vision, remote sensing, and vision through obstacles (fog, smoke, camouflage netting, etc.). Furthermore, range-gated imaging provides not only the scene reflectance but also the range for each pixel. In this paper, we discuss 3D imaging methods for underwater imaging applications. In this situation it is particularly difficult to stabilize the imaging platform, and 3D reconstruction algorithms suffer from the motion between the different images in the recorded sequence. To overcome this drawback, we investigated a new method based on a combination of image registration by homography and 3D scene reconstruction through tomography or a two-image technique. After stabilisation, the 3D reconstruction is achieved using the two above-mentioned techniques. In the experimental examples given in this paper, a centimetric resolution could be achieved.
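A rough sketch of the homography-based stabilisation step is given below, using OpenCV ORB features and RANSAC; it is a generic implementation for illustration, not the authors' processing chain, which additionally performs the 3D reconstruction.

```python
# Generic frame-to-reference registration by homography (illustrative sketch).
import cv2
import numpy as np

def register_to_reference(frame, reference):
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))   # frame warped into reference geometry
```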
Low-cost commodity depth sensor comparison and accuracy analysis
Timo Breuer, Christoph Bodensteiner, Michael Arens
Low cost depth sensors have been a huge success in the field of computer vision and robotics, providing depth images even in untextured environments. The same characteristic applies to the Kinect V2, a time-of-flight camera with high lateral resolution. In order to assess advantages of the new sensor over its predecessor for standard applications, we provide an analysis of measurement noise, accuracy and other error sources with the Kinect V2. We examined the raw sensor data by using an open source driver. Further insights on the sensor design and examples of processing techniques are given to completely exploit the unrestricted access to the device.
Lidar/Ladar Sensing III
New fiber laser for lidar developments in disaster management
C. Besson, B. Augere, G. Canat, et al.
Recent progress in fiber technology has enabled new laser designs along with all fiber lidar architectures. Their asset is to avoid free-space optics, sparing lengthy alignment procedures and yielding compact setups that are well adapted for field operations and on board applications thanks to their intrinsic vibration-resistant architectures. We present results in remote sensing for disaster management recently achieved with fiber laser systems. Field trials of a 3-paths lidar vibrometer for the remote study of modal parameters of buildings has shown that application-related constraints were fulfilled and that the obtained results are consistent with simultaneous in situ seismic sensors measurements. Remote multi-gas detection can be obtained using broadband infrared spectroscopy. Results obtained on methane concentration measurement using an infrared supercontinuum fiber laser and analysis in the 3-4 μm band are reported. For gas flux retrieval, air velocity measurement is also required. Long range scanning all-fiber wind lidars are now available thanks to innovative laser architectures. High peak power highly coherent pulses can be extracted from Er3+:Yb3+ and Tm3+ active fibers using methods described in the paper. The additional laser power provides increased coherent lidar capability in range and scanning of large areas but also better system resistance to adverse weather conditions. Wind sensing at ranges beyond 10 km have been achieved and on-going tests of a scanning system dedicated to airport safety is reported.
Automatic change detection using mobile laser scanning
M. Hebel, M. Hammer, M. Gordon, et al.
Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene hint at suspicious activities like the movement of military vehicles, the application of camouflage nets, or the placement of IEDs, etc. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping, which incorporates the sensor positions in the processing of the 3D point clouds. This allows extracting the information that is included in the data acquisition geometry. For each single range measurement, it becomes apparent that an object reflects laser pulses in the measured range distance, i.e., space is occupied at that 3D position. In addition, it is obvious that space is empty along the line of sight between sensor and the reflecting object. Everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by conflicts of empty space and occupied space. The presented concept of change detection has been successfully validated in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at different time intervals.
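The occupancy-conflict idea can be captured in a compact voxel-grid sketch: each range measurement frees the voxels along the line of sight and occupies the end voxel, and changes appear where the current grid contradicts the reference grid. The grid resolution, the simple ray stepping, the three-state model and the assumption of a grid anchored at the world origin are illustrative choices, not the paper's implementation.

```python
# Simplified occupancy-grid change detection from range measurements (sketch).
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def build_grid(points, sensor_origin, shape, voxel_size, step=0.5):
    grid = np.full(shape, UNKNOWN, dtype=np.uint8)
    origin = np.asarray(sensor_origin, dtype=float)
    for p in np.asarray(points, dtype=float):
        direction = p - origin
        dist = np.linalg.norm(direction)
        direction /= dist
        # Mark free space along the line of sight (excluding the hit voxel).
        for r in np.arange(0.0, dist - voxel_size, step * voxel_size):
            idx = tuple(((origin + r * direction) / voxel_size).astype(int))
            if all(0 <= i < s for i, s in zip(idx, shape)):
                grid[idx] = FREE
        hit = tuple((p / voxel_size).astype(int))
        if all(0 <= i < s for i, s in zip(hit, shape)):
            grid[hit] = OCCUPIED
    return grid

def detect_changes(reference, current):
    """Conflicts between occupied and free space indicate changes."""
    appeared = (reference == FREE) & (current == OCCUPIED)
    disappeared = (reference == OCCUPIED) & (current == FREE)
    return appeared, disappeared
```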
Investigation of frame-to-frame back projection and feature selection algorithms for non-line-of-sight laser gated viewing
In the present paper, we discuss new approaches to analyzing laser gated viewing data for non-line-of-sight vision, with a novel frame-to-frame back projection as well as feature selection algorithms. While earlier back projection approaches use time transients for each pixel, our new method can calculate the projection of imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter affects the selectivity, i.e. multiple-target detection, as well as the localization precision.
Passive Sensing and Processing I
Detection of people in military and security context imagery
Thomas M. L. Shannon, Emmet H. Spier, Ben Wiltshire
A high level of manual visual surveillance of complex scenes depends solely on the awareness of human operators, whereas an autonomous person detection solution could assist by drawing their attention to potential issues, reducing cognitive burden and achieving more with less manpower. Our research addresses the challenge of reliably identifying persons in a scene who may be partially obscured by structures or by the weapons or tools they are handling. We tested the efficacy of a recently published computer vision approach based on the construction of cascaded, non-linear classifiers from part-based deformable models by assessing performance on imagery containing infantrymen in the open or obscured, undertaking low-level tactics or acting as civilians using tools. Results were compared with those obtained from published upright pedestrian imagery. The person detector yielded a precision of approximately 65% at a recall rate of 85% for military context imagery, as opposed to a precision of 85% for the upright pedestrian image cases. These results compare favorably with those reported by the authors when applied to a range of other on-line imagery databases. Our conclusion is that the deformable part-based model method may be a useful people detection tool in the challenging environment of military and security context imagery.
Image quality of optical remote sensing data
Photogrammetry and remote sensing (RS) provide procedures for deriving geometric, radiometric and thematic information from image data. A variety of airborne and space-borne sensors are available to capture image data, and different standards and specifications exist for quality assessment of optical remote sensing data. Thanks to the possibility of absolute geometric and radiometric calibration, digital sensors provide promising new opportunities to create value-added products such as digital elevation models and land-use maps. Such cameras combine high geometric quality with the radiometric standards of earth observation systems. The determination of image quality of remote sensing data can be divided into (spectral) radiometric and geometric aspects. Standards contain different metrics for accuracy issues (spectral, radiometric and geometric accuracy) and for performance parameters such as SNR and MTF. Image artefacts (caused e.g. by compression) are an additional important topic. The paper gives an overview of the current debate and the possibilities for standardization.
Obtaining spectral information from infrared scenarios using hyper-spectral cameras and cameras with spinning filter wheel
In the past decades the Norwegian Defence Research Establishment (FFI) has recorded and characterized infrared scenarios for several application purposes, such as infrared target and background modeling and simulation, model validation, atmospheric propagation, and image segmentation and target detection for civilian and defence purposes. During the last year FFI has acquired several new systems for characterization of infrared radiation properties. In total, five new infrared cameras from IRCAM GmbH, Germany, have been acquired. These cameras cover both the long-wavelength and extended medium-wavelength infrared spectral bands. The cameras are equipped with fast rotating filter wheels which can be used to study spectral properties and polarization effects within these wavelength bands. This option allows the sensors to operate in user-defined spectral bands. FFI has also acquired two HyperCam sensors from Telops Inc., Canada, covering the long-wavelength and extended medium-wavelength spectral bands, respectively. The combination of imaging detectors and Fourier transform spectroscopy allows simultaneous spectral and spatial characterization of infrared scenarios. These sensors may optionally be operated as high-speed infrared cameras. A description of the new sensors and their capabilities is presented, together with some examples of results acquired by the different sensors. In this paper we present a detailed comparison of images taken in different spectral bands, and also compare images taken with the two types of sensors. These examples demonstrate in principle how the new spectral information can be used to separate certain targets from the background.
Performance evaluation of image-based location recognition approaches based on large-scale UAV imagery
Nikolas Hesse, Christoph Bodensteiner, Michael Arens
Recognizing the location where an image was taken, solely based on visual content, is an important problem in computer vision, robotics and remote sensing. This paper evaluates the performance of standard approaches for location recognition when applied to large-scale aerial imagery in both electro-optical (EO) and infrared (IR) domains. We present guidelines towards optimizing the performance and explore how well a standard location recognition system is suited to handle IR data. We show on three datasets that the performance of the system strongly increases if SIFT descriptors computed on Hessian-Affine regions are used instead of SURF features. Applications are widespread and include vision-based navigation, precise object geo-referencing or mapping.
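For illustration, a minimal feature-matching query of the kind such systems build on is sketched below with OpenCV SIFT and a ratio test; the paper's Hessian-Affine regions and the large-scale indexing needed for real datasets are not reproduced here.

```python
# Minimal location-recognition query by local feature matching (illustration).
import cv2

def match_score(query_img, ref_img, ratio=0.75):
    sift = cv2.SIFT_create()
    _, dq = sift.detectAndCompute(query_img, None)
    _, dr = sift.detectAndCompute(ref_img, None)
    if dq is None or dr is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(dq, dr, k=2)
    # Count matches passing Lowe's ratio test.
    return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def recognize(query_img, reference_imgs):
    """Return the index of the best-matching reference image and all scores."""
    scores = [match_score(query_img, ref) for ref in reference_imgs]
    return int(max(range(len(scores)), key=scores.__getitem__)), scores
```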
Geometric calibration of multi-sensor image fusion system with thermal infrared and low-light camera
Dragana Peric, Vojislav Lukic, Milana Spanovic, et al.
A calibration platform for the geometric calibration of a multi-sensor image fusion system is presented in this paper. Accurate geometric calibration of the cameras' extrinsic parameters using a planar calibration pattern is applied, and dedicated software was developed for the calibration procedure. The patterns used in the geometric calibration are prepared with the aim of obtaining maximum contrast in both the visible and infrared spectral ranges, using chessboards whose fields are made of materials with different emissivity. Experiments were held in both indoor and outdoor scenarios. Important results of the geometric calibration of the multi-sensor image fusion system are the extrinsic parameters in the form of homography matrices used for the homography transformation of the object plane to the image plane. For each camera a corresponding homography matrix is calculated. These matrices can be used for registration of images from the thermal and low-light cameras. We implemented such an image registration algorithm to confirm the accuracy of the geometric calibration procedure in the multi-sensor image fusion system. Results are given for selected patterns, i.e. chessboards with fields made of different emissivity materials. For the final image registration algorithm in the surveillance system for object tracking we have chosen a multi-resolution image registration algorithm which naturally combines with a pyramidal fusion scheme. The image pyramids generated at each time step of the image registration algorithm can be reused at the fusion stage, so that the overall number of calculations that must be performed is greatly reduced.
Passive Sensing and Processing II
Mobile device geo-localization and object visualization in sensor networks
In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods, including the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management as well as military applications.
Poster Session
Remote screening and direct control of the bacterial infection of gardens
Gardens and orchards are increasingly endangered by viral and bacterial infections. To preserve not only the coming harvest but, more generally, to ensure the stability and growth of horticulture, the development of a new generation of analytical techniques is needed for remote express screening of vegetation and for direct control of an infection where its appearance may be expected on the basis of previous surveys. For continuous monitoring we propose a complex of optical analytical devices, "Floratest" and "Plasmatest" (both produced in Ukraine), which is able to monitor the general state of vegetation step by step and to verify the concrete situation regarding infection. General screening is based on monitoring the intensity of chlorophyll fluorescence induction (IChF), namely the registration of the so-called Kautsky curve, which reflects the physiological mechanisms of energy generation and accumulation and the efficiency of its use in cells. The measurement may be performed directly on a number of individual plants or by remote screening of a whole area with transfer of the registered signal directly to the laboratory. The next step of control involves a surface plasmon resonance (SPR) based immune biosensor, which is able to determine specific bacteria (for example, Erwinia amylovora) with a detection limit of about 0.2 μg/ml and an overall analysis time within 30 min (5 min per measurement). The traditional ELISA method showed a sensitivity to this pathogen of about 0.5 μg/ml, an overall analysis time of several hours, and the obligatory use of additional expensive reagents.
Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing I
Hyperspectral data collection for the assessment of target detection algorithms: the Viareggio 2013 trial
Airborne hyperspectral imagery is valuable for military and civilian applications, such as target identification and detection of anomalies and changes across multiple acquisitions. In target detection (TD) applications, the performance assessment of different algorithms is an important and critical issue. In this context, the small number of publicly available hyperspectral data sets motivated us to perform an extensive measurement campaign covering various operating scenarios. The campaign was organized by CISAM in cooperation with the University of Pisa, Selex ES and CSSN-ITE, and it was conducted in Viareggio, Italy, in May 2013. The Selex ES airborne hyperspectral sensor SIM.GA was mounted on board an airplane to collect images over different sites in the morning and afternoon of two subsequent days. This paper describes the hyperspectral data collection of the trial. Four different sites were set up, representing a complex urban scenario, two parking lots and a rural area. Targets with dimensions comparable to the sensor ground resolution were deployed in the sites to reproduce different operating situations. Extensive ground truth documentation completes the data collection. Experiments to test anomalous change detection techniques were set up by changing the position of the deployed targets. Search and rescue scenarios were simulated to evaluate the performance of anomaly detection algorithms. Moreover, the reflectance signatures of the targets were measured on the ground to allow spectral matching under varying atmospheric and illumination conditions. The paper presents some preliminary results that show the effectiveness of hyperspectral data exploitation for the object detection tasks of interest in this work.
Combining spectral matching and anomalous change detection for target rediscovery in hyperspectral images
In surveillance applications, tracking a specific target by means of subsequent acquisitions over the monitored area is of great interest. Multitemporal HyperSpectral Images (HSIs) are particularly suitable for this application. Multiple HSIs of the same scene collected at different times can be exploited to detect changes using anomalous change detection (ACD) techniques. Moreover, spectral matching (SM) is a valuable tool for detecting the target spectrum within HSIs collected at different times (target rediscovery – TR). Depending on the monitored area and the specific target of interest, TR can be a challenging task. In fact, it may happen that the target has spectral features similar to those of uninteresting objects in the scene and the use of SM techniques without additional information can generate too many misleading detections. We introduce a new TR strategy aimed at mitigating the number of alarms encountered in complex scenarios. The proposed detection strategy combines the SM approach with the unsupervised ACD strategy. We focus on rediscovery of moving targets in airborne HSIs collected on the same complex area. False alarms mitigation is achieved by exploiting both the target spectral features and the temporal variations of its position. For this purpose, SM is performed only on those pixels that have undergone changes within multiple acquisitions. Results obtained applying the proposed scheme on real HSIs are presented and discussed. The results show the effectiveness of the fusion of spectral and multitemporal analysis to improve TR performance in complex scenarios.
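A conceptual sketch of the combination is given below, assuming a spectral-angle matcher as the SM stage and an externally supplied change mask from the ACD stage; both are simplifications of what the paper describes, and all names are illustrative.

```python
# Spectral matching restricted to pixels flagged as changed (conceptual sketch).
import numpy as np

def spectral_angle(cube, signature):
    """Per-pixel spectral angle (rad) between an HSI cube (H, W, B) and a signature (B,)."""
    num = np.tensordot(cube, signature, axes=([2], [0]))
    den = np.linalg.norm(cube, axis=2) * np.linalg.norm(signature) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def rediscover(cube, signature, change_mask, angle_thresh=0.1):
    """Detections = spectrally matching pixels restricted to changed pixels."""
    sam = spectral_angle(cube, signature)
    return (sam < angle_thresh) & change_mask
```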
Sun-glint false alarm mitigation in a maritime scenario
Alessandro Rossi, Aldo Riccobono, Stefano Landini
Airborne hyperspectral imaging can be exploited to detect anomalous objects in the maritime scenario. Due to the objects' high contrast with respect to the sea surface, detection can be easily accomplished by means of local anomaly detectors, such as the well-known Reed-Xiaoli (RX) algorithm. During the development of a real-time system for the detection of anomalous pixels, it was noticed that the detection performance is deeply affected by the presence of sun-glint: the reflection of solar radiation on the sea surface produces a high density of alarms, which makes the task of detecting the objects of interest challenging. In this paper, a strategy is introduced that discriminates the sun-glint false alarms from the effective alarms related to targets of potential interest. False alarms due to glint are mitigated by performing a local spatio-spectral analysis on each alarm furnished by the anomaly detector. The technique has been tested on hyperspectral images collected during a measurement campaign carried out near Pisa, Italy. The Selex ES SIMGA hyperspectral sensor was mounted on board an airplane to collect high spectral resolution images in both the VNIR and SWIR spectral channels. Several experiments were carried out, setting up scenarios with small man-made objects deployed on the sea surface, so as to simulate search and rescue operations. The results highlight the effectiveness of the proposed solution in terms of mitigation of false alarms due to sun-glint in the maritime scenario.
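For reference, a basic global RX detector is sketched below; the paper uses a local variant, and the spatio-spectral glint-mitigation step applied to each alarm is not shown.

```python
# Basic global Reed-Xiaoli (RX) anomaly detector for an HSI cube (H, W, B).
import numpy as np

def rx_detector(cube):
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)     # regularised covariance
    inv_cov = np.linalg.inv(cov)
    d = x - mu
    scores = np.einsum("ij,jk,ik->i", d, inv_cov, d)     # squared Mahalanobis distance
    return scores.reshape(h, w)

# anomaly_mask = rx_detector(hsi_cube) > threshold
```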
Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing II
Extraction of incident irradiance from LWIR hyperspectral imagery
The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: atmospheric compensation (AC) and temperature and emissivity separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. Extracting the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor and the atmosphere. A further difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we define a spectral mean operator that is used to filter the ground-leaving radiance and a computation of the downwelling irradiance from MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be spectrally smooth, so that the only spectrally fast-varying contribution is the downwelling irradiance. Using these assumptions we built an algorithm to estimate the downwelling irradiance. The algorithm is applied to all the selected pixels, and the estimated irradiance is the average over the spectral channels of the resulting computation. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and for errors in the atmospheric profiles. The sensor noise mainly influences the required number of pixels.
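The single-pixel radiance relation that underlies this estimate can be written as L = ε·B(T) + (1 − ε)·E_down/π and inverted for E_down when ε and T are assumed known; the sketch below does only that, without the spectral-mean filtering or the averaging over pixels and channels described in the paper.

```python
# Inverting the single-pixel LWIR radiance model for the downwelling irradiance
# (illustration only; emissivity < 1 and a known pixel temperature are assumed).
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23       # Planck, speed of light, Boltzmann (SI)

def planck_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * K * temperature_k))
    return a / b

def downwelling_from_pixel(ground_leaving, emissivity, temperature_k, wavelength_m):
    """E_down = pi * (L - eps*B(T)) / (1 - eps)."""
    emitted = emissivity * planck_radiance(wavelength_m, temperature_k)
    return np.pi * (ground_leaving - emitted) / (1.0 - emissivity)
```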
Non-linear sampling for efficient implementation of the projection-slice synthetic discriminant function filter
The Projection-Slice Synthetic Discriminant Function (PSDF) filter has been generated using a sparse sampling technique that exploits the inherent sparsity of the projection-slice theorem. The l1-norm has been utilized to optimize the information content extracted from the representative class objects. In this work, the results of the usual PSDF filter without the benefit of convex optimization are compared with the results of the PSDF filter after utilization of convex optimization, to assess the merits of efficient information reconstruction within the construct of the PSDF.
Military Applications in Hyperspectral Imaging and High Spatial Resolution Sensing III
Automatic representation of urban terrain models for simulations on the example of VBS2
Dimitri Bulatov, Gisela Häufel, Peter Solbrig, et al.
Virtual simulations have been on the rise together with the fast progress of rendering engines and graphics hardware. Especially in military applications, offensive actions in modern peace-keeping missions have to be quick, firm and precise, especially under the conditions of asymmetric warfare, non-cooperative urban terrain and rapidly developing situations. Going through the mission in simulation can prepare the minds of soldiers and leaders, increase self-confidence and tactical awareness, and finally save lives. This work illustrates the potential and limitations of integrating semantic urban terrain models into a simulation. Our system of choice is Virtual Battle Space 2, a simulation system created by Bohemia Interactive System. The topographic object types that we are able to export into this simulation engine are either results of sensor data evaluation (buildings, trees, grass, and ground), which is performed fully automatically, or entities obtained from publicly available sources (streets and water areas), which can be converted into the system's format with a few mouse clicks. The focus of this work lies in integrating information about building façades into the simulation. We are inspired by state-of-the-art methods that allow automatic extraction of doors and windows from laser point clouds captured from building walls and thus increase the level of detail of building models. As a consequence, it is important to simulate these animatable entities. In doing so, we are able to make some of the buildings in the simulation accessible.
Underwater monitoring experiment using hyperspectral sensor, LiDAR and high resolution satellite imagery
Chan-Su Yang, Sun-Hwa Kim
In general, underwater monitoring with hyperspectral sensors, LiDAR and high spatial resolution satellite imagery depends on water clarity or water transparency, which can be measured using a Secchi disk or satellite ocean color data. Optical properties in the waters around South Korea are influenced mainly by strong tides and ocean currents and by diurnal, daily and seasonal variations of water transparency. A satellite-based Secchi depth (ZSD) analysis showed the applicability of hyperspectral, LiDAR and optical satellite sensing, determined by the location in relation to the local distribution of Case 1 and Case 2 waters. The southeast coastal areas of Jeju Island were selected as test sites for a combined underwater experiment, because those areas represent Case 1 water. The study area is a small port (<15 m) in the southeast of the island, and a sewage pipe serves as a linear underwater target in this area. Our experiments are as follows: 1. atmospheric and sun-glint correction methods to improve the underwater monitoring ability; 2. intercomparison of water depths obtained from three different sensors. The three sensors used here are the CASI-1500 (a wide-array airborne hyperspectral VNIR imager, 0.38-1.05 microns), the Coastal Zone Mapping and Imaging Lidar (CZMIL), and the Korean Multi-purpose Satellite-3 (KOMPSAT-3) with 2.8 m multi-spectral resolution. The experimental results were affected by water clarity and surface conditions, and the bathymetric results of the three sensors show some differences caused by the sensors themselves, the bathymetric algorithm and the tide level. It is shown that the CASI-1500 was suitable for bathymetry and underwater target detection in this area, whereas KOMPSAT-3 should be improved for Case 1 water. Although this experiment was designed to compare the underwater monitoring ability of LiDAR, CASI-1500 and KOMPSAT-3 data, this paper is based on initial results and presents only the bathymetry and underwater target detection.
Wideband radar imaging for space debris based on direct IF sampling signals
Yang Liu, Zengping Chen, Na Li, et al.
This paper investigates an imaging method for space debris with wideband radar. Because of the spinning of the space debris, the correlation of adjacent high range resolution profiles (HRRP) is undermined and the motion compensation method for dechirped echoes is invalid. Therefore, a wideband imaging method for space debris based on direct intermediate frequency sampling (DIFS) signals is proposed in this paper. The IF sampling technique has the advantage of maintaining the coherence of the echo pulses, which eliminates the negative influence of the spin. Firstly, the accurate translational motion parameters of the target are estimated from the radar observations using a polynomial fitting method. Then the translational motion compensation is carried out in the frequency domain based on the target motion track. Finally, an improved back projection transform (BPT) method is used for image reconstruction, which transforms the echo from the range-time domain to the scattering point distribution plane by coherent integration. A well-focused, high resolution image of the space debris without sidelobe peaks can be obtained in the end. Simulation results indicate the validity of the proposed method.