Proceedings Volume 9460

Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications XII



Volume Details

Date Published: 18 June 2015
Contents: 8 Sessions, 18 Papers, 0 Presentations
Conference: SPIE Defense + Security 2015
Volume Number: 9460

Table of Contents


  • Front Matter: Volume 9460
  • ISR: Vision, Mission, and Tactics
  • ISR: Passive and Active Sensing
  • ISR: Image Fusion/Enhancement
  • ISR: Image Processing and Tracking
  • ISR: Change Detection
  • ISR: Exploitation
  • ISR: Image Sequences/Full Motion Video
Front Matter: Volume 9460
This PDF file contains the front matter associated with SPIE Proceedings Volume 9460, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
ISR: Vision, Mission, and Tactics
Hybrid consensus-based formation control of UAVs
In this paper, hybrid consensus-based formation control for a team of Unmanned Aerial Vehicles (UAVs) is considered. A hybrid consensus-based formation controller is applied to UAVs moving at fixed altitudes to drive them to a goal point while maintaining a specified formation. The proposed hybrid automaton consists of two discrete states, each with continuous dynamics: a regulation state and a formation-keeping state. The controller in the regulation state uses local state information to achieve its objective, while the formation controller utilizes the state and controller information of neighboring UAVs. Consequently, the UAVs switch between the control objectives of formation keeping and goal seeking en route to their goal points. The switching behavior creates hybrid dynamics from the interactions between the continuous and discrete states, making the stability analysis more complex than for purely discrete or purely continuous systems. Therefore, the stability of the hybrid approach is proven using multiple Lyapunov functions, with the switching conditions between the regulation and formation states taken into account. The Lyapunov-based analysis demonstrates that the formation errors converge to a small bounded region around the origin, whose size can be adjusted through the switching conditions. Convergence to the goal position while in formation is also demonstrated in the same Lyapunov analysis, and simulation results verify the theoretical conjectures.
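As a rough illustration of the switching scheme this abstract describes, here is a minimal sketch assuming simple planar single-integrator UAV dynamics; the gains, thresholds, and hysteresis band are hypothetical placeholders rather than the authors' design:

```python
# Hypothetical sketch: each UAV at fixed altitude either regulates toward its
# goal using only local state, or corrects formation error using neighbor
# states, per the two discrete states of the hybrid automaton. All constants
# are illustrative assumptions.
import numpy as np

K_REG, K_FORM = 0.8, 1.2          # assumed control gains
EPS_ENTER, EPS_EXIT = 2.0, 0.5    # assumed switching thresholds (hysteresis)

def control_step(i, pos, goals, offsets, neighbors, mode):
    """One control update for UAV i; pos is an (n, 2) array of positions."""
    form_err = sum(pos[i] - pos[j] - (offsets[i] - offsets[j])
                   for j in neighbors[i])
    err_norm = np.linalg.norm(form_err)
    # Hysteresis between the two thresholds keeps the automaton from
    # chattering between its discrete states.
    if mode == 'regulation' and err_norm > EPS_ENTER:
        mode = 'formation'
    elif mode == 'formation' and err_norm < EPS_EXIT:
        mode = 'regulation'
    if mode == 'regulation':
        u = -K_REG * (pos[i] - goals[i])   # local information only
    else:
        u = -K_FORM * form_err             # neighbor state information
    return u, mode
```

Widening the gap between the two thresholds loosely corresponds to the adjustable bound on formation error that the Lyapunov analysis establishes.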
ISR: Passive and Active Sensing
Results from an experiment that collected visible-light polarization data using unresolved imagery for classification of geosynchronous satellites
Andy Speicher, Mohammad Matin, Roger Tippets, et al.
In order to protect critical military and commercial space assets, the United States Space Surveillance Network must have the ability to positively identify and characterize all space objects. Unfortunately, positive identification and characterization of space objects is a manual and labor-intensive process today, since even large telescopes cannot provide resolved images of most space objects. The objective of this study was to collect and analyze visible-spectrum polarization data from unresolved images of geosynchronous satellites taken over various solar phase angles. Different collection geometries were used to evaluate the polarization contributions of solar arrays, thermal control materials, antennas, and the satellite bus as the solar phase angle changed. Since materials on space objects age in the space environment, their polarization signatures may change enough to allow discrimination of identical satellites launched at different times. Preliminary data suggest this optical signature may lead to positive identification or classification of each satellite by an automated process on a shorter timeline. The instrumentation used in this experiment was a United States Air Force Academy (USAFA) Department of Physics system consisting of a 20-inch Ritchey-Chrétien telescope and a dual-focal-plane optical train fed by a polarizing beam splitter. Following a rigorous calibration, polarization data were collected over two nights on eight geosynchronous satellites built by various manufacturers and launched several years apart. When Stokes parameters were plotted against time and solar phase angle, the data indicated that a polarization signature from unresolved images may have promise for classifying specific satellites.
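As an illustration of the kind of measurement the study plots, here is a minimal sketch of forming linear Stokes parameters from analyzer intensities; the system above uses a polarizing beam splitter feeding two focal planes, and obtaining all four analyzer orientations (e.g., via a rotating wave plate) is an assumption made here for brevity:

```python
# Minimal sketch: linear Stokes parameters and degree of linear polarization
# (DoLP) from intensities measured at analyzer angles 0, 45, 90, 135 degrees.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Return S0, S1, S2 and the DoLP for one unresolved-object measurement."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (averaged estimate)
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    dolp = np.hypot(s1, s2) / s0
    return s0, s1, s2, dolp
```

Plotting DoLP or the normalized S1, S2 against solar phase angle over a night yields the per-satellite signature curves the analysis compares.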
Laser links for mobile airborne nodes
Wolfgang Griethe, Markus Knapek, Joachim Horwath
Remotely Piloted Aircraft (RPAs), especially Medium Altitude Long Endurance (MALE) and High Altitude Long Endurance (HALE) platforms, are currently operated over long distances, often across several continents. This is only made possible by maintaining Beyond Line Of Sight (BLOS) radio links between ground control stations and unmanned vehicles via geostationary (GEO) satellites. The radio links are usually operated in the Ku frequency band and used both for vehicle command and control (C2), also referred to as Command and Non-Payload Communication (CNPC), and for the transmission of intelligence data, the associated communication stream being referred to as the Payload Link (PL). Even though this scheme of communication is common practice today, it raises various issues. The paper shows that the existing problems can be solved by combining the latest technologies with altered, intuitive communication strategies. In this context, laser communication is discussed as a promising technology for airborne applications. For tactical reasons, such as RPA cooperative flying, air-to-air (A2A) communication is clearly more advantageous than GEO satellite communication (SatCom). Hence, together with in-flight test results, the paper presents a design for a lightweight airborne laser terminal suitable for use onboard manned or unmanned airborne nodes. The advantages of LaserCom in combination with Intelligence, Surveillance and Reconnaissance (ISR) technologies, particularly for Persistent Wide Area Surveillance (PWAS), are highlighted, and the technical challenges of flying LaserCom terminals aboard RPAs are outlined. The paper leads to the conclusion that by combining LaserCom and ISR, a new quality arises for the overall system which is more than just the sum of two separate key technologies.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small-SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The small-SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³, and <350 W of power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
EM modeling of far-field radiation patterns for antennas on the GMA-TT UAV
Anne I. Mackenzie
To optimize communication with the Generic Modular Aircraft T-Tail (GMA-TT) unmanned aerial vehicle (UAV), electromagnetic (EM) simulations have been performed to predict the performance of two antenna types on the aircraft. Simulated far-field radiation patterns indicate the power radiated by the antennas and the aircraft together, taking into account blockage by the aircraft as well as radiation from its conducting and dielectric portions. With knowledge of the polarization and distance of the two communicating antennas, e.g., one on the UAV and one on the ground, and the transmitted signal strength, a calculation may be performed to find the strength of the signal traveling from one antenna to the other and to check that the transmitted signal meets the receiver system requirements for the designated range. To do this, the frequency and polarization must be known for each antenna, in addition to its design and location. The permittivity, permeability, and geometry of the UAV components must also be known. The full-wave method-of-moments solution produces the appropriate dBi radiation pattern, in which the received signal strength is calculated relative to that of an isotropic radiator.
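The signal-strength check described above is essentially a link-budget calculation. Here is a minimal sketch using the Friis free-space relation, with the simulated dBi gains standing in for the pattern values at the relevant look angles; the polarization mismatch is folded into a single assumed loss term:

```python
# Minimal link-budget sketch (Friis transmission equation in dB form).
import math

def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, range_m, freq_hz,
                       pol_mismatch_db=0.0):
    """Received power given antenna gains, range, frequency, and losses."""
    c = 299_792_458.0  # speed of light, m/s
    fspl_db = 20 * math.log10(4 * math.pi * range_m * freq_hz / c)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db - pol_mismatch_db

# Example (all values hypothetical): 1 W (30 dBm) at 2.4 GHz over 10 km,
# 2 dBi UAV antenna, 6 dBi ground antenna.
print(received_power_dbm(30, 2, 6, 10_000, 2.4e9))  # about -82 dBm
```

Comparing the result against the receiver sensitivity answers the question posed above: whether the transmitted signal meets the receiver system requirements at the designated range.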
ISR: Image Fusion/Enhancement
Fusion of video and radar comparison to 3D ladar for activity recognition
Determining hostile or suspicious activities within a civilian population can be challenging. Incorporating automated techniques for classifying activities can significantly reduce the operator workload. Utilizing 3D sensor modalities such as ladar can provide a strong capability for recognizing dismount activities. However, fusing multiple modalities, such as video in conjunction with radar, could provide a cheaper alternative for wide-area coverage. This work utilizes a single point-of-view 3D imaging system to approximate ladar-captured data. Activity classification is done on the full 3D extracted motion, achieving 86% correct classification. Video-only activity classification is simulated by reducing the radial motion resolution and increasing the radial velocity error, and shows good performance on a significant number of activities. Radar-only classification is simulated by reducing the angular resolution and increasing the angular velocity error, and shows good performance on a roughly orthogonal set of activities. Fusing the simulated radar and video data together at different fusion levels and comparing to the 3D ladar system gives an estimate of the loss in classification capability when using the less expensive fusion system.
Real-time technology for enhancing long-range imagery
Many ISR applications require constant monitoring of targets from long distances. When captured over long distances, imagery is often degraded by atmospheric turbulence, which adds a time-variant blurring effect and can result in a significant loss of information. To recover this information, image processing techniques have been developed that enhance sequences of short-exposure images or videos in order to remove frame-specific scintillation and warping. While some of these techniques have been shown to be quite effective, the associated computational complexity and required processing power have limited their application to post-event analysis. To meet the needs of real-time ISR applications, video enhancement must be done in real time in order to provide actionable intelligence as the scene unfolds. In this paper, we provide an overview of an algorithm capable of providing the desired enhancement and focus on its real-time implementation. We discuss the role that GPUs play in enabling real-time performance. This technology can add performance to ISR applications by improving the quality of long-range imagery as it is collected, effectively extending sensor range.
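As a simplified stand-in for the class of enhancement described here (not the paper's algorithm), the sketch below dewarps each short-exposure frame toward a temporal-average reference and then averages; real-time versions move the dense-flow and remap steps onto the GPU:

```python
# Hedged sketch of turbulence mitigation by dewarping toward a temporal
# prototype. Frames are assumed to be same-size uint8 grayscale images.
import cv2
import numpy as np

def dewarp_sequence(frames):
    ref = np.mean(frames, axis=0).astype(np.uint8)   # crude scene prototype
    h, w = ref.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    acc = np.zeros((h, w), np.float32)
    for f in frames:
        # Dense optical flow from the reference to the distorted frame...
        flow = cv2.calcOpticalFlowFarneback(ref, f, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # ...pulls each pixel back from where turbulence displaced it.
        acc += cv2.remap(f, gx + flow[..., 0], gy + flow[..., 1],
                         cv2.INTER_LINEAR).astype(np.float32)
    return (acc / len(frames)).astype(np.uint8)
```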
Characterization of UAV hover patterns in support of super resolution research
Jeremy Straub, Ronald Marsh
Prior work has demonstrated the efficacy of a hierarchical super-resolution technique for enhancing image data similar to that collected by UAVs. This technique relies on sub-pixel movement between images, which was artificially created in the prior work. This paper characterizes a UAV's hover movement pattern to evaluate (1) whether it may produce the requisite level of movement and (2) the distances from the target at which this hover movement provides a suitable level of shifting. The impact of being able to utilize hover movement to aid super-resolution is discussed, and future work is considered.
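For context, the quantity being characterized, sub-pixel shift between hover frames, can be measured with phase correlation; a minimal sketch using scikit-image (frames and upsample factor are placeholders):

```python
# Minimal sketch: estimate the sub-pixel translation between two frames.
from skimage.registration import phase_cross_correlation

def subpixel_shift(ref_frame, frame, upsample_factor=100):
    """Return the (row, col) shift of `frame` relative to `ref_frame`."""
    shift, error, _ = phase_cross_correlation(
        ref_frame, frame, upsample_factor=upsample_factor)
    return shift  # e.g., [0.23, -0.41] indicates a usable sub-pixel hop
```

Shifts with fractional parts well away from zero would indicate that the hover pattern supplies the movement the super-resolution technique needs at that standoff distance.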
ISR: Image Processing and Tracking
Aerial video mosaicking using binary feature tracking
Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point’s image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
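For illustration, a minimal sketch of binary-descriptor matching and homography fitting with OpenCV; the paper's tracking-based, spatially local matcher and its frame-to-ground formulation are approximated here by a plain Hamming brute-force match between frames:

```python
# Hedged sketch: ORB binary features + Hamming matching + RANSAC homography.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_and_fit_homography(img1, img2):
    """Fit a homography between two grayscale frames (needs >= 4 matches)."""
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards outliers such as keypoints on moving vehicles,
    # echoing the anomalous-motion rejection step described above.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```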
Background image understanding and adaptive imaging for vehicle tracking
Burak Uzkent, Matthew J. Hoffman, Anthony Vodacek, et al.
We describe our effort to create an imaging-based vehicle tracking system that uses the principles of dynamic data driven applications systems to observe, model, and collect new data within a dynamic feedback loop. Several unique aspects of the system include tracking of user-defined vehicles, the use of an adaptive sensor that can change modality, and a reliance on background image understanding to improve tracking and minimize error. We describe the system and present results, demonstrated within the DIRSIG image simulation model, that show improved tracking performance.
Enhanced performance for the interacting multiple model estimator with integrated multiple filters
Madeleine G. Sabordo, Elias Aboutanios
In this paper, we propose a new approach to target visibility for the Interacting Multiple Model (IMM) algorithm. We introduce the IMM with Integrated Multiple Filters (IMM-IMF) to selectively engage a filter appropriate for the gated clutter density at each time step, and we investigate five model sets that describe the dynamic motion of a manoeuvring target. The model sets are incorporated into the IMM-IMF tracker to estimate the behaviour of the target. We employ the Dynamic Error Spectrum (DES) to assess the effectiveness of the tracker with the target visibility concept incorporated and to compare how well the model sets enhance tracking performance. Results show that the new version of target visibility significantly improves the performance of the tracker. Simulation results also demonstrate that the 2CV-CA-2CT model set is the most robust, at the cost of computational resources. The CV-CA model set is the fastest tracker; however, it is the least robust in terms of performance. These results assist decision makers and researchers in choosing appropriate models for IMM trackers. Augmenting the capability of the tracker improves the ability of the platform to identify possible threats and consequently enhances situational awareness.
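For readers unfamiliar with IMM bookkeeping, here is a minimal sketch of the mode-probability update at the heart of any IMM cycle; the transition matrix, likelihoods, and the two-model example are placeholders, and the model-conditioned filters themselves are abstracted away:

```python
# Minimal sketch of one IMM mode-probability update.
import numpy as np

def imm_mode_update(mu, P_trans, likelihoods):
    """mu: (r,) prior mode probabilities; P_trans: (r, r) Markov transition
    matrix (rows sum to 1); likelihoods: (r,) measurement likelihoods from
    each model's filter. Returns posterior mode probabilities."""
    c = P_trans.T @ mu                 # predicted mode probabilities
    mu_post = likelihoods * c
    return mu_post / mu_post.sum()     # normalize

# Hypothetical 2-model (CV/CA) example: a maneuver raises the CA model's
# likelihood, shifting probability mass toward it.
mu = np.array([0.9, 0.1])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
print(imm_mode_update(mu, P, np.array([0.2, 2.0])))  # -> approx [0.38, 0.62]
```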
ISR: Change Detection
Improving change detection results with knowledge of registration uncertainty
Uncertainty in the registration between two images remains a problematic source of error in performing change detection between them. While a number of methods have been developed for reducing the impact of registration error in change detection, none of these methods are based upon a statistical characterization of the uncertainty in the estimate of the registration transformation. When utilizing a feature-point based registration algorithm, we can compute a Cramer-Rao lower bound (CRLB) on the estimate of the registration transformation based on an assumed covariance in the feature-point locations. This information can be used to predict the variance on the location at which pixels will appear in the registered image, which can be used to estimate the bias and variance introduced into the pixel intensities by registration uncertainty. Here, we use this information to improve change detection performance and verify this improvement with simulated and experimental results.
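As an illustration of the bound involved, consider the simple case of a 2D affine registration estimated from matched feature points whose locations carry isotropic Gaussian noise of variance sigma squared; this parameterization is an assumption for illustration, not necessarily the paper's model:

```python
# Minimal sketch: CRLB on affine registration parameters from feature points.
import numpy as np

def registration_crlb(points, sigma=1.0):
    """CRLB covariance of (a11, a12, a21, a22, tx, ty) given N matched
    points; needs at least three non-collinear points to be invertible."""
    J = []
    for x, y in points:
        J.append([x, y, 0, 0, 1, 0])   # d(x') / d(params)
        J.append([0, 0, x, y, 0, 1])   # d(y') / d(params)
    J = np.asarray(J, dtype=float)
    fim = J.T @ J / sigma**2           # Fisher information, Gaussian noise
    return np.linalg.inv(fim)          # 6x6 lower bound on parameter cov
```

Propagating this parameter covariance through the transform at each pixel predicts the per-pixel location variance, which is the quantity used above to estimate the bias and variance registration uncertainty injects into pixel intensities.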
Change detection on UGV patrols with respect to a reference tour using VIS imagery
Autonomous driving robots (Unmanned Ground Vehicles, UGVs) equipped with visual-optical (VIS) cameras offer high potential for automatically detecting suspicious occurrences and dangerous or threatening situations on patrol. In order to explore this potential, the scene of interest is first recorded on a reference tour representing the 'everything okay' situation. On subsequent patrols, changes are detected with respect to the reference in a two-step processing scheme. In the first step, image retrieval finds the reference images closest to the current camera image on patrol. This is done efficiently, based on precalculated image-to-image registrations of the reference, by optimizing image overlap in a local reference search (after a global search when needed). In the second step, a robust spatio-temporal change detection is performed that largely compensates for the 3-D parallax caused by variations in camera position. Various results document the performance of the presented approach.
ISR: Exploitation
Pressing the sparsity advantage via data-based decomposition
Vahid R. Riasati, Laura Andress, Denis Grishin
Numerous ℓ1-norm reconstruction techniques have enabled exact data reconstruction, with high probability, from k-sparse data. In this work, we utilize the adaptive Gram-Schmidt (AGS) technique to test the limits of compressed sensing (CS) based reconstruction using total variation. The Projection-Slice Synthetic Discriminant Function (PSDF) filter naturally lends itself to compressive sensing techniques due to the inherent dimensionality reductions generated by the projection-slice theorem (PST). In this brief study we apply CS to the PSDF by constructing the PSDF impulse response while iteratively reducing the AGS error terms. The truncation prioritizes the vectors with regard to the error energy levels associated with the representation of the data in the Gram-Schmidt process.
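In the spirit of the AGS prioritization described above (though not necessarily the authors' exact algorithm), here is a minimal sketch of Gram-Schmidt orthogonalization with energy-based truncation, keeping basis vectors until the residual energy of the representation falls below a threshold:

```python
# Hedged sketch: Gram-Schmidt with greedy, energy-prioritized truncation.
import numpy as np

def gram_schmidt_truncated(X, energy_tol=1e-3):
    """Orthonormalize columns of X, stopping once the residual carries less
    than energy_tol of the total energy. Assumes energy_tol < 1."""
    total = np.linalg.norm(X) ** 2
    R = X.astype(float).copy()
    Q = []
    for _ in range(min(X.shape)):
        if np.linalg.norm(R) ** 2 <= energy_tol * total:
            break                      # remaining error energy is negligible
        k = int(np.argmax(np.linalg.norm(R, axis=0)))  # highest-energy column
        q = R[:, k] / np.linalg.norm(R[:, k])
        Q.append(q)
        R -= np.outer(q, q @ R)        # deflate: remove q's component
    return np.column_stack(Q)
```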
ISR: Image Sequences/Full Motion Video
Context and quality estimation in video for enhanced event detection
John M. Irvine, Richard J. Wood
Numerous practical applications for automated event recognition in video rely on analysis of the objects and their associated motion, i.e., the kinematics of the scene. The ability to recognize events in practice depends on accurately tracking objects of interest in the video data and accurately recognizing changes relative to the background. Numerous factors can degrade the performance of automated algorithms. Our object detection and tracking algorithms estimate the object position and attributes within the context of a dynamic assessment of video quality, providing more reliable event recognition under challenging conditions. We present an approach to robustly modeling image quality that informs the choice of tuning parameters for a given video stream. The video quality model rests on a suite of image metrics computed in real time from the video. We describe the formulation of the image quality model, and results from a recent experiment quantify the empirical performance for recognition of events of interest.
Automated FMV image quality assessment based on power spectrum statistics
Factors that degrade image quality in video and other sensor collections, such as noise, blurring, and poor resolution, also affect the spatial power spectrum of imagery. Research in human vision and image science over the last few decades has shown that the image power spectrum can be useful for assessing the quality of static images. The research in this article explores the possibility of using the image power spectrum to automatically evaluate full-motion video (FMV) imagery frame by frame. This procedure makes it possible to identify anomalous images and scene changes, and to keep track of gradual changes in quality as collection progresses. This article describes a method for applying power-spectral image quality metrics to images subjected to simulated blurring, blocking, and noise. As a preliminary test on videos from multiple sources, image quality measurements for image frames from 185 videos are compared to analyst ratings based on ground sampling distance. The goal of the research is to develop an automated system for tracking image quality during real-time collection, and to assign ratings to video clips for long-term storage, calibrated to standards such as the National Imagery Interpretability Rating System (NIIRS).
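A minimal sketch of the kind of statistic involved, the radially averaged spatial power spectrum of a frame, whose high-frequency tail is depressed by blur and flattened by noise; the thresholds for flagging anomalous frames are left as assumptions:

```python
# Minimal sketch: radially averaged power spectrum of a grayscale frame.
import numpy as np

def radial_power_spectrum(frame):
    """Return (radial frequency bins, radially averaged power)."""
    f = np.fft.fftshift(np.fft.fft2(frame - frame.mean()))
    power = np.abs(f) ** 2
    h, w = frame.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = (np.bincount(r.ravel(), weights=power.ravel())
              / np.maximum(counts, 1))
    return np.arange(radial.size), radial
```

Tracking a summary of this curve (e.g., its log-log slope) frame by frame is one way to flag scene changes and gradual quality drift as collection proceeds.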
An automated analysis of wide area motion imagery for moving subject detection
Automated analysis of wide area motion imagery (WAMI) can significantly reduce the effort required to convert data into reliable decisions. We register consecutive WAMI frames and use false-color frame comparisons to enhance the visual detection of possible subjects in the imagery. The large number of WAMI detections creates the need to prioritize detections for further inspection. We create a priority queue of detections for automated revisit with smaller field-of-view assets, based on the locations of the movers as well as the probability of each detection. This automated queue works within an operator's preset prioritizations but also has the flexibility to respond dynamically to new events and to incorporate additional information into the surveillance tasking.
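The priority queue described above could be sketched as follows; the scoring rule that blends an operator's preset priority with detection probability is an assumed placeholder:

```python
# Hedged sketch: a max-priority queue of WAMI detections for sensor revisit.
import heapq
import itertools

class DetectionQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker, preserves order

    def push(self, detection, operator_priority, prob):
        # heapq is a min-heap, so negate the score to pop highest first.
        score = operator_priority * prob   # assumed blending rule
        heapq.heappush(self._heap, (-score, next(self._counter), detection))

    def pop(self):
        """Next detection for a narrow field-of-view asset to revisit."""
        return heapq.heappop(self._heap)[-1]

q = DetectionQueue()
q.push({'loc': (44.10, -93.50)}, operator_priority=2.0, prob=0.7)
q.push({'loc': (44.20, -93.60)}, operator_priority=1.0, prob=0.9)
print(q.pop())  # the 2.0 * 0.7 = 1.4 score outranks 1.0 * 0.9
```

New events or updated operator prioritizations can be accommodated simply by pushing with different scores, matching the dynamic-response behavior described above.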
Automated analysis of wide area motion imagery (WAMI) can significantly reduce the effort required for converting data into reliable decisions. We register consecutive WAMI frames and use false-color frame comparisons to enhance the visual detection of possible subjects in the imagery. The large number of WAMI detections produces the need for a prioritization of detections for further inspection. We create a priority queue of detections for automated revisit with smaller field-ofview assets based on the locations of the movers as well as the probability of the detection. This automated queue works within an operator’s preset prioritizations but also allows the flexibility to dynamically respond to new events as well as incorporating additional information into the surveillance tasking.