Proceedings Volume 9828

Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications XIII


Volume Details

Date Published: 24 June 2016
Contents: 6 Sessions, 15 Papers, 0 Presentations
Conference: SPIE Defense + Security 2016
Volume Number: 9828

Table of Contents

  • Front Matter: Volume 9828
  • ISR: Platforms and Missions
  • ISR: Optics and Stabilization
  • ISR: Sensors and Systems
  • ISR: Information Extraction/Quantification
  • ISR: Image Processing
Front Matter: Volume 9828
Front Matter: Volume 9828
This PDF file contains the front matter associated with SPIE Proceedings Volume 9828, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
ISR: Platforms and Missions
ISR systems: Past, present, and future
Intelligence, Surveillance, and Reconnaissance (ISR) systems have been in use for thousands of years. Technology and CONOPS have continually evolved and morphed to meet ever-changing information needs and adversaries. Funding sources, constraints, and procurement philosophies have also evolved, requiring cost-effective innovation to field marketable products that maximize the effectiveness of the Tasking, Capture, Processing, Exploitation, and Dissemination (TCPED) information chain. This paper describes the TCPED information chain and the evolution of ISR (past, present, and future).
A framework for autonomous and continuous aerial intelligence, surveillance, and reconnaissance operations
Christopher Korpela, Philip Root, Jinho Kim, et al.
We propose a framework for intelligence, reconnaissance, and surveillance using an aerial vehicle with multiple sensor payloads to provide autonomous and continuous security operations at a fixed location. A control scheme and a graphical user interface between the vehicle and operator are required for tasks involving remote and unattended inspection. By leveraging existing navigation and path planning algorithms, the system can autonomously patrol large areas, automatically recharge when required, and relay on-demand data back to the user. This paper presents recent validation results of the system and its sensors using the proposed framework.
An aerial 3D printing test mission
This paper provides an overview of an aerial 3D printing technology, its development, and its testing. This technology is potentially useful in its own right; in addition, this work advances the development of a related in-space 3D printing technology. A series of aerial 3D printing test missions, used to test the aerial printing technology, are discussed. Through completing these test missions, the design for an in-space 3D printer may be advanced. The current design for the in-space 3D printer involves focusing thermal energy to heat an extrusion head and allow for the extrusion of molten print material. Plastics can be used, as can metal-containing composites, allowing for the extrusion of conductive material. A variety of experiments will be used to test this initial 3D printer design. High-altitude balloons, as well as parabolic flight tests, will be used to test the effects of microgravity on 3D printing. Zero-pressure balloons can be used to test the effect of long 3D printing missions subjected to low temperatures. Vacuum chambers will be used to test 3D printing in a vacuum environment. The results will be used to adapt a current prototype of an in-space 3D printer. A small-scale prototype can then be sent into low-Earth orbit as a 3U CubeSat. With the ability to 3D print in space demonstrated, future missions can launch production hardware, through which the sustainability and durability of structures in space will be greatly improved.
ISR: Optics and Stabilization
Laser-based satellite communication systems stabilized by non-mechanical electro-optic scanners
Laser communication systems provide numerous advantages for establishing satellite-to-ground data links. As a carrier for information, lasers are characterized by high bandwidth and directionality, allowing fast and secure transfer of data. These systems are also highly resistant to RF interference since they operate in the infrared portion of the electromagnetic spectrum, far from radio bands. In this paper we discuss an entirely non-mechanical electro-optic (EO) laser beam steering technology, with no moving parts, which we have used to form robust 400 Mbps optical data connections through air. This technology will enable low-cost, compact, and rugged free-space optical (FSO) communication modules for small satellite applications. The EO beam steerer at the heart of this system is used to maintain beam pointing as the satellite orbits. It is characterized by extremely low size, weight, and power consumption (SWaP): approximately 300 cm³, 300 g, and 5 W, respectively, a marked improvement over heavy, power-consuming gimbal mechanisms. It is capable of steering a 500 mW, 1 mm short-wave infrared (SWIR) beam over a field of view (FOV) of up to 50° × 15°, a range which can be increased by adding polarization gratings that provide a coarse-adjust stage at the EO beam scanner output. We have integrated this device into a communication system and demonstrated the capability to lock on and transmit a high-quality data stream by modulation of SWIR power.
Piezo-based miniature high resolution stabilized gimbal
Nir Karasikov, Gal Peled, Roman Yasinov, et al.
Piezo motors are characterized by higher mechanical power density, fast response, and direct drive. These features are beneficial for miniature gimbals, and a gimbal based on such motors was developed. It is 58 mm in diameter and weighs 190 g. The gimbal carries two cameras: a FLIR Quark and an HD day camera. The dynamic performance reaches 3 rad/sec velocity and 100 rad/sec² acceleration. A two-axis stabilization algorithm was developed, yielding 80 microradian stabilization. Further, panoramic image capture at a rate of six stabilized fields of view per second was developed. The manuscript reviews the gimbal structure and open architecture, which allows adaptation to other cameras (SWIR, etc.), describes the control algorithm, and presents experimental results of stabilization and of panoramic views taken on a vibration platform and on a UAV.
ISR: Sensors and Systems
The eyes of LITENING
Eric K. Moser
LITENING is an airborne system-of-systems providing long-range imaging, targeting, situational awareness, target tracking, weapon guidance, and damage assessment. It incorporates a laser designator and laser range finders, as well as non-thermal and thermal imaging systems, with multi-sensor boresight. Robust operation is at a premium, and subsystems are partitioned into modular, swappable line-replaceable units (LRUs) and shop-replaceable units (SRUs). This presentation explores design concepts for sensing, data storage, and presentation of imagery associated with the LITENING targeting pod. The "eyes" of LITENING are its electro-optic sensors. Since the initial introduction of LITENING II to the US market in the late 1990s, the program has evolved and matured through a series of spiral functional improvements and sensor upgrades. These include laser-illuminated imaging and, more recently, color sensing. While aircraft displays are outside the LITENING system, updates to the available viewing modules have also driven change and resulted in increasingly effective ways of utilizing the targeting system. One of the latest LITENING spiral upgrades adds a new capability to display and capture visible-band color imagery using new sensors. This augments the system's existing capabilities, which operate over a growing set of visible and invisible colors, infrared bands, and laser line wavelengths. A COTS visible-band camera solution using a CMOS sensor has been adapted to meet the particular needs of the airborne targeting use case.
ISR: Information Extraction/Quantification
Precision optical navigation guidance system
D. Starodubov, K. McCormick, P. Nolan, et al.
We present a new precision optical navigation guidance system that provides continuous, high-quality range and bearing data to fixed-wing aircraft during landing approach to an aircraft carrier. The system uses infrared optical communications to measure range between ship and aircraft with accuracy and precision better than 1 meter at ranges greater than 7.5 km. The innovative receiver design measures bearing from aircraft to ship with accuracy and precision better than 0.5 mrad. The system provides real-time range and bearing updates to multiple aircraft at rates up to several kHz, and duplex data transmission between ship and aircraft.
Pattern of life analysis for diverse data types
Clay D. Spence, Ben Southall, Alex Tozzo, et al.
SRI has developed a system to automatically analyze the Pattern of Life (PoL) of ports, routes, and vessels from a large collection of AIS data. The PoL of these entities is characterized by a set of intuitive and easy-to-query semantic attributes. The prototype system provides an interface to ingest other types of information, such as WAAS (Wide Area Aerial Surveillance) and GDELT (Global Database of Events, Language, and Tone), to augment knowledge of the Area of Operations. It can interact with users by answering questions and simulating what-if scenarios to keep a human in the processing loop.
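The attribute-extraction idea in the abstract can be sketched as follows; the track format, attribute names, and loiter threshold here are illustrative assumptions, not SRI's actual schema or AIS field layout.

```python
from math import hypot

def pol_attributes(track, loiter_speed_kmh=0.5):
    """Derive simple semantic Pattern-of-Life attributes from a vessel track
    given as (t_seconds, x_km, y_km) position fixes. The attribute names and
    the loiter threshold are illustrative, not the paper's actual schema."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        # segment speed in km/h from consecutive fixes
        speeds.append(hypot(x1 - x0, y1 - y0) / (t1 - t0) * 3600.0)
    mean_speed = sum(speeds) / len(speeds)
    loiter_frac = sum(s < loiter_speed_kmh for s in speeds) / len(speeds)
    return {
        "mean_speed_kmh": mean_speed,
        "behaviour": "loitering" if loiter_frac > 0.5 else "transiting",
    }

# A vessel moving 10 km every hour along a straight line
track = [(0, 0.0, 0.0), (3600, 10.0, 0.0), (7200, 20.0, 0.0)]
attrs = pol_attributes(track)
```

Attributes of this shape ("behaviour", mean speed) are the kind of easy-to-query semantic labels the abstract describes; a real system would add port visits, route membership, dwell times, and similar.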
Automated video quality measurement based on manmade object characterization and motion detection
Andrew Kalukin, Josh Harguess, A. J. Maltenfort, et al.
Automated video quality assessment methods have generally been based on measurements of engineering parameters such as ground sampling distance, level of blur, and noise. However, humans rate video quality using specific criteria that measure the interpretability of the video by determining the kinds of objects and activities that might be detected in the video. Given the improvements in tracking, automatic target detection, and activity characterization that have occurred in video science, it is worth considering whether new automated video assessment methods might be developed by imitating the logical steps taken by humans in evaluating scene content. This article will outline a new procedure for automatically evaluating video quality based on automated object and activity recognition, and demonstrate the method for several ground-based and maritime examples. The detection and measurement of in-scene targets makes it possible to assess video quality without relying on source metadata. A methodology is given for comparing automated assessment with human assessment. For the human assessment, objective video quality ratings can be obtained through a menu-driven, crowd-sourced scheme of video tagging, in which human participants tag objects such as vehicles and people in video clips. The size, clarity, and level of detail of features present on the tagged targets are compared directly with the Video National Image Interpretability Rating Scale (VNIIRS).
A study of the effects of degraded imagery on tactical 3D model generation using structure-from-motion
An emerging technology in the realm of airborne intelligence, surveillance, and reconnaissance (ISR) systems is structure-from-motion (SfM), which enables the creation of three-dimensional (3D) point clouds and 3D models from two-dimensional (2D) imagery. Several existing tools, such as VisualSFM and the open-source project OpenSfM, assist in this process; however, it is well known that pristine imagery is usually required to create meaningful 3D data from the imagery. In military applications, such as the use of unmanned aerial vehicles (UAVs) for surveillance operations, imagery is rarely pristine. Therefore, we present an analysis of structure-from-motion packages on imagery that has been degraded in a controlled manner.
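Controlled degradation of the kind described can be sketched as a simple blur-plus-noise sweep applied before the frames are handed to an SfM tool; the function, parameter names, and degradation levels below are our own illustrative choices, not the paper's actual pipeline.

```python
import numpy as np

def degrade(image, blur_radius=1, noise_sigma=5.0, seed=0):
    """Apply a controlled box blur and additive Gaussian noise to a grayscale
    image. Illustrative stand-in for a degradation pipeline; the parameters
    and levels are assumptions, not the authors'."""
    rng = np.random.default_rng(seed)
    out = image.astype(float)
    if blur_radius > 0:
        k = 2 * blur_radius + 1
        kernel = np.ones(k) / k  # separable box-blur kernel
        out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
        out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    out += rng.normal(0.0, noise_sigma, out.shape)  # sensor-noise stand-in
    return np.clip(out, 0, 255).astype(np.uint8)

# Sweep degradation levels; each degraded set would be fed to the SfM tool
frame = np.full((64, 64), 128, dtype=np.uint8)
levels = [degrade(frame, blur_radius=r, noise_sigma=s)
          for r in (0, 1, 2) for s in (0.0, 5.0, 10.0)]
```

Running the same SfM reconstruction on each level and comparing the resulting point clouds is the controlled experiment the abstract describes.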
ISR: Image Processing
Development of a real-time neural network estimator to improve defense capabilities of HEO satellites
Samuel Lightstone, Moshe Fink, Fred Moshary, et al.
The need to observe thermal targets from space is crucial for monitoring both natural events and hostile threats. Satellite design must balance high spatial resolution with high sensitivity and multiple spectral channels. Defense satellites ultimately choose high sensitivity with a small number of spectral channels. This limitation makes atmospheric contamination due to water vapor a significant problem that cannot be determined from the satellite itself. Using a neural network (NN) approach in conjunction with real-time measurements or model predictions of sounding data, we show that an accurate estimation of band radiation and band transmission can be computed in near real time. To demonstrate accuracy, we compare the neural network outputs to both model atmospheres as well as MODIS data for a suitable water vapor band.
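The regression setup can be sketched with a small hand-rolled network mapping a few sounding-style inputs to a transmission-like scalar; the synthetic target function, network size, and learning rate below are illustrative assumptions, not the authors' model or real atmospheric physics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: map 4 "sounding" inputs to a band-transmission-like scalar
X = rng.uniform(0, 1, (256, 4))
y = np.exp(-X.sum(axis=1, keepdims=True) / 4.0)  # synthetic target, not real physics

# One-hidden-layer network; sizes and rates are illustrative choices
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.1
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error loss
    g2 = 2 * err / len(X)
    gW2 = h.T @ g2; gb2 = g2.sum(axis=0)
    gh = (g2 @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

Once trained, a forward pass is cheap enough to run per sounding update, which is the "near real time" property the abstract emphasizes.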
Automatic construction of aerial corridor for navigation of unmanned aircraft systems in class G airspace using LiDAR
Dengchao Feng, Xiaohui Yuan
According to the airspace classification of the Federal Aviation Administration, class G airspace is the airspace within 1,200 feet of the ground, beneath class E airspace and between the class B-D cylinders around towered airstrips. However, owing to the lack of a flight supervision mechanism in this airspace, unmanned aerial system (UAS) missions pose many safety issues. Collision avoidance and route planning for UASs in class G airspace are critical for broad deployment of UASs in commercial and security applications. Yet, unlike a road network, airspace has no stationary markers to identify corridors that are available and safe for UASs to navigate. In this paper, we present an automatic LiDAR-based airspace corridor construction method for navigation in class G airspace and a method for route planning that minimizes collision and intrusion. Our idea is to use LiDAR to automatically identify ground objects that pose navigation restrictions, such as airports and high-rises. A digital terrain model (DTM) is derived from the LiDAR point cloud to provide an altitude-based description of class G airspace. Following the FAA Aeronautical Information Manual, the ground objects that define the restricted airspaces are used together with a digital surface model derived from the LiDAR data to construct the aerial corridor for UAS navigation. Preliminary results demonstrate competitive performance and show that construction of the aerial corridor can be automated with great efficiency.
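The DTM/DSM step can be sketched by gridding LiDAR returns and marking cells where the gap between the surface and the class G ceiling leaves room to fly; the grid size, clearance, and the ~365 m (1,200 ft) ceiling below are illustrative values, and this is a sketch of the general idea rather than the paper's method.

```python
import numpy as np

def corridor_mask(points, cell=10.0, clearance=50.0, ceiling=365.0):
    """Grid LiDAR returns (x, y, z in metres) into DTM (lowest return) and
    DSM (highest return) rasters, then mark cells where a UAS can pass below
    the class G ceiling with safe clearance over obstacles. Parameters are
    illustrative assumptions, not the paper's."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)
    shape = tuple(ij.max(axis=0) + 1)
    dtm = np.full(shape, np.inf)    # terrain: lowest return per cell
    dsm = np.full(shape, -np.inf)   # surface: highest return per cell
    for (i, j), z in zip(ij, points[:, 2]):
        dtm[i, j] = min(dtm[i, j], z)
        dsm[i, j] = max(dsm[i, j], z)
    agl_top = dtm + ceiling              # class G upper bound above terrain
    return dsm + clearance < agl_top     # True where a corridor exists

# Flat ground near z=100 m, with one 400 m-tall obstacle in the second cell
pts = np.array([[5.0, 5.0, 100.0], [5.0, 6.0, 102.0],
                [15.0, 5.0, 100.0], [15.0, 5.0, 500.0]])
mask = corridor_mask(pts)
```

Cells where the mask is True form the navigable raster from which corridor graphs and routes can then be planned.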
High-performance image deinterlacing using optical flow and artifact post-processing on GPU/CPU for surveillance and reconnaissance tasks
The necessity to process interlaced images in surveillance, reconnaissance, or further computer vision areas should be a topic of the past. But, for different reasons, it is not. So, there are situations in practice in which interlaced images have to be processed. Since a lot of algorithms strongly degrade when working with such images directly, a usual method is to double or interpolate image lines in order to discard one of the two interleaved fields. This is efficient but leads to weak results, in which half of the original information is lost. Alternatively, a lot of valuable computation time has to be spent to solve the highly complex motion compensation task in order to improve the results significantly. In this paper, an efficient algorithm is presented to solve this dilemma. First, the algorithm solves the complex 2-D mapping problem using the best state-of-the-art optical flow method that could be found for this purpose. But, of course, for different physical reasons there are regions which cannot properly be handled by optical flow by itself. Therefore, an efficient post-processing method detects and removes remaining artifacts afterwards, which is the main contribution of this paper. This method is based on color interpolation incorporating local image structure. The presented results document the overall performance of the approach with respect to obtained image quality and calculation time. The method is easy to implement and offers valuable pre-processing for a lot of computer vision tasks.
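The baseline the abstract criticizes (keep one field, interpolate the missing lines) and the kind of comb artifact its post-processing targets can both be sketched directly; the functions below are illustrative stand-ins, not the paper's flow-based method or its artifact detector.

```python
import numpy as np

def deinterlace_linear(frame):
    """Baseline deinterlacer from the text: keep the even field and linearly
    interpolate the odd lines from neighbours, losing half the information.
    A sketch of the baseline, not the paper's optical-flow approach."""
    even = frame[0::2].astype(float)
    out = np.empty(frame.shape, dtype=float)
    out[0::2] = even
    lower = np.vstack([even[1:], even[-1:]])  # repeat last line at bottom edge
    out[1::2] = (even + lower) / 2.0
    return out

def comb_artifact_map(frame, thresh=30.0):
    """Flag pixels with strong line-to-line alternation ('combing'), the kind
    of residual artifact a post-processing pass would detect and repair."""
    d_up = np.abs(frame[1:-1].astype(float) - frame[:-2])
    d_down = np.abs(frame[1:-1].astype(float) - frame[2:])
    return np.minimum(d_up, d_down) > thresh
```

On static content the baseline is lossless in the flat sense shown in the test; motion is what produces the combing that the flow-plus-post-processing pipeline exists to fix.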
Improving detection of low SNR targets using moment-based detection
Shannon R. Young, Bryan J. Steward, Michael Hawks, et al.
Increases in the number of cameras deployed, frame rate, and detector array sizes have led to a dramatic increase in the volume of motion imagery data that is collected. Without a corresponding increase in analytical manpower, much of the data is not analyzed to full potential. This creates a need for fast, automated, and robust methods for detecting signals of interest. Current approaches fall into two categories: detect-before-track (DBT) methods, which are fast but often poor at detecting dim targets, and track-before-detect (TBD) methods, which can offer better performance but are typically much slower. This research seeks to contribute to the near-real-time detection of low SNR, unresolved moving targets through an extension of earlier work on higher-order moments anomaly detection, a method that exploits both spatial and temporal information but is still computationally efficient and massively parallelizable. It was found that intelligent selection of parameters can improve probability of detection by as much as 25% compared to earlier work with higher-order moments. The present method can reduce detection thresholds by 40% compared to the Reed-Xiaoli anomaly detector for low SNR targets (for a given probability of detection and false alarm).
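The core higher-order-moments idea can be sketched per pixel over a frame stack: a dim transient leaves a heavy-tailed temporal history that a standardized fourth moment highlights even when the mean barely moves. This is a sketch of the general technique, not the authors' exact estimator or parameter selection.

```python
import numpy as np

def moment_anomaly_map(cube, order=4):
    """Per-pixel standardized temporal moment (order 3 = skewness, order 4 =
    kurtosis) over a (frames, rows, cols) stack. Transient dim movers produce
    heavy-tailed pixel histories that stand out against steady noise.
    Illustrative sketch of the higher-order-moments approach."""
    mu = cube.mean(axis=0)
    sigma = cube.std(axis=0) + 1e-9  # guard against flat pixels
    z = (cube - mu) / sigma
    return (z ** order).mean(axis=0)

rng = np.random.default_rng(1)
cube = rng.normal(100.0, 2.0, (64, 32, 32))  # 64 frames of sensor noise
cube[30, 16, 16] += 20.0                      # dim target crosses one pixel briefly
kmap = moment_anomaly_map(cube)
```

Because the map is an independent reduction per pixel, it parallelizes trivially across the detector array, which is the efficiency property the abstract stresses.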