Proceedings Volume 9076

Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications XI


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 18 June 2014
Contents: 8 Sessions, 22 Papers, 0 Presentations
Conference: SPIE Defense + Security 2014
Volume Number: 9076

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9076
  • ISR Image Processing
  • ISR Video Processing
  • ISR Strategies and Training
  • ISR Optics and Gimbals
  • ISR Sensors I
  • ISR Sensors II
  • Poster Session
Front Matter: Volume 9076
Front Matter: Volume 9076
This PDF file contains the front matter associated with SPIE Proceedings Volume 9076, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
ISR Image Processing
High-performance electronic image stabilisation for shift and rotation correction
Steve C. J. Parker, D. L. Hickman, F. Wu
A novel low size, weight and power (SWaP) video stabiliser called HALO™ is presented that uses a system-on-chip (SoC) to combine the high processing bandwidth of an FPGA with the signal-processing flexibility of a CPU. An image-based architecture is presented that can adapt the tiling of frames to cope with changing scene dynamics. A real-time implementation is then discussed that can generate several hundred optical flow vectors per video frame to accurately calculate the unwanted rigid-body translation and rotation of camera shake. The performance of the HALO™ stabiliser is comprehensively benchmarked against the respected Deshaker 3.0 off-line stabiliser plugin for VirtualDub. Eight different videos are used for benchmarking, simulating battlefield, surveillance, security, and low-level flight applications in both visible and IR wavebands. The results show that HALO™ rivals the performance of Deshaker within its operating envelope. Furthermore, HALO™ may be easily reconfigured to adapt to changing operating conditions or requirements, and can be used to host other video processing functionality such as image distortion correction, fusion, and contrast enhancement.
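A minimal sketch of the general technique the abstract describes — estimating per-frame rigid translation and rotation from sparse optical-flow vectors — using standard OpenCV calls; this is an illustration, not the HALO™ FPGA/SoC implementation, and all parameter values are assumptions:

```python
import cv2
import numpy as np

def rigid_motion(prev_gray, curr_gray):
    # Track a few hundred corner features between consecutive frames.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                   qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    # Robustly fit a similarity transform; RANSAC rejects flow vectors on
    # independently moving scene content so only camera shake is estimated.
    M, _ = cv2.estimateAffinePartial2D(good0, good1, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]
    rot = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return dx, dy, rot  # subtract these to stabilise the frame
```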
Identification of spatially corresponding imagery using content-based image retrieval in the context of UAS video exploitation
Stefan Brüstle, Daniel Manger, Klaus Mück, et al.
For many tasks in the fields of reconnaissance and surveillance it is important to know the spatial location represented by the imagery to be exploited. A task involving the assessment of changes, e.g. the appearance or disappearance of an object of interest at a certain location, can typically not be accomplished without spatial location information associated with the imagery. Often, such georeferenced imagery is stored in an archive enabling the user to query for the data with respect to its spatial location. Thus, the user is able to effectively find spatially corresponding imagery to be used for change detection tasks. In the field of exploitation of video taken from unmanned aerial systems (UAS), spatial location data is usually acquired using a GPS receiver, together with an INS device providing the sensor orientation, both integrated in the UAS. If valid GPS data becomes unavailable for a period of time during a flight, e.g. due to sensor malfunction, transmission problems or jamming, the imagery gathered during that time is not applicable for change detection tasks based merely on its georeference. Furthermore, GPS and INS inaccuracy together with potentially poor knowledge of ground elevation can also render location information inapplicable. On the other hand, change detection tasks can be hard to accomplish even if imagery is well georeferenced, as a result of occlusions within the imagery (due to, e.g., clouds or fog) or image artefacts (due to, e.g., transmission problems). In these cases a merely georeference-based approach to finding spatially corresponding imagery can also be inapplicable. In this paper, we present a search method based on the content of the images to find imagery spatially corresponding to given imagery, independent of georeference quality. Using methods from content-based image retrieval, we build an image database which allows for querying even large imagery archives efficiently. We further evaluate the benefits of this method in the context of a video exploitation workflow on the basis of its integration into our video archive system.
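A hedged sketch of the general content-based retrieval idea (the paper's own pipeline is not specified in the abstract): index local ORB descriptors for each archived frame, then rank archive images by the number of confident matches against a query frame. The distance threshold and feature count are illustrative assumptions:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(img_gray):
    _, desc = orb.detectAndCompute(img_gray, None)
    return desc

def rank_archive(query_gray, archive_descs):
    # archive_descs: {image_id: ORB descriptor array}, built offline.
    q = describe(query_gray)
    scores = []
    for img_id, d in archive_descs.items():
        matches = matcher.match(q, d)
        # More confident matches => more likely the same place.
        scores.append((sum(1 for m in matches if m.distance < 40), img_id))
    return sorted(scores, reverse=True)
```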
Meta-image navigation augmenters for GPS denied mountain navigation of small UAS
Teng Wang, Koray Çelik, Arun K. Somani
We present a novel approach to use mountain drainage patterns for GPS-denied navigation of small unmanned aerial systems (UAS) such as the ScanEagle, utilizing a down-looking fixed-focus monocular imager. Our proposal allows extension of missions to GPS-denied mountain areas, with no assumption of human-made geographic objects. We leverage the analogy between mountain drainage patterns, human arteriograms, and human fingerprints to match local drainage patterns to Graphics Processing Unit (GPU) rendered parallax occlusion maps of geo-registered radar returns (GRRR). Details of our actual GPU algorithm are beyond the scope of this paper and are planned as a future paper. The matching occurs in real-time, while GRRR data is loaded on-board the aircraft pre-mission, so as not to require a scanning aperture radar during the mission. For recognition purposes, we represent a given mountain area with a set of spatially distributed mountain minutiae, i.e., details found in the drainage patterns, so that conventional minutiae-based fingerprint matching approaches can be used to match real-time camera images against template images in the training set. We use medical arteriography processing techniques to extract the patterns. The minutiae-based representation of mountains is achieved by first exposing mountain ridges and valleys with a series of filters and then extracting mountain minutiae from these ridges/valleys. Our results are experimentally validated on actual terrain data and show the effectiveness of the minutiae-based mountain representation method. Furthermore, we study how to select landmarks for UAS navigation based on the proposed mountain representation and give a set of examples to show its feasibility. This research was in part funded by Rockwell Collins Inc.
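An illustrative sketch of the fingerprint-style minutiae step, assuming a binary ridge/valley map has already been extracted with the vesselness-style filters the abstract mentions: skeletonize the map and classify skeleton pixels as minutiae (endpoints or bifurcations) by their 8-neighbour count, as in conventional fingerprint matching:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def mountain_minutiae(ridge_mask):
    skel = skeletonize(ridge_mask > 0)
    # Count the 8-connected skeleton neighbours of every skeleton pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    nbrs = convolve(skel.astype(int), kernel, mode='constant')
    endpoints = np.argwhere(skel & (nbrs == 1))      # ridge endings
    bifurcations = np.argwhere(skel & (nbrs >= 3))   # ridge branchings
    return endpoints, bifurcations
```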
Parallax visualization of full motion video using the Pursuer GUI
Christopher A. Mayhew, Mark B. Forgues
In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI).1 In addition to the ability to PV WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.
ISR Video Processing
Effect of video decoder errors on video interpretability
Advances in video compression technology can result in greater sensitivity to bit errors. Bit errors can propagate, causing sustained loss of interpretability. In the worst case, the decoder "freezes" until it can re-synchronize with the stream. Detection of artifacts enables downstream processes to avoid corrupted frames. A simple template approach to detect block stripes and a more advanced cascade approach to detect compression artifacts were shown to correlate with the presence of artifacts and decoder messages.
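A minimal sketch of a template-style block-stripe test in the spirit of the abstract (the paper's exact detectors are not given): compression artifacts concentrate gradient energy on the 8-pixel DCT block grid, so compare column differences on the grid against those off the grid:

```python
import numpy as np

def block_stripe_score(gray, block=8):
    img = gray.astype(np.float32)
    # Mean absolute horizontal difference for each column boundary.
    col_diff = np.abs(np.diff(img, axis=1)).mean(axis=0)
    on_grid = col_diff[block - 1::block].mean()
    off_grid = np.delete(col_diff,
                         np.arange(block - 1, col_diff.size, block)).mean()
    return on_grid / (off_grid + 1e-6)  # >> 1 suggests block-stripe artifacts
```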
Multi-frame image processing with panning cameras and moving subjects
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition, we evaluated algorithm efficacy using field test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
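One plausible instance of the general idea (the paper's three techniques are not detailed in the abstract): cancel camera panning by global registration, then exclude independently moving pixels from the temporal combination used for turbulence mitigation. The threshold value is an assumption:

```python
import cv2
import numpy as np

def register_and_average(frames, motion_thresh=15):
    # frames: list of same-size grayscale images.
    ref = frames[len(frames) // 2]
    stack = []
    for f in frames:
        # Phase correlation estimates the global (panning) shift per frame.
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(f))
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        stack.append(cv2.warpAffine(f, M, (f.shape[1], f.shape[0])))
    stack = np.stack(stack).astype(np.float32)
    med = np.median(stack, axis=0)
    # Mask pixels that deviate strongly from the median: moving subjects.
    mask = np.abs(stack - med) < motion_thresh
    return (stack * mask).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)
```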
Marine object detection in UAV full-motion video
Shibin Parameswaran, Corey Lane, Bryan Bagnall, et al.
Recent years have seen an increased use of Unmanned Aerial Vehicles (UAV) with video-recording capability for Maritime Domain Awareness (MDA) and other surveillance operations. In order for these efforts to be effective, there is a need to develop automated algorithms to process the full-motion videos (FMV) captured by UAVs in an efficient and timely manner to extract meaningful information that can assist human analysts and decision makers. This paper presents a generalizable marine object detection system that is specifically designed to process raw video footage streaming from UAVs in real-time. Our approach does not make any assumptions about the object and/or background characteristics because, in the MDA domain, we encounter varying background and foreground characteristics such as boats, buoys and ships of varying sizes and shapes, wakes, white caps on water, and glint from the sun, to name but a few. Our efforts rely on basic signal processing and machine learning approaches to develop a generic object detection system that maintains a high level of performance without making prior assumptions about foreground-background characteristics and does not experience abrupt performance degradation when subjected to variations in lighting, background characteristics, video quality, abrupt changes in video perspective, or the size, appearance and number of the targets. In the following report, in addition to our marine object detection system, we present representative object detection results on real-world UAV full-motion video data.
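A hedged sketch in the spirit of the abstract's "basic signal processing" approach (the actual system is not specified): suppress the slowly varying sea background with a local mean, threshold the residual adaptively so no fixed foreground/background model is assumed, and report connected components as candidate objects. Kernel size and thresholds are assumptions:

```python
import cv2
import numpy as np

def detect_candidates(gray, min_area=20):
    bg = cv2.blur(gray, (31, 31))                 # local background estimate
    residual = cv2.absdiff(gray, bg)              # anomalies vs. background
    thr = residual.mean() + 3 * residual.std()    # scene-adaptive threshold
    _, mask = cv2.threshold(residual, thr, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask.astype(np.uint8))
    return [stats[i, :4] for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]  # (x, y, w, h) boxes
```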
A comparison of moving object detection methods for real-time moving object detection
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of video. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time operation, and most of those that are remain limited by the number of objects and the scene complexity. This paper evaluates four of the most commonly used moving object detection methods: background subtraction, Gaussian mixture models, and wavelet-based and optical-flow-based methods. The work is based on evaluating these four methods using two different sets of cameras and two different scenes. The methods have been implemented in MATLAB, and results are compared based on completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be used for real-time moving object detection.
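A sketch of the kind of timing comparison the paper runs, using stock OpenCV building blocks for three of the four method families (wavelet-based detection has no single stock OpenCV call and is omitted here); parameter values are the common defaults, chosen as assumptions:

```python
import time
import cv2

mog2 = cv2.createBackgroundSubtractorMOG2()   # Gaussian mixture model

def time_methods(prev_gray, curr_gray):
    timings = {}
    t = time.perf_counter()
    cv2.threshold(cv2.absdiff(prev_gray, curr_gray), 25, 255,
                  cv2.THRESH_BINARY)           # simple background subtraction
    timings['frame_differencing'] = time.perf_counter() - t

    t = time.perf_counter()
    mog2.apply(curr_gray)                      # per-pixel mixture model
    timings['gaussian_mixture'] = time.perf_counter() - t

    t = time.perf_counter()
    cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                 0.5, 3, 15, 3, 5, 1.2, 0)  # dense flow
    timings['optical_flow'] = time.perf_counter() - t
    return timings
```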
Improved frame differencing based moving object detection using feet-step sound
Moving objects have been detected using various object detection techniques, which fall into two categories: frame-differencing-based and background-subtraction-based. These techniques are limited by camera scene complexity, lighting conditions, video type, etc. Frame-differencing-based techniques process videos faster than background-subtraction-based techniques, but they detect only the boundary of the moving object and may fail for slow-moving objects. Such techniques can be improved by using sound data, as most video recording cameras are equipped with a microphone. Sounds from human footsteps can be recorded with the video and used with frame differencing to improve moving object detection results. Camera microphones also record background noise along with other ambient sound; this noisy data is filtered out using the Fourier transform. Peak locations for each footstep sound are then determined and a full width at half maximum (FWHM) is computed for each peak; the video frames within this width are counted and used to verify the presence of a moving object.
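A sketch of the audio side as the abstract describes it: band-limit the microphone track with an FFT to suppress background noise, locate footstep peaks in the envelope, and map each peak's FWHM interval onto video frame indices. The pass band and peak spacing are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def footstep_frames(audio, audio_rate, video_fps):
    # Crude FFT band-pass: keep an assumed footstep-impact band, zero the rest.
    spec = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(audio.size, 1.0 / audio_rate)
    spec[(freqs < 50) | (freqs > 2000)] = 0
    envelope = np.abs(np.fft.irfft(spec, n=audio.size))
    # One footstep at most every 0.25 s (assumption).
    peaks, _ = find_peaks(envelope, distance=int(0.25 * audio_rate))
    # rel_height=0.5 gives the full width at half maximum of each peak.
    _, _, left, right = peak_widths(envelope, peaks, rel_height=0.5)
    # Convert each FWHM interval from audio samples to video frame indices.
    return [(int(l / audio_rate * video_fps), int(r / audio_rate * video_fps))
            for l, r in zip(left, right)]
```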
ISR Strategies and Training
Near-space airships against terrorist activities
Near-space is the region surrounding the earth that is too dense for a satellite to orbit yet too thin for air-breathing vehicles to fly. The near-space region, located between 65,000 and 325,000 feet, is largely underutilized despite its unique potential. Near-space airships can be used to exploit this potential. Such a system can not only supply a great deal of information using onboard ISR (Intelligence, Surveillance, Reconnaissance) sensors but also serve as a communication/data relay. Airships used in near space can cover a very wide footprint area for surveillance missions. Free of orbital mechanics, these near-space assets can continue their missions for long periods of time, with a persistence of days or months, and can provide persistent intelligence for the fight against terrorist activities. Terrorism is a non-state threat and does not have a static hierarchical structure; fighting such an adversary demands overwhelming intelligence activity. Therefore, intelligence collection and surveillance missions play a vital role in counter-terrorism. Terrorists use asymmetric means of threat that require information superiority to counter. In this study, the exploitation of near space by airships is analyzed for the fight against terrorism. Near-space airships are analyzed with respect to operational effectiveness, logistic structure, and cost, and their advantages and disadvantages are discussed in comparison with satellites and airplanes. As a result, by bridging the gap between air and space, near-space airships are considered to be a most important asset for the warfighter, especially given their operational effectiveness.
IITET and shadow TT: an innovative approach to training at the point of need
Andrew Gross, Favio Lopez, James Dirkse, et al.
The Image Intensification and Thermal Equipment Training (IITET) project is a joint effort between the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) and the Army Research Institute (ARI) Fort Benning Research Unit. The IITET effort develops a reusable and extensible training architecture that supports the Army Learning Model and trains Manned-Unmanned Teaming (MUM-T) concepts to Shadow Unmanned Aerial Systems (UAS) payload operators. The training challenge of MUM-T during aviation operations is that UAS payload operators traditionally learn few of the scout-reconnaissance skills and coordination appropriate to MUM-T at the schoolhouse. The IITET effort leveraged the simulation experience and capabilities at NVESD and ARI's research to develop a novel payload operator training approach consistent with the Army Learning Model. Based on the training and system requirements, the team researched and identified candidate capabilities in several distinct technology areas. The training capability will support a variety of training missions as well as a full campaign. Data from these missions will be captured in a fully integrated After Action Review (AAR) capability, which will provide objective feedback to the user in near-real-time. IITET will be delivered via a combination of browser and video streaming technologies, eliminating the requirement for a client download and reducing user computer system requirements. The result is a novel UAS payload operator training capability, nested within an architecture capable of supporting a wide variety of training needs for air and ground tactical platforms and sensors, and potentially several other areas requiring vignette-based serious games training.
ISR Optics and Gimbals
Optical line-of-sight steering using gimbaled mirrors
Satyam Satyarthi
As the resolution and throughput of optical sensors increase, they require higher line-of-sight slew rates and more precise stabilization. Furthermore, smaller and lighter sensor systems are preferred because on-vehicle space is always at a premium. Consequently, mirror-based line-of-sight control and stabilization systems have become more attractive, as they are generally lighter and more compact than other systems. A general strategy for deriving the kinematic equations for mirror-based imaging systems is established in this paper. Some of the most common mirror configurations and their basic kinematic equations are presented, and challenges and design considerations of gimbaled-mirror line-of-sight steering and stabilization systems are discussed.
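A minimal sketch of the derivation strategy such papers formalize: rotate the mirror normal with the gimbal rotation matrices, then reflect the incoming line of sight with the plane-mirror (Householder) matrix R = I − 2nnᵀ. The nominal 45° fold-mirror geometry here is an assumption for illustration:

```python
import numpy as np

def reflect(los, normal):
    # Plane-mirror law of reflection: r = d - 2 (d . n) n, with n unit-length.
    n = normal / np.linalg.norm(normal)
    return los - 2.0 * np.dot(los, n) * n

def steered_los(az, el, los_in=np.array([0.0, 0.0, 1.0])):
    # Nominal 45-degree fold mirror; gimbal azimuth/elevation rotate its normal.
    n0 = np.array([0.0, np.sin(np.pi / 4), -np.cos(np.pi / 4)])
    Raz = np.array([[np.cos(az), -np.sin(az), 0.0],
                    [np.sin(az),  np.cos(az), 0.0],
                    [0.0, 0.0, 1.0]])
    Rel = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(el), -np.sin(el)],
                    [0.0, np.sin(el),  np.cos(el)]])
    return reflect(los_in, Raz @ Rel @ n0)
```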
Line-of-sight kinematics and corrections for fast-steering mirrors used in precision pointing and tracking systems
J. M. Hilkert, Gavin Kanga, K. Kinnear
Fast steering mirrors, or FSMs, have been used for several decades to enhance or augment the performance of electro-optical imaging and beam-steering systems in applications such as astronomy, laser communications, and military targeting and surveillance. FSMs are high-precision, high-bandwidth electro-mechanical mechanisms used to deflect a mirror over a small angular displacement relative to the base on which it is mounted, typically a stabilized gimbal or other primary pointing device. Although the equations describing the line-of-sight kinematics derive entirely from the simple plane-mirror law of reflection, they are non-linear and axis-coupled, and these effects increase as the FSM angular displacement increases. These inherent non-linearities and axis-coupling effects can contribute to pointing errors in certain modes of operation. The relevant kinematic equations presented in this paper can be used to assess the magnitude of the errors for a given application and make corrections as necessary.
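A numeric illustration of the effect the paper quantifies, under an assumed tip-then-tilt geometry: the exact plane-mirror reflection deviates from the linear "beam moves twice the mirror angle per axis" model, and the deviation grows with FSM displacement:

```python
import numpy as np

def reflected(los, n):
    n = n / np.linalg.norm(n)
    return los - 2.0 * np.dot(los, n) * n

los = np.array([0.0, 0.0, 1.0])
n0 = np.array([0.0, 0.0, -1.0])        # mirror normal at FSM null
for theta_deg in (0.5, 2.0, 5.0):      # equal tip and tilt angles
    t = np.radians(theta_deg)
    # Tip about x, then tilt about y (the ordering itself couples the axes).
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t),  np.cos(t)]])
    Ry = np.array([[ np.cos(t), 0, np.sin(t)],
                   [0, 1, 0],
                   [-np.sin(t), 0, np.cos(t)]])
    exact = reflected(los, Ry @ Rx @ n0)
    # Linear model: 2x mirror angle independently on each axis.
    linear = np.array([-np.sin(2 * t), np.sin(2 * t), -1.0])
    linear /= np.linalg.norm(linear)
    err = np.degrees(np.arccos(np.clip(np.dot(exact, linear), -1.0, 1.0)))
    print(f"{theta_deg:4.1f} deg mirror tilt -> {err:.4f} deg pointing error")
```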
Application of phase matching autofocus in airborne long-range oblique photography camera
Vladimir Petrushevsky, Asaf Guberman
The Condor2 long-range oblique photography (LOROP) camera is mounted in an aerodynamically shaped pod carried by a fast jet aircraft. The large-aperture, dual-band (EO/MWIR) camera is equipped with TDI focal plane arrays and provides high-resolution imagery of extended areas at long stand-off ranges, by day and night. The front Ritchey-Chrétien optics are made of highly stable materials. However, the camera temperature varies considerably in flight conditions. Moreover, the composite-material structure of the reflective objective undergoes gradual dehumidification in the dry nitrogen atmosphere inside the pod, causing a small decrease in structure length. The temperature and humidity effects change the distance between the mirrors by just a few microns. The distance change is small, but it nevertheless alters the camera's infinity focus setpoint significantly, especially in the EO band. To realize the optics' resolution potential, optimal focus must be constantly maintained. In-flight best-focus calibration and temperature-based open-loop focus control give mostly satisfactory performance. To get even better focusing precision, a closed-loop phase-matching autofocus method was developed for the camera. The method makes use of an existing beam-sharer prism FPA arrangement where an aperture partition exists inherently in the area of overlap between adjacent detectors. The defocus is proportional to the image phase shift in the area of overlap. Low-pass filtering of the raw defocus estimate reduces random errors related to variable scene content. The closed-loop control converges robustly to the precise focus position. The algorithm uses the temperature- and range-based focus prediction as an initial guess for the closed-loop phase-matching control. The autofocus algorithm achieves excellent results and works robustly in various conditions of scene illumination and contrast.
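A conceptual sketch of the closed-loop scheme the abstract outlines (the gains and the pixels-to-microns scale factor are placeholder assumptions, not the Condor2 values): estimate the image phase shift between the two overlap strips, low-pass filter it against scene-content noise, and step the focus toward zero shift, starting from the temperature/range prediction:

```python
import numpy as np
import cv2

class PhaseMatchAF:
    def __init__(self, focus_pred_um, gain=0.5, alpha=0.2, px_to_um=1.0):
        self.focus = focus_pred_um   # initial guess from temperature/range model
        self.gain, self.alpha, self.px_to_um = gain, alpha, px_to_um
        self.filt = 0.0

    def update(self, strip_a, strip_b):
        # Sub-pixel shift between the two overlap strips; with a partitioned
        # aperture this shift is proportional to defocus.
        (dx, _), _ = cv2.phaseCorrelate(np.float32(strip_a),
                                        np.float32(strip_b))
        self.filt += self.alpha * (dx - self.filt)   # low-pass filter
        self.focus -= self.gain * self.filt * self.px_to_um
        return self.focus
```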
ISR Sensors I
Automated multi-INT fusion for tactical reconnaissance
Thomas J. Walls, Andrew J. Boudreau, Michael L. Wilson, et al.
The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads continue to expand from single sensor imagers to integrated systems of systems architectures. We describe here flight test results of the Sensor Management System (SMS) designed to provide a flexible central coordination component capable of managing multiple collaborative sensor systems onboard an aircraft or unmanned aerial system (UAS). The SMS architecture is designed to be sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via data links. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a noncompliant sensor system. The SMS architecture will be described and results from several flight tests that included multiple sensor combinations and live data link updates will be shown.
NV-CMOS HD camera for day/night imaging
T. Vogelsong, J. Tower, Thomas Sudol, et al.
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI’s NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE <90%), as well as projected low noise (<2h+) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NVCMOS HD camera provides a substantial reduction in size, weight, and power (SWaP) , ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
ISR Sensors II
Polarimetric sensor systems for airborne ISR
David Chenault, Joseph Foster, Joseph Pezzaniti, et al.
Over the last decade, polarimetric imaging technologies have undergone significant advancements that have led to the development of small, low-power polarimetric cameras capable of meeting current airborne ISR mission requirements. In this paper, we describe the design and development of a compact, real-time, infrared imaging polarimeter, provide preliminary results demonstrating the enhanced contrast possible with such a system, and discuss ways in which this technology can be integrated with existing manned and unmanned airborne platforms.
Real-time aerial multispectral imaging solutions using dichroic filter arrays
Eric V. Chandler, David E. Fish
The next generation of multispectral sensors and cameras needs to deliver significant improvements in size, weight, portability, and spectral band customization to support widespread commercial deployment for a variety of purpose-built aerial, unmanned, and scientific applications. The benefits of multispectral imaging are well established for applications including machine vision, biomedical, authentication, and remote sensing environments – but many aerial and OEM solutions require more compact, robust, and cost-effective production cameras to realize these benefits. A novel implementation uses micropatterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color camera image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. We demonstrate recent results of 4-9 band dichroic filter arrays in multispectral cameras using a variety of sensors including linear, area, silicon, and InGaAs. Specific implementations range from hybrid RGB + NIR sensors to custom sensors with application-specific VIS, NIR, and SWIR spectral bands. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches – including their passivity, spectral range, customization options, and development path. Finally, we report on the wafer-level fabrication of dichroic filter arrays on imaging sensors for scalable production of multispectral sensors and cameras.
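A simple illustration of the de-mosaicing step the abstract describes, assuming a repeating 3x3 filter mosaic (nine bands): subsample each channel from the raw frame, then interpolate back to full resolution. Nearest-neighbour upsampling is used here for brevity; production pipelines use smarter interpolation, as with Bayer color processing:

```python
import numpy as np
import cv2

def demosaic_3x3(raw):
    # raw: single-channel frame whose pixels cycle through 9 filters.
    h, w = raw.shape
    bands = []
    for r in range(3):
        for c in range(3):
            band = raw[r::3, c::3]                     # one spectral channel
            bands.append(cv2.resize(band, (w, h),
                                    interpolation=cv2.INTER_NEAREST))
    return np.stack(bands, axis=-1)                    # h x w x 9 cube
```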
9-band SWIR multispectral sensor providing full-motion video
Mary R. Kutteruf, Michael K. Yetzbacher, Michael J. Deprenger, et al.
Short wave infrared (SWIR) sensors are becoming more common in DoD imaging systems because of their haze-penetration capabilities and the spectral properties of materials in this waveband. Typical SWIR systems have provided either full-motion video (FMV) with framing panchromatic systems or multispectral or hyperspectral imagery with line-scanning systems. The system described here bridges these modalities, providing FMV with nine discrete spectral bands. Nine pixel-sized SWIR filters are arranged in a repeating 3x3 pattern and mounted on top of a COTS 2D staring focal plane array (FPA). We characterize the spectral response of the filter and integrated sensor. Spot-scan measurements and data collected with this camera using narrow-band sources reveal crosstalk-induced nonlinearity in the sensor response. We demonstrate a simple approach to reduce the impact of this nonlinearity on collected imagery.
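A hedged sketch of one simple correction consistent with the abstract's description (the authors' exact method is not given): if band responses mix approximately linearly through crosstalk, a 9x9 mixing matrix measured with narrow-band sources can be inverted to recover per-band signals:

```python
import numpy as np

def correct_crosstalk(cube, mixing_matrix):
    # cube: h x w x 9 de-mosaiced bands.
    # mixing_matrix: 9 x 9, measured with narrow-band sources
    # (column j = response of all nine bands to source j). Assumed invertible.
    inv = np.linalg.inv(mixing_matrix)
    h, w, b = cube.shape
    return (cube.reshape(-1, b) @ inv.T).reshape(h, w, b)
```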
Poster Session
Robust real-time horizon detection in full-motion video
Grace B. Young, Bryan Bagnall, Corey Lane, et al.
The ability to detect the horizon in full-motion video on a real-time basis is an important capability that aids and facilitates real-time processing of full-motion videos for purposes such as object detection, recognition, and other video/image segmentation applications. In this paper, we propose a method for real-time horizon detection designed to be used as a front-end processing unit for a real-time marine object detection system that carries out object detection and tracking on full-motion videos captured by ship/harbor-mounted cameras, Unmanned Aerial Vehicles (UAVs), or any other method of surveillance for Maritime Domain Awareness (MDA). Unlike existing horizon detection work, we cannot assume a priori the angle or nature (e.g., a straight line) of the horizon, due to the nature of the application domain and the data. Therefore, the proposed real-time algorithm is designed to identify the horizon at any angle and irrespective of objects appearing close to and/or occluding the horizon line (e.g., trees or vehicles at a distance) by accounting for its non-linear nature. We use a simple two-stage hierarchical methodology, leveraging color-based features, to quickly isolate the region of the image containing the horizon and then perform a more fine-grained horizon detection operation. In this paper, we present real-time horizon detection results using our algorithm on real-world full-motion video data from a variety of surveillance sensors, such as UAVs and ship-mounted cameras, confirming the real-time applicability of this method and its ability to detect the horizon with no a priori assumptions.
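A sketch of the two-stage, color-feature idea described in the abstract (the published algorithm's details go beyond this, and the blueness feature and band width here are assumptions): first locate the coarse image band where sky-like color statistics end, then trace a possibly non-linear boundary column by column inside that band:

```python
import numpy as np

def detect_horizon(rgb):
    # rgb: h x w x 3 image in RGB order.
    img = rgb.astype(np.float32)
    # Stage 1: per-row mean "blueness"; the sharpest drop marks the coarse band.
    blueness = (img[..., 2] - img[..., :2].mean(axis=-1)).mean(axis=1)
    drop_row = int(np.argmin(np.diff(blueness)))
    lo = max(drop_row - 40, 0)
    hi = min(drop_row + 40, img.shape[0] - 1)
    # Stage 2: fine, per-column boundary inside the band (handles tilt and a
    # non-straight horizon).
    band = img[lo:hi, :, 2] - img[lo:hi, :, :2].mean(axis=-1)
    return lo + np.argmin(np.diff(band, axis=0), axis=0)  # row per column
```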
Fusion of thermal infrared and visible spectrum for robust pedestrian tracking
Tracking pedestrians is an area of computer vision that has attracted a lot of interest in recent years. Much of this work was conducted in the visible spectrum, and some in the thermal infrared spectrum; the majority of the research used one spectrum at a time. In this work, we present a fusion framework that uses the thermal infrared and visible spectra in order to robustly track the detected moving objects. The detected objects are then described using HOG features and classified as pedestrian or non-pedestrian using an SVM. The tests were conducted in outdoor scenarios. The obtained results are promising and show the efficiency of the proposed framework.
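A compact sketch of the classification stage the abstract names (HOG features plus an SVM); the fusion-based detector that proposes candidate windows is assumed to exist upstream, and the window size and SVM parameters are conventional assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(window_gray):
    # 64x128 window with the standard pedestrian-detection HOG layout.
    return hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_classifier(pos_windows, neg_windows):
    # pos_windows: pedestrian crops; neg_windows: non-pedestrian crops.
    X = np.array([hog_features(w) for w in pos_windows + neg_windows])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC(C=0.01).fit(X, y)   # pedestrian vs. non-pedestrian
```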