Proceedings Volume 5787

Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications II

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 10 May 2005
Contents: 6 Sessions, 20 Papers, 0 Presentations
Conference: Defense and Security 2005
Volume Number: 5787

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • ISR Systems
  • Sensor Management, Information Storage, and Displays
  • Information Processing and System Enhancement
  • Welcome and Opening Remarks
  • Motion Tracking
  • Lasers, Sensors, and Information Transmission
ISR Systems
MANTIS-3T: a low-cost light-weight turreted spectral sensor
Joseph Dirbas, Tony Mireles, Adam Davies, et al.
PAR Government Systems Corporation (PAR) has developed a low-cost, light-weight, low-profile, mission-adaptable multispectral imaging system built from mass-produced commercial off-the-shelf (COTS) components. The system provides continuous real-time multispectral data collection for mine countermeasures (MCM) and intelligence, surveillance, and reconnaissance study applications aboard low-cost, light manned and unmanned aircraft platforms. The mission adaptable narrowband tunable imaging system (MANTIS) has been integrated into a small 5" turret currently employed on a variety of small UAV platforms. The turreted MANTIS (MANTIS-3T) gives a remote operator control over gain, exposure, and pointing. The MANTIS-3T sensor will be used to collect imagery over calibration and test targets. Integration strategies and planned data collections are presented.
The civil air patrol ARCHER hyperspectral sensor system
Brian Stevenson, Rory O'Connor, William Kendall, et al.
The Civil Air Patrol (CAP) is procuring Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) systems to increase their search-and-rescue mission capability. These systems are being installed on a fleet of Gippsland GA-8 aircraft, and will position CAP to gain real-world mission experience with the application of hyperspectral sensor and processing technology to search and rescue. The ARCHER system design, data processing, and operational concept leverage several years of investment in hyperspectral technology research and airborne system demonstration programs by the Naval Research Laboratory (NRL) and Air Force Research Laboratory (AFRL). Each ARCHER system consists of a NovaSol-designed, pushbroom, visible/near-infrared (VNIR) hyperspectral imaging (HSI) sensor, a co-boresighted visible panchromatic high-resolution imaging (HRI) sensor, and a CMIGITS-III GPS/INS unit in an integrated sensor assembly mounted inside the GA-8 cabin. ARCHER incorporates an on-board data processing system developed by Space Computer Corporation (SCC) to perform numerous real-time processing functions including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information. A ground processing station is provided for post-flight data playback and analysis. This paper describes the requirements and architecture of the ARCHER system, including design, components, software, interfaces, and displays. Key sensor performance characteristics and real-time data processing features are discussed in detail. The use of the system for detecting and geo-locating ground targets in real-time is demonstrated using test data collected in Southern California in the fall of 2004.
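The abstract does not disclose ARCHER's detection algorithms, so as a hedged illustration the sketch below applies a classic hyperspectral screening technique, the RX anomaly detector, to a synthetic (rows, cols, bands) cube; the cube, its dimensions, and the cue threshold are assumptions for the example, not the fielded method.

    import numpy as np

    def rx_anomaly_scores(cube):
        """Score each pixel of a (rows, cols, bands) cube by its Mahalanobis
        distance from the global background statistics."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)
        mu = pixels.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))  # pinv guards against singular covariance
        centered = pixels - mu
        scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
        return scores.reshape(rows, cols)

    cube = np.random.rand(64, 64, 40)            # stand-in for a pushbroom VNIR cube
    scores = rx_anomaly_scores(cube)
    cues = scores > np.percentile(scores, 99.9)  # top 0.1% of pixels become cues
    print(cues.sum(), "candidate detections")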
System design of an airborne infrared measurement system
Laurent Rousset-Rouviere, Christophe Marchon, Sebastien Vendomele, et al.
Onera has designed and developed a scientific airborne infrared measurement system. The system consists of a supervisor computer and two scientific instruments (a cryogenic IR multiband camera and a cryogenic IR spectro-radiometer). This article presents the different elements of the system and focuses on the design of the cryogenic IR camera, which covers instrument control, data acquisition, IRIG time stamping, and target acquisition and tracking. The article also highlights the communication design, which uses two Ethernet networks to link the elements of the experimental measurement chain.
Advanced airborne ISR demonstration system (USA)
Recon/Optical, Inc. (ROI) is developing an advanced airborne Intelligence, Surveillance, and Reconnaissance (ISR) demonstration system based upon the proven ROI technology used in the SHAred Reconnaissance Pod (SHARP) for the U.S. Navy F/A-18. The demonstration system, which includes several state-of-the-art technology enhancements for next-generation ISR, is scheduled for flight testing in the summer of 2005. The demonstration system contains a variant of the SHARP medium altitude CA-270 camera, comprising an inertially stabilized Visible/NIR 5Kx5K imager and MWIR 2Kx2K imager to provide simultaneous high resolution/wide area coverage dual-band operation. The imager has been upgraded to incorporate a LN-100G GPS/INS within the sensor passive isolation loop to improve the accuracy of the NITF image metadata. The Image Processor is also based upon the SHARP configuration, but the demo system contains several enhancements including increased image processing horsepower, Ethernet-based Command & Control, next-generation JPEG2000 image compression, JPEG2000 Interactive Protocol (JPIP) network data server/client architecture, bi-directional RF datalink, advanced image dissemination/exploitation, and optical Fibre Channel I/O to the solid state recorder. This paper describes the ISR demonstration system and identifies the new network-centric CONOPS made possible by the technology enhancements.
Sensor Management, Information Storage, and Displays
A sensor management framework for autonomous UAV surveillance
Morgan Ulvklo, Jonas Nygards, Jorgen Karlholm, et al.
This paper presents components of a sensor management architecture for autonomous UAV systems equipped with IR and video sensors, focusing on two main areas. Firstly, a framework inspired by optimal control and information theory is presented for concurrent path and sensor planning. Secondly, a method for visual landmark selection and recognition is presented. The latter is intended to be used within a SLAM (Simultaneous Localization and Mapping) architecture for visual navigation. Results are presented on both simulated and real sensor data, the latter from the MASP system (Modular Airborne Sensor Platform), an in-house developed UAV surrogate system containing a gimballed IR camera, a video sensor, and an integrated high performance navigation system.
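As a rough, non-authoritative illustration of entropy-driven planning, the sketch below steps a platform across an occupancy grid, always toward the neighbouring cell with the highest uncertainty, and collapses the belief at each cell it observes. The grid world, sensor model, and greedy policy are invented for the example and are not the MASP planner.

    import numpy as np

    def cell_entropy(p):
        """Binary entropy (bits) of an occupancy probability p."""
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def greedy_plan(belief, start, steps=20):
        """Repeatedly move to the most uncertain 4-neighbour and observe it."""
        pos, path = start, [start]
        for _ in range(steps):
            r, c = pos
            neighbours = [(r + dr, c + dc)
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= r + dr < belief.shape[0]
                          and 0 <= c + dc < belief.shape[1]]
            pos = max(neighbours, key=lambda rc: cell_entropy(belief[rc]))
            belief[pos] = 0.01        # an observation drives the cell toward certainty
            path.append(pos)
        return path

    belief = np.full((16, 16), 0.5)   # maximally uncertain prior map
    print(greedy_plan(belief, start=(8, 8))[:5])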
Control and display stations for simultaneous multiple dissimilar unmanned aerial vehicles
Dale C. Linne von Berg, Michael D. Duncan, John G. Howard, et al.
The NRL Optical Sciences Division has developed and demonstrated ground and airborne-based control, display, and exploitation stations for simultaneous use of multiple dissimilar unmanned aerial vehicle (UAV) Intelligence, Surveillance, and Reconnaissance (ISR) systems. The demonstrated systems allow operation on airborne and ground mobile platforms and allow for the control and exploitation of multiple on-board airborne and/or remote unmanned sensor systems simultaneously. The sensor systems incorporated into the control and display stations include visible and midwave infrared (EO/MWIR) panchromatic and visible through short wave infrared (VNIR-SWIR) hyperspectral (HSI) sensors of various operational types (including step-stare, push-broom, whisk-broom, and video). Demonstrated exploitation capabilities include real-time screening, sensor control, pre-flight and real-time payload/platform mission planning, geo-referenced imagery mosaicing, change detection, stereo imaging, moving target tracking, and networked dissemination to distributed exploitation nodes (man-pack, vehicle, and command centers). Results from real-time flight tests using ATR, Finder, and TERN UAVs are described.
Multi-sensor digital recorder systems for next generation maritime patrol aircraft and sensors
It is customary for Maritime Patrol Aircraft (MPA) to be fitted with a number of sensor systems used in multiple mission roles. Traditionally the sensors found onboard MPA are accompanied by a data recorder system that has been configured with a sensor-unique analogue interface. In most cases the recorders themselves are of either a commercial or instrumentation standard. The interfaces to the sensor systems are what make each recorder unique and, in many cases, limited in use. It is not uncommon for a single aircraft to carry up to a half-dozen different sensor types, along with a half-dozen of these unique recorder systems to support them. Each of these exclusive systems naturally has a corresponding ground replay system, as well as distinctive media and the related support infrastructure.
Mosaics from video with burned-in metadata
In this paper, we discuss some of the challenges of computing mosaics from practical aerial surveillance video, and how these challenges can be overcome. One particular challenge is "burned-in" metadata, which occurs when metadata from the sensor and aircraft are burned into the actual pixel data. Another obstacle is the presence of "black borders" that commonly appear on the edges of video frames, which may vary in size and location from system to system. The paper demonstrates methods of robustly aligning frames and compositing them so that the limitations just mentioned do not affect the final mosaic quality too severely.
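A minimal sketch of the masking idea, assuming OpenCV (cv2), ORB features, and a metadata strip at a fixed, hard-coded location; a fielded system would detect the overlay and border regions per platform rather than assume them.

    import cv2
    import numpy as np

    def valid_region_mask(frame, border=16, overlay_rows=40):
        """Mask off the black border and the burned-in metadata strip so
        feature detection only sees genuine scene content."""
        mask = np.full(frame.shape[:2], 255, dtype=np.uint8)
        mask[:overlay_rows, :] = 0                    # assumed metadata strip location
        mask[:border, :] = 0; mask[-border:, :] = 0   # assumed border size
        mask[:, :border] = 0; mask[:, -border:] = 0
        return mask

    def align(prev_gray, cur_gray):
        """Estimate the prev->cur homography from masked ORB matches."""
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(prev_gray, valid_region_mask(prev_gray))
        kp2, des2 = orb.detectAndCompute(cur_gray, valid_region_mask(cur_gray))
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects outliers
        return H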
Remote imagery for unmanned ground vehicles (RIUGV)
Philip A. Frederick, Robert Kania, Bernard Theisen, et al.
The combination of high-resolution multi-spectral satellite imagery and advanced COTS object-oriented image processing software provides for an automated terrain feature extraction and classification capability. This information, along with elevation data, infrared imagery, a vehicle mobility model, and various meta-data (local weather reports, the Zobler soil map, etc.), is fed into automated path planning software to provide a stand-alone ability to generate rapidly updateable dynamic mobility maps for Manned or Unmanned Ground Vehicles (MGVs or UGVs). These polygon-based mobility maps can reside on an individual platform or a tactical network. When new information is available, change files are generated and ingested into existing mobility maps based on user-selected criteria. Bandwidth concerns are mitigated by the use of shape files for the representation of the data (e.g., each object in the scene is represented by a shape file and thus can be transmitted individually). User input (desired level of stealth, required time of arrival, etc.) determines the priority in which objects are tagged for updates. This technology was tested at Fort Knox, Kentucky, October 11-15, 2004. Satellite imagery was acquired in a near-real-time fashion for the selected test site. Portions of the resulting geo-rectified image were compared with surveyed range locations to assess the accuracy of the approach. The derived UGV path plans were ingested into a Stryker UGV and the routes were autonomously traversed. This paper details the feasibility of this approach based on the results of this testing.
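The abstract does not describe the planner internally; as an illustration of planning over a rasterized mobility map, the sketch below runs A* over a cost grid in which impassable polygons become None cells. The grid and costs are invented for the example.

    import heapq

    def a_star(cost, start, goal):
        """A* over a 2-D list of per-cell traversal costs (None = impassable)."""
        rows, cols = len(cost), len(cost[0])
        frontier, came, g = [(0, start)], {start: None}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:
                break
            r, c = cur
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                    ng = g[cur] + cost[nr][nc]
                    if (nr, nc) not in g or ng < g[(nr, nc)]:
                        g[(nr, nc)], came[(nr, nc)] = ng, cur
                        h = abs(nr - goal[0]) + abs(nc - goal[1])  # admissible Manhattan heuristic
                        heapq.heappush(frontier, (ng + h, (nr, nc)))
        path, node = [], goal
        while node is not None:
            path.append(node)
            node = came[node]
        return path[::-1]

    grid = [[1, 1, 5],
            [None, 1, 5],
            [1, 1, 1]]                    # None marks a no-go polygon
    print(a_star(grid, (0, 0), (2, 2)))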
Information Processing and System Enhancement
QUICKFIRE: a JPEG 2000/JPIP-enabled ISR screener application
S. Danny Rajan, Christopher Kavanagh, James Kasner, et al.
In this paper we present a JPEG2000-enabled ISR dissemination system that provides an airborne compression server and a ground-based screener client. This system makes possible direct dissemination of airborne-collected imagery to users on the ground via existing portable communications. Utilizing the progressive nature of JPEG2000, the interactive capabilities of its associated JPIP streaming, and the on-the-fly mosaicing capability of the MIRAGE ground screener client application, ground-based users can interactively access large volumes of geo-referenced imagery from an airborne image collector. The system, called QUICKFIRE, is a recently developed prototype demonstrator. We present preliminary results from this effort.
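A full JPIP exchange needs a server, but the codestream property QUICKFIRE exploits can be shown locally: a JPEG 2000 image can be decoded at reduced resolution without decoding the full-resolution data. The sketch below assumes the third-party glymur package, a placeholder file name, and a codestream encoded with at least three resolution levels (encoders commonly use five).

    import glymur

    jp2 = glymur.Jp2k("scene.jp2")   # placeholder path to a JP2 codestream
    thumb = jp2[::8, ::8]            # decode three resolution levels down (1/8 scale)
    full = jp2[:]                    # full-resolution decode when the user zooms in
    print(thumb.shape, full.shape)

A JPIP client obtains the same effect over a datalink by requesting only the codestream packets needed for the current resolution and region of interest.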
Establishing a common coordinate view in multiple moving aerial cameras
Yaser Sheikh, Alexei Gritai, Imran Junejo, et al.
A camera mounted on an aerial vehicle provides an excellent means of monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. The underlying concept of such co-operative sensing is to use inter-camera relationships to give global context to 'locally' obtained information at each camera. It is desirable, therefore, that the data collected at each camera and the inter-camera relationship discerned by the system be presented in a coherent visualization. Because the cameras are mounted on UAVs, large swaths of terrain may be traversed in a short period of time, making coherent visualization indispensable for applications like surveillance and reconnaissance. While most visualization approaches have hitherto focused on data from a single camera at a time, we show that, as a consequence of tracking objects across cameras, widely separated mosaics can be aligned, both in space and color, for concurrent visualization. Results are shown on a number of real sequences, validating our qualitative models.
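As a hedged stand-in for the colour-alignment step (the spatial alignment, a homography between mosaics, is assumed done elsewhere), the sketch below matches one mosaic's intensity histogram to a reference mosaic's. The synthetic arrays and single-channel treatment are assumptions, not the authors' photometric model.

    import numpy as np

    def match_histogram(source, reference):
        """Remap source intensities so their CDF matches the reference CDF."""
        s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
        r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts) / source.size
        r_cdf = np.cumsum(r_counts) / reference.size
        mapped = np.interp(s_cdf, r_cdf, r_vals)  # reference level at the same quantile
        lut = dict(zip(s_vals, mapped))
        return np.vectorize(lut.get)(source)

    rng = np.random.default_rng(0)
    mosaic_a = rng.integers(0, 180, (64, 64))     # darker camera
    mosaic_b = rng.integers(60, 255, (64, 64))    # brighter camera
    balanced = match_histogram(mosaic_a, mosaic_b)
    print(mosaic_a.mean(), balanced.mean())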
Performance of an EO/IR sensor system in marine search and rescue
Carrie L. Leonard, Michael J. DeWeert, Jonathan Gradie, et al.
This investigation centered on the most challenging search and rescue requirements: finding small targets in high seas. Our course of action was to investigate the capabilities of known hyperspectral and LWIR sensors in realistic conditions of target and environment to drive the design of a sensor system capable of substantially improving search efficiency and efficacy for these conditions. The relevant results from this study include demonstration of significant power in clutter rejection (e.g., whitewater and wave complexity) by the LWIR sensor. In addition, several factors combine to indicate that a modest implementation of HSI and IR sensors would provide significant improvement in search efficiency and efficacy for small targets in high seas. These factors include high PD, low PFA, and the untiring nature of the sensors when combined with the potential of real-time automatic target/background discrimination. This modest implementation would translate directly into faster, more complete coverage, at lower overall costs to the USCG, and a more likely probability of a successful search mission.
Welcome and Opening Remarks
Automatic motion detection in reconnaissance imagery and other applications of real time orthorectification
High speed orthorectification is an enabling technology which can shorten the delay between image acquisition and precise geolocation to almost zero. This paper describes recent advances in orthorectification processing speeds and shows how the speeds now possible can permit image fusion, change and motion detection, ground control collection and adjustment to be done in real time. It also shows how orthorectification can provide an alternative means of video compression.
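Once two frames are orthorectified into the same ground grid, motion detection reduces to per-cell differencing, which is why real-time orthorectification is enabling. A minimal sketch follows, with the orthorectification itself assumed done upstream and synthetic arrays standing in for the ortho products.

    import numpy as np

    def motion_mask(ortho_t0, ortho_t1, threshold=30):
        """Flag ground cells whose intensity changed by more than a threshold."""
        diff = np.abs(ortho_t1.astype(int) - ortho_t0.astype(int))
        return diff > threshold

    ground_t0 = np.zeros((100, 100), dtype=np.uint8)
    ground_t1 = ground_t0.copy()
    ground_t1[40:44, 10:14] = 200     # a vehicle that moved into view
    print("moving pixels:", motion_mask(ground_t0, ground_t1).sum())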
Motion Tracking
Two-color infrared missile warning sensors
Current missile-warning sensors on aircraft mostly operate in the ultraviolet wavelength band. Aimed primarily at detecting short-range, shoulder-fired surface-to-air missiles, the detection range of the sensors is of the same order as the threat range, which is 3-5 km. However, this range is only attained against older missiles, with bright exhaust flames. Modern missile developments include the use of new propellants, which generate low-intensity plumes. These threats are detected at much shorter ranges by current ultraviolet warning sensors, resulting in short reaction times. Infrared sensors are able to detect targets at a much longer range. In contrast with the ultraviolet band, in which a target is observed against an almost zero background, infrared sensors must extract targets from a complex background. This leads to a much higher false-alarm rate, which has thus far prevented the deployment of infrared sensors in a missile warning system. One way of reducing false-alarm levels is to make use of the spectral difference between missile plumes and the background. By carefully choosing two wavelength bands, the contrast between missile plume and background can be maximised. This paper presents a method to search for the best possible combination of two bands in the mid-wave infrared that leads to the longest detection ranges and works for a wide range of missile propellants. Detection ranges predicted in the infrared will be compared with those obtained in the ultraviolet, to demonstrate the increased range and, therefore, the increased reaction time for the aircraft.
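The band-pair search can be sketched as an exhaustive scan over sub-band pairs, scoring each pair by its worst-case plume/background contrast across propellants so that the chosen pair works for all of them. The ratio-based contrast metric and random spectra below are illustrative assumptions; the paper optimizes predicted detection range rather than this simple score.

    import itertools
    import numpy as np

    def best_band_pair(plumes, background):
        """plumes: (n_propellants, n_bands) radiances; background: (n_bands,).
        Score a pair (i, j) by the worst-case difference between the plume
        and background band ratios, then keep the best-scoring pair."""
        best, best_pair = -np.inf, None
        for i, j in itertools.combinations(range(background.size), 2):
            contrast = np.abs(plumes[:, i] / plumes[:, j]
                              - background[i] / background[j]).min()
            if contrast > best:
                best, best_pair = contrast, (i, j)
        return best_pair, best

    rng = np.random.default_rng(1)
    plumes = rng.uniform(0.5, 2.0, (6, 32))   # six propellants, 32 candidate sub-bands
    background = rng.uniform(0.5, 2.0, 32)
    print(best_band_pair(plumes, background))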
A real-time video mosaicking and target tracking system
This paper describes an automatic video target tracking system that operates on the panoramic image provided by an image mosaicking preprocessing stage. In the mosaic preprocessing stage, a feature-based algorithm is applied to obtain the underlying homography between consecutive frames in a video sequence. With the first frame in the sequence chosen as the base image plane, subsequent frames are warped and merged into a panoramic scene for the video tracking stage. The tracking algorithm calculates the motion vector for each block in a warped frame by comparing it with the panoramic image, and those exceeding the dominant background motion can be considered as blocks belonging to potential moving foreground objects. Image segmentation is then used to recover the boundaries of the foreground objects. After fusing the labeled boundaries with the motion vector information, the potential targets, as well as their feature vectors, are identified. The feature vectors include information pertaining to location, size, and optical characteristics, and are input into a sub-tracker for record keeping. The input to the proposed system is a video stream from a single camera.
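The per-block motion test can be sketched with brute-force block matching against the panorama, followed by a comparison with the dominant (background) motion; the block size, search window, and deviation threshold below are illustrative, not the paper's parameters.

    import numpy as np

    def block_motion(frame, panorama, block=16, search=4):
        """Return one (dy, dx) motion vector per block, found by minimising
        the sum of absolute differences over a small search window."""
        vectors = []
        for y in range(0, frame.shape[0] - block + 1, block):
            for x in range(0, frame.shape[1] - block + 1, block):
                ref = frame[y:y + block, x:x + block].astype(int)
                best, best_v = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy and yy + block <= panorama.shape[0] \
                                and 0 <= xx and xx + block <= panorama.shape[1]:
                            sad = np.abs(ref - panorama[yy:yy + block,
                                                        xx:xx + block].astype(int)).sum()
                            if sad < best:
                                best, best_v = sad, (dy, dx)
                vectors.append(best_v)
        return np.array(vectors)

    frame = np.random.randint(0, 255, (64, 64))
    vectors = block_motion(frame, frame)        # self-comparison: vectors near (0, 0)
    dominant = np.median(vectors, axis=0)
    moving = np.abs(vectors - dominant).sum(axis=1) > 2   # blocks defying background motion
    print(moving.sum(), "candidate foreground blocks")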
Target tracking, moving target detection, stabilisation and enhancement of airborne video
The ability to automatically detect and track moving targets whilst stabilising and enhancing the incoming video would be highly beneficial in a range of aerial reconnaissance scenarios. We have implemented a number of image-processing algorithms on our ADEPT hardware to perform these and other useful tasks in real-time. Much of this functionality is currently being migrated onto a smaller PC104 form-factor implementation that would be ideal for UAV applications. In this paper, we show results from both software and hardware implementations of our current suite of algorithms using synthetic and real airborne video. We then investigate an image processing architecture that integrates mosaic formation, stabilisation and enhancement functionality using micro-mosaics, an architecture which yields benefits for all the processes.
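One common stabilisation scheme consistent with this description accumulates inter-frame motion into a camera trajectory, smooths the trajectory, and corrects each frame by the difference. The sketch below assumes translation-only motion estimates and synthetic jitter; the ADEPT pipeline is not documented at this level.

    import numpy as np

    def stabilising_offsets(interframe, window=5):
        """interframe: (n, 2) per-frame (dy, dx) motion estimates. Returns the
        per-frame shift that moves the raw trajectory onto its moving average."""
        trajectory = np.cumsum(interframe, axis=0)
        kernel = np.ones(window) / window
        smoothed = np.stack([np.convolve(trajectory[:, k], kernel, mode="same")
                             for k in range(2)], axis=1)
        return smoothed - trajectory      # apply this shift when warping each frame

    jitter = np.random.default_rng(2).normal(0, 3, (50, 2))
    print(stabilising_offsets(jitter)[:3])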
Lasers, Sensors, and Information Transmission
Performance requirements for the laser event recorder
Lasers for defense applications continue to grow in power and fill in new portions of the spectrum, expanding the laser eye safety hazard, particularly to aircrew and aviation safety. The Laser Event Recorder Program within Naval Air Systems Command (NAVAIR) seeks to develop a low-cost, self-contained laser sensor able to detect, warn of, and record laser exposures that endanger aircrew vision. The spectral and temporal range of hazardous lasers (400 to 1600 nm and pulsed to continuous) has presented a challenge in the past. However, diffractive optics and imaging technologies have enabled a solution to this growing threat. This paper will describe the technical requirements for the Laser Event Recorder, which are based on ANSI Z136.1 laser safety standards and common to its use on any platform. To support medical and operational laser eye protection, the LER extracts and records laser wavelength, radiant exposure, exposure duration, pulse structure, latitude, longitude, altitude, and time of laser exposure. Specific performance and design issues of the LER prototype will be presented in a companion paper. In this paper, fundamental challenges to the requirements encountered during the first two years of research, development, and successful outdoor testing will be reviewed. These include discrimination against all outdoor light levels and the impact of atmospheric beam propagation on accuracy of the radiant exposure determination. Required accuracy and response time of the determination of whether a laser exposure exceeds the maximum permissible exposure (MPE) will be described. Ongoing efforts to coordinate laser exposure reporting and medical management will also be discussed.
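At its core the LER must decide, in real time, whether a measured radiant exposure exceeds the MPE for the measured wavelength and duration. The sketch below shows only that decision structure; the threshold is an explicitly labelled placeholder, not a value from ANSI Z136.1, which tabulates MPE as a function of wavelength and exposure duration.

    def exceeds_mpe(wavelength_nm, radiant_exposure_j_cm2, duration_s, mpe_table=None):
        """Compare a measured radiant exposure against an MPE limit.
        mpe_table, if supplied, is a callable implementing the standard's
        wavelength- and duration-dependent tables."""
        PLACEHOLDER_MPE = 1e-6    # J/cm^2 -- illustrative only, NOT from ANSI Z136.1
        limit = PLACEHOLDER_MPE if mpe_table is None else mpe_table(wavelength_nm, duration_s)
        return radiant_exposure_j_cm2 > limit

    print(exceeds_mpe(532, 5e-6, 1e-8))   # illustrative pulsed visible exposure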
Ground and air test performance of the laser event recorder
Craig R. Schwarze, Robert Vaillancourt, David Carlson, et al.
The primary objective of this effort is to develop a low-cost, self-powered, and compact laser event recorder and warning sensor for the measurement of laser events. Previously we reported on the technology and design of the Laser Event Recorder. In this paper we describe results from a series of ground and airborne tests of the Laser Event Recorder.
CCD sensor and camera for 100 Mfps burst frame rate image capture
Leonid Lazovsky, Daniel Cismas, Gary Allan, et al.
A CCD sensor capable of a 100 Mfps (million frames per second) burst rate has been designed, fabricated, and tested. The ILT-architecture sensor can capture 16 successive frames of 64x64 pixels with time resolution down to 10 ns. Each pixel consists of a photosite and 16 storage elements arranged in two separate CCD shift registers of 8 elements each. The shift registers connect continuously (and serially) from pixel to pixel to form a column. During burst integration, charge from the photosite is read out alternately upward and downward into the storage elements. During readout, the photosite is reset while previously integrated charge packets are transferred into horizontal CCD registers located at opposite sides of the sensor. Lag as low as 10% at the 100 Mfps burst frame rate has been demonstrated. To compensate for the low fill factor, a microlens array is attached to the die.
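A minimal behavioural sketch of the in-pixel storage scheme described above: a photosite feeding two 8-element shift registers that are filled alternately ("upward" and "downward") over the 16-frame burst. This models only the readout order, not the device physics.

    def burst_capture(photosite_samples):
        """photosite_samples: 16 integrated charge packets, in exposure order.
        Returns the two 8-element registers as they stand after the burst."""
        up, down = [], []
        for i, charge in enumerate(photosite_samples):
            (up if i % 2 == 0 else down).append(charge)  # alternate transfer direction
        return up, down

    up, down = burst_capture(list(range(16)))
    print("up register:  ", up)       # frames 0, 2, ..., 14
    print("down register:", down)     # frames 1, 3, ..., 15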
Self-sustainability of optical fibers in airborne communications
Anurag Dwivedi, Eric J. Finnegan
A large number of communications technologies co-exist today in both civilian and military space, with their relative strengths and weaknesses. The information carrying capacity of optical fiber communication, however, surpasses any other communications technology in use today. Additionally, optical fiber is immune to environmental effects and detection, and can be designed to be resistant to exploitation and jamming. However, fiber-optic communication applications are usually limited to static, pre-deployed cable systems. Enabling fiber applications in dynamically deployed and ad-hoc conditions will open up a large number of communication possibilities in terrestrial, aerial, and oceanic environments. Of particular relevance are bandwidth-intensive data, video, and voice applications such as airborne imagery, multispectral and hyperspectral imaging, surveillance, and communications disaster recovery through surveillance platforms like airships (also called balloons, aerostats, or blimps) and Unmanned Aerial Vehicles (UAVs). Two major considerations in the implementation of airborne fiber communications are (a) mechanical sustainability of optical fibers, and (b) variation in optical transmission characteristics of fiber in dynamic deployment conditions. This paper focuses on the mechanical aspects of airborne optical fiber and examines the ability of un-cabled optical fiber to sustain its own weight and wind drag in airborne communications applications. Since optical fiber is made of silica glass, the material fracture characteristics, sub-critical crack growth, strength distribution, and proof stress are the key parameters that determine the self-sustainability of optical fiber. Results are presented in terms of maximum self-sustainable altitudes for three types of optical fibers, namely silica-clad, titania-doped silica-clad, and carbon-coated hermetic fibers, for short and long service periods and a range of wind profiles and fiber dimensions.
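The core feasibility question reduces to a hanging-fiber calculation: the stress at the top of a vertically suspended fiber of length L is rho*g*L regardless of cross-section, so the self-sustainable length is sigma_proof/(rho*g). The sketch below uses a textbook silica density and common proof-test levels as assumptions, and omits the wind drag, strength statistics, and crack growth that the paper treats.

    RHO_SILICA = 2200.0   # kg/m^3, approximate density of fused silica
    G = 9.81              # m/s^2

    def max_hanging_length_m(proof_stress_pa):
        """Length at which the tension-induced stress at the fiber's top
        support reaches the proof stress: L_max = sigma / (rho * g)."""
        return proof_stress_pa / (RHO_SILICA * G)

    for sigma_gpa in (0.35, 0.7, 1.4):     # roughly 50, 100, and 200 kpsi proof tests
        print(f"{sigma_gpa} GPa -> {max_hanging_length_m(sigma_gpa * 1e9) / 1e3:.0f} km")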