- Front Matter: Volume 8360
- Sensors and Systems I
- Sensors and Systems II
- Image Processing I
- Image Processing II
- Degraded Visual Environments (DVE) Requirements and Flight Systems
- Degraded Visual Environments (DVE) Symbology and Technologies I
- Degraded Visual Environments (DVE) Symbology and Technologies II
- Poster Session
Front Matter: Volume 8360
Front Matter: Volume 8360
This PDF file contains the front matter associated with SPIE Proceedings Volume 8360, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Sensors and Systems I
Foveated imager providing reduced time-to-threat detection for micro unmanned aerial system
A foveated imager providing a panoramic field of view with simultaneous region of interest optical zoom for use on a
micro unmanned aerial vehicle is described. The foveated imager reduces size, weight and power by imaging both wide
and telephoto fields onto a single detector. The balance of resolution between panoramic and zoom fields is optimized
against the goals of threat detection and identification with a small unmanned aerial system, resulting in a 3X reduction
in target identification time compared to conventional systems. A description of the design trades and the evaluation of a
prototype electro-optical system are provided.
Condor TAC: EO/IR tactical aerial reconnaissance photography system
Based on the experience gained with the Condor2 long-range oblique photography (LOROP) camera, ELOP is
expanding its airborne reconnaissance product line with the Condor TAC tactical photography system. The latter was
designed for day-and-night overflight imaging of extended areas from a fighter or special-mission aircraft. The
Condor TAC is mounted in an aerodynamically shaped pod and can operate over a wide envelope of flight altitudes and
speeds. Besides the camera, the pod contains a mission management and video processing unit (MVU), a solid-state recorder
(SSR), a wide-band data link (DL) for real-time imagery transmission, and two environmental control units (ECU).
Complex multi-segment optical windows were successfully developed for the system.
The camera system design is modular and highly flexible. Two independent imaging payload modules are mounted
inside a gimbal system. Each of the modules is equipped with a strap-down IMU, and may carry a cluster of cameras or
a single large camera with gross weight up to 35 kg. The payload modules are interchangeable, with an identical
interface to the gimbal. The modularity and open architecture of the system facilitate its adaptation to various
operational requirements, as well as easy and relatively inexpensive upgrades and configuration changes.
In the current configuration, both EO and IR payload modules are equipped with a combination of longer focal length
cameras for bi-directional panoramic scan at medium and high flight altitudes, and shorter focal length cameras for
fixed wide angle coverage at low altitudes. All the camera types are equipped with standard format, off-the-shelf area
detector arrays. Precise motion compensation is achieved by calibrated back-scan mirrors.
Airborne infrared hyperspectral imager for intelligence, surveillance, and reconnaissance applications
Persistent surveillance and collection of airborne intelligence, surveillance and reconnaissance information
is critical in today's warfare against terrorism. High resolution imagery in visible and infrared bands
provides valuable detection capabilities based on target shapes and temperatures. However, the spectral
resolution provided by a hyperspectral imager adds a spectral dimension to the measurements, leading to
additional tools for detection and identification of targets, based on their spectral signature. The Telops
Hyper-Cam sensor is an interferometer-based imaging system that enables the spatial and spectral analysis
of targets using a single sensor. It is based on the Fourier-transform technology yielding high spectral
resolution and enabling high accuracy radiometric calibration. It provides datacubes of up to 320×256
pixels at spectral resolutions as fine as 0.25 cm-1. The LWIR version covers the 8.0 to 11.8 μm spectral
range. The Hyper-Cam has been recently used for the first time in two compact airborne platforms: a belly-mounted
gyro-stabilized platform and a gyro-stabilized gimbal ball. Both platforms are described in this
paper, and successful results of high-altitude detection and identification of targets, including industrial
plumes, and chemical spills are presented.
UAV-based multi-spectral environmental monitoring
Thomas Arnold,
Martin De Biasio,
Andreas Fritz,
et al.
This paper describes an airborne multi-spectral imaging system which is able to simultaneously capture three
visible (400-670nm at 50% FWHM) and three near infrared channels (670-1000nm at 50% FWHM). The first
prototype was integrated in a Schiebel CAMCOPTER® S-100 VTOL (Vertical Take-Off and Landing) UAV
(Unmanned Aerial Vehicle) for initial test flights in spring 2010. The UAV was flown over land containing
various types of vegetation. A miniaturized version of the initial multi-spectral imaging system was developed in
2011 to fit into a more compact UAV. The imaging system captured six bands with a minimal spatial resolution
of approx. 10cm x 10cm (depending on altitude). Results show that the system is able to resist the high vibration
level during flight and that the actively stabilized camera gimbal compensates for rapid roll/tilt movements of
the UAV. After image registration the acquired images are stitched together for land cover mapping and flight
path validation. Moreover, the system is able to distinguish between different types of vegetation and soil. Future
work will include the use of spectral imaging techniques to identify spectral features that are related to water
stress, nutrient deficiency and pest infestation. Once these bands have been identified, narrowband filters will
be incorporated into the airborne system.
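Discrimination of vegetation from soil with visible and near-infrared bands, as described above, is commonly done with band-ratio indices. The abstract does not name a specific index, so as an illustrative sketch the standard normalized difference vegetation index (NDVI) is shown here:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    Healthy vegetation reflects strongly in the NIR and absorbs red light,
    so NDVI approaches +1 over vegetation and stays near 0 over bare soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids 0/0

# vegetation pixel (high NIR, low red) vs. soil pixel (similar band values)
print(ndvi([0.5], [0.05]))   # close to +1 (vegetation)
print(ndvi([0.3], [0.28]))   # close to 0 (soil)
```

Narrowband filters matched to such index bands are exactly the kind of hardware refinement the abstract proposes as future work.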
Sensors and Systems II
Characterization and discrimination of large caliber gun blast and flash signatures
Two hundred and one firings of three 152 mm howitzer munitions were observed to characterize firing signatures of a
large caliber gun. Muzzle blast expansion was observed with high-speed (1600 Hz) optical imagery. The trajectory of the
blast front was well approximated by a modified point-blast model described by constant rate of energy deposition.
Visible and near-infrared (450 - 850 nm) spectra of secondary combustion were acquired at ~0.75 nm spectral resolution
and depict strong contaminant emissions including Li, Na, K, Cu, and Ca. The O2 (X→b) absorption band is evident in
the blue wing of the potassium D lines and was used for monocular passive ranging accurate to within 4 - 9%. Time-resolved
midwave infrared (1800 - 6000 cm-1) spectra were collected at 100 Hz and 32 cm-1 resolution. A low
dimensional radiative transfer model was used to characterize plume emissions in terms of area, temperature, soot
emissivity, and species concentrations. Combustion emissions have ~100 ms duration, 1200 - 1600 K temperature, and
are dominated by H2O and CO2. Non-combusting plume emissions last ~20 ms, are 850 - 1050 K, and show significant
continuum (emissivity ~0.36) and CO structure. Munitions were discriminated with 92 - 96% classification accuracy
using only 1 - 3 firing signature features.
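The modified point-blast scaling mentioned above can be written out explicitly. A classical Sedov-Taylor blast front grows as R ∝ (E t²/ρ)^(1/5); with a constant rate of energy deposition Ė, the deposited energy is E(t) = Ė t, giving R ∝ (Ė t³/ρ)^(1/5). A minimal sketch (the dimensionless constant ξ and the inputs are illustrative, not the authors' fitted values):

```python
def blast_radius(t, e_dot, rho, xi=1.0):
    """Modified point-blast front radius with constant energy deposition
    rate e_dot: E(t) = e_dot * t, so R(t) = xi * (e_dot * t**3 / rho)**(1/5).
    xi is a dimensionless constant fitted to imagery (illustrative here)."""
    return xi * (e_dot * t**3 / rho) ** 0.2

# scaling check: doubling time multiplies the radius by 2**(3/5) ~ 1.516
r1 = blast_radius(1.0, 1.0, 1.0)
r2 = blast_radius(2.0, 1.0, 1.0)
print(r2 / r1)
```

Fitting ξ and Ė to the high-speed imagery of the blast-front trajectory is the kind of regression the characterization describes.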
Fiber optic snapshot hyperspectral imager
OPTRA is developing a snapshot hyperspectral imager (HSI) employing a fiber optic bundle and dispersive
spectrometer. The fiber optic bundle converts a broadband spatial image to an array of fiber columns which serve as
multiple entrance slits to a prism spectrometer. The dispersed spatially resolved spectra are then sampled by a two-dimensional
focal plane array (FPA) at a greater than 30 Hz update rate, thereby qualifying the system as snapshot.
Unlike snapshot HSI systems based on computed tomography or coded apertures, our approach requires only the
remapping of the FPA frame into hyperspectral cubes rather than a complex reconstruction. Our system has high
radiometric efficiency and throughput supporting sufficient signal to noise for hyperspectral imaging measurements
made over very short integration times (< 33 ms). The overall approach is compact, low cost, and contains no moving
parts, making it ideal for unmanned airborne surveillance. In this paper we present a preliminary design for the fiber
optic snapshot HSI system.
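The computational step the abstract contrasts with tomographic reconstruction is a fixed remapping from FPA pixels to cube elements. Assuming calibration produces a per-element (row, col) lookup table into the FPA frame (the table layout here is hypothetical), the remap reduces to a single indexing operation:

```python
import numpy as np

def remap_to_cube(frame, pixel_map):
    """Remap one FPA frame into a hyperspectral cube.

    frame:     2-D FPA readout.
    pixel_map: int array of shape (ny, nx, n_bands, 2); pixel_map[y, x, b]
               holds the (row, col) FPA location of spatial sample (y, x)
               at band b, measured once during calibration.
    """
    return frame[pixel_map[..., 0], pixel_map[..., 1]]

# toy example: 2x2 FPA, one spatial sample with two spectral bands
frame = np.array([[10, 20], [30, 40]])
pmap = np.array([[[[0, 0], [1, 1]]]])   # shape (1, 1, 2, 2)
print(remap_to_cube(frame, pmap))       # cube of shape (1, 1, 2)
```

Because this is a table lookup rather than an inverse problem, it can keep up with the >30 Hz frame rate the abstract cites.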
Lidar flecks: modeling the influence of canopy type on tactical foliage penetration by airborne, active sensor platforms
Our research focuses on the Army's need for improved detection and characterization of targets beneath the
forest canopy. By investigating the integration of canopy characteristics with emerging remote data collection
methods, foliage penetration-based target detection can be greatly improved. The objective of our research was
to empirically model the effect of pulse repetition frequency (PRF) and flight heading/orientation on the success of
foliage penetration (FOPEN) from LIDAR airborne sensors. By quantifying canopy structure and understory
light we were able to improve our predictions of the best possible airborne observation parameters (required
sensing modalities and geometries) for foliage penetration. Variations in canopy openness profoundly influenced
light patterns at the forest floor. Sunfleck patterns (brief periods of direct light) are analogous to potential
"LIDAR flecks" that reach the forest floor, creating a heterogeneous environment in the understory. This
research expounds on knowledge of canopy-specific characteristics to influence flight geometries for prediction of
the most efficient foliage penetrating orientation and heading of an airborne sensor.
Image stabilization for moving platform surveillance
Line of sight jitter degrades the image and real-time video quality of a high-performance sighting system, resulting in reduced detection/recognition/identification ranges. Line of sight jitter results from residual dynamics imparted on the sighting system by the host platform. A scheme for fine image/video stabilization in the presence of high-magnitude line of sight jitter is presented in this paper. The proposed scheme is a combination of conventional gyro stabilization of the payload line of sight (i.e. Mechanical Image Stabilization, MIS) and Digital Image Stabilization (DIS). The gyro stabilization technique is used to minimize the computation requirement of the DIS technique. The proposed technique has been implemented and evaluated using standard hardware (SMT8039). Inclusion of DIS algorithms gave an additional disturbance isolation of at least 10 dB in our experiments, while the image smoothness index also improved by a factor of four or more. The proposed method also indicated that higher image smoothness and disturbance isolation are possible at comparatively higher frequencies (limited by the computation capability of the computing platform).
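The abstract does not disclose which DIS algorithm runs on the hardware; a common building block for digital stabilization is global translation estimation by phase correlation, sketched here under that assumption:

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer (dy, dx) translation that maps ref onto cur.
    The inverse FFT of the normalized cross-power spectrum peaks at the
    relative shift; a stabilizer would then warp/crop the frame to remove
    the residual jitter left over after gyro stabilization."""
    R = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12                    # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak location to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, (3, -5), axis=(0, 1))      # simulated residual jitter
print(phase_correlation_shift(ref, cur))      # (3, -5)
```

Running this only on the small residual left by the gyro loop, rather than on the full platform motion, is what keeps the digital stage's computation load low.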
Enhanced intelligence through optimized TCPED concepts for airborne ISR
Current multinational operations show an increased demand for high quality actionable intelligence for different
operational levels and users. In order to achieve sufficient availability, quality and reliability of information, various ISR
assets are orchestrated within operational theatres. Especially airborne Intelligence, Surveillance and Reconnaissance
(ISR) assets provide - due to their endurance, non-intrusiveness, robustness, wide spectrum of sensors and flexibility to
mission changes - significant intelligence coverage of areas of interest.
An efficient and balanced utilization of airborne ISR assets calls for advanced concepts for the entire ISR process
framework including the Tasking, Collection, Processing, Exploitation and Dissemination (TCPED).
Beyond this, the employment of current visualization concepts, shared information bases and information customer
profiles, as well as an adequate combination of ISR sensors with different information ages and dynamic (online) re-tasking
process elements, enables the optimization of interlinked TCPED processes towards higher process robustness,
shorter process duration, more flexibility between ISR missions and, finally, adequate "entry points" for information
requirements by operational users and commands. In addition, relevant trade-offs of distributed and dynamic TCPED
processes are examined and future trends are depicted.
Image Processing I
Shape-based topologies for real-time onboard image generation
The field-steerable mirror (FSM) infrared camera system used in persistent surveillance systems provides wide
area coverage using a smaller number of cameras. The mirror positions step through the field of view in an a priori
known manner, and the resulting images must be stitched together using image features. This is because the platform
motion between mirror positions makes it difficult to exploit a priori knowledge of the mirror positions. The mosaic
generation mechanism developed at ITT Exelis utilizes a calibration step which uses elementary shapes that are
joined continuously to create complex topologies that capture platform movement. This shape topology process
can be extended to other platforms and systems. This paper presents the process by which the metadata is used
in the calibration step that will ultimately allow for real-time infrared image mosaic generation. By using the
geographic coordinates, found in the image meta-data, we are able to estimate the amount of overlap between
any two images to be stitched, preventing the need for unnecessary and expensive image feature extraction and
matching. This is achieved by using a polygon clipping approach to determine the vertex coordinates of the
captured images in order to estimate overlap and disconnection in the field of view.
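The overlap estimate described above reduces to clipping one image footprint against another and measuring the surviving area. A minimal sketch using Sutherland-Hodgman clipping, which is adequate for the convex quadrilateral footprints of frame imagery (the authors' exact clipping routine is not specified):

```python
def clip_polygon(subject, clipper):
    """Sutherland-Hodgman: clip polygon `subject` against convex polygon
    `clipper` (both lists of (x, y) vertices in counter-clockwise order)."""
    def inside(p, a, b):            # p lies left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p, q, a, b):      # segment pq with infinite line ab
        d1 = (q[0]-p[0], q[1]-p[1])
        d2 = (b[0]-a[0], b[1]-a[1])
        t = ((a[0]-p[0])*d2[1] - (a[1]-p[1])*d2[0]) / (d1[0]*d2[1] - d1[1]*d2[0])
        return (p[0] + t*d1[0], p[1] + t*d1[1])

    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i+1) % len(clipper)]
        if not out:
            break
        prev, inp = out[-1], out
        out = []
        for cur in inp:
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    out.append(intersect(prev, cur, a, b))
                out.append(cur)
            elif inside(prev, a, b):
                out.append(intersect(prev, cur, a, b))
            prev = cur
    return out

def area(poly):
    """Shoelace formula."""
    n = len(poly)
    if n < 3:
        return 0.0
    return abs(sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
                   for i in range(n))) / 2

# two unit-square footprints offset by half a side: 25% overlap
f1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
f2 = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(area(clip_polygon(f1, f2)) / area(f1))  # 0.25
```

When the overlap ratio falls below a threshold (or the clip result is empty, indicating a disconnected field of view), the expensive feature extraction and matching step can be skipped, as the abstract describes.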
Saliency region selection in large aerial imagery using multiscale SLIC segmentation
Advances in new sensing hardware like GigE cameras and fast-growing data transmission capability create an imbalance
between the amount of large-scale aerial imagery and the means available for processing it. Selection of saliency
regions can significantly reduce the prospecting time and computation cost for the detection of objects in large-scale
aerial imagery. We propose a new approach using multiscale Simple Linear Iterative Clustering (SLIC) technique to
compute the saliency regions. SLIC quickly creates compact and uniform superpixels, based on the distances in both
color and geometric spaces. When a salient structure of the object is over-segmented by SLIC, a number of
superpixels will follow the edges in the structure and therefore acquire irregular shapes. Thus, the superpixel
deformation betrays the presence of salient structures. We quantify the non-compactness of the superpixels as a salience
measure, which is computed using the distance transform and the shape factor. To treat objects or object details of
various sizes in an image, or the multiscale images, we compute the SLIC segmentations and the salient measures at
multiple scales with a set of predetermined sizes of the superpixels. The final saliency map is a sum of the salience
measures obtained at multiple scales. The proposed approach is fast, requires no user-defined parameters, produces
well-defined salient regions at full resolution, and is adapted to multi-scale image processing.
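The non-compactness idea can be illustrated independently of the segmenter. Assuming SLIC (e.g. from scikit-image) has already produced a label map, one crude pixel-based proxy for the distance-transform/shape-factor salience described above is the isoperimetric deficit 1 - 4πA/P² per superpixel (this exact formula is our illustration, not necessarily the authors' measure):

```python
import numpy as np

def superpixel_salience(labels):
    """Per-label non-compactness: 1 - 4*pi*area/perimeter**2, using pixel
    counts for area and boundary-pixel counts for perimeter. Elongated,
    edge-following superpixels score higher than compact blobs."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]    # label change below
    b[1:, :]  |= labels[1:, :] != labels[:-1, :]    # label change above
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]    # label change right
    b[:, 1:]  |= labels[:, 1:] != labels[:, :-1]    # label change left
    b[0, :] = b[-1, :] = b[:, 0] = b[:, -1] = True  # image border
    sal = {}
    for lab in np.unique(labels):
        mask = labels == lab
        area = int(mask.sum())
        perim = int((mask & b).sum())
        sal[int(lab)] = max(0.0, 1.0 - 4 * np.pi * area / max(perim, 1) ** 2)
    return sal

labels = np.zeros((10, 10), dtype=int)
labels[8:, :] = 1          # label 1: thin 2x10 strip; label 0: 8x10 block
s = superpixel_salience(labels)
print(s[1] > s[0])         # the elongated strip is the more "salient" one
```

Summing such per-label scores across SLIC runs at several preset superpixel sizes gives a multiscale saliency map in the spirit of the abstract.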
Context switching system and architecture for intelligence, surveillance, and reconnaissance
Given the increasing utilization and dependence on ISR information, operators and imagery analysts monitoring
intelligence feeds seek a capability to reduce processing overload in transformation of ISR data to actionable
information. The objective they seek is improvement in time critical targeting (TCT) and response time for mission
events. Existing techniques addressing this problem are inflexible and lack a dynamic environment for adaptation to
changing mission events. This paper presents a novel approach to ISR information collection, processing, and response,
called the ISR Context Switching System (ISR-CSS). ISR-CSS enables ground, sea, and airborne sensors to perform
preliminary analysis and processing of data automatically at the platform before transferring actionable information back
to ground-base operators and intelligence analysts. The on-platform processing includes a catalogue of filtering
algorithms concatenated with associated compression algorithms that are automatically selected based on dynamic
mission events. The filtering algorithms employ tunable parameters and sensitivities based on the original mission plan
along with associated Essential Elements of Information (EEI), data type, and analyst/user preferences. As a mission
progresses, ISR-CSS incorporates adaptive parameter updates (model-based, statistics-based, learning-based, and event-driven),
providing more tactically relevant data. If a mission changes dramatically, such that unexpected manual
guidance is required, then ISR-CSS allows tactical end-user direct-to-sensor tasking. To address information overload,
ISR-CSS filters and prioritizes data according to end-user preferences. ISR-CSS dispenses
mission-critical and timely actionable information for end-user utilization, enabling faster response to a greater range of
threats across the mission spectrum.
Image Processing II
Robust tracking and anomaly detection in video surveillance sequences
Hoover F. Rueda,
Luisa F. Polania,
Kenneth E. Barner
In this paper, the authors examine the problem of tracking people in both bright and dark video sequences. In
particular, this problem is treated as a background/foreground decomposition problem, where the static part
corresponds to the background, and moving objects to the foreground. Taking this into account, the problem
is formulated as a rank minimization problem of the form X = L + S + E, where X is the captured scene,
L is the low-rank part (background), S is the sparse part (foreground) and E is the corrupting uniform noise
introduced in the capture process. Low-rank and sparse structures are widely studied, and areas
such as Robust Principal Component Analysis (RPCA) and Matrix Completion (MC) have emerged to solve this
kind of problem. Here we compare the performance of three different methods in solving the RPCA optimization
problem for background separation: the augmented Lagrange multiplier method, the Bayesian Markov dependency
method, and the bilateral random projections method. Furthermore, a preprocessing light normalization stage and
a mathematical morphology based post-processing stage are proposed to obtain better results.
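One of the three compared solvers, the augmented Lagrange multiplier method, can be sketched as follows (the inexact-ALM variant with illustrative default parameters; the paper's exact settings are not given). Each iteration alternates singular-value thresholding for the low-rank background L with soft thresholding for the sparse foreground S:

```python
import numpy as np

def rpca_ialm(X, lam=None, tol=1e-7, max_iter=200):
    """Inexact augmented Lagrange multiplier solver for X ~ L + S."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_X = np.linalg.norm(X, 'fro')
    sigma1 = np.linalg.norm(X, 2)
    Y = X / max(sigma1, np.abs(X).max() / lam)   # dual variable init
    mu, rho = 1.25 / sigma1, 1.5
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding
        T = X - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = X - L - S
        Y = Y + mu * Z
        mu *= rho
        if np.linalg.norm(Z, 'fro') <= tol * norm_X:
            break
    return L, S

# synthetic "video" matrix: rank-2 background plus 5% sparse foreground
rng = np.random.default_rng(1)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
S0 = np.zeros((60, 40))
mask = rng.random((60, 40)) < 0.05
S0[mask] = 10.0 * np.sign(rng.standard_normal(mask.sum()))
L, S = rpca_ialm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # small recovery error
```

In the surveillance setting, each column of X is a vectorized frame, L recovers the static background, and thresholding S yields the moving-object mask.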
A static architecture for compressive target tracking
Traditional approaches to persistent surveillance generate prodigious amounts of data, stressing storage, communication,
and analysis systems. As such, they are well suited for compressed sensing (CS) concepts. Existing
demonstrations of compressive target tracking have utilized time-sequences of random patterns, an approach
that is sub-optimal for real world dynamic scenes. We have been investigating an alternative architecture that
we term SCOUT (the Static Computational Optical Undersampled Tracker), which uses a pair of static masks
and a defocused detector to acquire a small number of measurements in parallel. We will report on our working
prototypes that have demonstrated successful target tracking at 16x compression.
Parallax visualization of UAV FMV and WAMI imagery
The US Military is increasingly relying on the use of unmanned aerial vehicles (UAV) for intelligence, surveillance, and
reconnaissance (ISR) missions. Complex arrays of Full-Motion Video (FMV), Wide-Area Motion Imaging (WAMI)
and Wide Area Airborne Surveillance (WAAS) technologies are being deployed on UAV platforms for ISR applications.
Nevertheless, these systems are only as effective as the Image Analyst's (IA) ability to extract relevant information from
the data.
A variety of tools assist in the analysis of imagery captured with UAV sensors. However, until now, none has been
developed to extract and visualize parallax three-dimensional information.
Parallax Visualization (PV) is a technique that produces a near-three-dimensional visual response to standard UAV
imagery. The overlapping nature of UAV imagery lends itself to parallax visualization. Parallax differences can be
obtained by selecting frames that differ in time and, therefore, points of view of the area of interest.
PV is accomplished using software tools to critically align a common point in two views while alternately displaying
both views in a square-wave manner. Humans produce an autostereoscopic response to critically aligned parallax
information presented alternately on a standard unaided display at frequencies between 3 and 6 Hz.
This simple technique allows for the exploitation of spatial and temporal differences in image sequences to enhance
depth, size, and spatial relationships of objects in areas of interest. PV of UAV imagery has been successfully
performed in several US Military exercises over the last two years.
Kalman filter outputs for inclusion in video-stream metadata: accounting for the temporal correlation of errors for optimal target extraction
A video-stream associated with an Unmanned System or Full Motion Video can support the extraction of ground
coordinates of a target of interest. The sensor metadata associated with the video-stream includes a time series of
estimates of sensor position and attitude, required for down-stream single frame or multi-frame ground point extraction,
such as stereo extraction using two frames in the video-stream that are separated in both time and imaging geometry.
The sensor metadata may also include a corresponding time history of sensor position and attitude estimate accuracy
(error covariance). This is required for optimal down-stream target extraction as well as corresponding reliable
predictions of extraction accuracy. However, for multi-frame extraction, this is only a necessary condition. The
temporal correlation of estimate errors (error cross-covariance) between an arbitrary pair of video frames is also
required. When the estimates of sensor position and attitude are from a Kalman filter, as typically the case, the
corresponding error covariances are automatically computed and available. However, the cross-covariances are not.
This paper presents an efficient method for their exact representation in the metadata using additional, easily computed,
data from the Kalman filter. The paper also presents an optimal weighted least squares extraction algorithm that
correctly accounts for the temporal correlation, given the additional metadata. Simulation-based examples are presented
that show the importance of correctly accounting for temporal correlation in multi-frame extraction algorithms.
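The temporal correlation exploited above has a closed form for a linear Kalman filter: if e_j is the filtered error at epoch j, then for k > j the error cross-covariance is C_{k,j} = [Π_{i=j+1..k} (I - K_i H_i) F_i] P_j, a product of the filter's own update and transition matrices applied to the stored covariance P_j. A sketch of that bookkeeping (the paper's actual metadata encoding is more elaborate than this):

```python
import numpy as np

def cross_covariance(Fs, Ks, Hs, Pj):
    """Error cross-covariance C_{k,j} between filtered epochs j and k > j.

    Fs, Ks, Hs: lists of the transition matrix F_i, Kalman gain K_i and
    measurement matrix H_i for i = j+1, ..., k (in time order).
    Pj: filtered error covariance at epoch j.
    """
    n = Pj.shape[0]
    M = np.eye(n)
    for F, K, H in zip(Fs, Ks, Hs):
        M = (np.eye(n) - K @ H) @ F @ M   # accumulate the error transition
    return M @ Pj

# scalar sanity check: one step with F=2, K=0.5, H=1, Pj=1
C = cross_covariance([np.array([[2.0]])], [np.array([[0.5]])],
                     [np.array([[1.0]])], np.array([[1.0]])) # (1-0.5)*2*1 = 1
print(C)
```

Because F_i, K_i and H_i are already produced by the filter, carrying them (or their running product) in the video metadata is the "easily computed additional data" that lets a downstream weighted least squares adjustment weight two frames correctly.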
Automated assessment of video image quality: implications for processing and exploitation
Several methods have been developed for quantifying the information potential of imagery exploited by a
human observer. The National Imagery Interpretability Ratings Scale (NIIRS) has proven to be a useful
standard for intelligence, surveillance, and reconnaissance (ISR) applications. Extensions of this approach to
motion imagery have yielded a body of research on the factors affecting interpretability of motion imagery
and the development of a Video NIIRS. Automated methods for assessing image interpretability can provide
valuable feedback for collection management and guide the exploitation and analysis of the imagery.
Prediction models that rely on image parameters, such as the General Image Quality Equation (GIQE), are
useful for conducting sensor trade studies and collection planning. Models for predicting image quality after
image acquisition can provide useful feedback for collection management. Several methods exist for still
imagery. This paper explores the development of a similar capability for motion imagery. In particular, we
propose methods for predicting the interpretability of motion imagery for exploitation by an analyst. A
similar model is considered for automated exploitation.
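The General Image Quality Equation referenced above maps sensor and collection parameters to a predicted NIIRS level. As an illustration, a commonly published form (GIQE version 4; coefficients quoted from the open literature, so treat them as indicative) is NIIRS = 10.251 - a·log10(GSD) + b·log10(RER) - 0.656·H - 0.344·(G/SNR):

```python
import math

def giqe4(gsd_inches, rer, h, g_over_snr):
    """GIQE 4 NIIRS prediction (coefficients as published in the open
    literature; shown here for illustration only).
    gsd_inches: geometric-mean ground sample distance, in inches
    rer:        geometric-mean relative edge response
    h:          geometric-mean edge-overshoot term
    g_over_snr: noise gain divided by signal-to-noise ratio
    """
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251 - a * math.log10(gsd_inches) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g_over_snr)

# finer ground sampling predicts higher interpretability
print(giqe4(10.0, 0.95, 1.0, 0.1) < giqe4(5.0, 0.95, 1.0, 0.1))  # True
```

A Video NIIRS predictor of the kind this paper proposes would extend such a parametric model with motion-imagery terms (frame rate, motion blur, compression artifacts).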
Degraded Visual Environments (DVE) Requirements and Flight Systems
Operational requirements for short-term solution in visual display specifically for Degraded Visual Environment (DVE)
Thorsten W. Eger
This paper will provide an overview of operational requirements and selected examples of visual displays for
Degraded Visual Environment (DVE). In order to assist the pilot to safely operate, control and land the helicopter in
DVE, an integrated system of various technologies that will provide improved situational awareness (SA) with
minimal interpretation is essential. SA should include information about the landing site and height, heading, speed,
drift, and rate of descent. The development of an integrated system is a long-term goal. Current legacy helicopters
do not provide a specific display. A short-term solution is the development of alternative visual displays for
operations in DVE.
BAE Systems brownout landing aid system technology (BLAST) system overview and flight test results
Rotary wing aircraft continue to experience mishaps caused by the loss of visual situational awareness and spatial
disorientation due to brownout or whiteout in dusty, sandy or snowy conditions as the downwash of the rotor blades
creates obscurant clouds that completely engulf the helicopter during approaches to land. BAE Systems has
developed a "see-through" brownout landing aid system technology (BLAST) based on a small and light weight
94GHz radar with proven ability to penetrate dust, coupled with proprietary antenna tracking, signal processing and
digital terrain morphing algorithms to produce a cognitive real-time 3D synthetic image of the ground and proximate
surface hazards in and around the landing zone. A series of ground and flight tests have been conducted at the
United States Army's Yuma Proving Ground in Arizona that reflect operational scenarios in relevant environments
to progressively mature the technology. A description of the BLAST solution developed by BAE Systems and
results from recent flight tests are provided.
LandSafe precision flight instrumentation system: the DVE solution
Helicopter hover, landing, and take-offs in dust, fog, rain, snow, and high winds is an integral part of military and
commercial flight operations. OADS has developed and flight-tested an LDV-based optical sensor suite capable of
measuring height above ground, groundspeed, and air data at a FCS capable data rate from a helicopter platform under
all environmental and weather conditions. This paper presents capabilities and flight-test results of this high-resolution
standalone Precision Flight Instrumentation System.
Degraded Visual Environments (DVE) Symbology and Technologies I
Evaluation of DVE landing display formats
In recent years, many different proposals have been published for the best design of display content for
helicopter pilot assistance during DVE landing. The guidance cues are typically shown as an overlay, possibly on top of
additional sensor or database imagery. This overlay represents the main information source for helicopter pilots during
landing. Display technology within this field applies two different principles: multicolor head-down displays (panel-mounted)
and monochrome head-up displays (helmet-mounted). For both types, the state-of-the-art imagery does not make
use of conformal symbol sets; it rather exposes the pilots to mixed views (2D forward and bird's eye view). Even
though trained pilots can easily interpret the presented data, this does not seem to be the best design for head-up displays.
A study was conducted to compare different proposed symbol sets (e.g. BOSS, DEVILA and JEDEYE). During approach
and landing trials in our helicopter simulator these different formats were presented to the pilots on head-down and
helmet-mounted displays. The evaluation of this study is based on measured flight guidance performance (objective
measures) and on questionnaires (subjective measures). The results can pave the way for the planned development of a
new conformal wide field of view perspective display for DVE landing assistance.
Use of 3D conformal symbology on HMD for a safer flight in degraded visual environment
Since the entry of coalition forces into Afghanistan and Iraq, a steep rise in the rate of accidents has
occurred as a result of flying and landing in Degraded Visual Environment (DVE) conditions.
Such conditions exist in various areas around the world and include bad weather, dust and snow landings
(brownout and whiteout) and low illumination on dark nights.
A promising solution is a novel 3D conformal symbology displayed on a head-tracked helmet-mounted
display (HMD). The 3D conformal symbology approach provides space-stabilized three-dimensional
symbology presented on the pilot's helmet-mounted display and has the potential of delivering a step
change in HMD performance. It offers an intuitive way of presenting crucial information to the pilots in
order to increase situational awareness and lower the pilots' workload, thus dramatically enhancing flight safety.
The pilots can fly "heads out" while the necessary flight and mission information is presented in an intuitive
manner, conformal with the real world and in real time.
Several evaluation trials have been conducted in the UK, US and Israel, using systems developed by Elbit
Systems, to prove the potential of the technology, the concept and the specific systems to provide a solution
for DVE flight conditions.
Developing an obstacle display for helicopter brownout situations
Project ALLFlight is DLR's initiative to diminish the problem of piloting helicopters in degraded visual conditions.
The problem arises whenever dust or snow is stirred up during landing (brownout/whiteout), effectively
blocking the crew's vision of the landing site. A possible solution comprises the use of sensors that are able
to look through the dust cloud. As part of the project, display symbologies are being developed to enable the
pilot to make use of the rather abstract and noisy sensor data. In a first stage, sensor data from very different
sensors is fused. This step contains a classification of points into ground points and obstacle points. In a second
step the result is augmented with ground databases and depicted in a synthetic head-down display. Regarding
the design, several variations in symbology are considered, including variations in color coding, continuous or
non-continuous terrain displays and different obstacle representations. In this paper we present the basic techniques
used for obstacle and ground separation. We choose a set of possibilities for the pilot display and detail
the implementation. Furthermore, we present a pilot study, including human factors assessment with focus on
usability and pilot acceptance.
High dynamic range fusion for enhanced vision
By fusing multispectral images, Enhanced Vision (EV) has proven helpful in improving the pilot's Situation Awareness
(SA) in a Degraded Visual Environment (DVE), such as low visibility or adverse observation conditions caused
by fog, dust, weak light, backlighting, etc. Numerous methods are applied to enhance and fuse optical and infrared (IR)
images to provide the pilot with as much visual detail as possible. However, most existing optical and
IR imaging devices, due to their inherent limitations, fail to acquire a wide span of light and only generate Low Dynamic Range
(LDR) images (dynamic range: the range between the lightest and darkest areas), which causes the loss of useful details.
Normal display devices cannot reveal HDR details either.
The proposed paper introduces and expands High Dynamic Range (HDR) technologies to fuse optical and IR images,
which has rarely been involved in the study of HDR Imaging to our knowledge, for Enhanced Vision to better pilot's
Situation Awareness. Two major problems should be discussed. (1) The way to generate fused image with HDR
information under DVE. (2) The method to effectively display fused HDR image with normal LDR monitors. Aiming at
application environment, HDR fusion scheme is proposed and relevant methods are explored. The experimental results
prove that our scheme is effective and would be beneficial to enhancing pilot's Situation Awareness under DVE.
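Problem (2), displaying a fused HDR image on an LDR monitor, can be illustrated with a toy fusion-and-tone-mapping pipeline. The weighted fusion rule and the global Reinhard operator L/(1+L) used here are stand-ins; the paper's actual scheme is not specified in the abstract.

```python
# Illustrative sketch (not the paper's algorithm): fuse a visible and an
# IR intensity image into an HDR radiance map, then compress it for a
# standard LDR display with the global Reinhard operator L/(1+L).
# The fusion weights are assumptions for the sketch.

def fuse_and_tonemap(vis, ir, w_vis=0.6, w_ir=0.4):
    """vis, ir: nested lists of linear intensities (values may exceed 1.0).
    Returns an 8-bit-range LDR image (values 0..255)."""
    ldr = []
    for row_v, row_i in zip(vis, ir):
        row = []
        for v, i in zip(row_v, row_i):
            hdr = w_vis * v + w_ir * i   # simple weighted fusion
            mapped = hdr / (1.0 + hdr)   # Reinhard global tone mapping
            row.append(round(255 * mapped))
        ldr.append(row)
    return ldr
```

The Reinhard operator compresses arbitrarily large radiances into [0, 1), which is what makes the HDR result displayable on an LDR monitor.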
Degraded Visual Environments (DVE) Symbology and Technologies II
Enhancement of vision systems based on runway detection by image processing techniques
N. Gulec,
N. Sen Koktas
An explicit way of facilitating approach and landing operations of fixed-wing aircraft in degraded visual environments is
presenting a coherent image of the designated runway via vision systems, thereby increasing the situational awareness
of the flight crew. Combined vision systems, in general, aim to provide the pilots with a clear view of the aircraft exterior
using information from databases and imaging sensors. This study presents a novel method consisting of image-processing
and tracking algorithms, which utilize information from navigation systems and databases along with the
images from daylight and infrared cameras, for the recognition and tracking of the designated runway throughout the
approach and landing operation. Video data simulating the straight-in approach of an aircraft from an altitude of 5000 ft
down to 100 ft is synthetically generated by a COTS tool. A diverse set of atmospheric conditions, such as fog and low
light levels, is simulated in these videos. Detection rate (DR) and false alarm rate (FAR) are used as the primary
performance metrics, and the results are presented with the metrics compared against the altitude of the aircraft.
Depending on the visual environment and the video source, DR reaches up to 98% and FAR drops as low as 5%.
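The DR/FAR-versus-altitude evaluation described above can be sketched as a small metric aggregator. The altitude bin width, the record fields, and the FAR convention (false positives over all detections) are assumptions, not taken from the paper.

```python
# Illustrative sketch of a detection-rate / false-alarm-rate evaluation
# binned by aircraft altitude. Field names, bin width, and the FAR
# convention are assumptions, not from the paper.

def rates_by_altitude(frames, bin_ft=1000):
    """frames: iterable of dicts with 'alt' (ft) and 'tp', 'fp', 'fn' counts.
    Returns {bin_floor_ft: (DR, FAR)} with DR = TP/(TP+FN), FAR = FP/(TP+FP)."""
    bins = {}
    for f in frames:
        b = int(f["alt"] // bin_ft) * bin_ft
        tp, fp, fn = bins.get(b, (0, 0, 0))
        bins[b] = (tp + f["tp"], fp + f["fp"], fn + f["fn"])
    out = {}
    for b, (tp, fp, fn) in bins.items():
        dr = tp / (tp + fn) if tp + fn else 0.0
        far = fp / (tp + fp) if tp + fp else 0.0
        out[b] = (dr, far)
    return out
```

Binning by altitude is what allows the metrics to be plotted against the descent from 5000 ft to 100 ft, as the study does.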
Visual information, sparse decomposition, and transmission for multi-UAV visual navigation
In recent years, visual navigation of unmanned aerial vehicles (UAVs) has been an active area of research. Because
flight scenes change rapidly, a large amount of visual information must be processed and transmitted under real-time
requirements, and this has become one of the major factors blocking cooperative communication in multi-UAV visual
navigation. Traditional orthogonal-decomposition methods for video images are not well suited to multi-UAV visual
navigation systems, because image quality declines sharply as the compression ratio increases. This paper proposes a
novel visual information sparse decomposition and transmission (VSDT) framework for multi-UAV visual navigation.
In this framework, targeting the characteristics of the visual information, we first pre-process the video images by
introducing a multi-scale visual information acquisition mechanism. A fast sparse decomposition of the video images is
then performed for transmission, which greatly reduces the amount of original video information while guaranteeing the
quality of the visual information needed for navigation. Finally, based on data correlations and feature matching, a
real-time transmission scheme is designed so that the receiving UAV can quickly reconstruct the flight-scene
information for navigation. Simulation results are presented and discussed.
The main advantage of this framework lies in its ability to reduce the amount of visual information transmitted while
ensuring the quality of the visual information needed for navigation, and to solve cooperative communication problems
such as information lag, data congestion, and matching errors often encountered in multi-UAV visual navigation
environments.
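The sparse-decomposition-for-transmission idea can be illustrated with a minimal residual-based encoder/decoder. The abstract does not describe the VSDT decomposition at this level, so the top-k residual model below is purely an assumption used to show why sparsity reduces the transmitted amount.

```python
# Illustrative sketch (not the VSDT algorithm): transmit only the k
# largest-magnitude coefficients of a frame's residual against the
# previous frame, and reconstruct at the receiving UAV. The value of k
# and the residual model are assumptions.

def sparse_encode(frame, prev, k):
    """frame, prev: flat lists of pixel values.
    Returns k (index, residual) pairs, sorted by index for transmission."""
    residual = [f - p for f, p in zip(frame, prev)]
    idx = sorted(range(len(residual)),
                 key=lambda i: abs(residual[i]), reverse=True)[:k]
    return [(i, residual[i]) for i in sorted(idx)]

def sparse_decode(pairs, prev):
    """Rebuild the frame from the previous frame plus transmitted residuals."""
    frame = list(prev)
    for i, r in pairs:
        frame[i] += r
    return frame
```

Only k pairs are sent per frame regardless of resolution, which is the basic trade: lower transmission volume against a bounded reconstruction error in slowly changing regions.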
Poster Session
Part-template matching-based target detection and identification in UAV videos
Hyuncheol Kim,
Jaehyun Im,
Taekyung Kim,
et al.
Detecting and identifying targets in aerial images is a challenging problem due to various image distortion
factors, such as motion of the sensing device, weather variation, scale changes, and dynamic viewpoints. For
accurate, robust recognition of objects in unmanned aerial vehicle (UAV) videos, we present a novel target detection and
identification algorithm using part-template matching. The proposed detection method partitions the target into
part-templates by an efficient extraction method based on target part regions. We also propose target identification
based on a distribution distance measurement over the target part-templates.
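The distribution-distance identification step can be illustrated with a toy measure: comparing the intensity distribution of a detected part against a stored part-template. The histogram binning and the choice of the Bhattacharyya distance are assumptions, since the abstract does not name the specific distance used.

```python
# Illustrative sketch of distribution-distance-based identification:
# compare intensity histograms of image parts with the Bhattacharyya
# distance. Bin count, value range, and the distance itself are
# assumptions; the paper's measure may differ.

import math

def histogram(values, bins=8, lo=0, hi=256):
    """Normalized intensity histogram of a part region."""
    h = [0] * bins
    w = (hi - lo) / bins
    for v in values:
        h[min(int((v - lo) / w), bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]

def bhattacharyya_distance(p, q):
    """0 for identical distributions; grows as they diverge."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))
    return -math.log(max(bc, 1e-12))
```

A part would then be assigned to the template whose distribution lies at the smallest distance, making the identification robust to small geometric misalignments that defeat pixelwise matching.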