Proceedings Volume 10642

Degraded Environments: Sensing, Processing, and Display 2018

Purchase the printed version of this volume at proceedings.com or access the digital version at the SPIE Digital Library.

Volume Details

Date Published: 23 July 2018
Contents: 10 Sessions, 26 Papers, 19 Presentations
Conference: SPIE Defense + Security 2018
Volume Number: 10642

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 10642
  • Displays and Human Performance I
  • Displays and Human Performance II
  • Systems and Processing I
  • Phenomenology and Sensing
  • Systems and Processing II
  • Displays and Human Performance III
  • GPS Denied Environments
  • MMW and DVE Phenomenology and Sensing: Joint Session with conferences 10642 and 10634
  • Poster Session
Front Matter: Volume 10642
Front Matter: Volume 10642
This PDF file contains the front matter associated with SPIE Proceedings Volume 10642, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Displays and Human Performance I
Determination of 255 just noticeable color gray level differences for improved color palette
Following research reported by the authors to SPIE (2015) and SID (2017), this paper pursues further psychophysical research into the determination of 255 Just Noticeable Color Differences (JNCDs). Given that transmissive displays (e.g., Active Matrix Liquid Crystal Displays (AMLCDs)) will continue to create their color palettes via additive color subpixel gray levels, and that the number of such gray levels will remain 255 (plus black) for most avionic, vetronic, and other commercial applications, the authors anticipate the requirement for a unique set of color gray levels, and to this end propose identifying a statistically reliable set of threshold luminances for transmissive display color channels. Additionally, the authors propose to demonstrate that once individual color primaries are established on a JND basis, any and all combination colors are also unique and distinguishable. Only in this way can 255-gray-level transmissive displays be most efficient in delivering their advertised 16.5 million colors and most effective in producing useful results, e.g., color maps. The method of research, including procedure, equipment, stimuli, and test subjects, is cited. Test results are reported, including test subject Fechner fractions for red, green, and blue, and for equivalent-color-brightness combinations of red, green, and blue.
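The construction the abstract implies can be illustrated with a short sketch: if each channel step is one just noticeable difference governed by a constant Fechner (Weber) fraction k = ΔL/L, the 255 gray levels form a geometric luminance series. The values of k and the base luminance below are assumptions for illustration, not the paper's measured results.

```python
# Hypothetical sketch: generate 255 JND-spaced gray levels for one color
# channel, assuming a constant Fechner (Weber) fraction k = dL/L.
# k and L_min are illustrative placeholders, not the paper's data.
k = 0.01          # assumed Fechner fraction for the channel
L_min = 0.05      # assumed threshold luminance (cd/m^2)
levels = [L_min * (1.0 + k) ** n for n in range(256)]  # indices 0..255

# Each adjacent pair differs by exactly one JND, so any two distinct
# gray levels remain distinguishable by construction.
print(f"level 1: {levels[0]:.3f} cd/m^2, level 255: {levels[255]:.2f} cd/m^2")
```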
HMD daylight symbology: color discrimination modeling
Thomas H. Harding, Jeffery K. Hovis, Clarence E. Rash, et al.
As the military increases its reliance upon and continues to develop Helmet Mounted Displays (HMDs), it is paramount that HMDs are developed that meet the operational needs of the Warfighter. During the development cycle, questions always arise concerning the operational requirements of the HMD, including questions concerning luminance, contrast, color, resolution, and so on. When color is implemented in HMDs, which are eyes-out, see-through displays, visual perception issues become an increased concern. A major issue with HMDs is their inherent see-through (transparent) property: color in the displayed image combines with color from the outside world, possibly producing a false perception of one or both images. Last year at this meeting, we discussed the development of a color discrimination model. Here we extend this model to evaluate the discriminability of transparent symbology from a color normal and color deficient observer perspective.
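The see-through mixing the abstract describes can be made concrete with a small sketch: for an optical combiner, the stimulus reaching the eye is approximately the additive sum of the symbol's tristimulus values and the background's, scaled by the combiner transmission. All numbers below are illustrative, not values from the authors' model.

```python
import numpy as np

# Additive see-through mixing: perceived stimulus = HMD symbol plus the
# outside-world background attenuated by the combiner transmission.
# Illustrative CIE XYZ tristimulus values, not model data.
symbol_XYZ = np.array([20.0, 15.0, 60.0])         # bluish symbol
background_XYZ = np.array([800.0, 850.0, 900.0])  # bright daylight scene
combiner_T = 0.7                                  # assumed see-through transmission

mixed_XYZ = symbol_XYZ + combiner_T * background_XYZ
# Chromaticity of the mix shows how daylight desaturates the symbol color.
x = mixed_XYZ[0] / mixed_XYZ.sum()
y = mixed_XYZ[1] / mixed_XYZ.sum()
print(f"mixed chromaticity: x={x:.3f}, y={y:.3f}")
```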
Modeling the effect of macular pigment enhancement on vision in degraded visual environments (DVE)
The macular pigment (MP) is an accumulation of lutein and zeaxanthin, carotenoids derived from dietary sources, which is primarily in the central 15° of the human visual field. MP absorbs light in the 400 to 520 nm range. Consequently the MP is a spectral filter over the photoreceptors, reducing the effects of internally scattered light and attenuating the short wavelength component of natural sunlight. The between-subject average MP optical density (OD) is about 0.2 to 0.6 log units depending on the sample population, while the range of MPOD is reportedly 0 to 1.5 log units. Some people can increase their MPOD by increasing their consumption of lutein and zeaxanthin, and this may be important for vision in DVE. Specifically, nutritional interventions and dietary supplements have produced statistically significant enhancements under laboratory conditions in visual tasks such as visibility through haze, low contrast target detection, contrast sensitivity, glare resistance and recovery, photostress recovery, dark adaptation, mesopic sensitivity, and enhanced reaction times. The question is whether these enhancements are operationally meaningful or not. The present paper begins to address the question by modeling MPOD effects on the visibility to low contrast targets seen under a range of DVE over realistic distances that incorporate atmospheric filtering. Specific model parameters include luminance, target contrast, spectral content, and distance. The model can be extended to estimate the efficacy of MPOD effects on target detection, discrimination, and standoff distances.
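As a rough sketch of the two filtering stages such a model combines, the following applies Beer-Lambert attenuation for both the macular pigment (expressed as optical density) and the atmospheric path. The triangular MP absorbance template and the extinction coefficient are placeholder assumptions, not the paper's parameters.

```python
import numpy as np

# Two cascaded spectral filters: macular pigment absorption (optical
# density) and atmospheric extinction over distance. All parameter
# values are illustrative placeholders, not the paper's model.
wavelengths = np.arange(400, 701, 10)                 # nm
mp_absorbance = np.where(wavelengths <= 520,
                         1.0 - (wavelengths - 400) / 120.0,
                         0.0)                         # crude 400-520 nm template
MPOD = 0.5                                            # assumed peak optical density (log units)
T_macular = 10.0 ** (-MPOD * mp_absorbance)           # transmission through the MP

beta = 0.2e-3                                         # assumed extinction coeff (1/m), hazy day
distance = 2000.0                                     # target distance (m)
T_atmosphere = np.exp(-beta * distance)               # Beer-Lambert path transmission

effective_spectrum = T_macular * T_atmosphere         # relative light reaching photoreceptors
```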
Review of sensor-to-eye latency effects in degraded visual environment mitigations
Thomas Schnell, Thomas Münsterer
The effect of degraded visual environments (DVE) on aviation is profound. Under reduced-visibility conditions, operational tempo slows to a fraction of its unrestricted-visibility counterpart. Mission capability envelopes are greatly reduced, and some missions cannot be flown. An even more significant problem is the threat of controlled flight into terrain and obstacles, spatial disorientation, and loss of control in flight. Of the 383 Class A and B US military flight accidents between 2002 and 2015, 25% were related to operations in DVE and accounted for 81% of the fatalities and $1 billion in lost materiel. In commercial air transport over the past 10 years, there were 1,648 fatalities in 50 accidents due to loss of control in flight. The common theme in these accidents was a lack of external visual references.

Considerable progress has been made in the development and testing of DVE mitigations, but fielded, operational systems have been elusive. New technologies such as LiDAR, EO/IR, sensor fusion, and helmet-mounted displays (HMDs) are intended to improve pilot performance and expand mission effectiveness. However, much of the effort has focused on separate components. Very little has been devoted to integrated systems, and the human factors components have been all but neglected. This has a detrimental effect on the handling qualities and operational safety of the resulting systems.

This paper offers a review of DVE mitigation technology with a focus on human factors requirements for sensor-to-eye latency, monocular, biocular, and binocular display performance, and see-through vs. non-see-through HMD display concepts.
Color and impact to HMD design
Bob Foote, Mitchell Hoffmann
Several companies are exploring customer demand for color helmet mounted displays (HMDs) suitable for day and night viewing. One major area of difficulty for designers is selecting a shade of blue that is bright enough to be seen and that can be discriminated as blue in a bright ambient environment. Customers are concerned with how rich the blue must be to satisfy the pilot, but richness can conflict with the requirement for discrimination. Rockwell Collins is designing the next generation HMD and believes that color, if not a requirement, will be an expectation in the future. To this end Rockwell Collins has conducted testing on a full color HMD display with the goal of determining the best color blue/cyan for our design. Our goal is to determine the u'v' coordinates on the CIE chart that meet both goals of brightness and discrimination. Our testing used a fixed wing HMD with a full color, 24-bit, OLED display. Multiple symbols and colors were used during the testing, and subjects were not aware of the goal of the study. It was found during the study that a much brighter blue/cyan can be presented and discriminated from other colors. Our paper discusses the testing approach and summarizes the data and results. The paper concludes with the HMD design impacts and implications of focusing less on color and more on discrimination.
Predicting depth discrimination performance under hyperstereoscopic display conditions
Charles J. Lloyd, Marc Winterbottom, Eleanor O'Keefe, et al.
This paper describes an evaluation of the capability of two tests of vision, stereoacuity and fusion recovery range, to predict depth discrimination performance for subjects using a hyperstereoscopic display system. For the hyperstereo performance evaluation, 14 subjects completed a depth discrimination task presented at multiple positions and depths in a remote vision system (RVS) simulation similar to that used by air refueling operators on the KC-46 aircraft. Prior to performing the hyperstereo task, each subject completed automated tests of stereoacuity and binocular fusion recovery range. Evaluation results indicate that both stereoacuity (R² = 0.64) and recovery range (R² = 0.45) reliably predict (p ≤ 0.01) hyperstereo depth discrimination performance. The use of a two-factor model improves predictive capability (R² = 0.73); however, the utility of including recovery range scores depended on viewing conditions. When easy viewing conditions (high contrast stimuli presented near the depth of the display) were used, performance was predicted by stereoacuity and the prediction was not improved using recovery range scores. Under difficult viewing conditions (low contrast stimuli, background clutter, crosstalk, and dipvergence), the prediction of hyperstereo performance was significantly improved by including recovery range scores. These results suggest that a binocular fusion recovery range test should be used in conjunction with a stereoacuity test to predict the performance of operators using hyperstereoscopic displays under the more difficult viewing conditions that can be expected in operational environments. Stereoscopic display design considerations and the importance of computer-based vision testing will be discussed in detail.
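A minimal sketch of the reported two-factor prediction, using ordinary least squares on random placeholder data (the study's actual subject scores are not reproduced here):

```python
import numpy as np

# Two-factor linear prediction: stereoacuity + fusion recovery range ->
# depth discrimination score. Data below are random placeholders.
rng = np.random.default_rng(0)
stereoacuity = rng.uniform(10, 60, 14)    # arcsec; 14 subjects as in the paper
recovery_range = rng.uniform(1, 8, 14)    # illustrative units
performance = -0.5 * stereoacuity - 1.2 * recovery_range + rng.normal(0, 2, 14)

X = np.column_stack([np.ones(14), stereoacuity, recovery_range])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
pred = X @ coef
ss_res = np.sum((performance - pred) ** 2)
ss_tot = np.sum((performance - performance.mean()) ** 2)
print(f"two-factor R^2 = {1 - ss_res / ss_tot:.2f}")  # analogous to the paper's 0.73
```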
Displays and Human Performance II
AFRL alternative night/day imaging technologies (ANIT) program (Conference Presentation)
The AFRL Alternative Night/Day Imaging Technologies (ANIT) Program investigates lighter replacements for traditional, fielded image intensifier tubes to enable additional desired capabilities such as all-source digital information fusion/sharing, augmented reality, multispectral sensing, and weapon cueing, while providing six-dimensional environmental protection (laser, head impact, noise, nuclear, biological, and chemical). The ANIT S&T challenge includes establishment of a technology basis for integration of separate clear night, clear day, and degraded visual environment (DVE) helmet vision systems into one with acceptable mass properties. The ANIT components track focuses on the development of digital replacements for analog devices, including 5 Mpx sensors, embedded processors, algorithms, microdisplays, interconnects, trackers, visors, objectives, and oculars. The ANIT performance objectives include provision of: (1) 20/20 Snellen acuity over each 40° circular portion of the field-of-view; (2) adaptive all-source image/information fusion; (3) photons-to-photons latency < 5 ms; and (4) integration into a small form-factor 24-hr helmet/head mounted vision system. Advanced lens systems (visors, objectives, oculars) are based on diffractive holographic, freeform surface/prism, and gradient index design principles. This program addresses an AFLCMC Tier 1 Tech Need entitled Lightweight Night Imaging and a USAF Multi-MAJCOM priority entitled Digital Helmet Mounted Display (DHMD). The approach involves several S&T efforts in a components track aimed at the development of digital replacements for analog devices and a testbed track aimed at prototyping helmet-mounted augmented reality systems (HMARS) for pilots and dismounted operators. The status of some two-dozen funded efforts will be described. Roadmaps for technology maturation and transitions to DoD acquisition offices will be presented.
Visibility of color symbology in head-up and head-mounted displays in daylight environments
Color-coded symbology has the potential to enhance the performance of people using head-up and head-mounted displays (HUDs and HMDs). The distinguishing feature of these displays is the optical combiner that presents symbology combined with the forward real-world scene. The presence of high-ambient daylight can desaturate the symbol colors, making them difficult to recognize. We defined a set of colors for testing based on color-coding conventions, color symbology research, and results from our previous testing. We then conducted a series of experiments to test the visibility and naming of color symbols and the legibility of color text mixed with daylight. Results were statistically analyzed and also modeled using color-difference formulae. Specific attention was given to the symbol color blue, and an alternative blue color with much higher visibility was proposed.
Systems and Processing I
360-degree top view inside a helmet mounted display providing obstacle awareness for helicopter operations
This paper introduces a display concept for helicopter obstacle awareness and warning systems. The key feature of the concept is the integration of a 360-degree coplanar orthogonal top view in the egocentric perspective of a helmet mounted see-through display. The concept intends to provide obstacle awareness while pilots are looking outside. It should further improve the situational and spatial awareness of helicopter pilots, and reduce their workload, when operating in challenging surroundings. The display concept is applied to two helicopter offshore operations and their specific obstacle situations. The first operation represents a hoist operation at the lower access point of an offshore wind turbine. The second regards an offshore platform landing operation. The paper depicts the two use cases and related work concerning obstacle awareness and warning systems, and recapitulates situational awareness as well as the properties of orthogonal coplanar representations in comparison to perspective representations. Thereafter, the two main aspects of the developed HMI concept are presented: first, the combination of the exocentric orthogonal coplanar top view with the egocentric perspective view, and second, three ways of integrating the top view inside the helmet mounted display. The implemented HMI design represents work in progress toward an optimal, holistic, and balanced display concept for helicopter obstacle awareness and warning systems.
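A minimal sketch of the geometry behind such a heading-up coplanar top view, assuming north/east world coordinates and illustrative names and values (not the authors' implementation):

```python
import numpy as np

# Map an obstacle's north/east position into heading-up top-view display
# coordinates: translate relative to ownship, rotate by heading, scale.
# All names and values are illustrative placeholders.
def to_top_view(obstacle_ne, ownship_ne, heading_rad, meters_per_px):
    """Return (up, right) display pixels for a heading-up top view."""
    rel = np.asarray(obstacle_ne) - np.asarray(ownship_ne)
    # Forward (heading) axis in NE coords is [cos h, sin h]; right is [-sin h, cos h].
    up = np.cos(heading_rad) * rel[0] + np.sin(heading_rad) * rel[1]
    right = -np.sin(heading_rad) * rel[0] + np.cos(heading_rad) * rel[1]
    return np.array([up, right]) / meters_per_px

px = to_top_view(obstacle_ne=(120.0, 40.0), ownship_ne=(100.0, 30.0),
                 heading_rad=np.deg2rad(45.0), meters_per_px=0.5)
```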
Synthetic vision on a head-worn display supporting helicopter offshore operations
Johannes M. Ernst, Lars Ebrecht, Stefan Erdmann
Helicopters play an important role during construction and operation of offshore wind farms. Most of the time helicopter offshore operations are conducted over open water and often in degraded visual environments. Such scenarios provide very few usable visual cues for the crew to safely pilot the aircraft. For instance, no landmarks exist for navigation and orientation is hindered by weather phenomena that reduce visibility and obscure the horizon. To overcome this problem, we are developing an external vision system which uses a non-see-through, head-worn display (HWD) to show fused sensor and database information about the surroundings. This paper focuses on one aspect of our system: the computer-generated representation of relevant visual cues of the water surface. Our motivation is to develop a synthetic view of the surroundings that is superior to the real out-the-window view. The moving water surface does not provide fixed references for orientation and sometimes even produces wrong motion cues. Thus, we replace it by a more valuable, computer-generated clear view. Since pilots estimate wind direction and speed by checking the movement characteristics of the water surface, our synthetic display also integrates this information. This paper presents several options for a synthetic vision display supporting offshore operations. Further, it comprises results from simulator trials, where helicopter pilots performed final approaches and landings on an offshore platform supported by our display. The results will contribute to the advancement of our HWD-based virtual cockpit concept. Additionally, our findings may be relevant to conventional, head-down synthetic vision displays visualizing offshore environments.
Real-time sonic boom prediction with flight guidance
Laura M. Smith-Velazquez, Erik Theunissen
Supersonic flight over land will require pilots to understand and manage sonic boom noise in real-time. For humans to understand the complex relationships of shock wave propagation in the atmosphere and where it impacts terrain, a perspective display of this information is a natural extension of current efforts using synthetic vision displays. In previous research a NASA-developed algorithm was used to calculate sonic boom predictions, Mach cut-off, and sound pressure levels for current and modified flight plans. The algorithm information was transformed into georeferenced objects, presented on navigation and guidance displays, and integrated with synthetic vision. We conducted a usability demonstration with experienced pilots to assess their ability to use the display to determine whether the flight plan avoids the generation of a sonic boom in noise-sensitive areas, their ability to modify their flight plan to resolve impact issues, and reviewed the implementation of a real-time guidance capability. This paper provides an overview of the usability demonstration and discusses the additional capability of providing a pilot alerting mechanism and automated impact evaluations.
Evaluating synthetic vision displays for enhanced airplane state awareness
Recent accident and incident data suggest that Spatial Disorientation (SD) and Loss-of-Energy State Awareness (LESA) for transport category aircraft are becoming an increasingly prevalent safety concern in domestic and international operations. A Commercial Aviation Safety Team (CAST) study of 18 loss-of-control accidents determined that a lack of external visual references (i.e., darkness, instrument meteorological conditions, or both) was associated with a flight crew's loss of attitude awareness or energy state awareness in 17 of these events. In response, CAST requested that the National Aeronautics and Space Administration (NASA) conduct research to support the definition of minimum requirements for Virtual Day-Visual Meteorological Condition (VMC) displays, also known as Synthetic Vision Systems, to accomplish the intended function of improving flight crew awareness of airplane attitude. These research data directly inform the development of minimum aviation system performance standards (MASPS) for RTCA special committee (SC)-213, "Enhanced Flight Vision Systems and Synthetic Vision Systems." An overview of NASA high-fidelity simulator research is provided that collected data specific to CAST and RTCA needs on the efficacy of synthetic vision technology to aid attitude awareness and to prevent entry into, and aid recovery from, unusual attitudes. The paper highlights our research with low-hour, international flight crews.
Phenomenology and Sensing
Passive EO imaging sensor assessment methodology
R. L. Jones, G. Passey
To support the development of technologies that enable helicopters to operate safely and effectively in degraded visual environments (DVE), a set of visual information requirements (VIRs) for pilotage have been defined as an input to a sensor assessment. A sensor assessment methodology has been defined that utilises laboratory tests, field data gathering and aircrew assessments to fine-tune industry standard EO sensor models such that the performance of sensors and combinations of sensors can be evaluated against the VIRs. The results of such an assessment utilising passive EO sensors covering wavebands from Visible to LWIR have been used to derive performance requirements for a future DVE vision system concept.
Advanced low-SWAP lidar imager for degraded visual environments
Jason Seely, James T. Murray, Paul Eason, et al.
Areté Associates has developed a low-SWAP 3D imaging lidar to enhance rotorcraft pilot situational awareness in degraded visual environments (DVE). The lidar incorporates full waveform processing with an agile scanning system and a variable pulse repetition frequency (PRF) laser into a purpose-built imaging system optimized for operations in DVE. Full waveform processing robustly eliminates false contacts from dust and other obscurants; the agile scanner provides a dense scan pattern over a user-defined, mission-based field of regard (FOR); and the laser PRF can be adjusted to provide unambiguous imaging over long ranges. Areté's DVE Lidar is the culmination of a multi-year development effort that included multiple ground and flight tests in DVE scenarios and a repackaging effort to miniaturize processing and support electronics so they could be integrated into the sensor head. We present here a system overview of the Areté DVE Lidar and highlight some of the unique capabilities that make it the system of choice for DVE operations.
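The role of the variable PRF can be seen from the standard unambiguous-range relation for a pulsed lidar, R_max = c / (2 × PRF), sketched below with illustrative PRF values:

```python
# Unambiguous range of a pulsed lidar: the next pulse must not be fired
# before the previous echo returns, so R_max = c / (2 * PRF).
# PRF values below are illustrative, not the Areté system's settings.
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_m(prf_hz: float) -> float:
    return C / (2.0 * prf_hz)

for prf in (50e3, 100e3, 200e3):
    print(f"PRF {prf / 1e3:.0f} kHz -> R_max {unambiguous_range_m(prf) / 1e3:.2f} km")
# 50 kHz -> ~3.0 km; 200 kHz -> ~0.75 km: lowering the PRF extends range.
```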
Ongoing work and improvements at the Sandia Fog Facility (Conference Presentation)
Degraded visual environments cause problems for surveillance systems and other sensors due to the reduction in contrast, range, and signal. Fog is a concern because of the frequency of its formation along our coastlines, disrupting border security, shipping, and surveillance, and sometimes causing deadly accidents. Fog reduces visibility by scattering ambient/active illumination, obscuring the environment and limiting operational capability. Sandia has created a fog facility for the characterization and testing of optical and other systems. This facility is a 180 ft. by 10 ft. by 10 ft. chamber with temperature control that can be filled with a fog-like aerosol using 64 agricultural spray nozzles. We will discuss the physical formation of fog and how it is affected by the environmental controls at our disposal. We have recently made several improvements to the facility, including temperature control, and will present the results of these improvements on the aerosol conditions. We will discuss the characterization of the fog and the instrumentation used for that characterization. In addition, we will present preliminary results from work at Sandia that leveraged this facility to investigate using polarized light to enhance the range of optical systems in fog conditions. This capability provides a platform for performing optical propagation experiments in a known, stable, and controlled environment where fog can be made on demand.
NIAG DVE flight test results of LiDAR based DVE support systems
Thomas Münsterer, Bernhard Singer, Michael Zimmermann, et al.
The paper discusses recent results of flight tests performed during the NIAG DVE flight test campaign in Manching, Germany and Alpnach, Switzerland in February 2017. The Hensoldt DVE system SFERION was mounted on two platforms in two different configurations. The first platform was a Swiss Airforce EC635 on which SFERION was mounted with the SferiSense 500 LiDAR. SFERION displayed 3D conformal symbology for in-flight and landing support purposes. The second platform was the German DLR owned and operated EC135 ACT/FHS test helicopter. There a system using SferiSense 300 LiDAR data supported the pilot during the final approach to a hover point by providing flight path monitoring, guidance and updating. In both systems the information was displayed on head-tracked helmet mounted displays (HMDs).

Specific LiDAR performance in the encountered real-life DVE conditions is discussed. A number of pilots flew the respective systems and results of pilot assessments on workload and capabilities as well as conclusions for the SFERION system are discussed.
Systems and Processing II
Integrating legacy ESVS displays in the Unity game engine
Current augmented reality (AR) and virtual reality (VR) displays are targeted at entertainment and home education purposes. However, these headsets can be used for prototyping and developing display concepts for aviation. In previous papers we demonstrated helmet mounted enhanced and synthetic vision system (ESVS) displays implemented on commercially available VR displays. One of the most widely used engines for developing VR and AR applications is the Unity game engine. While it supports a broad range of display hardware, it can be challenging to integrate legacy ESVS software, since the engine's main purpose is the fast development of virtual worlds. To avoid a complete re-write of such displays, we demonstrate techniques to integrate legacy software in Unity. In detail, we show how render plugins or texture buffers can be used to display existing ESVS output in a Unity project. We show the advantages and drawbacks of these approaches. Further, we detail problems that arise when the source software is written for a different platform, for example, when integrating OpenGL displays in a DirectX environment. While the demonstrated techniques are implemented and tested with the Unity game engine, they can be used for other game and render engines, too.
Team-centric motion planning in unfamiliar environments (Conference Presentation)
Cory Hayes, Matthew Marge, Claire Bonial, et al.
Technological advances in artificial intelligence have created an opportunity for effective teaming between humans and robots. Reliable robot teammates could enable increased situational awareness and reduce the cognitive burden on their human counterparts. Robots must operate in ways that follow human expectations for effective teaming, whether operating near their human teammates or at a distance and out of sight. This ability would allow people to better anticipate robot behavior after issuing commands. In comparison to traditional human-agnostic and proximal human-aware path planning, our work addresses a relatively unexplored third area, team-centric motion planning: robots navigating remotely in an unfamiliar area and in a way that meets a teammate's expectations. In this paper, we discuss initial work towards encoding human intention to inform autonomous robot navigation. Our approach leverages the methodology and data collected in an ongoing series of natural dialogue experiments where naive participants provide navigation instructions to a remote robot situated in an unfamiliar environment. Participants are tasked with uncovering specific information about the environment via the remote robot through real-time mapping and snapshots, requiring fine-grained robot movement that meets the intention of a given command. This sensitivity often leads to clarification commands to augment the position and orientation of the robot in order to achieve the desired instructor intention; we seek to reduce or eliminate the need for these clarification commands for more efficient task completion. Our current efforts use known participant responses to executed commands to train a reinforcement learning policy for building awareness about unknown environments. Ultimately, this approach would lead to robot movement that maximizes the amount of relevant information relayed back to a human instructor while minimizing instructor burden.
Rotorcraft pinnacle landing situational awareness system
W. Brendan Blanton, Robert C. Allen, Katherine Gresko, et al.
One of the most hazardous landing scenarios for rotorcraft is the pinnacle landing. During pinnacle landings the flight crews approach areas where traditional landings are not possible and place a portion of the aircraft on the ground (e.g., skids or back wheels) in order to extract or drop off passengers in challenging terrain. These landings are used for operations such as air assault and mountain search and rescue. In these situations, the flight crew must perform precise maneuvers in close proximity to hazards with minimal visibility. Typically, crew members provide aural cues to the pilot to guide the aircraft. These maneuvers have proven to be demanding and have led to many accidents. Boeing is developing a sensor-based pilot situational awareness capability to aid the flight crews during pinnacle landings. We will present the design trade space and operational parameters that informed our approach. In addition, we will discuss the system development, testing, and human factors considerations.
Displays and Human Performance III
Feeling a little blue: problems with the symbol color blue for see-through displays and an alternative color solution
The color blue is problematic for inclusion in a symbology color set for use by operators with head-up and helmet-mounted displays (HUDs and HMDs), as well as with augmented reality (AR) displays. The distinguishing feature of these see-through displays is the optical combiner that presents color-coded symbology combined with the forward real-world scene. Unlike head-down displays that can use color fills, see-through displays are limited to lines and text. The presence of high-ambient daylight can desaturate symbol colors and make them difficult to recognize. This is especially true for blue symbols, which become faint and colorless in moderate daylight as well as when mixed with green night-vision imagery. We propose blue prime (blue'), a mix of 100% blue and 50% green that lies between blue and cyan, as an alternative to blue. The color blue' is more resistant to daylight than blue, yet still retains the color name "blue." Blue' lags other colors such as green and white in visibility, and its use needs to be moderated. We present experimental data to support the use of blue' as a basic color code.
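A small sketch of where blue' lands in chromaticity terms, assuming sRGB primaries and gamma for illustration (an actual HMD's primaries, and hence the resulting u'v' coordinates, will differ):

```python
import numpy as np

# Compute CIE 1976 u'v' coordinates for blue and blue' (100% blue +
# 50% green), assuming sRGB encoding; illustrative only.
M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_uv(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # sRGB electro-optical transfer function (decode to linear light)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    X, Y, Z = M_SRGB_TO_XYZ @ lin
    denom = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / denom, 9.0 * Y / denom

u_b, v_b = srgb_to_uv([0.0, 0.0, 1.0])    # pure blue
u_bp, v_bp = srgb_to_uv([0.0, 0.5, 1.0])  # blue' = 100% blue + 50% green
print(f"blue  u'v' = ({u_b:.3f}, {v_b:.3f})")
print(f"blue' u'v' = ({u_bp:.3f}, {v_bp:.3f})")  # shifted toward cyan, brighter
```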
DEVS: providing dismounted 24/7 DVE capability and enabling the digital battlefield
The Digital Battlefield promises major improvements for the modern warfighter, and has been in development for several years. Many of the key components needed to collect, analyze, process, and disseminate battlefield information have been developed and are now largely fielded. The component that has prevented the Digital Battlefield from being implemented is the Warfighter's interface: presenting precise information directly to the Warfighter without engendering information overload. Several daytime systems have been developed that provide test beds for operator evaluation, but they do not operate at night or in degraded visual environments (DVE). The Digital Enhanced Vision System (DEVS) now provides a device that works in all conditions, both day and night, and provides much greater performance than traditional image intensifier devices, while presenting real-time imagery to the operator at a very low latency that meets threshold requirements and approaches objective requirements.

DEVS is a compact helmet mounted night/day imaging system that provides low-light visual, SWIR, and thermal viewing through a near-eye display. The system has a 50° diagonal FOV, a major FOV improvement compared to current I2 night vision goggles. System latency has been reduced to around 20 ms, less than half the 42 ms latency of the new F-35 goggles. The system is designed to connect readily into the "Digital Battlefield," allowing improved SA for each dismounted operator. The TRL 8 system is scheduled for completion in early '19.
GPS Denied Environments
Relative visual localization (RVL) for UAV navigation
Most of today's UAVs make use of multi-sensor GNSS/INS fusion for localization during navigation. In this context, GNSS systems are a compact and cost-effective way to constrain the unbounded error that INS sensors induce on the localization. Unfortunately, GNSS systems have proven unreliable in multiple contexts. The drawback of such an approach resides in the radio communications necessary to acquire the localization data: radio communication systems are prone to availability problems in some environments, to signal alteration, and to interference. The root cause of the problem resides in the use of global information to solve a local problem. In this work, we propose the use of local visual information to perform relative localization in an unknown outdoor environment. The algorithm uses feature point methods to extract salient points from a set of images pertaining to possible matches during the navigation. The extracted features are matched against visual data stored during previous navigation or taken from an aerial view map. Different feature extraction techniques were analyzed, and ORB gave the best mean absolute error. The estimated distance between the best match and the ground-truth localization was within 70 meters on average at an altitude of 150 meters. Experimental tests were conducted on outdoor videos captured using a quadcopter. The obtained results are promising and show the possibility of using relative visual data in GPS/GNSS-denied environments to improve the robustness of UAV navigation.
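A minimal OpenCV sketch of the ORB matching step the abstract describes; the file names are placeholders, and the downstream relative-pose estimation is omitted:

```python
import cv2

# ORB keypoints from the current aerial frame matched against a stored
# reference view. File names are illustrative placeholders.
query = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_map_tile.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_r, des_r = orb.detectAndCompute(reference, None)

# Hamming distance suits ORB's binary descriptors; cross-checking prunes
# asymmetric matches before any localization estimate is attempted.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_r), key=lambda m: m.distance)
best = matches[:50]  # strongest correspondences feed the relative-pose step
```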
Location and head orientation tracking in GPS-denied environments
James E. Melzer, Ashutosh Morde
Wayfinding has been investigated by multiple researchers in the context of a Landmark-to-Route-to-Survey learning construct that relies on multiple cognitive mechanisms, which vary across the population. The development of GPS-based cellphone navigation aids has made wayfinding easier in most situations, but First Responders and Ground Warfighters must navigate in GPS-denied environments without such assistance, which may compromise both their ability to get the job done and their safety. We will discuss wayfinding research and look at bio-inspired navigation methods, because they may point us to navigation solutions that do not rely on GPS. Finally, we will discuss a hybrid inertial and visual navigation and tracking system that we have integrated into an augmented reality ensemble for high stress, GPS-denied applications, with some recommendations for future growth.
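As a toy illustration of the hybrid inertial/visual idea (not the authors' system, which would fuse full 6-DOF pose, typically with a Kalman-type filter), a one-dimensional complementary filter looks like this:

```python
# Toy complementary filter: fast inertial dead reckoning corrected by
# slower visual position fixes. Gain and 1-D state are illustrative.
ALPHA = 0.98  # assumed trust in the inertial propagation per step

def fuse(position, velocity, dt, visual_fix=None):
    predicted = position + velocity * dt          # inertial propagation
    if visual_fix is None:
        return predicted                          # no fix (GPS-denied, no features seen)
    return ALPHA * predicted + (1.0 - ALPHA) * visual_fix  # blend in the fix

pos = 0.0
pos = fuse(pos, velocity=1.5, dt=0.01)                    # IMU-only step
pos = fuse(pos, velocity=1.5, dt=0.01, visual_fix=0.05)   # step with a visual fix
```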
MMW and DVE Phenomenology and Sensing: Joint Session with conferences 10642 and 10634
Visualization requirements for DVE systems
One of the more challenging aspects of Degraded Visual Environment (DVE) solutions development, evaluation, and qualification is the subjectivity of output imagery. Current civil and military DVE system specifications do not address visualization requirements; instead, they identify what must be seen. Without visualization requirements, evaluation pilots judge system imagery based on their own opinion of what is good enough, often compared to what they see with their natural vision. This leads to tremendous subjectivity that varies from pilot to pilot, day to day, and program to program. The lack of requirements leaves industry without a guiding direction for implementations, leading to increased risk and cost. This is not an oversight, as it is very difficult to identify specific qualitative/quantitative imagery performance objectives. Yet we, the DVE community, need to do better. This paper puts forth the case for establishing visualization requirements and makes a series of recommendations for moving DVE system evaluation from a subjective to a quantitative assessment.
DVE system capability classes
The characterization of a Degraded Visual Environment (DVE) System is often focused on the sensors and system architecture. This paper puts forth the case for developing a standard set of rotary-wing DVE system classes based on what the DVE System can do, rather than how it is implemented. This approach supports development of a standard set of safety criteria and allowable intended uses for each particular class of DVE System. It also controls the relative cost of the DVE System, as the class drives safety criteria, system architecture, and certification costs. Industry can then develop or envision technical solutions to meet those standards, and users can better understand the performance versus cost tradeoffs of the various classes. The DVE industry currently has little or no definition of such classes and is largely left with an ad hoc approach. This concept borrows from the Instrument Landing System (ILS) categories, where capability is defined by five categories (CAT I, CAT II, CAT IIIa, CAT IIIb, and CAT IIIc). Operators and pilots are not necessarily aware of the technical solution, such as the number of antennas or receivers or the amount of redundancy; instead, the categories clearly identify the capabilities of each category of system. This paper develops a set of DVE system classes to provide a context for future development of safety criteria and technical solutions.
Visibility in degraded visual environments (DVE)
John N. Sanders-Reed, Stephen J. Fenley
As visibility decreases, crew workload increases, resulting in reduced operational capability. As a consequence, it is useful to define standard operational levels that set operational constraints based on visibility range. For given atmospheric conditions, visibility varies with wavelength, meaning that the operational level may depend on the sensor waveband. Results from a number of previous authors are combined to present an updated and integrated set of spectral attenuation and visibility curves from the visible through the millimeter wave regime. These curves show attenuation through standard and humid atmospheres and the effects of various levels of rain and fog. Obscurant particle size ranges are shown to help explain the observed phenomenology. In addition, work from a number of other authors is combined to relate standard meteorological measurements (densities or rates) to visibility. These results are compared with the spectral attenuation curves. The result is an ability to relate obscurant density or rate to attenuation or visibility, and hence to the operational level, at any wavelength from the visible through the millimeter wave regime.
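One standard form of the attenuation-to-visibility relation, Koschmieder's law, makes the connection between extinction and visibility range concrete; the extinction coefficients below are illustrative:

```python
import numpy as np

# Koschmieder's law: V = -ln(C_threshold) / beta. For the common 2%
# contrast threshold this gives V ~ 3.912 / beta. Values illustrative.
def visibility_m(beta_per_m: float, contrast_threshold: float = 0.02) -> float:
    return -np.log(contrast_threshold) / beta_per_m

for beta in (3.9e-4, 3.9e-3, 3.9e-2):   # roughly clear, hazy, foggy (1/m)
    print(f"beta = {beta:.1e} 1/m -> visibility ~ {visibility_m(beta):,.0f} m")

# Because beta is wavelength dependent, the same obscurant yields a
# different operational level in each sensor waveband.
```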
High fill factor RF aperture arrays for improved passive, real-time millimeter wave imaging
Sensors operating in the millimeter wave region of the electromagnetic spectrum provide valuable situational awareness in degraded visual environments, helpful in navigation of rotorcraft and fixed wing aircraft. Due to their relatively long wavelength, millimeter waves can pass through many types of visual obscurants, including smoke, fog, dust, blowing sand, etc. with low attenuation. Developed to take advantage of these capabilities, our millimeter wave imager employs a unique, enabling receiver architecture based on distributed aperture arrays and optical upconversion. We have reported previously on operation and performance of our passive millimeter wave imager, including field test results in DVE and other representative environments, as well as extensive flight testing on an H-1 rotorcraft. Herein we discuss efforts to improve RF and optical component hardware integration, with the goal to increase manufacturability and reduce c-SWaP of the system. These outcomes will allow us to increase aperture sizes and channel counts, thereby providing increased receiver sensitivity and overall improved image quality. These developments in turn will open up new application areas for the passive millimeter wave technology, as well as better serving existing ones.
Poster Session
Improving AVHRR-based NDVI data using a statistical technique for global climate studies
Md. Z. Rahman, Leonid Roytman, Abdel Hamid Kadik, et al.
The main objective of this report is to examine the Normalized Difference Vegetation Index (NDVI) stability in the NOAA/NESDIS Global Vegetation Index (GVI) data, which was collected from five NOAA series satellites. An empirical distribution function (EDF) technique was developed to decrease the long-term inaccuracy of the NDVI data derived from the AVHRR sensor on NOAA polar orbiting satellites. The instability of the data is a consequence of orbit degradation and of circuit drift over the life of a satellite. Degradation of NDVI over time and shifts of NDVI between the satellites were estimated using the China data set, because it includes a wide variety of ecosystems represented globally. It was found that the data for six particular years, four of which were consecutive, are not stable compared to other years because of satellite orbit drift, AVHRR sensor degradation, and satellite technical problems, including deterioration of satellite electronic and mechanical systems. The data for paired years for NOAA-7, NOAA-9, NOAA-11, NOAA-14, and NOAA-16 were taken as the standard because the crossing time of the satellite over the equator (between 13:30 and 15:00 hours) maximized the value of the coefficients. These years were considered the standard years, while in other years the quality of satellite observations deviated significantly from the standard. The deficient data for the affected years were normalized (corrected) using the EDF method against the standard years. These normalized values were then used to estimate a new NDVI time series that shows significant improvement of the NDVI data for the affected years, making the dataset useful in climate studies.
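The EDF correction can be sketched as quantile matching between an affected year and a standard year; the arrays below are random placeholders, not GVI data:

```python
import numpy as np

# EDF matching: map each affected-year value to the value at the same
# empirical quantile of the standard-year distribution. Placeholder data.
rng = np.random.default_rng(1)
standard_year = rng.beta(4, 2, 5000)          # stand-in for stable-year NDVI
affected_year = rng.beta(3, 3, 5000) * 0.9    # stand-in for drifted-sensor NDVI

def edf_correct(values, reference):
    # Empirical quantile of each value within its own distribution...
    ranks = np.searchsorted(np.sort(values), values, side="right") / len(values)
    # ...replaced by the value at the same quantile of the reference EDF.
    return np.quantile(reference, ranks)

corrected = edf_correct(affected_year, standard_year)
```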
Stitching image using RDHW based on multivariate student's t-distribution (Conference Presentation)
Yingying Kong, Yingying Chen, Henry Leung
In order to create a seamless and natural-looking panorama, we propose a novel stitching method for panoramic scenes that contain two predominant planes. First, a homography is computed for each plane. Setting the weights of the two homographies then becomes an important step. The traditional method is to directly calculate the Euclidean distance between the original image pixel locations and the feature points; its disadvantage is that the weights of singular points seriously impact the overall decision. In this paper, we propose a statistical probability model of mismatches that optimizes the weights using a multivariate Student's t-distribution. Not only the mismatch probability, but also the error magnitude and the distance to feature points are considered in the weight model. Finally, an updated single homography is defined by relating the dual homographies to the weights. Experiments show that the resulting homography matrix is more robust and accurate for performing a nonlinear warping. The proposed method generalizes easily to multiple images and automatically obtains the best perspective in the panorama.
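A rough sketch of the weighting idea, using hypothetical helper names: each pixel's blend between the two plane homographies is driven by heavy-tailed Student's t kernels of its distances to each plane's feature points, so outlying (mismatched) features contribute little:

```python
import numpy as np

# Heavy-tailed per-pixel weighting between two plane homographies.
# All names and inputs are illustrative, not the paper's implementation.
def t_kernel(d2, nu=3.0):
    """Unnormalized multivariate Student's t weight for squared distance d2."""
    return (1.0 + d2 / nu) ** (-(nu + 2.0) / 2.0)

def blend_weight(pixel, feats_plane1, feats_plane2, nu=3.0):
    """Blend weight for homography H1 at `pixel`; 1 - w goes to H2."""
    d1 = np.sum((feats_plane1 - pixel) ** 2, axis=1)
    d2 = np.sum((feats_plane2 - pixel) ** 2, axis=1)
    w1 = t_kernel(d1, nu).sum()
    w2 = t_kernel(d2, nu).sum()
    return w1 / (w1 + w2)

rng = np.random.default_rng(2)
feats1 = rng.uniform(0, 200, (30, 2))   # matched features on plane 1 (placeholder)
feats2 = rng.uniform(0, 200, (30, 2))   # matched features on plane 2 (placeholder)
w = blend_weight(np.array([120.0, 80.0]), feats1, feats2)
# The per-pixel warp then uses w * H1 + (1 - w) * H2 (renormalized).
```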