Proceedings Volume 9470

Display Technologies and Applications for Defense, Security, and Avionics IX; and Head- and Helmet-Mounted Displays XX


Volume Details

Date Published: 15 June 2015
Contents: 13 Sessions, 27 Papers, 0 Presentations
Conference: SPIE Defense + Security 2015
Volume Number: 9470

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9470
  • Head and Helmet Mounted Displays and Display Technologies and Applications for Defense, Security, and Avionics: Joint Session with Conferences 9470A and 9470B
  • Head-Mounted and Body-Worn Displays
  • Stereoscopic 3D Displays
  • Immersive Environments and Augmented Reality
  • Increased Resolution Via Virtual Pixel
  • Display Performance Research and Advances
  • Flexible Displays
  • Cornucopia
  • Head and Helmet Mounted Displays: A Retrospective
  • Testing and Human Factors of HMDs
  • Enabling Technologies
  • Poster Session
Front Matter: Volume 9470
This PDF file contains the front matter associated with SPIE Proceedings Volume 9470, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Head and Helmet Mounted Displays and Display Technologies and Applications for Defense, Security, and Avionics: Joint Session with Conferences 9470A and 9470B
The impact of coloured symbology on cockpit eyes-out display effectiveness: a survey of key parameters
Maha Fares, Derek R. Jordan
Colour is becoming a baseline requirement in the avionic displays market. Implemented for decades in Head-Down Displays (HDD), it is thought to enhance Situational Awareness and minimise errors in decision making. Even though a wide colour gamut can be achieved in eyes-out display devices, its application and its usefulness for symbology effectiveness remain debatable. Reconciling these two issues would significantly improve the standardisation of eyes-out displays, enhancing safety while reducing costs of ownership. However, designing a robust set of colour symbology for all eyes-out display types and in all conditions of operation, in particular Degraded Visual Environments (DVE), is less straightforward than in HDD. In fact, the transparency dimension of the display can cause a divergence between the intent of the coloured symbology and its recognition/discrimination by the user. The effectiveness of colour as an attention getter and the associated design constraints for real situations are investigated. This report summarises the main features to take into account when assigning colour to an eyes-out display, including a discussion of the Green/Amber/Red code. The approach suggested aims at developing a model that uses colour symbology effectively in both the aircraft space and time frames simultaneously. In colour eyes-out displays, mission performance is clearly dependent on display transparency, information categorisation, and colour perception by the pilot. The interaction between these elements is key to designing a coherent set of colour design rules, and the paper ends with a set of recommendations for good practice in eyes-out symbology design.
Augmented reality technology for day/night situational awareness for the dismounted Soldier
Eric Gans, David Roberts, Matthew Bennett, et al.
This paper describes Applied Research Associates’ (ARA) recent advances in Soldier augmented reality (AR) technology. Our AR technology, called ARC4, delivers heads-up situational awareness to the dismounted warfighter, enabling non-line-of-sight team coordination in distributed operations. ARC4 combines compact head tracking sensors with advanced pose estimation algorithms, network management software, and an intuitive AR visualization interface to overlay tactical iconic information accurately on the user’s real-world view. The technology supports heads-up navigation, blue-force tracking, target handoff, image sharing, and tagging of features in the environment. It integrates seamlessly with established network protocols (e.g., Cursor-on-Target) and Command and Control software tools (e.g., Nett Warrior, Android Tactical Assault Kit) and interfaces with a wide range of daytime see-through displays and night vision goggles to deliver real-time actionable intelligence, day or night. We describe our pose estimation framework, which fuses inertial data, magnetometer data, GPS, DTED, and digital imagery to provide measurements of the operator’s precise orientation. These measurements leverage mountainous terrain horizon geometry, known landmarks, and sun position, enabling ARC4 to achieve significant improvements in accuracy compared to conventional INS/GPS solutions of similar size, weight, and power. We detail current research and development efforts toward helmet-based and handheld AR systems for operational use cases and describe extensions to immersive training applications.
Head-Mounted and Body-Worn Displays
Keeping display visibility in outdoor environment
Ariela Donval, Ido Dotan, Noam Gross, et al.
Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics, or GPS data. As a result, the technology functions by enhancing one’s current perception of reality. Artificial information about the environment and its objects can be overlaid on the real world using special optics and a display. When using such a device on a very bright day, the display image risks vanishing due to the sun’s illumination; on a very cloudy day, however, one needs all the light to pass through the display to the user’s eye. The need to control, in a passive way, the amount of sunlight passing through the AR device was the trigger for our effort in developing the Dynamic Sunlight Filter (DSF™). DSF™ is a passive solution dedicated to regulating sunlight overpower events.
Stereoscopic 3D Displays
3D display considerations for rugged airborne environments
The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator’s visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.
A guide for human factors research with stereoscopic 3D displays
In this work, we provide some common methods, techniques, information, concepts, and relevant citations for those conducting human factors-related research with stereoscopic 3D (S3D) displays. We give suggested methods for calculating binocular disparities, and show how to verify on-screen image separation measurements. We provide typical values for inter-pupillary distances that are useful in such calculations. We discuss the pros, cons, and suggested uses of some common stereovision clinical tests. We discuss the phenomena and prevalence rates of stereoanomalous, pseudo-stereoanomalous, stereo-deficient, and stereoblind viewers. The problems of eyestrain and fatigue-related effects from stereo viewing, and the possible causes, are enumerated. System and viewer crosstalk are defined and discussed, and the issue of stereo camera separation is explored. Typical binocular fusion limits are also provided for reference, and discussed in relation to zones of comfort. Finally, the concept of measuring disparity distributions is described. The implications of these issues for the human factors study of S3D displays are covered throughout.
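As a rough illustration of the disparity calculations the guide covers, the sketch below computes the angular disparity produced by an on-screen image separation and the depth at which a fused point appears, assuming a 63 mm inter-pupillary distance. The function names and the similar-triangles model are illustrative assumptions, not the authors' method:

```python
import math

def angular_disparity_arcmin(separation_m, distance_m):
    """Angular disparity (arcmin) produced by an on-screen image
    separation viewed at the given distance."""
    return math.degrees(math.atan2(separation_m, distance_m)) * 60.0

def perceived_depth(separation_m, distance_m, ipd_m=0.063):
    """Depth of the fused point by similar triangles. Positive
    (uncrossed) separation places it behind the screen plane;
    negative (crossed) separation places it in front."""
    return distance_m * ipd_m / (ipd_m - separation_m)
```

For example, a 10 mm separation at a 1 m viewing distance subtends roughly half a degree, and zero separation places the point on the screen plane itself.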
Immersive Environments and Augmented Reality
Exploring immersive environments to aid urban intelligence, surveillance and reconnaissance operations
Jason Roll, Peter Venero, Donald Adkins, et al.
Intelligence, surveillance, and reconnaissance (ISR) operations in urban environments can be challenging due in part to the physical proximity and height of the buildings which can cause occlusions of sensors. A potential avenue to overcome the problems presented by the urban environment could be the exploitation of immersive environments. An immersive environment would increase an analyst’s situation awareness to correctly select and position sensors for intelligence gathering. In completing a sensor management task, the operator must perform three basic actions: navigate through the environment, select sensors in the environment, and manipulate those selected sensors. The goal of this experiment was to investigate the impact that different fields of view (FOV) and different control devices had on those operator actions. While the FOV did not show a significant impact, significant differences were seen between the two control devices tested.
Augmented reality enabling intelligence exploitation at the edge
Sue E. Kase, Heather Roy, Elizabeth K. Bowman, et al.
Today’s Warfighters need to make quick decisions while interacting in densely populated environments comprised of friendly, hostile, and neutral host nation locals. However, there is a gap in the real-time processing of big data streams for edge intelligence. We introduce a big data processing pipeline called ARTEA that ingests, monitors, and performs a variety of analytics including noise reduction, pattern identification, and trend and event detection in the context of an area of operations (AOR). Results of the analytics are presented to the Soldier via an augmented reality (AR) device Google Glass (Glass). Non-intrusive AR devices such as Glass can visually communicate contextually relevant alerts to the Soldier based on the current mission objectives, time, location, and observed or sensed activities. This real-time processing and AR presentation approach to knowledge discovery flattens the intelligence hierarchy enabling the edge Soldier to act as a vital and active participant in the analysis process. We report preliminary observations testing ARTEA and Glass in a document exploitation and person of interest scenario simulating edge Soldier participation in the intelligence process in disconnected deployment conditions.
Increased Resolution Via Virtual Pixel
A virtual pixel software and hardware technology to increase projector resolution
Increasing the resolution of the LCD (or similar) display used in projectors (in conjunction with increased light emission, etc.) increases the resolution of the projected image and/or the distance that the projector can be from its screen. While increasing the size of the LCD panel is one approach to producing increased resolution, it also increases projector size and weight. This paper proposes a mechanism that combines multiple physical pixels to create virtual pixels, producing a higher-resolution output image than the LCD (or similar) display would otherwise allow and increasing the effective resolution of the projector.
A virtual pixel technology to enhance the resolution of monitors and for other purposes
Current monitor and television displays utilize pixels to display an approximation of the real world collected by a camera or generated computationally. This paper proposes a virtual pixel technology which incorporates a coloring-LCD combination. Each physical pixel’s configuration is based on a weighted average of the virtual pixels it contributes to. This allows lower-pixel-density displays to produce the approximation of a higher pixel density, while lowering production cost. The paper provides an overview of the proposed technology, discusses its application to monitors and its extension to other areas, and concludes with a discussion of the next steps in its development.
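One plausible reading of the weighted-average scheme is a box-filter downsample, where each physical pixel takes the equal-weight average of the block of virtual pixels it contributes to. The NumPy sketch below (illustrative names, assuming an RGB image array and an integer scale factor) is an assumption about the mechanism, not the paper's implementation:

```python
import numpy as np

def physical_from_virtual(virtual, factor):
    """Each physical pixel is the equal-weight average of the
    factor x factor block of virtual pixels it contributes to.

    virtual: float array of shape (H, W, C); factor: integer scale."""
    h, w, c = virtual.shape
    # Trim any ragged edge so the image tiles evenly into blocks.
    v = virtual[:h - h % factor, :w - w % factor]
    blocks = v.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))
```

Non-uniform weights (e.g. a tent filter) would slot in by replacing the plain mean with a weighted average over each block.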
Display Performance Research and Advances
Review of the evolution of display technologies for next-generation aircraft
Advancements in electronic display technologies have provided many benefits for military avionics. The modernization of legacy tanker transport aircraft along with the development of next-generation platforms, such as the KC-46 aerial refueling tanker, offers a timeline of the evolution of avionics display approaches. The adaptation of advanced flight displays from the Boeing 787 for the KC-46 flight deck also provides examples of how avionics display solutions may be leveraged across commercial and military flight decks to realize greater situational awareness and improve overall mission effectiveness. This paper provides a review of the display technology advancements that have led to today’s advanced avionics displays for the next-generation KC-46 tanker aircraft. In particular, progress in display operating modes, backlighting, packaging, and ruggedization will be discussed along with display certification considerations across military and civilian platforms.
A neuroergonomic quasi-experiment: Predictors of situation awareness and display usability while performing complex tasks
Steven D. Harbour, James C. Christensen
Situation awareness (SA) is the ability and capacity to perceive information and act on it acceptably. Head-Up Display (HUD) versus Head-Down Display (HDD) manipulation induced variation in task difficulty, and the HUD and HDD cockpit display designs promoted or impaired SA. The quantitative research presented in this paper examines basic neurocognitive factors in order to identify their specific contributions to the formation of SA, while studying display usability and its effects on SA. Visual attentiveness (Va), perceptiveness (Vp), and spatial working memory (Vswm) were assessed as predictors of SA under varying task difficulty. The study participants were 19 tactical airlift pilots, selected from the Ohio Air National Guard. Neurocognitive tests were administered to the participants prior to flight. In-flight SA was objectively and subjectively assessed for 24 flights. At the completion of this field experiment, the data were analyzed and the tests were statistically significant for the three predictor visual abilities Vp, Va, and Vswm as task difficulty was varied, F(3,11) = 8.125, p = .008. In addition, multiple regression analyses revealed that the visual abilities together predicted a majority of the variance in SA, R2 = 0.753, p = .008. As validated and verified by ECG and EEG data, the HUD yielded a full ability and capacity to anticipate and accommodate trends, whereas the HDD yielded a saturated ability to anticipate and accommodate trends. Post-hoc tests revealed a Cohen’s f2 = 3.05, yielding statistical power of 0.98. This work makes a significant contribution to the field by providing an improved understanding of SA and a path to safer travel for society worldwide. PA 88ABW-2015-1282.
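The reported effect size is consistent with the reported regression fit: Cohen's f² = R²/(1 − R²) gives 0.753/0.247 ≈ 3.05. A one-line check (the helper name is illustrative):

```python
def cohens_f2(r_squared):
    """Cohen's f^2 effect size computed from a multiple-regression R^2."""
    return r_squared / (1.0 - r_squared)

# R^2 = 0.753 as reported in the abstract gives f^2 of about 3.05
```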
A practical definition of eye-limited display system resolution
Charles J. Lloyd, M. Winterbottom, J. Gaska, et al.
Over the past few decades the term “eye-limited resolution” has seen significant use. However, several variations in the definition of the term have been employed and estimates of the display pixel pitch required to achieve it differ significantly. This paper summarizes the results of published evaluations and experiments conducted in our laboratories relating to resolution requirements. The results of several evaluations employing displays with sufficient antialiasing indicate a pixel pitch of 0.5 to 0.93 arcmin will produce 90% of peak performance for observers with 20/20 or better acuity for a variety of visual tasks. If insufficient antialiasing is employed, spurious results can indicate that a finer pixel pitch is required due to the presence of sampling artifacts. The paper reconciles these findings with hyperacuity task performance which a number of authors have suggested may require a much finer pixel pitch. The empirical data provided in this paper show that hyperacuity task performance does not appear to be a driver of eye-limited resolution. Asymptotic visual performance is recommended as the basis of eye-limited resolution because it provides the most stable estimates and is well aligned with the needs of the display design and acquisition communities.
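For reference, the angular pitch figures quoted above follow from simple trigonometry. The sketch below (an illustrative helper, not from the paper) converts a physical pixel pitch and viewing distance into arcminutes; for example, a 0.1 mm pitch viewed at 600 mm subtends about 0.57 arcmin, inside the 0.5 to 0.93 arcmin range the evaluations report:

```python
import math

def pixel_pitch_arcmin(pitch_mm, viewing_distance_mm):
    """Angular pixel pitch subtended at the eye, in arcminutes."""
    return math.degrees(math.atan2(pitch_mm, viewing_distance_mm)) * 60.0
```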
Just noticeable color difference: implications for display systems
Daniel D. Desjardins, Patrick Gardner
According to MIL-HDBK-87213, the goal in full color displays is to have the color primaries widely separated and/or provide filtering such that they will stay widely separated when exposed to, and mixed with, ambient light. Modern color displays boast a phenomenal number of colors, typically based on the number of luminance (gray) levels per color sub-pixel raised to a power determined by the number of distinct color sub-pixels. Because display color should (“must” per the given handbook) be evaluated to assure that it is fully usable and aesthetically acceptable, this paper reports preliminary findings in the determination of Just Noticeable Differences (JNDs) in luminance for the red, green, and blue color primaries over a given luminance range for a particular Display Under Test (DUT).
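The palette-size arithmetic the abstract describes (luminance levels per color sub-pixel raised to the number of distinct sub-pixels) can be checked directly; a minimal sketch with an illustrative function name:

```python
def color_count(gray_levels, subpixels=3):
    """Nominal number of displayable colors: luminance (gray) levels
    per color sub-pixel raised to the number of distinct sub-pixels."""
    return gray_levels ** subpixels

# e.g. an 8-bit RGB display: 256 levels on each of 3 sub-pixels
print(color_count(256))  # 16777216
```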
Flexible Displays
Recent progress in OLED and flexible displays and their potential for application to aerospace and military display systems
Organic light emitting diode (OLED) display technology has advanced significantly in recent years, and it is increasingly being adopted in consumer electronics products with premium performance, such as high-resolution smart phones, tablet PCs and TVs. Even flexible OLED displays are beginning to be commercialized in consumer electronic devices such as smart phones and smart watches. In addition to the advances in OLED emitters, successful development and adoption of OLED displays for premium performance applications relies on advances in several enabling technologies, including TFT backplanes, pixel drive electronics, pixel patterning technologies, encapsulation technologies and system-level engineering. In this paper we will discuss the impact of the recent advances in LTPS and AOS TFTs; R, G, B and white-OLED-with-color-filter pixel architectures; and encapsulation on the success of OLEDs in consumer electronic devices. We will then discuss the potential of these advances in addressing the requirements of OLED and flexible displays for military and avionics applications.
Cornucopia
Analog video to ARINC 818
Paul Grunwald
Many commercial and military aircraft still use analog video, such as RS-170, RS-343, or STANAG 3350. Though the individual digital components may be inexpensive, the cost to certify and retrofit an entire aircraft fleet may be prohibitively expensive. A partial or incremental upgrade program, where analog cameras remain in use but data is converted and processed digitally, can be an attractive option. This paper describes Great River Technology’s experience in converting multiple channels of RS-170 and multiplexing them through a concentrator to put them onto a single fiber or cable. The paper also discusses alternative architectures and how ARINC 818 can be utilized with legacy systems.
Electro-textile garments for power and data distribution
Jeremiah R. Slade, Carole Winterhalter
U.S. troops are increasingly being equipped with various electronic assets including flexible displays, computers, and communications systems. While these systems can significantly enhance operational capabilities, forming reliable connections between them poses a number of challenges in terms of comfort, weight, ergonomics, and operational security. IST has addressed these challenges by developing the technologies needed to integrate large-scale cross-seam electrical functionality into virtually any textile product, including the various garments and vests that comprise the warfighter’s ensemble. Using this technology IST is able to develop textile products that do not simply support or accommodate a network but are the network.
Head and Helmet Mounted Displays: A Retrospective
A history of helmet mounted displays
In more than 40 years of development, the Helmet-Mounted Display (HMD) has become a key part of the equipment for fixed- and rotary-wing pilots and ground soldiers, proving to be a force multiplier and reducing user workload. Rockwell Collins has been a key player in the development of modern HMD technology and is currently fielding major HMDs supporting pilots around the world, including the Joint Helmet Mounted Cueing System (JHMCS) and Strike Eye. This paper outlines the history of HMDs over the last 40 years for fixed-wing, rotorcraft and soldier applications and discusses Rockwell Collins’ role. We discuss the development and testing required for the introduction of HMDs into the modern pilot environment, and point out some of the misconceptions, facts and legends of HMDs.
The impact of human factors, crashworthiness and optical performance design requirements on helmet-mounted display development from the 1970s to the present
Thomas H. Harding, Clarence E. Rash, William E. McLean, et al.
Driven by the operational needs of modern warfare, the helmet-mounted display (HMD) has matured from a revolutionary, but impractical, World War I era idea for an infantry marksman’s helmet-mounted weapon delivery system to a sophisticated and ubiquitous display and targeting system that dominates current night warfighting operations. One of the most demanding applications for HMD designs has been in Army rotary-wing aviation, where HMDs offer greater direct access to visual information and increased situational awareness in an operational environment where information availability is critical on a second-to-second basis. However, over the past 40 years of extensive HMD development, a myriad of crashworthiness, optical, and human factors issues have both frustrated and challenged designers. While it may be difficult to attain a full consensus on which are the most important HMD design factors, certainly head-supported weight (HSW), exit pupil size, field-of-view, image resolution and physical eye relief have been among the most critical. A confounding factor has been the interrelationship between the many design issues, such as early attempts to use non-glass optical elements to lower HSW, but at the cost of image quality, and hence, pilot visual performance. This paper traces how the role of the demanding performance requirements placed on HMDs by the U.S. Army aviation community has impacted the progress of HMD designs towards the Holy Grail of HMD design: a wide field-of-view, high resolution, binocular, full-color, totally crashworthy system.
In the blink of an eye: head mounted displays development within BAE Systems
There has been an explosion of interest in head-worn displays in recent years, particularly for consumer applications, with an attendant ramping up of investment in key enabling technologies to provide what is in essence a mobile computer display. However, head-mounted systems have been around for over 40 years, and today’s consumer products build on a legacy of knowledge and technology created by companies such as BAE Systems, which has been designing and fielding helmet-mounted displays (HMDs) for a wide range of specialist applications. Although the dominant application area has been military aviation, solutions have been fielded for soldier, ground vehicle, simulation, medical, racing car and even subsea navigation applications. What sets these HMDs apart is that they provide the user with accurate conformal information embedded in the user’s real-world view; the information presented is intuitive and easy to use because it overlays the real world and enables users to stay head up and eyes out, improving their effectiveness, reducing workload and improving safety. Such systems are an enabling technology in the provision of enhanced Situation Awareness (SA) and in reducing user workload in high-intensity situations. These capabilities are finding much wider application in new types of compact man-mounted audio/visual products enabled by the emergence of new families of micro displays, novel optical concepts and ultra-compact low-power processing solutions. This paper therefore provides a personal summary of BAE Systems’ 40-year journey in developing and fielding head-mounted systems and their applications.
A review of head-worn display research at NASA Langley Research Center
Jarvis J. Arthur III, Randall E. Bailey, Steven P. Williams, et al.
NASA Langley has conducted research in the area of helmet-mounted/head-worn displays over the past 30 years. Initially, NASA Langley’s research focused on military applications, but more recently it has pursued a line of research on head-worn displays for commercial and business aircraft. This work has revolved around numerous simulation experiments as well as flight tests to develop technology and data for industry and regulatory guidance. The paper summarizes the results of NASA’s helmet-mounted/head-worn display research. Of note, the work tracks progress in wearable collimated optics, head tracking, latency reduction, and weight. The research lends credence to the idea that a small, sunglasses-type form factor for the head-worn display would be acceptable to commercial pilots, and this goal is now becoming technologically feasible. The research further suggests that a head-worn display may serve as an “equivalent” Head-Up Display (HUD) with safety, operational, and cost benefits. “HUD equivalence” appears to be the economic avenue by which head-worn displays can become mainstream on the commercial and business aircraft flight deck. If this happens, NASA’s research suggests that additional operational benefits using the unique capabilities of the head-worn display can open up new operational paradigms.
Testing and Human Factors of HMDs
Flight test of a head-worn display as an equivalent-HUD for terminal operations
Research, development, test, and evaluation of flight deck interface technologies is being conducted by NASA to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). Under NASA’s Aviation Safety Program, one specific area of research is the use of small Head-Worn Displays (HWDs) as a potential equivalent display to a Head-Up Display (HUD). Title 14 of the US CFR 91.175 describes a possible operational credit which can be obtained with airplane equipage of a HUD or an “equivalent” display combined with Enhanced Vision (EV). A successful HWD implementation may provide the same safety and operational benefits as current HUD-equipped aircraft, but for significantly more aircraft in which HUD installation is neither practical nor possible. A flight test was conducted to evaluate whether the HWD, coupled with a head-tracker, can provide an equivalent display to a HUD. Approach and taxi testing was performed on board NASA’s experimental King Air aircraft in various visual conditions. Preliminary quantitative results indicate the HWD tested provided equivalent HUD performance; however, operational issues were uncovered. The HWD showed significant potential, as all of the pilots liked the increased situation awareness attributable to the HWD’s unique capability of unlimited field-of-regard.
Dynamic registration of an optical see-through HMD into a wide field-of-view rotorcraft flight simulation environment
Franz Viertler, Manfred Hajek
To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays with 3D-conformal (scene-linked) visual cues the most promising display technology. For pilot-in-the-loop simulations with HMDs, a highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field-of-view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems. But optical see-through HMDs mostly do not have a focus equivalent to the distance from the pilot's eye-point position to the curved screen, which also depends on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions, measured with a head-tracking system, is corrected so that the imagery stays registered to the projection screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles dependent on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both possibilities with regard to integration issues and usability in flight simulations are discussed.
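The first correction option, yaw and pitch offsets computed from the head's deviation from the design eye-point, can be sketched geometrically: re-aim at the dome point the pilot originally saw, from the displaced head position. The vector model, axis convention, and names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def parallax_offsets(view_dir, head_offset, dome_radius):
    """Yaw/pitch (radians) toward the dome point originally seen along
    view_dir from the design eye-point, after the head translates by
    head_offset. Frame: x forward, y left, z up; the spherical screen
    is centred on the design eye-point."""
    v = np.asarray(view_dir, dtype=float)
    point = dome_radius * v / np.linalg.norm(v)   # point on the screen
    d = point - np.asarray(head_offset, dtype=float)  # new line of sight
    d /= np.linalg.norm(d)
    yaw = np.arctan2(d[1], d[0])
    pitch = np.arcsin(d[2])
    return yaw, pitch
```

Note how the correction shrinks as the dome radius grows, which is why parallax from small head motions matters most in compact dome setups.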
Visibility of monocular symbology in transparent head-mounted display applications
M. Winterbottom, R. Patterson, B. Pierce, et al.
With increased reliance on head-mounted displays (HMDs), such as the Joint Helmet Mounted Cueing System and the F-35 Helmet Mounted Display System, research concerning visual performance has also increased in importance. Although monocular HMDs have been used successfully for many years, a number of authors have reported significant problems with their use. Certain problems have been attributed to binocular rivalry when differing imagery is presented to the two eyes. With binocular rivalry, the visibility of the images in the two eyes fluctuates: one eye’s view becomes dominant, and thus visible, while the other eye’s view is suppressed, and this alternates over time. Rivalry is almost certainly created when viewing an occluding monocular HMD. For semi-transparent monocular HMDs, however, much of the scene is binocularly fused, with additional imagery superimposed in one eye. Binocular fusion is thought to prevent rivalry. The present study was designed to investigate differences in visibility between monocularly and binocularly presented symbology at varying levels of contrast while viewing simulated flight over terrain at various speeds. Visibility was estimated by measuring the presentation time required to identify a test probe (tumbling E) embedded within other static symbology. Results indicated that there were large individual differences, but that performance decreased with decreased test probe contrast under monocular viewing relative to binocular viewing conditions. Rivalry suppression may reduce the visibility of semi-transparent monocular HMD imagery. However, factors such as contrast sensitivity and masking, and conditions such as monofixation, will be important to examine in future research concerning the visibility of HMD imagery.
Visual fatigue induced by optical misalignment in binocular devices: application to night vision binocular devices
Maria Gavrilescu, Josephine Battista, Michael R. Ibbotson, et al.
The additional, and perhaps unnatural, eye movements required to fuse misaligned binocular images can lead to visual fatigue and decreased task performance. The eyes have some tolerance to optical misalignment; however, a survey of the scientific literature reveals a wide range of recommended tolerances but offers little supporting experimental evidence. Most experimental studies are based on small numbers of participants exposed to brief periods of optical misalignment, so these published tolerance limits might have limited relevance for long-duration exposure to misaligned binocular devices. Prolonged use of binocular devices may cause visual fatigue irrespective of binocular alignment, especially for complex tasks such as night-vision flying. This study attempts to identify the measures most sensitive to misalignment in order to establish relevant tolerance limits for in-service binocular night vision devices. Firstly, we developed a rugged and deployable test bench that can measure binocular alignment with a reproducibility error of less than 1 arcmin. The bench was used to identify and investigate the major factors affecting the stability of the optical misalignment over time. Our results indicated that the optical misalignment of a given device changed over time as a function of the in-service usage and thermal history of the device. Secondly, participants were exposed to experimentally controlled levels of optical misalignment typical of those measured on in-service binocular night vision devices. The visual fatigue of each participant was assessed via a set of oculomotor parameters. The oculomotor parameters showing high sensitivity to optical misalignment were compared for subjects exposed to extended periods of misalignment in a baseline reading task and in a task using an actual night vision device.
Enabling Technologies
Enhancing head and helmet-mounted displays using a virtual pixel technology
Head- and helmet-mounted displays use pixels to present a digitized approximation of the real world. Compared with a monitor or projected image, these displays must have a higher pixel density to create the same level of perceived resolution. This paper proposes a virtual pixel technology that incorporates a virtual pixel creation function: each physical pixel’s configuration is based on the virtual pixels to which it contributes, allowing lower-pixel-density display hardware to approximate a higher pixel density. The paper provides an overview of the proposed technology, its applicability to head- and helmet-mounted displays, and related design considerations.
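The abstract does not disclose the virtual pixel creation function itself, but the general idea of driving a small number of physical pixels from a denser virtual grid can be sketched as area-weighted resampling: each physical pixel takes the average of the virtual pixels it overlaps, weighted by the fraction of each virtual pixel it covers. The 1-D function below is a hypothetical illustration of that mapping, not the paper's method.

```python
def physical_from_virtual(virtual_row, n_physical):
    """Map a 1-D row of virtual pixel intensities onto fewer physical
    pixels by area-weighted averaging (illustrative sketch only)."""
    m = len(virtual_row)
    scale = m / n_physical                  # virtual pixels per physical pixel
    out = []
    for p in range(n_physical):
        lo, hi = p * scale, (p + 1) * scale  # span covered by physical pixel p
        total = 0.0
        v = int(lo)
        while v < hi and v < m:
            # overlap of virtual pixel [v, v+1) with the physical span [lo, hi)
            overlap = min(hi, v + 1) - max(lo, v)
            total += virtual_row[v] * overlap
            v += 1
        out.append(total / scale)           # normalize by physical pixel width
    return out

row = [0, 0, 255, 255, 255, 0, 0, 0]        # 8 virtual pixels
print(physical_from_virtual(row, 5))        # resampled to 5 physical pixels
```

A real display would apply such a weighting in two dimensions, and possibly per subpixel, but the principle is the same: the physical pixel values are chosen so that the displayed image approximates the higher-density virtual image.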
Poster Session
Development of a helmet/helmet-display-unit alignment tool (HAT) for the Apache helmet and display unit
William McLean, Jonathan Statz, Victor Estes, et al.
Project Manager (PM) Apache Block III contacted the U.S. Army Aeromedical Research Laboratory (USAARL), Fort Rucker, Alabama, requesting assistance in evaluating, and finding solutions for, a government-developed Helmet Display Unit (HDU) device called the Mock HDU, used for helmet alignment of the Apache Advanced Integrated Helmet (AAIH). The AAIH is a modified Head Gear Unit No. 56 for Personnel (HGU-56/P) intended to replace the current Integrated Helmet and Sighting System (IHADSS). The current flashlight-based HDU simulator for helmet/HDU alignment was no longer in production or available. Proper helmet/HDU alignment is critical for positioning the right eye within the small HDU eye box to obtain image alignment and the full field of view (FOV). The PM’s initial approach to developing a helmet/HDU fitting device (the Mock HDU) was to duplicate the optical characteristics of the current tactical HDU using less complex optics; however, the result exhibited questionable alignment, FOV, and distortion issues, along with cost and development-time overruns. After evaluating the Mock HDU, USAARL proposed a cost-effective, less complex optical design called the Helmet/HDU Alignment Tool (HAT). This paper presents the development, components, and evaluations of the HAT compared with the current flashlight-based HDU simulator device. The laboratory evaluations included FOV measurements and alignment accuracies compared with tactical HDUs. Apache helmet fitter technicians and Apache pilots compared the HAT with the current flashlight-based HDU simulator and ranked the HAT superior.