Conference 11765

Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) II

Digital Forum: On-demand now
Sessions:
  • Sunday Morning Poster Session
  • Sunday Afternoon Poster Session
  • 1: Display Engine Architectures for AR, VR, and Smart Glasses
  • 2: Optical Combiner Architectures for Smart Glasses
  • 3: Custom AR Optics Production Techniques
  • 4: Novel Materials for AR/MR Optics
  • 5: User Experience with AR / Smart Glasses
  • 6: Novel Display Architectures Improving Visual Comfort
  • 7: Sensing Technologies Improving the AR Experience
  • Poster Session
Session LIVE: Sunday Morning Poster Session
28 March 2021 • 8:00 AM - 9:00 AM PDT
The poster session will be hosted on the Remo platform, allowing visitors to move freely between presentations, meet the authors, and ask questions about their research. Take this opportunity to meet your colleagues and coauthors online.
Session LIVE: Sunday Afternoon Poster Session
28 March 2021 • 5:00 PM - 6:00 PM PDT
The poster session will be hosted on the Remo platform, allowing visitors to move freely between presentations, meet the authors, and ask questions about their research. Take this opportunity to meet your colleagues and coauthors online.
Session 1: Display Engine Architectures for AR, VR, and Smart Glasses
11765-1
Author(s): Franz Fidler, Anna Balbekova, Louahab Noui, Sebastian Anjou, TriLite Technologies GmbH (Austria); Thomas Werner, TriLite Technologies GmbH (Germany); Jörg Reitterer, TriLite Technologies GmbH (Austria)
Digital Forum: On-demand
Display performance is a major criterion in the quest to deliver consumer-ready, high-quality extended reality (XR) glasses. Compared with other display technologies such as LCOS, OLED, or micro-LED, laser beam scanners (LBS) are among the most promising high-dynamic-range, full-colour display engines, e.g., because the size of these devices remains unchanged when display resolution and field of view are increased. This paper addresses the potential benefits and pitfalls of using LBS in XR and compares next-generation laser beam scanning devices with other display technologies.
11765-2
Author(s): Oleg Petrak, Fabian Schwarz, Leon Pohl, Marcel Reher, Christian Janicke, Jan Przytarski, OQmented GmbH (Germany); Frank Senger, Jörg Albers, Thorsten Giese, Lars Ratzmann, Fraunhofer-Institut für Siliziumtechnologie ISIT (Germany); Peter Blicharski, Stephan Marauska, Thomas von Wantoch, Ulrich Hofmann, OQmented GmbH (Germany)
Digital Forum: On-demand
We present a high-resolution AR micro-display based on laser beam scanning (LBS), using a two-dimensional, resonantly operated, vacuum-packaged MEMS mirror with a large mirror diameter, high scan frequencies, a high Q-factor, and a large field of view (FoV). The image is projected to the retina through a diffractive waveguide, yielding a comfortably large eyebox. Advanced control algorithms and image processing methods accurately drive, sense, and control the biaxial resonant MEMS mirror and optimize image projection quality. Because the mirror diameter is sufficiently large, this micro-display needs no beam expansion optics between the MEMS mirror and the waveguide, enabling an ultra-compact projection unit. Operating the MEMS mirror resonantly in both axes and exploiting the significant advantage of a hermetic vacuum package effectively reduces energy loss through damping and thus minimizes drive voltage and power consumption. The display setup demonstrates the successful realization of a small-form-factor, high-resolution micro-projector that meets key requirements for enabling fashionable AR smart glasses.
11765-3
Author(s): Jörg Reitterer, Zhe Chen, Anna Balbekova, Gerhard Schmid, Gregor Schestak, Faraj Nassar, Manuel Dorfmeister, Matthias Ley, TriLite Technologies GmbH (Austria)
Digital Forum: On-demand
Laser beam scanners (LBS) are an emerging micro-display technology for augmented reality (AR) head-mounted displays (HMDs), enabling small-form-factor, low-power display units with a large field of view (FOV) and daylight-bright luminance that are compatible with a wide range of optical combiner technologies such as waveguide or holographic combiners. We have developed an ultra-compact, lightweight LBS comprising an integrated laser module, a single 2D micro-electro-mechanical systems (MEMS) mirror, and a molded interconnect device (MID). The compact integrated laser module contains red, green, and blue (RGB) semiconductor laser diodes (LDs) and a common system of microlenses for beam collimation, all enclosed in a single hermetically sealed package. The three LDs are mounted onto a single submount using a novel high-precision laser die-bonding technique. This high-precision LD placement allows the use of collimation lenses that collimate all three laser beams simultaneously, in contrast to separate lenses with additional active alignment steps for each color. No additional optical components such as mirrors and dichroic beam combiners are required; instead, the color channels are overlapped on a pixel-by-pixel basis by a “software beam combination” laser pulse timing algorithm. Both the laser module and the MEMS mirror are assembled on an MID with a printed circuit board (PCB), which is connected to a driver board including a video interface. We also give an outlook on future generations of fully mass-manufacturable LBS systems with even smaller form factors.
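To illustrate the pulse-timing idea, the sketch below (a hypothetical construction, not TriLite's algorithm; the scan amplitude, resonance frequency, and angular offsets are all assumed values) computes when each colour would need to fire on a sinusoidal resonant scan so that all three beams land on the same pixel:

```python
import numpy as np

theta_amp = np.radians(20.0)   # mechanical scan half-angle (assumed)
f_scan = 24e3                  # fast-axis resonance frequency, Hz (assumed)
omega = 2 * np.pi * f_scan

def time_at_angle(theta):
    """Time in the first quarter-period at which the sinusoidal scan
    theta(t) = theta_amp * sin(omega * t) passes through 'theta'."""
    return np.arcsin(np.clip(theta / theta_amp, -1.0, 1.0)) / omega

# Fixed angular offsets of green and blue relative to red (assumed, radians):
offsets = {"R": 0.0, "G": np.radians(0.05), "B": np.radians(-0.04)}
target = np.radians(5.0)       # scan angle at which this pixel should appear

t_red = time_at_angle(target - offsets["R"])
for colour, off in offsets.items():
    # Each colour fires when the mirror points where that colour must land.
    dt = time_at_angle(target - off) - t_red
    print(f"{colour}: fire {1e9 * dt:+7.1f} ns relative to red")
```

With these assumed numbers the required offsets come out at tens of nanoseconds, which is why the abstract's ~100 ns-scale pulse timing suffices to overlap the channels without any dichroic optics.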
11765-4
Author(s): Nicholas W. Melena, Joshua T. Wiersma, CP Display (United States)
Digital Forum: On-demand
Diffractive waveguides are the most commonly used AR/MR combiner technology due to their low cost, attractive form factor, and large achievable FOV. We show that current and anticipated diffractive combiners only support étendues up to approximately 6.2 mm²sr, equivalent to no larger than a 0.34” display operating at f/2. At this display diagonal, maintaining 60 PPD for retinal resolution over a 50°-diagonal FOV requires a 3-µm pixel pitch. We illustrate how future combiners with larger FOVs and higher resolutions as well as the rise of inorganic microLEDs (micro-iLEDs) will require even smaller pixel pitches.
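The quoted figures can be checked with a short calculation. The sketch below assumes a 16:9 panel and the common étendue definition G = A·π·NA² with NA = 1/(2·f/#); both are my assumptions, not statements from the paper, but they reproduce the ~6.2 mm²sr and ~3 µm numbers:

```python
import math

f_number = 2.0
diag_mm = 0.34 * 25.4                      # 0.34" display diagonal in mm
aspect = (16, 9)                           # assumed aspect ratio

norm = math.hypot(*aspect)
w = diag_mm * aspect[0] / norm
h = diag_mm * aspect[1] / norm
area_mm2 = w * h

na = 1.0 / (2.0 * f_number)
etendue = area_mm2 * math.pi * na**2       # mm^2 sr
print(f"etendue ~ {etendue:.1f} mm^2 sr")  # ~6.3, close to the quoted 6.2

# 60 pixels/degree over a 50-degree diagonal FOV -> 3000 pixels on the diagonal
pixels_diag = 60 * 50
pitch_um = diag_mm * 1000 / pixels_diag
print(f"required pixel pitch ~ {pitch_um:.1f} um")  # ~2.9 um, i.e. ~3 um
```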
11765-5
Author(s): Pablo Benítez, Juan C. Minano, Univ. Politécnica de Madrid (Spain); Dejan Grabovickic, Julio Chaves, Milena Nikolic, Pablo Zamora, Marina Buljan, Ruben Mohedano, Eduardo Sanchez, Limbak (Spain); Juan C. Gonzalez, Univ. Politécnica de Madrid (Spain)
Digital Forum: On-demand
Reducing the size of virtual reality head-mounted displays is key to improving user comfort, but it is a particularly complex design problem because of the very large field of view needed for immersion. Multichannel optics can achieve high compactness together with high transmission efficiency and high contrast; at LIMBAK, their high-performance design makes intensive use of freeform optical surfaces, increased resolution via variable magnification, dynamic mapping control, and super-sampling via pixel interlacing. This presentation will cover the growing variety of geometries, how to address their challenges, and a vision of their future.
Session 2: Optical Combiner Architectures for Smart Glasses
11765-6
Author(s): Jianghao Xiong, Guanjun Tan, Tao Zhan, Shin-Tson Wu, CREOL, The College of Optics and Photonics, Univ. of Central Florida (United States)
Digital Forum: On-demand
Traditional waveguide displays face an upper FOV limit of about 60° due to the constraint on propagation angles in the total-internal-reflection process. In this paper, we propose a new display, named the scanning waveguide display, that achieves a large eyebox and a large FOV simultaneously. The display uses an off-axis lens array with an extremely low f/# as the out-coupler. The lens array is fabricated by cholesteric liquid crystal polarization holography. We demonstrate a diagonal FOV of 100°, which far exceeds the theoretical limit of conventional waveguide displays.
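The ~60° ceiling mentioned above follows from the angular band that survives total internal reflection inside the guide. A minimal sketch of that standard argument, assuming a symmetric FOV about the normal and a practical 75° maximum propagation angle (both my assumptions, not from the paper):

```python
import math

def max_fov_deg(n, theta_max_deg=75.0):
    """Approximate in-air FOV a single diffractive waveguide can carry:
    the usable band of in-glass angles runs from the TIR critical angle
    up to a practical maximum propagation angle."""
    theta_c = math.asin(1.0 / n)             # TIR critical angle
    sin_band = n * (math.sin(math.radians(theta_max_deg)) - math.sin(theta_c))
    if sin_band >= 2.0:
        return 180.0
    return 2.0 * math.degrees(math.asin(sin_band / 2.0))

for n in (1.5, 1.8, 2.0):
    print(f"n = {n}: FOV ~ {max_fov_deg(n):.0f} deg")
# n = 2.0 gives roughly 56 deg, consistent with the ~60 deg limit cited;
# the paper's lens-array out-coupler sidesteps this bound entirely.
```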
11765-7
Author(s): Louahab Noui, Jörg Reitterer, Michael Schöffmann, TriLite Technologies GmbH (Austria)
Digital Forum: On-demand
The quest continues for user acceptance of augmented reality (AR) displays in terms of performance and comfort. One of the technical challenges is the optical design of compact, lightweight optics capable of comfortably projecting an augmented image onto the user's line of sight. In this paper we present insight into how a next-generation laser beam scanner (LBS) developed by TriLite Technologies can be integrated with different combiners and implemented in different AR display and smart glasses architectures.
11765-8
Author(s): Ting-Wei Huang, Ya-Chi Chung, Yi-Chen Lai, Ying-Hsien Sung, Yun-Shou Hsu, National Taipei Univ. of Technology (Taiwan); Wen-Chang Hung, ASUSTeK Computer Inc. (Taiwan); Chien-Yi Huang, ASUSTeK Computer Inc. (Taiwan); Yu-Chieh Cheng, National Taipei Univ. of Technology (Taiwan)
Digital Forum: On-demand
Diffraction gratings, central to one of the most promising near-eye display designs, struggle to satisfy the essential requirements without sacrificing output efficiency or restricting the direction of incidence, owing to the low diffraction efficiency of higher orders at normal incidence. Here, we propose dielectric metagratings that deflect light to larger angles with high efficiency within a field of view of 54 degrees. In this paper, using the proposed model of the eye-imaging system, we present optimal metagrating designs for a diffractive total-internal-reflection combiner or diffractive exit-pupil expanders.
11765-9
Author(s): Valter Drazic, Oksana Shramkova, Bobin Varghese, Laurent Blondé, Valérie Allié, InterDigital Inc (France)
Digital Forum: On-demand
There are multiple challenges in realizing waveguide-based surface relief gratings (SRGs) for combiners in augmented reality applications: fabricability, efficiency, and diffraction uniformity. InterDigital develops SRGs based on a near-field phenomenon, using edge waves (EW) to design highly efficient gratings with high angular robustness. Our design features shallow symmetrical structures optimized for coupling very large fields of view into first- or second-order modes.
11765-10
Author(s): Vladimir N. Borisov, Nikolay V. Muravyev, Roman A. Okun, Aleksandr E. Angervaks, Gavril N. Vostrikov, Mikhail V. Popov, SAMSUNG R&D Institute Russia (Russian Federation)
Digital Forum: On-demand
We report a novel, state-of-the-art waveguide architecture for augmented reality displays based on diffractive optical elements, offering an increased field of view, together with a method for the analytical design of such an architecture. We discuss what makes the architecture unique, its configuration, analytical derivations of the diffractive-optical-element parameters, and modeling results. The architecture satisfies market demands for form factor, size, and weight, and allows up to a fourfold increase in field of view compared with conventional solutions.
Session 3: Custom AR Optics Production Techniques
11765-11
Author(s): Marc A. Verschuuren, Jeroen Visser, Rob Voorkamp, SCIL Nanoimprint Solutions (Netherlands)
Digital Forum: On-demand
Substrate Conformal Imprint Lithography (SCIL) overcomes the limitations of PDMS soft-stamp NIL techniques (resolution, pattern deformation, overlay) and allows low-pressure, wafer-scale conformal contact and sub-10 nm resolution using a novel silicone rubber stamp. SCIL has demonstrated direct replication of sub-50 nm patterns in silica over 200 mm wafers with stamp lifetimes over 500 imprints and, for AR, NIL resists with a refractive index of up to n=1.96 and overcoat layers of up to n=2.1. Slanted grating patterns can be replicated in multiple orientations across the wafer. First results of full 300 mm wafer imprints will be shared.
11765-15
Author(s): Vincent Einck, Jim Watkins, Amir Arbabi, Andrew McClung, Mahsa Torfeh, Mahdad Mansouree, Univ. of Massachusetts Amherst (United States)
Digital Forum: On-demand
11765-16
Author(s): Xavier Rottenberg, Denis Marcon, Roelof Jansen, Bruno Figeys, Kristof Lodewijks, Anabel De Proft, Philippe Helin, Veronique Rochus, Cedric Rolin, Robert Gehlhaar, Jan Genoe, Haris Osman, Paul Heremans, Philippe Soussan, Sandeep Saseendran, Aleksandrs Marinins, Bart Vereecke, Nga Pham, Deniz S. Tezcan, imec (Belgium)
Digital Forum: On-demand
The coming of age of AR, VR, and MR applications and usage scenarios relies on ever-improved light management systems, both for sensing (camera) and actuation (display): e.g., solid-state dToF, FMCW scanning or flash LiDAR, polarimetric imaging or resettable structured-light illumination for 3D mapping, directional imagers for light-field registration, and plasmonic or dielectric color filters and directors for efficient spectroscopic information acquisition. Indeed, optics remains the dominant user-interface modality, while large portions of the required information can be retrieved in the optical domain. These systems rely on the emergence of mature mass-manufacturing integrated photonics platforms in the near-infrared and visible wavelength ranges. This presentation introduces imec's developments of diffractive components for reflective, transmissive, and guided applications on opaque (Si/CMOS) and transparent (quartz) substrates, relying on sub-wavelength nano-patterning techniques (from DUV dry and wet (immersion) lithography through 200 mm wafer-scale e-beam, nano-imprint lithography, and block-co-polymer patterning to EUV), a novel CMOS-compatible material toolbox beyond Si and SiN (passive, active, resettable, and tunable), and high-aspect-ratio re-filling that enables stacking optical features into complex functional systems. In particular, we will report on pixel-integrated Fresnel phase plates for local eQE optimization, process-complexity trade-offs enabled by optical metamaterials, aspherical and non-cylindrical optical components for directed light, tunable structured-light scanners, plasmonic and dielectric color filters and directors, an optical beamformer in the near-infrared, a sub-wavelength spatial light modulator in the visible, and finally novel developments for 2D optical waveguides.
Session 4: Novel Materials for AR/MR Optics
11765-17
Author(s): Martin Eibelhuber, Christine Thanner, Dominik Treiblmayr, EV Group (Austria); Sebastien Pochon, Oxford Instruments Plasma Technology Ltd. (United Kingdom); W. Frost, Oxford Instruments Plasma Technology (United Kingdom); Joao Ferreira, David Pearson, Stephanie Baclet, Oxford Instruments Plasma Technology Ltd. (United Kingdom)
Digital Forum: On-demand
Nanoimprint lithography (NIL) has emerged as a volume-proven manufacturing method used by many companies pioneering augmented reality (AR), optical sensors, and biomedical chips. Notably, permanent imprinting of complex (nano-)structures into functional polymer layers has been explored extensively. For photonic applications, and in particular for AR waveguides, the refractive index of the functional layer is the focus of many devices. Up to now, nanoimprinted polymer layers have reached a refractive index of 1.9. However, to further improve the waveguides, nanopatterned layers with a refractive index of 2.0 and above are desired, and the related developments are discussed in this work.
11765-18
Author(s): Peter C. Guschl, Pixelligent Technologies LLC (United States); Grace McClintock, Selina Monickam, Robert Wiacek, Pixelligent Technologies, LLC (United States); Serpil Gonen Williams, Pixelligent Technologies LLC (United States)
Digital Forum: On-demand
High-refractive-index formulations are the key to enabling the widest-FoV waveguides. Films made with Pixelligent PixClear® titanium dioxide (TiO2) nanocrystals, with a mean particle diameter of 20 nm, dispersed in an acrylate-based binder demonstrate refractive index values as high as 1.96 at 589 nm. In addition, these films maintain high transparency across the visible spectrum (400–700 nm) and exhibit low haze and low absorbance. These products can be applied using nanoimprint lithography (NIL) and inkjet processes.
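For a rough sense of why such nanocrystal dispersions matter, a Maxwell Garnett effective-medium estimate can be used to gauge the particle loading required. The sketch below assumes rutile-like particles (n ≈ 2.6) in an n ≈ 1.5 acrylate binder; these indices are my assumptions, not figures from the paper:

```python
def maxwell_garnett_fill(n_matrix, n_particle, n_target):
    """Volume fraction of inclusions predicted by the Maxwell Garnett
    mixing rule to reach a target effective refractive index."""
    em, ei, ee = n_matrix**2, n_particle**2, n_target**2
    lhs = (ee - em) / (ee + 2 * em)
    rhs = (ei - em) / (ei + 2 * em)
    return lhs / rhs

f = maxwell_garnett_fill(1.50, 2.60, 1.96)   # assumed indices at 589 nm
print(f"~{100 * f:.0f} vol% TiO2")           # ~48 vol%: a very high loading
```

A loading near half the film volume is only practical if the particles stay well dispersed and far smaller than the wavelength, which is consistent with the emphasis on 20 nm particles, low haze, and low absorbance.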
11765-19
Author(s): Friedrich-Karl Bruder, Johannes Frank, Sven Hansen, Alexander Lorenz, Christel Manecke, Richard Meisenheimer, Covestro AG (Germany); Jack Mills, Covestro LLC (United States); Lena Pitzer, Igor Pochorovski, Thomas Rölle, Covestro AG (Germany)
Digital Forum: On-demand
Bayfol® HX photopolymer films have proven themselves as easy-to-process recording materials for volume holographic optical elements (vHOEs) and are available at industrial scale. Their full-color (RGB) recording and replay capabilities are two of their major advantages. Bayfol® HX is compatible with plastic processing techniques like thermoforming, film insert molding, and casting. It has therefore made its way into augmented reality applications such as head-up displays (HUD) and head-mounted displays (HMD), free-space combiners, plastic optical waveguides, and transparent screens. Bayfol® HX can be adapted to a variety of applications. To open up access to more of them, we address sensitization into the near-infrared region (NIR) and increase the achievable index modulation n1 beyond 0.06. In this paper, we report on our latest developments in these fields.
11765-20
Author(s): Stephan Prinz, Markus Brehm, Isabel Pilottek, Patrick Heissler, DELO Industrie Klebstoffe GmbH & Co. KGaA (Germany)
Digital Forum: On-demand
UV- and heat-curing polymers are essential building blocks of state-of-the-art electronics assembly. Optical light guides benefit from their high refractive index, optical stability, and form stability in high-volume nano-imprinting. The smallest sensor assemblies and reflow-compatible packages for 3D sensing can be realized with polymers whose thermo-mechanical and optical properties are tuned. Conductive polymers are enabling mass manufacturing of small-form-factor MicroLED displays. Finally, the latest adhesive technology facilitates innovative design possibilities for appealing augmented and virtual reality headset form factors by bonding all the individual building blocks.
11765-12
Author(s): Frederik Bachhuber, Berthold Lange, Ruediger Sprengard, Stefan Weidlich, Olaf Claussen, Ute Woelfel, Bianca Schreder, Clemens Ottermann, Simone Ritter, Zhengyang Lu, SCHOTT AG (Germany)
Digital Forum: On-demand
Waveguide technology is widely believed to be the most promising approach to realizing affordable and fully immersive augmented reality (AR) / mixed reality (MR) devices. For all major technology platforms (diffractive, reflective, or holographic), specialty-grade high-index optical glass is the central component for achieving key features of AR devices, such as field of view, MTF, and weight. We will provide insights into SCHOTT's roadmap for dedicated glass development for the AR sector and discuss the latest achievements of high relevance for the industry. Producing an optical glass that enables AR devices to enter the consumer market is a game of trade-offs between the desired properties.
Session 5: User Experience with AR / Smart Glasses
11765-22
Author(s): Brandon Hellman, Ted Lee, The Univ. of Arizona (United States); Jae-Hyeung Park, Inha Univ. (Korea, Republic of); Yuzuru Takashima, The Univ. of Arizona (United States)
Digital Forum: On-demand
Desirable fields of view, angular resolutions, and form factors of near-to-eye AR/VR/MR displays require order-of-magnitude increases in the pixel count and pixel density of spatial light modulators (SLMs). We present an in-plane angular-spatial light modulation technique that increases the independent output display pixels of a DMD by three orders of magnitude, achieving gigapixel output from a sub-megapixel device. Pulsed illumination synchronized to the DMD's micromirror actuation realizes pixel-implemented, diffraction-based angular modulation, and fine source-array control increases angular selectivity. The gigapixel output is demonstrated in a 1440-perspective display, each perspective having the DMD's full native XGA resolution, across a 43.9°×1.8° viewing angle. 8-bit multi-perspective videos at 30 FPS are demonstrated, and pixel-implemented multi-focal-plane image generation is realized. Implications for near-to-eye displays are discussed.
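The headline numbers are easy to verify; assuming XGA means 1024×768 (the usual definition), a short check:

```python
# Arithmetic check of the "gigapixel" claim: 1440 independent perspectives,
# each at the DMD's native XGA resolution.
perspectives = 1440
xga = 1024 * 768                 # assumed XGA pixel count
total = perspectives * xga
print(f"{total / 1e9:.2f} Gpx")  # ~1.13 Gpx from a ~0.8 Mpx device
print(f"{43.9 / perspectives:.3f} deg between perspectives")  # ~0.03 deg
```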
11765-23
Author(s): Dmitry Shmunk, Almalence Inc. (United States)
Digital Forum: On-demand
The key aspect of AR/VR is immersivity, which occurs when all the senses are engaged. When designing a near-eye display to provide immersivity for the most important sensory system, the human visual system, the challenge is to obtain both high imaging quality and compactness. Conventional optical designs cannot resolve the mutually contradictory requirements of modern AR/VR systems: achieving low weight, small footprint, and low cost while providing higher resolution and reduced optical aberrations. Real-time eye-tracking measurements can be used to modify the near-eye display's visual data and augment optical system performance, reducing the distortions caused by the physical constraints of AR/VR systems. In this paper, we describe typical AR/VR optical system deficiencies and present methods to overcome them with the help of eye simulation and eye tracking. These methods provide higher effective image resolution with reduced optical aberrations, resulting in improved image quality and a more immersive user experience.
11765-24
Author(s): John David Prieto Prada, Cheol Song, Daegu Gyeongbuk Institute of Science & Technology (Korea, Republic of)
Digital Forum: On-demand
Decreasing hand tremor is crucial for sensitive micromanipulation during microsurgery. Virtual reality (VR) technology plays an important role in many biomedical applications, enabling subjects to gain valuable experience in high-accuracy tasks. This study proposes a VR-based handheld-gripper system combined with a long short-term memory (LSTM) architecture. Our system shows an image of forceps in a virtual space and merges it with an LSTM model to precisely track the tool's position. We applied the LSTM as sensor fusion between a VR controller and an inertial measurement unit, and compared the LSTM model with alternatives such as gated recurrent units (GRU) and the raw VR-controller data. The models were trained against reference data from a linear motor attached to a stage, with the training data covering the different velocities and accelerations provided by the linear motor control. Experimental results indicate that the LSTM provides better precision in both stationary and dynamic scenarios.
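As a sketch of the sensor-fusion setup described, the following minimal PyTorch model fuses a controller-pose stream with IMU channels; the feature layout, window length, and layer sizes are my assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    def __init__(self, in_features=9, hidden=64):
        # in_features: e.g. 3 controller-position + 3 accel + 3 gyro channels
        super().__init__()
        self.lstm = nn.LSTM(in_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)    # refined 3D tool-tip position

    def forward(self, x):                   # x: (batch, time, in_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])        # position at the last time step

model = FusionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step against linear-motor reference positions (dummy data here).
x = torch.randn(32, 50, 9)                  # 50-sample sensor windows
y = torch.randn(32, 3)                      # reference from the motorized stage
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```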
Session 6: Novel Display Architectures Improving Visual Comfort
11765-26
Author(s): Xuan Wang, Hong Hua, The Univ. of Arizona (United States)
Digital Forum: On-demand
The integral imaging (InI) display method is considered one of the most promising 3D display techniques due to its relatively low hardware requirements. However, one main limitation of the conventional micro-InI display method is the trade-off between spatial resolution and view number. We propose a time-multiplexed structure to overcome this limitation. By utilizing a switchable aperture array or a programmable backlight array, an InI display with a large view number and high spatial resolution can be achieved in a time-multiplexed fashion. The new structure can render as many as 4-by-4 views within a 6 mm eyebox while providing an angular resolution of about 1.2 arcmin.
11765-27
Author(s): Ran Gabai, Gil Cahana, Meni Yehiel, Gady Yearim, Telman Yusupov, Adi Baram, Matan Naftali, Maradin Ltd. (Israel)
Digital Forum: On-demand
Depth perception, which is based on visual cues, is an important building block of user experience in AR/VR/MR applications. Current head-mounted displays (HMDs) cannot provide enough visual cues to create a convincing 3D perception. This work presents a novel display design and an algorithm that combine two types of visual cues to improve 3D perception. The method can modify both the local image resolution and detail level and the projected image's focal plane. The projector is based on laser scanning by MEMS mirrors. The presented system has a small form factor and a simplified optical configuration.
11765-28
Author(s): Suyeon Choi, Yifan Peng, Stanford Univ. (United States); Jonghyun Kim, Stanford Univ. (United States), NVIDIA Corp. (United States); Gordon Wetzstein, Stanford Univ. (United States)
Digital Forum: On-demand
Holographic displays have recently shown remarkable progress as a research field. However, images reconstructed by existing display systems using phase-only spatial light modulators (SLMs) exhibit noticeable speckle and low contrast due to non-trivial diffraction efficiency losses. In this work, we investigate a novel holographic display architecture that uses two phase-only SLMs to enable high-quality, contrast-enhanced display experiences. Our system builds on emerging camera-in-the-loop optimization techniques that capture both diffracted and undiffracted light on the image plane with a camera and use this to update the hologram patterns on the SLMs in an iterative fashion. Our experimental results demonstrate that the proposed display architecture can deliver higher-contrast holographic images with little speckle, without the need for extra optical filtering.
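A schematic of the camera-in-the-loop update described above might look as follows. This is a toy propagation model with a straight-through surrogate gradient; every detail (model, loss, sizes) is an assumption for illustration, not the authors' implementation:

```python
import torch

def propagate(phase1, phase2):
    """Stand-in differentiable model of dual-SLM free-space propagation."""
    field = torch.exp(1j * phase1) * torch.exp(1j * phase2)
    inten = torch.fft.fft2(field).abs() ** 2
    return inten / inten.mean()             # normalized toy far-field intensity

def capture_from_camera(phase1, phase2):
    """Placeholder for displaying the phases and grabbing a camera frame."""
    with torch.no_grad():
        return propagate(phase1, phase2)    # real system: hardware capture

target = torch.rand(256, 256)
p1 = torch.zeros(256, 256, requires_grad=True)
p2 = torch.zeros(256, 256, requires_grad=True)
opt = torch.optim.Adam([p1, p2], lr=0.05)

for step in range(200):
    sim = propagate(p1, p2)
    cam = capture_from_camera(p1, p2)
    # Straight-through surrogate: the loss value comes from the camera image,
    # but gradients flow through the simulated model.
    loss = torch.mean((sim + (cam - sim).detach() - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the surrogate is that physical effects the simulator misses (SLM nonidealities, undiffracted light) still show up in the loss, so the iterative updates compensate for them.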
11765-29
Author(s): Tomas Sluka, Alexander Kvasov, Tomas Kubes, Jonathan Masson, Alexandre Fotinos, Grégoire Smolik, Grigore Suruceanu, Selman Ergunay, Alexis Michoud, Gregoire Hirt, Patrick Kabengera, Joel Comminot, CREAL SA (Switzerland)
Digital Forum: On-demand
State-of-the-art VR/AR hardware fails to deliver a satisfying visual experience due to missing or conflicting focus cues. The absence of natural focal depth in digital 3D imagery causes the so-called vergence-accommodation conflict and focal rivalry, and possibly damages eyesight, especially during prolonged viewing of virtual objects within arm's reach. It remains one of the most challenging and market-blocking problems in the VR/AR arena today. This talk will introduce CREAL's unique near-to-eye light-field projection system, which provides high-resolution 3D imagery with fully natural focus cues. The system operates without eye tracking and without severe penalties in image quality, rendering load, power consumption, data bandwidth, form factor, production cost, or complexity.
11765-45
Author(s): Tatjana Pladere, Artis Luguzis, University of Latvia (Latvia); Roberts Zabels, Rendijs Smukulis, LightSpace Technologies (Latvia); Viktorija Barkovska, Linda Krauze, Vita Konosonoka, Aiga Svede, Gunta Krumina, University of Latvia (Latvia)
Digital Forum: On-demand
Inconsistency between binocular and focus cues in stereoscopic augmented reality overburdens the visual system, placing it under stress. In this study, we show that individuals with low convergent fusional reserves and a receded near point of convergence misjudge the spatial layout in augmented reality to a greater extent than others when the depth cues are contradictory. However, the perceptual judgments improve in the consistent-cues condition. We suggest that binocular function measures might be used as predictors of user gain in the comparative assessment of new visualization systems.
Session 7: Sensing Technologies Improving the AR Experience
11765-30
Author(s): Janne Simonen, Tommi Björk, Tommi Nikula, Optofidelity Oy (Finland); Kalle Ryynänen, OptoFidelity, Inc. (Finland)
Digital Forum: On-demand
Reliable world locking in AR/MR head-mounted devices (HMDs) is critical for stable anchoring of virtual objects in the real world. We describe a method for measuring the positioning accuracy of an HMD and its virtual images. A 3D pattern of unique virtual identification markers is captured in real time using a six-degrees-of-freedom motion system with high-precision encoders and a time-synchronized imaging system. In addition to world-locking accuracy, temporal parameters such as motion-to-photon latency, jitter, and drift can be computed. The results can be used to improve or recalibrate the positioning software and hardware of the HMD.
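One common way to extract motion-to-photon latency from such time-synchronized traces is cross-correlation of the encoder motion against the virtual-marker motion seen by the camera. A minimal sketch with synthetic signals (the sample rate, motion profile, and injected delay are assumptions, not OptoFidelity's implementation):

```python
import numpy as np

fs = 1000.0                                  # shared sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
encoder = np.sin(2 * np.pi * 1.5 * t)        # motion-system encoder trace
latency_true = 0.046                         # ground-truth delay injected (assumed)
photon = np.sin(2 * np.pi * 1.5 * (t - latency_true)) \
         + 0.05 * np.random.randn(t.size)    # marker motion seen by the camera

# Peak of the cross-correlation gives the delay of 'photon' behind 'encoder'.
xc = np.correlate(photon - photon.mean(), encoder - encoder.mean(), "full")
lag = (np.argmax(xc) - (t.size - 1)) / fs
print(f"estimated motion-to-photon latency: {1000 * lag:.0f} ms")
```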
11765-31
Author(s): Pawel E. Malinowski, Jiwon Lee, Yunlong Li, Epimitheas Georgitzikis, Vladimir Pejovic, Itai Lieberman, Tom Verschooten, Steven Thijs, imec (Belgium); Orges Furxhi, imec USA - Florida (United States); Paul Heremans, David Cheyns, imec (Belgium)
Digital Forum: On-demand
In this paper, we present short-wave infrared (SWIR) image sensors with high pixel density. A quantum-dot (QD) photodiode stack is monolithically integrated on a custom 130 nm node CMOS readout circuit. A state-of-the-art pixel pitch of 1.82 µm is demonstrated in focal plane arrays sensitive in the eye-safe region above 1400 nm wavelength. Thin-film photodiode (TFPD) technology will facilitate the realization of ultra-compact SWIR sensors for future XR applications, including eye-safe tracking systems and enhanced vision.
Session PS: Poster Session
11765-33
Author(s): Roberts Zabels, Rendijs Smukulis, Lightspace Technologies, SIA (Latvia); Ralfs Fenuks, Lightspace Technologies (Latvia); Andris Kuciks, Hansamatrix Innovation, SIA (Latvia); Elza Linina, Lightspace Technologies (Latvia); Krišs Osmanis, Ilmars Osmanis, Lightspace Technologies, SIA (Latvia)
Digital Forum: On-demand
It is foreseen that the most convenient hardware for depicting augmented reality (AR) will be optical see-through head-mounted displays. Current systems use a single focal plane and inflict the vergence-accommodation conflict on the human visual system, limiting wide acceptance. In this work, we analyze an optical see-through AR head-mounted display prototype with four focal planes operating in time-sequential mode, mitigating the limitations of single-focal-plane devices. Nevertheless, the optical see-through nature demands very short motion-to-photon latency to avoid noticeable misalignment between the digital content and the real-world scene. The prototype display relies on a commercial visual-SLAM spatial tracking module (Intel RealSense T265), and within this work we analyzed factors improving motion-to-photon latency with the given hardware setup. Performance analysis of the T265 module revealed slight translational and angular jitter, on the order of <1 mm and <15 arcseconds, and a velocity readout of a few cm/s from a completely still IMU. The experimentally determined motion-to-photon and render-to-photon latencies were 46±6 ms and 38 ms, respectively. To overcome IMU positional jitter, pose averaging with a variable-width averaging window was implemented, with the window size adjusted according to the immediate acceleration and velocity data. For pose prediction, a basic rotational-axis offset model was verified: based on prerecorded head movements, a training model reduced the error between the predicted and actually recorded pose. The optimization parameters were the offset values of the IMU's rotational axis, translational and angular velocity, and angular acceleration. As expected, the most accurate predictions weighted velocities most heavily, followed by angular acceleration; the role of the offset values was not significant. To further improve the perceived experience and reduce motion-to-photon latency, we consider investigating simple trained neural networks for more accurate real-time pose prediction, as well as content-driven adaptive image output that overrides the default order of image-plane output in the time-sequential sequence.
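The variable-width pose averaging described above can be sketched as follows; the thresholds and window sizes are assumed values, not the paper's. The window is wide when the headset is still (suppressing jitter) and narrow during fast motion (avoiding lag):

```python
from collections import deque
import numpy as np

class AdaptivePoseFilter:
    def __init__(self, max_window=30, min_window=2,
                 vel_still=0.02, vel_fast=0.5):   # m/s thresholds (assumed)
        self.buf = deque(maxlen=max_window)
        self.min_w, self.max_w = min_window, max_window
        self.v0, self.v1 = vel_still, vel_fast

    def update(self, pose, speed):
        """pose: (x, y, z); speed: current velocity magnitude from the IMU."""
        self.buf.append(np.asarray(pose, dtype=float))
        # Map speed to a window size: wide when still, narrow when moving.
        t = np.clip((speed - self.v0) / (self.v1 - self.v0), 0.0, 1.0)
        w = int(round(self.max_w + t * (self.min_w - self.max_w)))
        recent = list(self.buf)[-max(w, 1):]
        return np.mean(recent, axis=0)

f = AdaptivePoseFilter()
print(f.update((0.001, 0.0, 0.0), speed=0.01))   # near-still: heavy smoothing
print(f.update((0.200, 0.0, 0.0), speed=0.6))    # fast: follows latest poses
```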
11765-34
Author(s): Xianlin Song, Nanchang Univ. (China); Jianshuang Wei, Huazhong Univ. of Science and Technology (China); Lingfang Song, Nanchang Normal Univ. (China)
Digital Forum: On-demand
We used a holographic method to generate a fringe pattern whose period and phase can be easily modulated. First, a black-and-white fringe pattern with a certain spatial frequency is generated according to the period and phase of the cosine structured light. A prism phase is then applied to the black part of the stripes, so that light incident on the black area deviates from the optical axis into the first diffraction order. In this way, the output alternates between bright and dark regions, and structured illumination is finally obtained. Theory and experiments verify the effectiveness of the method.
11765-36
Author(s): Tobias Steinel, Dmitrijs Opalevs, Ferdinand Deger, Roland H. Schanz, Martin Wolf, Instrument Systems Optische Messtechnik GmbH (Germany)
Digital Forum: On-demand
Novel display technologies for AR/VR/MR devices require advanced imaging systems to control quality during assembly and to calibrate the device. Such calibration yields an optimal match of the left and right eye and allows color management for an accurate match of real-world and virtual objects. For accurate measurement, such an imaging system should be based on a lens system that mimics the human eye and has a similar-sized entrance pupil in front of the lens. Traditionally, two approaches have been proposed, sampling spot measurements and camera-based solutions, and both have their advantages. Spot measurements allow goniometric sampling of the retina, which is not an imaging plane but a hollow sphere. In this paper we present a novel lens design with a two-stage external aperture to accurately measure the pupil-fill factor, especially at the edge of the eyebox. Imaging systems have other advantages and allow one-shot measurement, which makes a new set of evaluations possible, e.g., temporal evaluations and the evaluation of spatial distortions.
11765-38
Author(s): Ted L. Lee, Chuan Luo, Brandon Hellman, Yuzuru Takashima, The Univ. of Arizona (United States)
Digital Forum: On-demand
The recently reported “angular spatial light modulator” (ASLM) light engine, which uses pulsed illumination synchronized to a digital micromirror device (DMD), shows significant promise for enhancing the pixel counts of near-to-eye displays (NEDs) without increasing package volume, but it requires an uncommon illumination driver. We present a field-effect-transistor-based constant-current driver that is fast, compact, and scalable to RGB illumination. Its digital-to-analog converter modulates intensity on the fly for illumination-based multiplexing. The driver outputs 100 ns pulses at repetition rates up to 24 kHz. The circuit is demonstrated with two laser diodes and with two LEDs in an ASLM-enhanced pixel-count display.
11765-39
Author(s): Tobias Bro, Brian Bilenberg, Andrej Mironov, Jesper Fly, Srdjan Acimovic, Grigory Skoblin, Ankit Bisht, Niklas Hansson, Vladimir Miljkovic, NIL Technology ApS (Denmark)
Digital Forum: On-demand
To meet the demand for augmented reality displays with a larger field of view, a large eyebox, and better image quality, large-area diffraction gratings are needed. Across the industry, different types of surface relief gratings for in-coupling and out-coupling are used in waveguide designs to achieve optimum waveguide performance; typical gratings are slanted, blazed, binary, and multi-level. NIL Technology offers solutions for all of the above-mentioned grating types, meeting the market's demands for quality and size, in particular for the output gratings.
11765-40
Author(s): Hiroki Kase, Shizuoka Univ. (Japan), ANSeeN Inc. (Japan); Junichi Nishizawa, Shizuoka Univ. (Japan); Katsuyuki Takagi, Toru Aoki, Shizuoka Univ. (Japan), ANSeeN Inc. (Japan)
Digital Forum: On-demand
Augmented Reality (AR) is a new form of human-machine interaction that superimposes digital information on the real-world environment. AR technology has the ability to organize much of the digital information from X-ray CT imaging. This paper proposes a new system with which a user can project a 3D X-ray AR CT image on the screen of a device such as a smartphone or tablet. In the future, the system will be combined with pseudo-3D color display technology based on photon-counting X-ray CT imaging.
11765-41
Author(s): Anastasiia Ivaniuk, Bauman Moscow State Technical University (Russian Federation); Anastasiia Kalinina, Moscow Institute of Physics and Technology (Russian Federation)
Digital Forum: On-demand
People with visual impairments rarely use augmented reality displays without prescription optics, which limits the use of AR devices. In this paper, we demonstrate a customized AR display design that takes the user's prescription into account and improves visual comfort in cases of myopia, hyperopia, astigmatism, and presbyopia. The AR display has a waveguide-based architecture with an embedded reflective combiner for virtual image transfer. Both the waveguide substrate and the reflective combiner are designed with standard surface types (spherical/aspheric), which makes the design convenient for mass production and broad adoption. The proposed design has a field of view of 42.75° diagonally and a thickness of less than 6 mm.
11765-42
Author(s): Chen Gao, State Key Lab. of Modern Optical Instrumentation (China); Yifan Peng, Stanford Univ. (United States); Haifeng Li, Xu Liu, Zhejiang Univ. (China)
Digital Forum: On-demand
Recently, glass-free light-field displays with multi-layer architectures have gradually entered the commercial stage. For near-eye displays, however, light-field rendering still suffers from expensive computational costs and can hardly achieve an acceptable framerate for real-time display. This work develops a novel light-field display pipeline that uses two gaze maps to reconstruct display patterns with a foveated-vision effect. With GPU acceleration and emerging eye-tracking techniques, the gaze cone can be updated instantaneously. The experimental results demonstrate that the proposed display pipeline can support near-correct retinal blur with foveated vision at a high framerate and low computational cost.
11765-44
Author(s): Heeseh Lee, SAMSUNG Electronics Co., Ltd. (Korea, Republic of); Junhwan Lee, Sung-Hoon Hong, SAMSUNG Electronics Co., Ltd. (Korea, Republic of)
Digital Forum: On-demand
We previously developed an augmented reality (AR) 3D head-up display (HUD) system[1] for vehicles that can match a 3D arrow object with roads in the real world at distances from 3.7 m to 70 m. Current autostereoscopic[2] (glasses-free) 3D displays[3] suffer from the 3D crosstalk problem, in which optical phenomena such as light bleeding incompletely separate the stereo images[4]. As a result, accurate AR graphics do not reach both eyes, and the user does not perceive a 3D stereoscopic effect. There are two existing post-processing methods for reducing crosstalk, blurring the image or lowering the brightness, and both reduce image quality. In contrast, we solve the problem without reducing image quality or HUD brightness by covering the 3D crosstalk area with a newly generated image (a crosstalk concealer) that depends on the distance of the arrow object, the outdoor luminance, and the brightness of the HUD. The width of the crosstalk concealer is determined by the change in disparity with the distance of the object on the HUD virtual screen; its opacity is adjusted according to the external brightness and HUD brightness. The environmental conditions considered in this study include the external light-source brightness, HUD brightness, arrow-object distance, and arrow-object size, and the system was optimized to maintain HUD brightness and clarity while eliminating crosstalk.
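The geometry behind sizing such a concealer can be sketched briefly. For eyes with baseline b viewing a virtual screen at distance D, a point rendered at distance d produces an on-screen disparity of |b·(D − d)/d|, and the concealer must at least cover that width. In the sketch below, the virtual-screen distance and interpupillary distance are my assumed values, not figures from the paper:

```python
def disparity_m(d, D=7.0, b=0.065):
    """On-screen disparity in metres for an object at distance d (m),
    seen on a virtual screen at D (m) with eye baseline b (m)."""
    return abs(b * (D - d) / d)

for d in (3.7, 7.0, 20.0, 70.0):
    print(f"object at {d:5.1f} m -> disparity {1000 * disparity_m(d):5.1f} mm")
# Disparity (and hence the required concealer width) vanishes where object
# and screen distances coincide and grows as the arrow moves away from the
# virtual screen in either direction.
```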
Conference Chair
Microsoft Corp. (United States)
Conference Chair
Sony Corp. (Japan)
Program Committee
Univ. of California, Berkeley (United States)
Program Committee
Univ. Politécnica de Madrid (Spain)
Program Committee
Univ. of Rochester (United States)
Program Committee
SA Photonics, Inc. (United States)
Program Committee
Facebook Technologies, LLC (United States)
Program Committee
Microsoft Research Cambridge (United Kingdom)
Program Committee
College of Optical Sciences, The Univ. of Arizona (United States)
Program Committee
Openwater (United States)
Program Committee
Centro de Investigaciones en Óptica, A.C. (Mexico)
Program Committee
Microsoft Corp. (United States)
Program Committee
Stanford Univ. (United States)
Program Committee
The Institute of Optics (United States)
Program Committee
Harvard Univ. (United States)