Improving image quality of 360-degree viewable holographic display system by applying a speckle reduction technique and spatial filtering
Paper 10676-20
Time: 6:00 PM - 7:30 PM
Author(s): Yongjun Lim, Keehoon Hong, Hayan Kim, Minsik Park, Jin-Woong Kim, Electronics and Telecommunications Research Institute (Korea, Republic of)
Recently, we proposed a 360-degree viewable holographic display system [1]. Building on this work, we apply a speckle reduction technique and a spatial filtering method to improve the image quality of the 360-degree viewable holographic display system. After capturing the reconstructed holograms of a specially designed pattern, we demonstrate the effectiveness of the speckle reduction technique and the spatial filter.
Design of a full color large field-of-view see-through near-to-eye display
Paper 10676-100
Time: 6:00 PM - 7:30 PM
Author(s): Yang Jianming, Patrice Twardowski, Philippe Gérard, Joël Fontaine, Univ. de Strasbourg (France)
Design of a freeform gradient-index prism for mixed reality head mounted display
Paper 10676-101
Time: 6:00 PM - 7:30 PM
Author(s): Anthony J. Yee, Wanyue Song, Nicholas Takaki, Tianyi Yang, Yang Zhao, Yun Hui Ni, S. Yvonne Bodell, Jannick P. Rolland, Julie L. Bentley, Duncan T. Moore, Univ. of Rochester (United States)
Freeform prism systems are commonly used in head mounted display systems for augmented and virtual reality. They have a wide variety of applications, from scientific uses such as medical visualization to defense uses such as flight helmet information. The advantage of the freeform prism design over other designs is its ability to achieve a very large field of view and a low f-number while maintaining a small and lightweight form factor. This work looks at implementing a freeform gradient-index (GRIN) material in the freeform prism design to improve performance, increase field of view (FOV), and/or decrease package size/form factor through the use of 3D-printable polymers and glasses. Current designs typically employ a homogeneous material such as polymethyl methacrylate (PMMA). Using a GRIN material gives the designer extra degrees of freedom by allowing a variable material refractive index within the prism. For example, previous work has shown that a PMMA/polystyrene plastic GRIN can be used to correct residual lateral color in an eyepiece while maintaining the same weight as homogeneous PMMA. The GRIN material also allows light to bend within the material instead of only reflecting off the surfaces. With this bending and the varying refractive index, the total internal reflection (TIR) surface requirement of the system can be more easily satisfied. The addition of GRIN can relax the packaging constraints of the system, make the system easier to manufacture, and increase the FOV that fits in a comparable size. A prism with freeform GRIN is designed with a FOV of greater than 40°, eye relief of 18.25 mm, an eyebox of 8 mm, and performance greater than 10% at 30 lps/mm.
Holographic glasses with dynamic eyebox
(Canceled)
Paper 10676-102
Time: 6:00 PM - 7:30 PM
Author(s):
We propose a novel near-eye display that is capable of:
1. Providing per-pixel focus cues within a large depth range,
2. Providing a dynamic eyebox by using a novel pupil-steering HOE,
3. Retaining a compact form factor.
In simulation, the system provides a FOV of 60 degrees, high resolution (in theory, diffraction limited), high brightness, and a 7 mm x 7 mm eyebox, while retaining a light and compact form factor. We expect the largest component to be the SLM, aside from the image combiner, which is essentially a thin film. The proposed system is inspired by several preceding works, especially Maimone’s work on holographic display (2017) and Jang’s work on pupil-tracked light field generation (2017). We overcome the drawbacks of both systems by combining their advantages. We also propose a novel pupil-steering HOE (PSHOE) that could be a breakthrough in holographic optical element design. We expect the proposed design to suggest a way to elevate the performance of next-generation AR displays.
Optical design, assembly, and characterization of a holographic head mounted display
Paper 10676-103
Time: 6:00 PM - 7:30 PM
Author(s): Anne Gärtner, Ernst-Abbe-Hochschule Jena (Germany); Ralf Häussler, SeeReal Technologies GmbH (Germany); Burkhard Fleck, Ernst-Abbe-Hochschule Jena (Germany); Hagen Stolle, SeeReal Technologies GmbH (Germany)
Here we present the development and investigation of a holography-based HMD that uses the proprietary Viewing-Window (VW) technology of the company SeeReal Technologies to generate a holographic scene. The VW technology reconstructs only those parts of the wavefront that match the eye pupil; consequently, a VW is reconstructed. Each object point of the holographic scene is encoded by subholograms (SHs) in limited areas of the display. The size of an SH depends on the location of the object point and the size of the VW. Information outside the SHs is not encoded, because it does not reach the VW and is thus not required.
Considering various specification requirements, such as field of view (FOV) and resolution, the HMD system was developed using the optical design software Zemax. A prototype HMD was set up in the laboratory, and several tests, e.g. of the resolution limit, the FOV, and the spatial resolution of the holographic reconstruction, were conducted to investigate its performance.
The HMD system reaches a resolution limit of 2.2 cycles/mm at a distance of 1500 mm between observer and image plane. The magnification of the system was determined to be 7.2x and the FOV is 4.1° in the horizontal and 2.3° in the vertical direction. By tuning the focus of the camera it was demonstrated that the holographic reconstruction is spatially resolved in three dimensions.
Taking into account all technical specifications and restrictions, the image quality of the HMD was evaluated with good results. The holographic imaging technique has been demonstrated to be suitable for avoiding the conflict between accommodation and convergence of the eye and is thus a promising path for future HMD technology.
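As a quick sanity check (this calculation is illustrative, not taken from the abstract), the reported resolution limit of 2.2 cycles/mm at a 1500 mm viewing distance corresponds to an angular resolution of roughly 1 arcmin per cycle at the observer's eye:

```python
import math

# Convert the reported spatial resolution limit (2.2 cycles/mm, measured at
# 1500 mm between observer and image plane) into an angular resolution.
cycles_per_mm = 2.2
distance_mm = 1500.0

period_mm = 1.0 / cycles_per_mm                 # width of one cycle in the image plane
angle_rad = math.atan(period_mm / distance_mm)  # angle subtended by one cycle
angle_arcmin = math.degrees(angle_rad) * 60.0

print(f"{angle_arcmin:.2f} arcmin per cycle")   # ~1.04 arcmin
```

This is close to the commonly cited 1 arcmin acuity of the human eye, which is consistent with the authors' favorable assessment of image quality.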
Mitigating vergence-accommodation conflict for near-eye displays via deformable beamsplitters
Paper 10676-104
Time: 6:00 PM - 7:30 PM
Author(s): David Dunn, Praneeth Chakravarthula, Qian Dong, Henry Fuchs, The Univ. of North Carolina at Chapel Hill (United States)
Deformable beamsplitters have been shown as a means of creating a wide field of view, varifocal, optical see-through, augmented reality display. Current systems suffer from degraded optical quality at far focus and are tethered to large air compressors or pneumatic devices which prevent small, self-contained systems. We present an optimization on the shape of the curved beamsplitter as it deforms to different focal depths and show methods for manufacturing a membrane to achieve those shapes. Our design also demonstrates a step forward in reducing the form factor of the overall system.
Designing of a monocular see-through smart glass imaging system
Paper 10676-105
Time: 6:00 PM - 7:30 PM
Author(s): Tatiana A. Koneva, Galina E. Romanova, ITMO Univ. (Russian Federation)
Augmented reality systems are becoming very popular nowadays. One example of such a system is a monocular see-through smart glass display. Smart glasses look like ordinary eyeglasses but can greatly simplify everyday life: they can quickly find and visualize information, show a map and navigate in real time, and perform many other useful tasks.
In this type of system, a microdisplay is used as the image generator. We used an AMLCD microdisplay with 640x480 resolution and a 7.2x5.4 mm display size. The pixel size is 11.25x11.25 microns. This provides the required angular pixel resolution of at most 1.5 arcmin and a diagonal field of view of at least 20 degrees. To relay the image from the microdisplay to the eye, we used several mirrors.
We have considered a monocular see-through smart glass imaging system with the AMLCD microdisplay as the source of the superimposed image. Thus, the main objective of the research is to design and analyze an optical architecture for the monocular smart glass display and to provide the required characteristics.
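The quoted microdisplay figures are self-consistent, as a short check (illustrative arithmetic, not part of the abstract) shows: a 7.2 mm wide, 640-pixel panel gives the stated 11.25 µm pitch, and 800 diagonal pixels at 1.5 arcmin each give exactly the 20° diagonal field of view:

```python
import math

# Check the stated geometry of the AMLCD microdisplay:
# 640x480 pixels on a 7.2 mm x 5.4 mm panel, 1.5 arcmin per pixel,
# and a diagonal field of view of at least 20 degrees.
h_px, v_px = 640, 480
width_mm = 7.2

pitch_um = width_mm / h_px * 1000.0   # pixel pitch in microns
diag_px = math.hypot(h_px, v_px)      # pixels along the panel diagonal
diag_fov_deg = diag_px * 1.5 / 60.0   # 1.5 arcmin per pixel, in degrees

print(round(pitch_um, 2))   # 11.25
print(diag_px)              # 800.0
print(diag_fov_deg)         # 20.0
```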
Multiple reflection bug-eye design
Paper 10676-106
Time: 6:00 PM - 7:30 PM
Author(s): Alexis Benamira, Institut d'Optique Graduate School (France)
Adaptive focus for AR glasses based on eye-tracking and/or eye's lens analysis, and real-time image processing
(Canceled)
Paper 10676-107
Time: 6:00 PM - 7:30 PM
Author(s):
One of the main issues in AR/VR systems is fitting the displayed image to the accommodation and vergence of the user's eyes. Indeed, the quality of 3D immersion relies on three parameters:
-the stereoscopic vision
-the impression of depth given by the occlusion of objects (closer objects occlude farther ones) or by the apparent angular size of the same object; this impression is due to parallax.
-and the focus of the eyes.
This third point is quite complex to render in AR and VR systems. Indeed, the device needs to acquire the exact orientation of the user's eyes and their accommodation, and then recalculate the displayed image in real time as a function of these data. I found two ways to acquire the user's eye accommodation and orientation:
-eye tracking
-eye's lens analysis
and three ways to render a focused image:
-image processing by intelligent blurring
-image processing by FFT operations
-optical auto-focus embedded on the AR device upstream of the image projection.
All these solutions will be presented, along with their performance and complexity. Then I'll present my design, the technologies I selected, and why I chose them.
Design and optimization of a large FOV and low-distortion head mounted display
Paper 10676-108
Time: 6:00 PM - 7:30 PM
Author(s): Young Liu, Yi Zhuang, John He, China Jiliang Univ. (China); Changlun Hou, Hangzhou Dianzi Univ. (China); Haiyan Qi, China Jiliang Univ. (China)
We present a compact, wide-angle, lightweight, optical see-through head-mounted display system integrating freeform elements. The use of freeform elements can broaden the FOV, which is the critical limitation of existing HMD designs that use rotationally symmetric optics. The optimized system achieves a field of view (FOV) of 40°x20°, and the angular resolution is less than 0.16 arcmin with DMD display channels. With an integrated design, portability and reliability of this display were achieved, which makes mass production possible.
A reflective prism for augmented reality with large field of view
Paper 10676-109
Time: 6:00 PM - 7:30 PM
Author(s): Bo Chen, Univ. Stuttgart (Germany)
Design of a spatially multiplexed light field display on curved surfaces for VR HMD applications
Paper 10676-110
Time: 6:00 PM - 7:30 PM
Author(s): Tianyi Yang, Nicholas Kochan, Samuel J. Steven, Greg Schmidt, Julie L. Bentley, Duncan T. Moore, Univ. of Rochester (United States)
See-through smart glass with adjustable focus
Paper 10676-111
Time: 6:00 PM - 7:30 PM
Author(s): Hossein Shahinian, Todd Noste, Nicholas Sizemore, Clark Hovis, Prithiviraj Shanmugam, Nicholas Horvath, Dustin Gurganos, The Univ. of North Carolina at Charlotte (United States)
The design proposed in this abstract, for a monocular see-through smart glass (Design Challenge #1), leverages a varifocal lens to accommodate human eyes with different focusing abilities. The eyepiece is made of two separate segments. The first segment, hereafter called EP-1, is very similar to a cube beamsplitter with two main distinctions: (1) the surface from which light exits has the prescription of an Alvarez lens, and (2) the surface parallel to the exit surface has a concave shape. An LED-based projector emits chromatic light into EP-1. The beamsplitter guides the projected image toward the freeform Alvarez surface, from which the rays travel toward the eye. Between the eye and the beamsplitter, a complementary Alvarez lens is placed, hereafter called EP-2. The EP-2 lens, combined with the effect of the first freeform surface, provides the focus variability. EP-2 is connected to the third surface of EP-1, a planar surface perpendicular to the surfaces that light travels through, using a thin rectangular flexure. The flexure provides small horizontal motions of EP-2 relative to EP-1. The other end of EP-2 is attached to a linear actuator, which provides the motion for tuning the focus of the system. The concave surface of the eyepiece is prescribed to align far-field rays with the projector rays reflected off the prism. The combination of EP-1 and EP-2, called EP, has a total aperture size of 10 mm. EP is fixed to the glass frame such that it sits in the lower portion of the user's field of view.
Design of augmented reality system based on DMD
Paper 10676-112
Time: 6:00 PM - 7:30 PM
Author(s): Gong Shaobin, Zhejiang Univ. (China)
Given the constraints on the structure and weight of smart glasses, a large-field-of-view, high-resolution imaging optical system based on a DMD is presented. Using Zemax software, the optical system is designed with the following features: exit pupil diameter 6 mm, exit pupil distance 25 mm, field of view 40° x 30°, back focal length >5 mm, and Modulation Transfer Function (MTF) >0.3 at 60 lp/mm. The design fully meets the design requirements while having a smaller volume and lighter weight.
Compact see-through near-eye display with depth adaption
(Canceled)
Paper 10676-113
Time: 6:00 PM - 7:30 PM
Author(s):
We propose a compact optical design for waveguide-based AR exploiting recently developed polarizing diffractive optical elements. It also enables depth control of the displayed content. The exit-pupil expansion can be realized through polarization management instead of gradient efficiency, allowing uniform ambient transmittance. We believe this work will have a positive impact on current waveguide-based AR devices.
Design of a gradient index waveguide for improved augmented reality systems
(Canceled)
Paper 10676-114
Time: 6:00 PM - 7:30 PM
Author(s):
In recent years, many design forms have been investigated for augmented reality (AR) systems. One of these design forms consists of a waveguide structure that directs light so as to generate a virtual image superimposed on the real environment. Even among AR systems using the waveguide approach, there are many design forms. Each of these waveguide designs faces unique challenges, ranging from color non-uniformity to limited field of view. This paper focuses on the design of a gradient index (GRIN) waveguide component for AR systems. A GRIN waveguide utilizes an axial index variation to guide the light along a more direct path toward the eye. By carefully designing the gradient index profile, it is possible to address multiple challenges that waveguide-based AR systems face. For example, one current AR design form utilizes multiple waveguides, each optimized to transmit a specific color to the eye. These multi-waveguide designs are difficult and expensive to manufacture. However, by utilizing the unique chromatic properties of a GRIN material, it is possible to eliminate the need for multiple waveguides and achieve the same color separation using a single GRIN waveguide. Additionally, the chromatic properties of GRIN materials could be used to alleviate some of the color non-uniformity issues exhibited by other waveguide design forms. Other potential benefits of a GRIN waveguide include improved transmission and field of view. Current designs are limited in these areas by the total internal reflection requirements necessary to contain the light within the waveguide and by the number of reflections the light undergoes on its path to the eye. With a GRIN waveguide, the number of reflections is decreased or even eliminated, which could have a positive impact on the transmission and field of view.
In this paper we investigate and report on the impact GRIN would have on improving transmission, increasing the field of view, and eliminating the need for multiple waveguides for AR systems.
Design of a head mounted display based on freeform surfaces prism and dual display sources
(Canceled)
Paper 10676-115
Time: 6:00 PM - 7:30 PM
Author(s): Siyue Feng, Yunbing Ji, Xiaoheng Wang, Hongliang Wang, Changchun Institute of Optics, Fine Mechanics and Physics (China)
In recent years, virtual reality technology has become a hot spot in the field of optical design. The head-mounted display is the key device in virtual reality technology, as it presents the simulated visual information. For comfortable wearing, the head-mounted display must have a small size and light weight. Head-mounted displays based on freeform surface prisms are favored by many businesses and users due to their compact size and light weight. At present, the field of view of such head-mounted displays is small, limited by the freeform surface type, so it is difficult to give users a strong sense of immersion. We designed a new freeform surface prism head-mounted display with z-axis symmetry. The head-mounted display consists of dual light sources and a prism. The dual light sources a1 and a2 are symmetric about the z axis. The prism consists of the freeform surfaces b1, b2, c1, c2, d1, and d2; the surfaces b1, c1, d1 and the surfaces b2, c2, d2 are symmetric about the z axis. The single prism of the new head-mounted display is equivalent to two conventional prisms whose central axes are tilted ±25 degrees from the z axis. Thus, the new prism head-mounted display has twice the field of view of the conventional freeform surface prism. The light beams emitted by the dual light sources a1 and a2 are refracted into the prism through the surfaces b1 and b2, respectively. The beams then undergo total internal reflection at the surfaces c1 and c2 and are reflected by the surfaces d1 and d2. Finally, the light passes through the surfaces c1 and c2 and enters the human pupil. The maximum field of view of the optical system reaches 100°, far greater than that of current freeform surface prism systems, providing a more realistic experience for users.
In addition to the merits of high image quality, compactness, and light weight that the ordinary prism head-mounted display has, the new freeform surface prism also has the advantage of expanding the field of view of current head-mounted displays, and it provides a new idea for the future development of head-mounted displays.
Design of a compact near-eye display system with wide field and high resolution
Paper 10676-116
Time: 6:00 PM - 7:30 PM
Author(s): Tao Xiao, Zhejiang Univ. (China)
Ultrathin full color visor with large field of view based on multilayered metasurface design
Paper 10676-117
Time: 6:00 PM - 7:30 PM
Author(s): Ori Avayu, Ran Ditcovski, Tal Ellenbogen, Tel Aviv Univ. (Israel)
Meta-resonance waveguide gratings as highly wavelength-selective optical combiners for augmented reality
(Canceled)
Paper 10676-118
Time: 6:00 PM - 7:30 PM
Author(s):
Resonant Waveguide Gratings (RWGs) are dielectric structures in which incident waves interfere with coupled modes to produce narrowband reflection anomalies. Because of their ease of fabrication and unique properties, such as the ability to work with incoherent white light, they have been industrialized in many fields, such as optical security, biochemical sensing, solar cells, and signal processing, to name a few. Here, we report a novel implementation of these elements as optical combiners for near-eye displays. In particular, the engineering of new patterns of RWGs, which we call Meta Resonant Waveguide Gratings (Meta-RWGs), makes it possible to create diffraction structures with high wavelength and angular selectivity and excellent transparency, as typically obtained with volume hologram technology. On the other hand, Meta-RWGs go beyond the fabrication limits of volume holograms, since they are compatible with scalable fabrication processes such as roll-to-roll nanoimprint lithography or high-speed hot-embossing replication on thin supportive films. Meta-RWGs can therefore be used to make optical combiners that can be integrated on standard prescription glasses with curved lenses. Because of their high wavelength and angular selectivity, optical combiners based on Meta-RWGs have high transparency and are therefore better see-through devices than combiners based on surface relief gratings or metasurfaces. We will show that these structures can be designed for large steering angles, allowing a large field of view, and are compatible with holographic displays and light-field displays.
A vergence accommodation conflict-free virtual reality wearable headset
Paper 10676-119
Time: 6:00 PM - 7:30 PM
Author(s): Simon Charrière, Louis Duveau, Institut d'Optique Graduate School (France)
The Vergence-Accommodation Conflict (VAC) is a major source of visual discomfort that occurs when one looks at a virtually displayed scene for too long. Our brain is used to the real world, where the accommodation distance and the vergence distance are always the same. In virtual reality, however, the 3D effect is created by varying vergence, which requires dissociating accommodation and vergence. This may cause headaches or even nausea after long exposure. Yet virtual reality displays could be very helpful in a large set of applications such as surgery, but only if the surgeon can operate for more than a dozen minutes a day.
There are two approaches to solve the VAC: static or dynamic systems.
We propose a dynamic VAC-free virtual reality display that is light and small enough to be worn, with the largest possible field of view, angular resolution, and image quality.
Our approach is to mimic what the eye does. The idea is to follow the user's gaze through the image in order to recreate, as close to reality as possible, the behavior of our stereoscopic vision. An eye tracker is used to determine the part of the scene being observed. Using a varifocal system, we conjugate the image at the vergence distance and blur out-of-focus objects to put the user in realistic conditions.
Recent technological advances allow the use of varifocal methods that were impossible just a few years ago because of the response time of several components. The main idea is not new (virtual reality displays were invented in the 1960s, the VAC was theoretically solved in the 1990s, and the basis of our work was proven efficient in the 2010s); our proposal combines several technical solutions and focuses on how to miniaturize the system in order to make it wearable.
Hopefully, soon enough, surgeons will finally be able to operate far away from the hospital thanks to a virtual reality head-set without any nauseous effects…
Ultrathin optical combiner with microstructure mirrors in augmented reality
Paper 10676-120
Time: 6:00 PM - 7:30 PM
Author(s): Miaomiao Xu, Hong Hua, The Univ. of Arizona (United States)
In this submission, we will demonstrate the design of an augmented reality (AR) system with an ultra-thin light guide as the optical combiner. It has a glasses-like form factor and consists mainly of two parts: the image collimating system and the light guide. The collimating system is a monolithic freeform prism consisting of multiple freeform surfaces. On one hand, it produces front illumination for a reflective liquid-crystal-on-silicon (LCoS) microdisplay. On the other hand, it collimates the image from the microdisplay to infinity, which ensures that the light from the microdisplay is efficiently coupled into the light guide at parallel incidence.
The light guide contains a wedge-shaped in-coupling part that lets the rays undergo TIR and propagate within the substrate. In the out-coupling region, there is an array of microstructure mirrors arranged periodically, with small flat transparent surfaces between the tapered micro-mirrors. The collimated image is coupled into the eye pupil by the micro-mirrors, whereas the outdoor scene can be seen through the flat surfaces within the eyebox, so the virtual image and the see-through scene are combined by this half-reflective, half-transmissive out-coupling structure.
Our innovation is to improve the image quality and uniformity over a wide field of view and eyebox with a more compact freeform design, while maintaining the image brightness and see-through efficiency. The microdisplay is an LCoS with LED backlight, with a diagonal size of 0.4 inch and a resolution of 1280 by 960. The system field of view can be about ±15 degrees, with an exit pupil diameter of at least 4 mm, which yields an angular resolution of about 1.4 arc minutes per pixel. The thickness of the light guide can be within 3 mm. With only one freeform prism and one waveguide piece, the overall system can be lightweight and very compact, with a glasses-like appearance and high-quality imaging performance.
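The quoted ~1.4 arcmin per pixel can be reproduced with a short calculation (illustrative, not from the abstract; it assumes the ±15° field spans the 1280-pixel horizontal direction of the panel):

```python
# Check the angular resolution quoted for the LCoS light-guide combiner:
# 1280x960 pixels over a +/-15 degree (30 degree total) field of view.
# Assumption (not stated in the abstract): the 30 degrees spans the
# 1280-pixel horizontal direction.
h_px = 1280
fov_deg = 30.0

arcmin_per_px = fov_deg * 60.0 / h_px  # arcminutes subtended per pixel
print(round(arcmin_per_px, 2))         # 1.41
```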
Wide field-of-view waveguide displays enabled by polarization-dependent metagratings
Paper 10676-121
Time: 6:00 PM - 7:30 PM
Author(s): Zhujun Shi, Wei Ting Chen, Federico Capasso, Harvard Univ. (United States)
Over-designed and under-performing: design and analysis of a freeform prism via careful use of orthogonal surface descriptions
Paper 10676-122
Time: 6:00 PM - 7:30 PM
Author(s): Nicholas Takaki, Wanyue Song, Anthony J. Yee, Julie L. Bentley, Duncan T. Moore, Jannick Rolland, Univ. of Rochester (United States)
One of the solutions to the augmented reality (AR) problem is the freeform prism. The freeform prism design is robust during use because it features fewer independently-moving elements. However, this also means fewer surfaces are available to improve optical performance.
Thus, for AR design in general (and freeform prisms specifically), designers need every tool available to achieve systems with high field of view, low f-number, small package, and high performance. For a freeform prism, one key choice is the mathematical basis describing the freeform surface. Descriptions range from XY polynomials to the broadly used Zernike polynomials to application-specific characterizations such as the freeform Q-polynomials introduced by Greg Forbes. In this work, we explore and compare designs with different mathematical surface descriptions.
Just as important is how that basis is used. Because of increased complexity of both freeform surface descriptions and of freeform aberration theory, designers may lean more heavily on the optical design software to improve nominal performance. This can lead to systems which are over-designed and very sensitive to small changes in parameters. Often, it is hard to undo such decisions; even the best tolerancing analysis cannot improve a system which is over-designed, overly sensitive, or poorly understood.
Therefore, this work focuses on design for analysis, manufacture, and assembly. In particular, we design a freeform prism with a diagonal FOV of greater than 40°, eye relief of 18.25 mm, an eyebox of 8 mm, and performance greater than 10% at 30 lps/mm. We use multiple surface representations and examine how different representations affect the system, including optical performance, analysis of aberrations, and a statistical analysis of tolerancing and sensitivity. Freeform aberration theory, including Nodal Aberration Theory, is used to understand the performance of the system. We also extend freeform aberration analysis to chromatic aberrations. We analyze tolerancing, manufacturability, and sensitivity.
Improving immersion of head mounted displays through optical design optimizations
(Canceled)
Paper 10676-123
Time: 6:00 PM - 7:30 PM
Author(s):
The emergence of new Virtual Reality (VR) and Augmented Reality (AR) technologies has changed the gaming and entertainment industry and has also shown great potential in engineering and medical applications. This has increased the importance of improving the immersive experience in the most natural way possible. Some hurdles, such as the Vergence-Accommodation Conflict (VAC) and the limited Field of View (FOV), need to be addressed. The aim of our work is to adapt the optics of a head-mounted display (HMD) in order to achieve an improved perceptual experience. Furthermore, we want to simulate and present the 3D environment in the most comprehensible way.
Shape Scanning Displays: Tomographic Decomposition of 3D Scenes
Paper 10676-124
Time: 6:00 PM - 7:30 PM
Author(s): Seungjae Lee, Youngjin Jo, Dongheon Yoo, Jaebum Cho, Dukho Lee, Byoungho Lee, Seoul National Univ. (Korea, Republic of)
Throughout the history of displays, there has been a desire to implement ideal three-dimensional (3D) displays. Ideal 3D displays would provide the full set of physiological cues, including binocular disparity, motion parallax, and focus cues. Although several technologies have been studied to realize ideal 3D displays, it is still challenging to satisfy commercial demands in resolution, depth of field, form factor, eye-box, field of view, and frame rate. Here, we propose tomographic near-eye displays that may have an extremely large depth of field (8.5 cm to infinity) without loss of frame rate or resolution, and a sufficient eye-box (10 mm) with a moderate field of view (30°). Tomographic near-eye displays consist of a tunable lens, a display panel (e.g. a liquid crystal panel), and a spatially modulated backlight (SMB). The tunable lens is driven by a triangle wave at 60 Hz so that it sweeps the focal range within a single frame. Within the single frame, the backlight is spatially modulated about 16-100 times while the display panel shows a static 2D image. The SMB independently illuminates pixels of the display panel so that each pixel is floated at the desired depth. According to the optical design of the SMB, the tomographic near-eye display may support up to 100 planes within the addressable depth of field. In summary, we introduce a novel 3D display technology, tomographic near-eye displays, that presents superior performance in resolution, depth of field, and focus cue reproduction. This approach has great potential to be applied to various fields of 3D displays, including head-up displays, tabletop displays, and head-mounted displays. It could be an efficient solution to the vergence-accommodation conflict, as it provides accurate focus cues.
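The quoted depth of field and plane count imply a focal-plane spacing that can be sketched in diopters (an illustrative calculation, not a figure stated in the abstract):

```python
# Illustrative figures for the tomographic display's focal sweep:
# the depth of field runs from 8.5 cm to optical infinity and is
# sliced into up to 100 backlight-modulated planes per frame.
near_m = 0.085               # nearest plane at 8.5 cm
dof_diopters = 1.0 / near_m  # infinity contributes 0 diopters
n_planes = 100

spacing_diopters = dof_diopters / n_planes
print(round(dof_diopters, 2))      # 11.76
print(round(spacing_diopters, 3))  # 0.118
```

A spacing of roughly 0.12 D per plane is well below the eye's typical depth-of-focus tolerance, which is consistent with the claim of accurate focus cue reproduction.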
Polarization-dependent metasurfaces for 2D/3D switchable displays
Paper 10676-125
Time: 6:00 PM - 7:30 PM
Author(s): Zhujun Shi, Federico Capasso, Harvard Univ. (United States)
We propose a 2D/3D switchable display design based on polarization-dependent metasurfaces. Metasurfaces are ultrathin planar optical devices patterned with subwavelength nanostructures. We design the metasurfaces such that they simultaneously deflect right-hand circularly polarized (RCP) light to an angle and transmit left-hand circularly polarized (LCP) light in the normal direction. Combined with an active polarization rotator, the device can be switched between a high-resolution 2D display mode and a multiview 3D display mode. Proof-of-principle metasurface designs are demonstrated. The far-field radiation patterns in the 2D and 3D modes are simulated and analyzed. The effects of spectral bandwidth and beam directionality are also discussed. Compared with liquid crystal lenses, the key element in previous 2D/3D switchable displays, metasurfaces 1) deliver more precise phase profile control, and thus fewer aberrations and higher image quality; 2) offer additional degrees of freedom in polarization manipulation; and 3) can be adapted to much smaller sizes.
High-performance integral-imaging-based light field augmented reality display
Paper 10676-126
Time: 6:00 PM - 7:30 PM
Author(s): Hekun Huang, Hong Hua, The Univ. of Arizona (United States)
Design and stray light analysis of a lenslet-array-based see-through light-field near-eye display
Paper 10676-127
Time: 6:00 PM - 7:30 PM
Author(s): Cheng Yao, Dewen Cheng, Yongtian Wang, Beijing Institute of Technology (China)
This study proposes an optical see-through light-field near-eye display (OST LF-NED) based on integral imaging (InI) using a discrete lenslet array (DLA). A light-field image rendered in real time with OpenGL is used as the image source. A special microdisplay array built on a transparent substrate serves as the screen, and the DLA acts as a spatial light modulator (SLM) that generates a dense light field of the 3D scene inside the eyebox of the system and provides correct focus cues to the user. The key to realizing the OST capability is that the microdisplays and the lenslets are both discretely arranged, so that light from the real world passes directly through the gaps between the microdisplays on the transparent substrate and then through the flat portions of the DLA panel, providing a clear view of the real world as well as the virtual information. A detailed analysis and numerical simulation of the stray light are conducted. The stray light can be totally eliminated within the eyebox region, regardless of the limitation of the lenslets' F-number. In practice, taking the F-number limitation into consideration, a trade-off is made between the size of the eyebox and the stray light. A ring-shaped aperture added to each lenslet further relieves the stray light by blocking screen light that passes by the outer edge of the lenslet. A static film-based prototype and a dynamic OLED-based prototype are implemented in the experiments. The experimental results show that the proposed method achieves a correct perception of depth for the virtual information and an OST view in augmented reality (AR) applications.
High-resolution head mounted display using stacked LCDs and birefringent lens
Paper 10676-128
Time: 6:00 PM - 7:30 PM
Author(s): Shuaishuai Zhu, Harbin Institute of Technology (China), Univ. of Illinois (United States); Peng Jin, Harbin Institute of Technology (China); Wei Qiao, Heilongjiang College of Education (China); Liang Gao, Univ. of Illinois (United States)
Head-mounted displays (HMDs) have shown huge market potential in recent years. In these devices, the vergence-accommodation conflict is a fundamental problem that causes viewer discomfort and fatigue. To overcome this limitation, researchers have proposed many solutions, including multi-focal-plane (MFP) displays and light-field (LF) displays. MFP displays project the input image onto four or more depth planes, while LF displays present a four-dimensional (4D) light field to the viewer's retina. These techniques are able to correct or nearly correct focus cues; however, they fail to achieve both a high image refresh rate and high lateral resolution in a compact architecture.
In this paper, we propose a compact HMD with correct focus cues, obtained by spatially projecting the input images onto four depth planes. In the proposed system, two stacked transparent liquid crystal displays (LCDs) provide two axially separated input images in an additive fashion. A liquid crystal panel placed behind the LCDs modulates the polarization of the light emitted from the LCDs at the pixel level. Based on a known depth map, the liquid crystal panel segments each input image into two regions labelled by two orthogonal polarization states. A birefringent lens then projects each input image onto two different depth planes according to the polarization state of the incident light. In this design, we obtain four images on different depth planes at 0 D, 1 D, 2 D, and 3 D.
Compared to existing techniques, the proposed HMD corrects focus cues with a compact architecture. Moreover, because there is no temporal multiplexing and no sacrifice of lateral resolution, the HMD can easily achieve a high image refresh rate and high lateral resolution. Herein, we present the optical design of the HMD and characterize its performance in Zemax.
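The 2 × 2 depth-plane scheme above can be summarized in diopter space: two stacked panels at two dioptric offsets, combined with the two focal powers a birefringent lens presents to orthogonal polarizations, give four plane positions. The sketch below is illustrative; the specific 1 D panel offset and 2 D polarization shift are hypothetical values chosen to reproduce the stated 0-3 D planes, not numbers from the paper.

```python
# Illustrative combination of panel depth offsets and polarization-dependent
# lens powers into four depth planes (hypothetical values, labeled below).

# Dioptric offsets of the two stacked LCDs (assumed 1 D apart):
panel_offsets_d = [0.0, 1.0]

# Ordinary vs. extraordinary rays see two lens powers, shifting the whole
# image stack by a further 2 D (assumed):
polarization_shifts_d = [0.0, 2.0]

depth_planes = sorted(p + s
                      for p in panel_offsets_d
                      for s in polarization_shifts_d)
# yields the four planes at 0 D, 1 D, 2 D, and 3 D
```

The design choice of a 2 D polarization shift interleaves the two panel images so the four planes are evenly spaced, matching the 1 D spacing stated in the abstract.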
A retinal-projection-based near-eye display for virtual reality
Paper 10676-129
Time: 6:00 PM - 7:30 PM
Author(s): Lantian Mi, Wenbo Zhang, Chao Ping Chen, Yuanchao Zhou, Yang Li, Bing Yu, Nizamuddin Maitlo, Shanghai Jiao Tong Univ. (China)
We propose a retinal-projection-based near-eye display (NED), also known as a retinal projection display (RPD), for virtual reality (VR). Our design is highlighted by an array of tiled organic light-emitting diodes (OLEDs) and a transmissive spatial light modulator (SLM). Its design rules are set forth in depth, followed by results and discussion regarding the field of view, angular pixel resolution, eye box and relief, brightness, full-color operation, modulation transfer function, contrast ratio, distortion, simulated imaging, and industrial design.
The working principle of our design is briefly explained as follows. The OLEDs in tandem with the SLM constitute an array of miniature projectors that deliver the virtual image directly to the retina. Since an individual OLED is much smaller than the SLM, each OLED can be regarded as a point light source. The function of the SLM is to modulate the intensity of the light emitted from the OLEDs; for high brightness, it should be highly transparent. As a viable option, a backlight-free monochrome liquid crystal display can be used for this purpose. So that the real image of the SLM is not directly perceived, the SLM must be placed outside the eye's range of accommodation. The eye then traces back along the extended virtual rays to see a magnified virtual image formed at the target distance. For full-color operation, a sequential color scheme is adopted, in which each primary color of the OLEDs is activated in sequence.
As opposed to conventional NEDs for VR, our RPD exhibits several unique features. First, instead of using an eyepiece or ocular lens, the RPD relies on the eye itself to image the virtual objects. Second, a lighter weight is expected, as no bulky lenses are needed. Third, the distance and size of virtual objects hinge on the status of the eye, including its diopter and pupil. Fourth, its ultra-large FOV could be a decisive advantage among NEDs.
Freeform optics design for augmented reality displays with OpticsStudio API tools
(Canceled)
Paper 10676-130
Time: 6:00 PM - 7:30 PM
Author(s):
See-through augmented reality head-mounted displays (STHMDs) have been regarded as a next-generation display technology with potential applications in navigation, education, and entertainment. Field of view (FOV) and light weight are two key factors for an STHMD, determining the user experience and production cost. Freeform optics are a good choice thanks to their many degrees of freedom in design, high-quality imaging performance, compact size, and low production cost. However, it is extremely difficult to control the boundary conditions of freeform surfaces, especially for optical engineers without much experience. In this work, we investigate several common boundary problems together with several merit functions for their evaluation, and we propose an adaptive weight method for controlling the imaging quality using the customized OpticsStudio API (ZOS-API) in ZEMAX. To verify the proposed method, we designed an STHMD with a single freeform surface. The FOV is larger than 40 degrees, the exit pupil diameter is 8 mm, and the modulation transfer function values across the entire field are above 0.2 at 9 line pairs/mm (lp/mm). This work provides an effective method for optical designers.
Understanding waveguide-based architecture and ways to robust monolithic optical combiner for smart glasses
Paper 10676-131
Time: 6:00 PM - 7:30 PM
Author(s): Vincent Brac de la Perriere, Ctr de Nanosciences et de Nanotechnologies (France)
With the emergence of augmented reality (AR) and virtual reality (VR) headsets during the past decade, companies and academic laboratories have worked on optical combiner designs to improve their performance and form factor.
Most of the smart glasses on the market have the asset of a small form factor, which eases the integration of the combiner into a head-worn device. Most of them (Google Glass, Vuzix) use a prism-like architecture, where the collimation and deflection of the light are performed by a single optical piece. This approach reduces the size and tolerance issues of the device. Other companies (Optinvent, Microsoft, Lumus) came up with waveguide architectures, in which the light is collimated by a lens or group of lenses, injected into a slab waveguide, and extracted in front of the user's eye. This way, the image is brought right in front of the eye, whereas prism-like architectures display the image in the user's peripheral vision.
These optical combiners, however, suffer from tight tolerances and fabrication complexity, as several pieces must be combined. The injection and extraction of image rays in the waveguide can be performed either by holograms or by slanted mirrors. Each technology has its drawbacks, but so far no satisfying results have been obtained with the holographic option, which introduces chromatic dispersion and thus degrades the MTF.
This work proposes a waveguide-type optical design for AR smart glasses. The investigation deals with a pupil-folding mechanism intended to optimize the form factor of the optical device. In parallel, leads on the design of a monolithic waveguide optical combiner are developed. To push the performance of the waveguide-like architecture, the problem is considered in three dimensions, taking the pupil-folding mechanism one step further.
Compact see-through AR system using buried imaging fiber bundles
Paper 10676-132
Time: 6:00 PM - 7:30 PM
Author(s): Simon Thiele, Philipp Geser, Harald Giessen, Alois Herkommer, Univ. Stuttgart (Germany)
This design concept uses multi-core imaging fiber bundles with small diameters (<500 µm) to transfer information from an image source (e.g., a laser pico projector) to the eye of the user. One of the main benefits of this approach is that the resulting glasses are almost indistinguishable from conventional eyewear. Not only are the fiber bundles very thin and positioned close to the eye, but their refractive index differs very little from that of the surrounding medium, which makes them essentially invisible. At the same time, they can carry a significant space-bandwidth product and are easier to fabricate than comparable solutions using waveguides or Fresnel-type extractors.
Using raytracing and wave-optical considerations, we show that such an approach can lead to highly inconspicuous AR glasses with a >20° diagonal field of view and a high angular pixel resolution.
Design of an immersive head mounted display with coaxial catadioptric optics
Paper 10676-133
Time: 6:00 PM - 7:30 PM
Author(s): Luo Gu, Dewen Cheng, Yongtian Wang, Beijing Institute of Technology (China)
The conflict among a large field of view, a large exit pupil, and a long eye relief in immersive VR head-mounted displays limits the likelihood that such bulky devices will be accepted by most consumers. In this paper, a wide-field-of-view coaxial catadioptric system is proposed to serve as the magnifying optics. It consists of a wire-grid polarizer, a quarter-wave plate, and a plano-convex lens, making the optics thinner and the fabrication easier. The surface of the wire-grid polarizer facing the image source is partially reflecting, and the spherical surface of the plano-convex lens is semi-reflecting. The plano-convex lens performs the imaging, while the wire-grid polarizer and the quarter-wave plate together adjust the polarization orientation so as to suppress direct transmission. To shrink the RMS spot diameter and reduce the off-axis aberrations, one or more surfaces can be replaced by aspherics. The resulting monocular optical system yields a field of view as large as 110°, with the virtual image exhibited 5 meters from the observer. The exit pupil diameter and eye relief of the designed system are 10 mm and 15 mm, respectively, and the MTF is higher than 0.2 at 8 lp/mm. Considerable distortion inevitably appears as the field of view approaches 100°, which can be tackled via electronic correction. Evaluated in Code V, the weight of the whole optics is about 38 grams, and the distance from the element nearest the observer to the image source is as small as 25 mm, showing that the suggested design is superior, in weight and volume, to designs based on a singlet, a doublet, aspherics, or a Fresnel lens. With this monocular optical system, a binocular HMD optical system is achieved with partially overlapping fields of view.
Ultra-compact pancake optics based on ThinEyes® super-resolution technology for virtual reality headsets
Paper 10676-134
Time: 6:00 PM - 7:30 PM
Author(s): Bharathwaj Appan Narasimhan, Limbak 4PI S.L. (Spain), Univ. Politécnica de Madrid (Spain)
We present an advanced optical design for a high-resolution, ultra-compact virtual reality headset based on the traditional pancake configuration with optical foveation, in two senses. First, the magnification varies across the FoV: the VR pixel density is maximum at the center and gradually decreases towards the edge. Second, the image quality of the optics is adapted accordingly, and the combination of both is designed to match human visual acuity under normal eye movements, so the user does not perceive the lower peripheral resolution.
The VR pixel resolution (i.e., pixels per degree) of traditional pancake optics is limited by the geometrical distances between the elements. We have broken that compromise by applying Limbak's ThinEyes® super-resolution technology to the design of four aspherical surfaces in a pancake-type configuration, so that the VR pixel resolution is dramatically increased at the center while a high FoV and excellent imaging quality are maintained. We make use of a curved reflective polarizer, which can be produced by vacuum molding a polymeric polarizer based on birefringent multilayer technology (such as 3M's DBEF).
As an example, we present an optical system that uses a standard 3.28" square display with a pixel pitch of 27.3 microns (2,160 × 2,160 pixels). The total track length (eye pupil to display) of the system is 36 mm with 15 mm of eye relief, so the lens thickness is only 21 mm. Thanks to the optical foveation, this design achieves a focal length of 49.5 mm at the center of the FoV, resulting in an outstanding VR resolution of 30.5 pixels per degree at the center, with a circular FoV of 100° for a 10 mm eyebox. By comparison, for the same FoV, a conventional pancake lens design (with a linear rho-theta distortion function) would achieve a VR resolution of 21.6 ppd, which is 1.41× lower.
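The central pixels-per-degree figure can be sanity-checked from the stated focal length and pixel pitch. The sketch below is a paraxial back-of-envelope estimate, not the authors' design calculation; the small difference from the quoted 30.5 ppd is plausibly due to the design's actual (foveated) distortion mapping rather than an ideal f·tanθ mapping.

```python
import math

# Paraxial estimate of central angular pixel resolution from the
# numbers quoted in the abstract above.
focal_length_mm = 49.5
pixel_pitch_mm = 0.0273  # 27.3 um

# One degree of visual angle maps to roughly f * tan(1 deg) on the panel:
mm_per_degree = focal_length_mm * math.tan(math.radians(1))
ppd = mm_per_degree / pixel_pitch_mm  # ~31.6 ppd, close to the stated 30.5
```

The same calculation with the conventional design's effective center focal length reproduces the roughly 1.4× resolution gap quoted in the abstract.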
Solving the vergence-accommodation conflict in head mounted displays with a magnifier system
Paper 10676-135
Time: 6:00 PM - 7:30 PM
Author(s): Francisco Javier Gantes, Zacarías Malacara-Hernández, Daniel Malacara-Hernández, Centro de Investigaciones en Óptica, A.C. (Mexico)
Two possible optical architectures to mitigate the vergence-accommodation conflict and assist low vision are proposed in this work.
The first system uses a single magnifier lens to magnify the image presented on the display, exploiting the fact that a converging lens magnifies a near object. With the converging lens, the image on the retina has a larger angular extent than if the object were viewed with the naked eye. The converging lens also allows the eye to remain relaxed; that is, the eye accommodates as if the object were at infinity, a comfortable state.
The second system under study is a Galilean telescope. The Galilean telescope is not a simple lens but a system of lenses, in which the field of view is determined by the diameter of the objective lens and the position of the eye with respect to the ocular.
The optimization and autodesign of the augmented reality waveguide based on API in ZEMAX
(Canceled)
Paper 10676-137
Time: 6:00 PM - 7:30 PM
Author(s):
The quality of the coupling-optics design in a waveguide-based augmented reality display is vital to the final imaging performance. In this work, we provide a waveguide optics design tool package (AR kit in ZOS) to model waveguide-based augmented reality optical systems. In the demo design presented, the input/exit coupler window is 5.4 × 2 mm in size, and the field of view is 24° × 15°, with a 57° diagonal. The optical transfer efficiency is 37%, and the total size is 45 × 20 × 2 mm. For a user-friendly designer interface, our AR kit in ZOS allows self-defined or file-based input of the in/out-coupling beam profile. The dispersion element can be an angle-dependent coating or an etched grating; to avoid cross-talk and maintain uniform intensity between different wavelengths, the transmission or diffraction efficiency of each layer is computed within our tools. Users can flexibly define the system size (width, length, thickness), the number of layers, and the size of the exit coupler window. Furthermore, our tools can optimize the whole waveguide system for low stray light by recording all ray data and adjusting the configuration data with the built-in Zemax optimization operands, for better beam quality at the detector.
Augmented reality display system for smart glasses with streamlined form factor
Paper 10676-139
Time: 6:00 PM - 7:30 PM
Author(s): Samuel J. Steven, Yang Zhao, Greg Schmidt, Julie L. Bentley, Duncan T. Moore, Univ. of Rochester (United States)
A smart-glasses augmented reality (AR) display system is designed with a streamlined form factor featuring an off-axis mirror design. The main component of the combiner optics has the shape of a regular pair of eyeglasses or sunglasses, with no diffractive gratings, waveguides (lightguides), prisms, or Fresnel surfaces involved. Perfect see-through performance is achieved with a low-cost combiner that consists only of highly manufacturable reflective surfaces. The optical performance of the projected display can be enhanced by using a freeform gradient-index element. The 20-degree full field of view of the AR display is centered at about 35 degrees from the center of the ocular vision. This design gives the user a clear, unobscured central field of view, while the projected image remains accessible by moving the eye off the central vision. The system is designed with a circular eye box more than 10 mm in diameter.
High-resolution optical see-through vari-focal-plane head mounted display using freeform Alvarez lenses
Paper 10676-140
Time: 6:00 PM - 7:30 PM
Author(s): Austin T. Wilson, Hong Hua, The Univ. of Arizona (United States)
Super multi-view augmented reality glasses
Paper 10676-142
Time: 6:00 PM - 7:30 PM
Author(s): Anastasiia Bolotova, Moscow Technological University (MIREA) (Russian Federation); Andrey Putilin, P.N. Lebedev Physical Institute (Russian Federation); Vladislav Druzhin, Bauman Moscow State Technical Univ. (Russian Federation)
Nowadays, the main directions of development of augmented reality (AR) glasses are: increasing the field of view (FOV) and eye-motion box; reducing the weight of the glasses; solving the accommodation-convergence conflict; and enabling individual ophthalmic vision correction. All these requirements should be combined with high image quality and reduced dimensions of the AR glasses.
We report an optical system for AR glasses based on a Schmidt camera scheme that uses the super multi-view (SMV) technique for focusing at different depths. The calculated and modeled scheme has major benefits: an eye-motion box of about 10 mm and a 60° field of view. The scheme is also compact and lightweight, does not cause accommodation-convergence conflict, has low aberrations, and offers variable focus.
PARA: experimental device for virtual and augmented reality
Paper 10676-143
Time: 6:00 PM - 7:30 PM
Author(s): Stan Larroque, SL Process and HETIC (France); Julien Casarin, Gfi informatique/Strasbourg Univ. (France)
The presented device is a patented HMD with a custom stereo camera mounted on the front side. With a field of view of 110° for both augmented and virtual reality, we perform live software undistortion, using different methods for the HMD and camera lenses, for real-time rendering and the lowest possible "photon-to-pixel" latency. The real and virtual spaces are well aligned.
The device is also fully autonomous and can track its translation and rotation in world space thanks to simultaneous localization and mapping (SLAM) with the cameras, with an option to perform dense 3D reconstruction.