AR/VR/MR 2020: The Future Now Arriving
When Sergey Brin wore a prototype of Google Glass to a retinal disease charity event in San Francisco in April 2012, the augmented reality headset prompted wonder and dread. Early adopters found the smart glasses fun more than functional. And while most onlookers expressed curiosity, some felt their privacy violated and labelled the wearers ‘glassholes'.
Years later, the headset has, for now, found its niche in enterprise applications. But love it or loathe it, it's difficult to deny that Google Glass' early foray into the wonderful world of extended reality kick-started today's burgeoning industry.
"We've had Palmer Luckey, the 19-year-old who put together the Oculus Rift VR headset with duct tape and sold his company to Facebook for $2 billion, and we've got Magic Leap, a recent MR start-up that's worth more than $7 billion," highlights Bernard Kress, Principal Optical Architect on Microsoft's HoloLens MR project, who was also instrumental in the design of Google Glass. "The investment in augmented and virtual reality since the Google Glass days has been just amazing, and this has generated a lot of excitement."
And while this industry excitement was palpable at last year's SPIE AR/VR/MR conference — with its 2500 visitors — expect an even bigger buzz this year with 5000 anticipated attendees. As Kress puts it: "We've now graduated from Photonics West and have a fully-fledged conference."
The 2020 show comprises a technical program that looks at how researchers are enhancing the AR, VR, and MR experience in Head-Mounted Displays. Kress and Christophe Peroz, Principal Scientist at Magic Leap, chair the Sunday Conference, ‘Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality' with sessions including technology trends, visual comfort and sensors.
This is followed by more than 40 industry-invited keynotes across Monday and Tuesday. Microsoft, Magic Leap, Facebook, Google and more will provide insight into the latest product developments. As Kress emphasises: "These are industry talks, not technical talks, as this is really what people are looking for. More start-ups are choosing the event to introduce new products — this is very exciting."
Amongst a raft of activities, the AR, VR, MR Expo will showcase the latest XR gear with hands-on demonstrations featuring, for example, new sensors, display technologies and diffractive holography. And this year's Optical Design Challenge Awards will reveal how students are solving key industry challenges, from the size and weight of display engines to improving vision comfort.
Importantly, the keynotes are supported by three panel sessions, the first led by Tom Emrich, AR Thought Leader at 8th Wall. He will ask ‘What is the Potential Market for the AR, VR Industry?', a multi-billion-dollar question whose answer still eludes the community, but one that Kress hopes will shed light on consumer use-cases.
"The first Google Glass targeted the consumer but was re-packaged for enterprise markets while HoloLens found real success with enterprise use-cases," says Kress. "Yet venture capitalists and the community still want something to replace the smartphone."
"Everyone in the field is thinking of ways to develop a headset in the form-factor of regular glasses and Apple has at least a thousand people working on this, but I don't see a use-case yet so we're responding with Tom Emrich's panel," he adds.
The second panel session from former Apple and Microsoft Director, Svetlana Samoilova, will ask ‘How do we build the AR, VR World with Hardware?' As Kress highlights, this will look at the necessary HMD building blocks and will examine how these should be developed with a system architecture in mind.
In the final session, Magic Leap co-founder, Brian Schowengerdt, will examine the state-of-play of the bright and mighty micro-LED. "We see so much money floating into this display technology so we've invited start-ups to answer the questions; where are we today and how long until we can use the technology?" says Kress.
Participating companies include Sweden's glō, France's Aledia, the UK's Plessey, US firms Lumiode and Mojo Vision, Oculus-owned InfiniLED, and China's JadeDisplay. As Kress emphasises: "We wanted the people that are actually getting the VC funds and dropping the technology on the panel. Everyone wants to know when they can get micro-LEDs in volume to work them into their headsets — including Microsoft."
Micro-LED displays aside, the AR, VR, MR communities are also keen to explore the industry's choice of optical engine, which for now remains an open question. Kress is watching developments in Liquid Crystal on Silicon (LCoS) and Digital Light Processing (DLP) microdisplays, and also highlights how HoloLens 2 made the bold move to use MEMS laser scanners. Meanwhile, other architectures from the likes of Holoeye, Himax and VividQ are even more daring, developing true holographic display systems.
At the same time, improvements are still required in all optical building blocks to optimize performance and bring down costs, especially for augmented reality. Here, the holy grail is to make an AR headset that looks and feels like sunglasses, but can also overlay additional information and images without causing eyestrain. One key optical building block, the optical combiner, is crucial to overlaying virtual content onto a real scene without blocking the user's view, which is no mean feat. As Dr Phil Greenhalgh, Chief Technical Officer of UK-based WaveOptics points out: "[Designers] have to present a computer-generated image that's pin-sharp and has accurate color with a wide field of view right in the wearer's central vision without the light engine getting in the way and obscuring the real world."
"But then we also have to do this with a minimal amount of power while competing with a massive dynamic range of brightness from the real world," he adds. "Sum all of that together and everyone in this field will agree that it's very hard."
Bio-inspired tracking: This hybrid visual-inertial system from Thales Visionix provides path integration data — conveying a sense of motion into a sense of location with landmark perception — to assist GPS-based tracking.
Yet despite the difficulties, WaveOptics recently received a hefty $39 million in venture capital funds. Given this, Greenhalgh is taking part in the Photonics West ‘Fireside Chat' with Evan Nisselson from early-stage venture fund, LDV Capital, answering questions on his company's success.
And crucially for the AR world, WaveOptics is amongst a handful of companies successfully manufacturing optical waveguide combiners for use in AR head-mounted and near-eye displays. Others include Finland's Dispelix, Israel's Lumus, South Korea's LetinAR, and US-based Vuzix and DigiLens. Here, the waveguide couples the virtual image from the light engine (typically located in the arms or at the top of a headset) into a glass substrate. The image is then transported through the substrate via total internal reflection and coupled back out towards the viewing area, or eye box.
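The physics behind that light guiding is Snell's law: rays bouncing inside the glass at a shallower angle than the critical angle are trapped by total internal reflection. As a back-of-the-envelope sketch, with illustrative refractive indices (the actual glasses used by waveguide makers are proprietary), the critical angle works out as follows:

```python
import math

def critical_angle_deg(n_waveguide: float, n_outside: float = 1.0) -> float:
    """Critical angle at the waveguide/air boundary, from Snell's law.
    Light striking the surface at an angle steeper than this (measured
    from the surface normal) escapes; shallower rays are guided."""
    return math.degrees(math.asin(n_outside / n_waveguide))

# Hypothetical indices for illustration only: standard optical glass
# (n ~ 1.5) versus a higher-index glass (n ~ 1.8).
print(round(critical_angle_deg(1.5), 1))  # ~41.8 degrees
print(round(critical_angle_deg(1.8), 1))  # ~33.7 degrees
```

A higher-index substrate lowers the critical angle, so it can guide a wider range of ray angles, which is one reason high-index glasses are attractive for widening the field of view.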
Right now, the alternative technology, freeform optics, is used by the likes of Google Glass and China's Nreal. But as Greenhalgh points out, diffractive waveguide optics promise huge savings in space and weight compared with bulkier freeform optics. And while optical waveguide combiners are still challenging to manufacture, Greenhalgh reckons WaveOptics can design a prototype waveguide from scratch in just three weeks.
"Waveguide optical combiners are already used in HoloLens 2 and Magic Leap One... and Microsoft and Magic Leap are the only two organizations that have gone public with their technology here," says the CTO. "Freeform optics provide you with great quality, but I believe that waveguides are the only way to achieve mass adoption of AR headsets at a price point that will enable the consumer market."
Best of sensors
In addition to light engines and optical combiners, sensors — from depth mapping to head tracking, gaze tracking and gesture sensors — are critical to the XR experience. As Kress puts it: "You can have the best display in the world but if your sensors aren't up to the task, then the experience will be a joke."
Kress and colleagues will be addressing the challenges in a series of workshops and courses from Sunday through to Thursday at SPIE AR/VR/MR. But as the HoloLens Optical Architect highlights, developing low-latency sensors is critical as data needs to be rendered in real time to produce a convincingly smooth and responsive AR experience. "When you move your head, your display has to be compensated," he says. "The motion-to-photon latency needs to be less than 10 milliseconds — any greater and you'll be uncomfortable and nauseous."
With this in mind, Microsoft built a custom chip, the Holographic Processing Unit, for HoloLens to swiftly process data from sensors and handle tasks such as spatial mapping and gesture recognition. "We're down to 9 ms with HoloLens 2 and hope to be close to 5 ms with next versions — but much work still needs to be done to reduce that motion-to-photon latency," says Kress.
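Why those milliseconds matter is easy to see with a first-order estimate: during a head turn, a world-locked hologram lags behind the real scene by roughly the rotation rate multiplied by the latency. The sketch below assumes a head-turn rate of 100 degrees per second purely for illustration; real head motions vary widely.

```python
def registration_error_deg(head_rate_deg_per_s: float, latency_ms: float) -> float:
    """First-order angular lag of a world-locked hologram behind the
    real scene: rotation accumulated during one motion-to-photon delay."""
    return head_rate_deg_per_s * latency_ms / 1000.0

# Assumed head-turn rate of 100 deg/s (a brisk glance), for illustration.
for latency_ms in (20, 10, 9, 5):
    error = registration_error_deg(100, latency_ms)
    print(f"{latency_ms} ms latency -> {error} deg of hologram lag")
```

At 10 ms the hologram trails by a full degree during such a turn; halving the latency halves the visible swim, which is why the push from 9 ms towards 5 ms is worth custom silicon.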
Jim Melzer, Technical Director for Advanced Projects at Thales Visionix, is also no stranger to the issues around sensors. As he puts it: "For good augmented reality you really need good tracking that can figure out where you're looking at any given point — this is tough and not so many people are talking about it right now."
Melzer's interests lie in visual and auditory perception, and recently he has turned to invertebrate vision and animal navigation for inspiration, as reflected in his talk, "Birds do it. Bees do it. A bioinspired look at wayfinding and navigation tools for augmented reality."
As part of this, Melzer and Thales colleagues have developed an operational visual-inertial motion tracking system that will track the user, as he or she walks around indoors, without GPS. Melzer reckons the system could be invaluable to emergency services in life-and-death situations.
"Our next step is to develop this to also track where the user is looking - we're working on this right now and it's at an early prototype stage," he says. "Integrating this into a headset is about two years out."
Melzer is also running the course ‘Head-Mounted Display Requirements and Designs for Augmented Reality Applications' at this year's SPIE AR/VR/MR alongside colleague Michael Browne from US-based SA Photonics. Here, he intends to examine what's important to AR — from field-of-view to color and lumen requirements — with an emphasis on what the user wants.
"You really need to understand the human user and what they are doing in order to be successful in this industry," he says. "And that's the emphasis of the class."
Vision Scientist Professor Marty Banks, from the University of California at Berkeley, strongly agrees. One of this year's keynote speakers, and also presenting at the conference, Banks delves deeply into visual space perception and user comfort.
For example, Banks has spent much time exploring the eye-focusing problem — the vergence-accommodation (V-A) conflict — that plagues XR. "This is getting so much attention as you don't want to make the viewer uncomfortable and decrease their visual performance," he says.
Banks points to today's varifocal and multi-focal approaches, designed to solve this problem by using a focus-adjustable lens between the eye and the display screen. These varifocal and multifocal displays generate content either continuously or at discrete focal planes according to where the person is thought to be looking.
According to Banks, these approaches can work in the short-term, but as he points out, accurately tracking where the user is looking, and at what depth, remains problematic.
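The size of the V-A conflict is usually expressed in diopters, the reciprocal of distance in metres: the eyes converge on the rendered depth of the virtual object while focusing on the fixed optical distance of the display, and the mismatch between the two is what strains the visual system. A minimal sketch, using hypothetical numbers (a headset with a fixed 2 m focal plane showing an object rendered 0.5 m away):

```python
def va_conflict_diopters(vergence_dist_m: float, focal_dist_m: float) -> float:
    """Vergence-accommodation mismatch in diopters (1/metres): the gap
    between where the eyes converge (rendered object depth) and where
    they must focus (the display's fixed optical distance)."""
    return abs(1.0 / vergence_dist_m - 1.0 / focal_dist_m)

# Hypothetical example: fixed 2 m focal plane, virtual object at 0.5 m.
print(va_conflict_diopters(0.5, 2.0))  # 1.5 diopters
```

A mismatch of 1.5 diopters is large; comfort studies typically place the tolerable zone at a fraction of a diopter, which is why near-field interactions (reading, manipulating virtual objects at arm's length) are the hardest case for fixed-focus headsets.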
With these issues in mind, he and colleagues are pioneering numerous new technologies, including an algorithm called ChromaBlur that compensates for the eye's chromatic aberration. This can be coupled with focus-adjustable lenses and gaze tracking to minimize the effects of the V-A conflict in headsets.
However, for the Vision Scientist, the long-term answer to the V-A conflict lies in the much-awaited light-field display, which could provide many views of a scene to each eye resulting in more natural focus information.
Banks discusses a route forward in his talk, ‘How many views are required for an effective light-field display?' But as he highlights: "There are difficult computational and optical challenges and nobody has produced a satisfactory display yet."
"The companies involved are very secretive," he adds. "It's clear that Facebook Reality Labs is working on it, and while other companies talk about it, the words ‘light-field display' have been used very loosely."
Banks reckons light-field displays won't reach the market for at least another five years, so what can we expect in the meantime? WaveOptics' Greenhalgh is confident that the next 18 months will bring AR head-mounted displays with a larger field of view for enterprise applications, but also expects to see smaller form-factor smart glasses with a relatively narrow field of view and more limited functionality.
He also highlights how his company is ‘in talks' with VividQ, a UK-based developer of real-time generated holography software, on how to couple light-field holograms into waveguides, although he admits this will be ‘quite a challenge'.
Meanwhile, Melzer is looking forward to seeing waveguides used with micro-LED displays in the near future. "I think this is a year or two out but it's going to be key as much less silicon will be used and this is going to drive costs way down, which is very exciting."
And for Kress, the excitement will continue. "This field is evolving so fast and every year we have to adapt to a new trend or technology."
"Investors are pouring money in and we are seeing new markets all the time," he adds. "New use-cases are popping up everywhere, and hopefully we'll see this in the consumer market soon."