Here, There, or In the Air

Discussing the Practicalities of AR

Watching sci-fi like The Mandalorian, Star Trek: Discovery, or Thor: Ragnarok, it appears that the future of collaboration is the hologram: an Augmented Reality (AR) display of the person you are talking to, projected as if real in the very space you occupy. This certainly beats a Zoom or Teams call, and would overcome many of the challenges of distance.

For those of us in the industry, we know that this concept is being pursued by companies ranging from Microsoft, with the ungainly and eye-wateringly expensive HoloLens 2, to the ailing Magic Leap, to the secretive Apple. Now Sony has thrown its AR display hat into the ring.

How To Connect In Virtual Space

Essentially, there are two broad ways to connect in virtual space. The easiest, by far, is Virtual Reality. Because VR creates the entire environment, including the avatars, it is relatively simple to don a headset and pop into a virtual room. This is already popular: I recently connected with people across several countries from my hospital bed in Sydney, on a VR headset (the Oculus Quest 2) that costs a quarter of the price of my phone. It is becoming so common that here at ACHIEVR we are integrating this functionality into our VR training products. But VR is not the subject of this article.

The second, and far more challenging, is Augmented Reality: projecting the image of your collaborator realistically into your environment (and you into theirs).

Three Approaches To Holograms

From a physics perspective, there are three approaches to creating an AR image:

  1. From a display, like Sony's above or Portl's, i.e. create the AR image at the display source;
  2. On a wearable pair of glasses, i.e. project the image next to your eyes; or
  3. From some (magical) projection in the air, as in the sci-fi examples mentioned above (unless all the characters are wearing AR-capable contact lenses, as in Continuum, that we don't know about).

The three approaches present increasingly intractable laws-of-physics challenges, which of course is why we haven't yet seen an affordable, sustainable way of doing this. In other words, it's why we're stuck with Zoom, Teams, WhatsApp and FaceTime on 2D displays.

Technical Challenges to Overcome

Every digital display has to overcome a common set of challenges: brightness, resolution, sustainable power, and thermal management.

AR displays also have to overcome tracking issues, for both the projection and the viewer, and these introduce problems like the Vergence-Accommodation Conflict (VAC).

VAC? In short, your eyes use two methods to judge depth: the angle of the eyes relative to each other (vergence) and the focal distance of the eyes' lenses (accommodation). When you look at light reflected or projected from an object in the real world, the two cues agree on how far away it is. An AR display, however, projects its image at a fixed focal plane, so your eyes converge on a virtual object that appears nearby while having to focus at the distance of the optics. That mismatch between the cues is the conflict, and it is a major source of eye strain and discomfort.
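To make the two cues concrete, here is a back-of-the-envelope sketch in Python. The interpupillary distance and the 2 m fixed focal plane are illustrative assumptions, not the specs of any real headset:

```python
import math

IPD_M = 0.063           # assumed interpupillary distance, ~63 mm (population average)
DISPLAY_FOCAL_M = 2.0   # assumed fixed focal plane of the AR optics

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight for an object at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def accommodation_diopters(distance_m: float) -> float:
    """Focal demand on the eye's lens, in diopters (1/metres)."""
    return 1.0 / distance_m

# A virtual object rendered to *appear* 0.5 m away:
apparent_m = 0.5
vergence = vergence_angle_deg(apparent_m)                  # eyes cross as if at 0.5 m
focus_demand = accommodation_diopters(DISPLAY_FOCAL_M)     # but lenses focus at 2.0 m
conflict_d = accommodation_diopters(apparent_m) - focus_demand

print(f"Vergence cue: {vergence:.1f} deg, focus cue: {focus_demand:.1f} D, "
      f"mismatch: {conflict_d:.1f} D")
```

A mismatch of more than about half a diopter is generally where discomfort starts, which is why the glasses need to adjust focus dynamically rather than render everything at one fixed plane.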

AR Displays

AR displays like Portl's or Sony's don't struggle so much with sustainable power, resolution, brightness, or thermal management, because you can plug them in and make them as large and heavy as you like. They do, of course, struggle with portability and affordability.

Mostly, however, this approach struggles to overcome the VAC challenge, because of the difficulty of tracking viewers' eyes from a display. With current technology this is nigh on impossible for more than one viewer. In other words, on the bridge of the Discovery there would need to be a separate display for everyone on deck.

AR Glasses

AR Glasses on the other hand have to overcome all of the standard display challenges, as well as the AR challenges.

We have come a very long way, but still have a way to go. The advent of mobile phone components - CPUs, GPUs, high-resolution displays, miniature cameras, and more recently entire SoCs - allows for a wearable headset like the Microsoft HoloLens. These still don't have the power to cover the standard human horizontal field of view of around 210 degrees; current devices manage less than a third of that. (Even VR displays top out at about 92-120 degrees FoV, with few exceptions.)
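Some rough arithmetic shows why a full field of view is so hard. Using the common rule of thumb of about 60 pixels per degree to match 20/20 acuity (the HoloLens 2 figure below is an approximation, not an official spec):

```python
RETINAL_ACUITY_PPD = 60      # rule of thumb: ~60 pixels/degree matches 20/20 vision
HUMAN_HFOV_DEG = 210         # full human horizontal field of view
HOLOLENS2_HFOV_DEG = 43      # approximate horizontal FoV of today's AR glasses

def pixels_needed(fov_deg: int, ppd: int = RETINAL_ACUITY_PPD) -> int:
    """Horizontal pixel count required to fill fov_deg at ppd pixels per degree."""
    return fov_deg * ppd

print(f"Full human FoV needs  {pixels_needed(HUMAN_HFOV_DEG):,} horizontal pixels")
print(f"Today's AR glasses do {pixels_needed(HOLOLENS2_HFOV_DEG):,}")
```

Driving five times the pixels of a 4K panel, per eye, on a head-mounted power and thermal budget is the crux of the FoV problem.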


Also, either you wear the battery and compute elements on your head (à la the HoloLens), or they are tethered via cable to a pocketable device, as with the Magic Leap. Batteries seldom last more than a couple of hours. At just over half a kilogram (579 g), wearing the device for lengthy periods is certainly possible, but it is nothing like wearing spectacles or sunglasses at less than a tenth of that weight.

But the biggest challenge to overcome is the VAC: the glasses must measure your pupil convergence and, in real time, change the focus of the projected image, all while maintaining the position of that image in real space (especially if it is moving) and allowing it to respond to real objects that may occlude it, and to light and shadow.

Here again, technologies like Simultaneous Localisation and Mapping (SLAM) have come on in leaps and bounds, even with the single-lens cameras on phones, and with the depth-sensing cameras (descended from the Kinect research) on the Microsoft HoloLens.

To summarise, for AR Glasses to achieve anything close to SciFi vision, they need to:

  • See and map the space around you accurately (SLAM);
  • Project across 210 degrees of horizontal FoV;
  • Track pupils and accurately change the depth of image focus, reflection, occlusion, and shadow, all in real time; and
  • Do all of this for a full day without going flat, overheating, or being too heavy.

And we haven't begun to talk about network connectivity, global positioning, or 2-way audio yet.

Right now your smartphone can do most of this (with the exception of the VAC and FoV challenges), but you aren't strapping one to your face any time soon.
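A simple power-budget sketch shows why "a couple of hours" is about where today's headsets land. Every figure here is an illustrative assumption for a HoloLens-class device, not a measured spec:

```python
BATTERY_WH = 16.5  # assumed headset battery capacity in watt-hours

# Assumed steady-state power draw per subsystem, in watts:
POWER_DRAW_W = {
    "display + projection optics": 2.0,
    "SoC (rendering + SLAM)":      3.5,
    "cameras + depth sensors":     1.5,
    "radios (WiFi / Bluetooth)":   1.0,
}

total_w = sum(POWER_DRAW_W.values())
runtime_h = BATTERY_WH / total_w
print(f"Total draw {total_w:.1f} W -> roughly {runtime_h:.1f} hours of runtime")
```

To reach a full working day on those assumed loads you would need either a battery several times larger (and heavier) or a similar-factor drop in power draw, which is why battery energy density and silicon efficiency keep coming up as the gating technologies.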

Air Hologram Displays

AR glasses appear to be our shortest route to a realistic consumer AR solution. There have been experiments with media like steam, where water droplets provide refractive and reflective properties, but of course no one wants to live in a sauna.

I can't imagine the compute power you would need to manipulate air molecules, or, I guess, photons (through an air waveguide), accurately enough for a holographic display.

It may happen. Once people didn't believe we could fly like birds, let alone cross oceans en masse, or leave the bounds of earth. Now groups are seriously contemplating colonising Mars.

I suspect the answer will still be a combination of the approaches above: some contact lens that reacts to some form of projection.

My Money is On Apple

For this to really work we need the speed of 5G connectivity, the low latency of WiFi 6 or 7 for personal area networks, much better battery energy density, far more miniaturisation, and a community of designers and developers who understand 3D spatial computing.

Whilst Microsoft is working on a number of these, Apple is working on all of them, including its own silicon.

With the advent of the A14 and now M1 systems-on-chip, the development of ARKit with thousands of developers in its community, and LiDAR sensors on its latest devices, Apple is putting in place all of the technology it needs to produce workable AR glasses.

The rumoured timeline of 18 months to two years out also feels realistic.

Let's Meet in VR

For now, though, Virtual Reality is an affordable and far superior way to collaborate compared with video conferencing. I'll see you in Oculus Venues, or Beat Saber.

The Author:

Roger Lawrence is a passionate technologist and itinerant traveller. He has been working with emerging technologies for 30 years: he implemented the first remote access system for Nokia UK in 1995; led the team that built the first hosted (cloud) MS Exchange and Office platform in Australia, Optus a-Services, in 2000; and even featured in an article about "teleworking" in the Sydney Morning Herald in 2003.

He is fascinated by technologies that move people, as a motorcyclist, hiker, scuba instructor, ocean crossing yachtsman, and aspirational pilot (astronaut?).

He has been working with AR and VR since 2015, initially at HPE, and has developed solutions for organisations of all sizes. Now he runs ACHIEVR, a startup dedicated to solving tough training challenges for individuals and organisations using immersive computing (AR, VR, MR). You can connect with him on LinkedIn.




Oliver Weidlich

Founder & Director of Design & Innovation at Contxtual - A User Experience consultancy

4y

Great article! Thanks Roger. I'm counting down the 18 months ;-) BTW there was a great panel last night via SIGGRAPH on HCI and the future of displays (https://sa2020.siggraph.org/en/attend/acm-siggraph-frontiers-workshops/event2). Aaron Quigley says a video of the session might be up online soon.
