Extended reality — where we are, what’s missing, and the problems ahead

Disclosure: Qualcomm, Microsoft, HP and Lenovo are clients of the author.

Qualcomm this week offered up a comprehensive presentation on where it is with mixed reality (MR). This matters because Qualcomm is the key technology provider for untethered MR solutions. To date, MR has seen success in business for training, manufacturing, and repair — particularly in areas like aerospace, where Microsoft’s HoloLens AR has been widely adopted.

Meta is the leader on the consumer side of the market, but it is struggling with the typical tension between price point and capability; its hardware has fallen short because Meta set the capability bar too low while consumer expectations remain too high. (It's coming out with a better option soon that could close that gap, if the market accepts the higher price point.)

My focus is on the commercial side of things in the context of Qualcomm’s presentation.

The four types of extended reality

Terminology is all over the map in this arena, but I'm going to use extended reality as the class name, with sub-classes of VR/MR (virtual reality and mixed reality), augmented reality (AR) standalone, AR viewers, and smartglasses. The last is a particularly interesting class because it has less to do with what we typically think of as extended reality and more to do with putting the display on your face.

VR/MR is what we think of when we talk about immersion. Both require the entire image to be rendered; the difference between VR and MR is how much of that image is imaginary and how much is real. With VR, the entire scene exists only in software, nothing is real, and cameras (if there are cameras) are mostly used to keep you from running into things. MR uses cameras to capture your surroundings and then renders the scene as a mix of computer-generated imagery and rendered images of what's around you. This is potentially the most realistic of the blended solutions, but it requires massive processing power, often suffers from latency and hardware weight issues, and tends to lend itself to tethered solutions. These approaches are used for immersive training (the US Space Force is using this for space battle training) and entertainment. Of the headsets I've seen and tested, Varjo arguably has the best one, while HP's Reverb G2 offers a good entry-level option.

AR standalone is best represented by Microsoft's HoloLens and Lenovo's ThinkReality A6 glasses, which are similar to HoloLens but move some of the weight from the head to the waist for better long-term use. These devices project a rendered image onto transparent glasses, allowing images to float like ghosts in front of the user's eyes. Used in manufacturing and field service, with interesting use cases emerging in healthcare, this technology has been particularly helpful in aerospace and for non-repetitive manufacturing line jobs.

AR viewer is a variant of the above; it typically pairs the glasses with a smartphone to lower costs while providing similar performance and capabilities. The best-known device in this class was Google Glass, which was rolled out so poorly that it set back this product line for years.

And finally, there are smartglasses, which serve as head-mounted displays. The display sits on your head, taking the place of the display on your phone or PC. This is most useful today for watching videos, particularly content you do not want seen by others. It could be used for confidential training in public areas, or to provide large-screen experiences with portable devices. While this is not really an extended reality device, it could evolve into one and merge with the AR viewer segment. The product I'm most familiar with is the Lenovo ThinkReality A3 smart glasses.

What’s missing at the moment

None of these solutions is mature yet, but both VR/MR and AR standalone are in production at scale and useful today. The AR viewer, a category crippled by the Google Glass rollout, and the head-mounted display offerings are still in their infancy, but both are advancing quickly.

What's missing for immersion in VR/MR is full-body instrumentation, so you can move and interact in the virtual world(s) as you would in the real world. Hand tracking with cameras on a headset has not been very reliable, and the common use of controllers creates a disconnect between how you want to interact with a virtual world and how you must actually interact with it. This is particularly problematic with MR, because you use your naked hand for touching real objects and the controller for touching rendered objects, which spoils the experience. Haptics, which Meta and others are aggressively developing, are a poor stop-gap at best; what's needed is a way to seamlessly bring a person into the virtual world and allow full interaction and sensory perception as if it were the real world.

AR standalone has had issues with occlusion, which are being worked on by Qualcomm and others. When corrected, rendered objects will look more solid and less like ghostly images that are partially transparent. But the use cases for this class are very well developed, making this the most attractive solution today.

AR viewer has similar issues to AR standalone but is performing well below its potential, given it should be less expensive than AR standalone while offering similar benefits.

Head-mounted displays have been around since the early 2000s. The problem has been getting people comfortable with the glasses and providing a way to use them for more than just video viewing. They could eventually replace monitors, but users first need either a better way to look down and see their hands and desk, or training in touch typing or stronger speech-to-text skills. The latter seems unlikely.

Changing how we interact with technology

These solutions, as they mature, will change the way we interact with our smartphones and PCs. Once you can provide the experience of a large display in glasses and bring in cloud options like Windows 365, your computer moves into the cloud and you only need a high-powered wireless (5G/6G) device, not a full-on PC.

These devices are already changing how industries train people and how they build and repair products. In time, I expect, they will dramatically change what’s on our desk and might even make the desk itself obsolete.
