NVIDIA Researchers Demonstrate Ultra-thin Holographic VR Glasses That Could Reach 120° Field-of-view


A team of researchers from NVIDIA Research and Stanford published a new paper demonstrating a pair of thin holographic VR glasses. The displays can show true holographic content, solving the vergence-accommodation conflict. Though the research prototypes demonstrating the principles have a much smaller field-of-view, the researchers claim it would be straightforward to scale the design to a 120° diagonal field-of-view.

In a paper published ahead of this year’s SIGGRAPH 2022 conference, a team of researchers from NVIDIA Research and Stanford demonstrated a near-eye VR display that can show flat images or holograms in a compact form-factor. The paper also explores the interconnected variables in the system that impact key display factors like field-of-view, eye-box, and eye-relief. Further, the researchers explore different algorithms for optimally rendering the image for the best visual quality.

Commercially available VR headsets haven’t improved in size much over the years largely because of an optical constraint. Most VR headsets use a single display and a simple lens. In order to focus the light from the display into your eye, the lens must be a certain distance from the display; any closer and the image will be out of focus.
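The constraint can be sketched with the thin-lens equation (a back-of-envelope illustration using assumed numbers, not figures from any particular headset): to present the panel at a comfortable virtual distance, it must sit just inside the lens’s focal length, and that gap sets a floor on the headset’s depth.

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i, with a virtual image at
# distance d_v in front of the eye (d_i = -d_v). Solving for the
# panel-to-lens gap d_o shows it stays close to the focal length f.
def panel_gap_mm(focal_length_mm, virtual_image_mm):
    return 1.0 / (1.0 / focal_length_mm + 1.0 / virtual_image_mm)

# Assumed example: a 40 mm focal-length lens placing the image ~1.3 m away.
print(round(panel_gap_mm(40, 1330), 1))  # 38.8 -> ~39 mm of unavoidable depth
```

In other words, with a conventional lens the display can’t get meaningfully closer than roughly the focal length, which is why shrinking headsets means replacing the lens rather than just the panel.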

Eliminating that gap between the lens and the display would unlock previously impossible form-factors for VR headsets; understandably there’s been a lot of R&D exploring how this can be done.

In NVIDIA-Stanford’s newly published paper, Holographic Glasses for Virtual Reality, the team shows that it built a holographic display using a spatial light modulator combined with a waveguide rather than a traditional lens.

The team built both a large benchtop model—to demonstrate core methods and experiment with different algorithms for rendering the image for optimal display quality—and a compact wearable model to demonstrate the form-factor. The images you see of the compact glasses-like form-factor don’t include the electronics to drive the display (as the size of that part of the system is out of scope for the research).

You may recall a little while back that Meta Reality Labs published its own work on a compact glasses-size VR headset. Although that work involves holograms (to form the system’s lenses), it is not a ‘holographic display’, which means it doesn’t solve the vergence-accommodation issue that’s common in many VR displays.

On the other hand, the NVIDIA-Stanford researchers write that their Holographic Glasses system is in fact a holographic display (thanks to the use of a spatial light modulator), which they tout as a unique advantage of their approach. However, the team also writes that it’s possible to display typical flat images on the display as well (which, like contemporary VR headsets, can converge for a stereoscopic view).

Image courtesy NVIDIA Research

Not only that, but the Holographic Glasses project touts a mere 2.5mm thickness for the entire display, significantly thinner than the 9mm thickness of the Reality Labs project (which was already impressively thin!).

As with any good paper though, the NVIDIA-Stanford team is quick to point out the limitations of their work.

For one, their wearable system has a tiny 22.8° diagonal field-of-view and an equally tiny 2.3mm eye-box, both of which are far too small to be viable for a practical VR headset.

Image courtesy NVIDIA Research

However, the researchers write that the limited field-of-view is largely due to their experimental combination of novel components that aren’t optimized to work together. Drastically expanding the field-of-view, they explain, is largely a matter of choosing complementary components.

“[…] the [system’s field-of-view] was mainly limited by the size of the available [spatial light modulator] and the focal length of the GP lens, both of which could be improved with different components. For example, the focal length can be halved without significantly increasing the total thickness by stacking two identical GP lenses and a circular polarizer [Moon et al. 2020]. With a 2-inch SLM and a 15mm focal length GP lens, we could achieve a monocular FOV of up to 120°”
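The geometry behind that claim is straightforward: the monocular field-of-view is the angle the SLM subtends through the lens, roughly 2·atan((SLM width / 2) / focal length). Here is a quick sanity check of the quoted figures (a 2-inch ≈ 50.8 mm SLM and a 15 mm focal-length lens); the formula is a standard approximation, not taken verbatim from the paper:

```python
import math

def monocular_fov_deg(slm_width_mm, focal_length_mm):
    # Full angle subtended by the SLM when viewed through a lens
    # of the given focal length.
    return 2 * math.degrees(math.atan((slm_width_mm / 2) / focal_length_mm))

print(round(monocular_fov_deg(50.8, 15), 1))  # 118.9 -> the paper's "up to 120°"
```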

As for the 2.3mm eye-box (the volume in which the rendered image can be seen), it’s way too small for practical use. However, the researchers write that they experimented with a straightforward way to expand it.

With the addition of eye-tracking, they show, the eye-box could be dynamically expanded up to 8mm by changing the angle of the light that’s sent into the waveguide. Granted, 8mm is still a very tight eye-box, and might be too small for practical use due to variations in eye-relief distance and how the glasses rest on the head, from one user to the next.

But there are variables in the system that can be adjusted to change key display factors, like the eye-box. Through their work, the researchers established the relationship between these variables, giving a clear look at what tradeoffs would need to be made to achieve different outcomes.

Image courtesy NVIDIA Research

As they show, eye-box size is directly related to the pixel pitch (distance between pixels) of the spatial light modulator, while field-of-view is related to the overall size of the spatial light modulator. Limitations on eye-relief and converging angle are also shown, relative to a sub-20mm eye-relief (which the researchers consider the upper limit of a true ‘glasses’ form-factor).
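The pixel-pitch relationship can be approximated with the standard diffraction-limited eye-box formula for holographic displays (an illustrative textbook approximation, not the paper’s exact derivation; the 3.74 µm pitch below is a common commercial phase-SLM spec, assumed here rather than taken from the paper):

```python
import math

def eyebox_mm(wavelength_mm, pixel_pitch_mm, focal_length_mm):
    # An SLM with pixel pitch p diffracts light over a half-angle of
    # asin(lambda / (2p)); a lens of focal length f maps that angular
    # spread to an eye-box roughly 2 * f * tan(theta) wide.
    theta = math.asin(wavelength_mm / (2 * pixel_pitch_mm))
    return 2 * focal_length_mm * math.tan(theta)

# Assumed values: green light (525 nm), 3.74 um pitch, 15 mm focal length.
print(round(eyebox_mm(525e-6, 3.74e-3, 15), 1))  # 2.1 -> same ballpark as the prototype's 2.3 mm
```

Note how the approximation captures the tradeoff the researchers describe: a finer pixel pitch widens the eye-box, while a larger SLM (at a fixed focal length) widens the field-of-view.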

An analysis of this “design trade space,” as they call it, was a key part of the paper.

“With our design and experimental prototypes, we hope to stimulate new research and engineering directions toward ultra-thin all-day-wearable VR displays with form-factors comparable to conventional eyeglasses,” they write.

The paper is credited to researchers Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.



  • sfmike

    This tech is coming along faster than I thought and boy we can really use it.

  • xyzs

The day these VR glasses offer truly 120 degrees and 4K per eye, VR is gonna become mainstream for real, because the “socially acceptable” barrier will be broken.
Until then, VR will remain a nerd hobby.

    • Jonathan Winters III

      Not a nerd hobby anymore – Quest 2 has made it to mainstream, with millions of casual users including moms/pops/grandmas/kids getting into it due to its ease of use.

  • I saw this in the nvidia news, thanks for explaining it!


  • Andrew Jakobs

With all the negatives they mention, I don’t see this happening for a long time. Also, they only showed the displays with the glasses, not the hardware to actually drive them, which is also a reason why current headsets are ‘bulky’. Personally I’d rather have a bit of bulkiness than having to wear the glasses with a cable running down to the processing box/battery; it’s the one thing that I already see as a big negative with my HTC Vive Pro wireless module. Also, a regular glasses design is not really practical for gaming since you move your head too much/fast, so you’ll need a strap or something, just like with regular sports glasses.

  • So… they are just trying to use a waveguide as a lens, instead of the lens we use now, in order to make a thinner display. Simple enough.

The explanation was RIDICULOUSLY COMPLEX. I’m guessing most people here know what a waveguide is; it’s how you bend light around corners in video glasses like Lenovo’s ThinkReality. It helps squish an image to just a thin line, and then expand it out again across the surface of the glasses. There are already products out there that use these.

Seems like they should be using a per-pixel waveguide layered on top of the display, sort of like an inverted insect eye. I thought of this back in high school. Someday it’ll get made.

    • Raphael

      I thought the explanation was patronisingly simple. I didn’t appreciate having the data presented as if I’m a poorly educated man who should be picking mushrooms at a mushroom farm.

      • benz145

        It wasn’t just slapping a waveguide on there. It was also using an SLM, and then creating an algorithm to optimally generate the holographic image to be seen through this specific optical pipeline.

  • brandon9271

Really, this is interesting, but how can you have a “3D” image in a single eye? I mean, I get that it could have depth, but could you even see it? The hologram would have to have different focal depths.