Researchers from Meta Reality Labs, Princeton University, and others have published a new paper detailing a method for achieving ultra-wide field-of-view holographic displays with retina resolution. The method drastically reduces the display resolution that would otherwise be necessary to reach such parameters, making it a potential shortcut to bringing holographic displays to XR headsets. Holographic displays are especially desirable in XR because they can display light-fields, a more accurate representation of the light we see in the real world.

Reality Labs Research, Meta’s R&D group for XR and AI, has spent considerable time and effort exploring the applications of holography in XR headsets.

Among the many obstacles to making holographic displays viable in an XR headset is the issue of étendue: a measure of how widely light can be spread in a holographic system. Low étendue means a narrow field-of-view, and the only way to increase the étendue in this kind of system is by increasing the size of the display or reducing the quality of the image, neither of which is desirable in an XR headset.
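As a rough aid to intuition (standard diffraction optics, an assumption on my part rather than anything stated in the article): étendue is the product of the emitting area and the solid angle into which light spreads. For an SLM with $N$ pixels of pitch $p$ illuminated at wavelength $\lambda$, the maximum diffraction angle obeys $\sin\theta_{\max} \approx \lambda / 2p$, so up to constant factors

```latex
G \;\approx\; A\,\Omega \;\approx\; \left(N p^{2}\right)\left(\frac{\lambda}{p}\right)^{2} \;=\; N\lambda^{2}
```

That is, étendue at a fixed wavelength grows only with total pixel count, which is why expanding étendue by 64 times translates into needing roughly 64 times fewer SLM pixels for the same field-of-view.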

Researchers from Reality Labs Research, Princeton University, and King Abdullah University of Science & Technology published a new paper in the peer-reviewed research journal Nature Communications titled Neural étendue expander for ultra-wide-angle high-fidelity holographic display.

The paper introduces a method to expand the étendue of a holographic display by up to 64 times. Doing so, the researchers say, creates a shortcut to an ultra-wide field-of-view holographic display that also achieves a retina resolution of 60 pixels per degree.

Image courtesy Meta Reality Labs Research

Higher resolution spatial light modulators (SLMs) than exist today will still be needed, but the method cuts the necessary SLM resolution from billions of pixels down to just tens of millions, the researchers say.

Given a theoretical SLM with a resolution of 7,680 × 4,320, the researchers say simulations of their étendue expansion method show it could achieve a display with a 126° horizontal field-of-view and a resolution of 60 pixels per degree (truly “retina resolution”) in ideal conditions.

No such SLM exists today, but to create a comparable display without étendue expansion would require an SLM with 61,440 × 34,560 resolution, which is far beyond any current or near-future manufacturing capability.
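The numbers above are consistent with a uniform 8× expansion along each axis (8 × 8 = 64× étendue). A quick sketch of the arithmetic, using only the figures quoted in the article:

```python
# Figures quoted in the article for the simulated display.
slm_res = (7680, 4320)          # hypothetical 8K-class SLM
expansion_per_axis = 8          # 8 x 8 = 64x etendue expansion
fov_horizontal_deg = 126        # simulated horizontal field-of-view
retina_ppd = 60                 # "retina resolution" target

# Without etendue expansion, the SLM itself would need 8x the
# pixels along each axis to reach the same field-of-view.
naive_res = tuple(n * expansion_per_axis for n in slm_res)
assert naive_res == (61440, 34560)

# The total etendue expansion factor is the product of both axes.
assert expansion_per_axis ** 2 == 64

# Sanity check: the SLM's horizontal pixel count is enough to
# sample 126 degrees at 60 pixels per degree (7560 <= 7680).
assert fov_horizontal_deg * retina_ppd <= slm_res[0]
```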

Étendue expansion itself isn’t new, but the researchers say that existing methods expand étendue at significant cost to image quality, creating an inverse relationship between field-of-view and image quality.

“The étendue expanded holograms produced with [our method] are the only holograms that showcase both ultra-wide-FOV and high-fidelity,” the paper claims.

The researchers call the method “neural étendue expansion,” a ‘smart’ approach compared to existing naive methods, which don’t take into account what is being displayed.

Image courtesy Meta Reality Labs Research

“Neural étendue expanders are learned from a natural image dataset and are jointly optimized with the SLM’s wavefront modulation. Akin to a shallow neural network, this new breed of optical elements allows us to tailor the wavefront modulation element to the display of natural images and maximize display quality perceivable by the human eye,” the paper explains.

The authors—Ethan Tseng, Grace Kuo, Seung-Hwan Baek, Nathan Matsuda, Andrew Maimone, Florian Schiffers, Praneeth Chakravarthula, Qiang Fu, Wolfgang Heidrich, Douglas Lanman & Felix Heide—conclude the paper by saying they believe the method isn’t just a research step, but could one day see practical application.


“[…] neural étendue expanders support multi-wavelength illumination for color holograms. The expanders also support 3D color holography and viewer pupil movement. We envision that future holographic displays may incorporate the described optical design approach into their construction, especially for VR/AR displays.”

And while this work is exciting, the researchers suggest they have much still to explore with this method.

“Extending our work to utilize other types of emerging optics such as metasurfaces may prove to be a promising direction for future work, as diffraction angles can be greatly enlarged by nano-scale metasurface features and additional properties of light such as polarization can be modulated using meta-optics.”



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • impurekind

    Good, because increased FOV really is the most desirable thing for me personally at this point. Honestly though, even the proposed FOV is still far below where I’d like it to be. I’d take a lower resolution, something around where we’re currently at, if we could get much closer to the 200 degree FOV mark.

    • ShaneMcGrath

      I agree, Quest 3 resolution is decent enough for me for a while, but the FOV needs bumping up a lot. The biggest immersion breaker is now FOV, not resolution.

    • Daskaling

      You can get full horizontal FOV with the pimax 8kx. I own it. I also thought FOV was most important. But when I first put it on, I didn’t notice a difference between that and my 120 degree odyssey. I believe vertical FOV is more important than horizontal. And I now think the weight breaks the immersion the most. We really need lightweight glasses.

      • Foreign Devil

        Exactly. Most of these people have never tried something with an ultra-wide FOV. Why do they think it is so important to the experience? I haven’t tried wide FOV VR, and it is not something that comes to mind as most important to improve. Resolution and dynamic contrast seem more important.

      • impurekind

        If you didn’t notice a difference between a full FOV, which should be around 200 degrees, and a 120 degree FOV, I’d suggest something isn’t right with the headset claiming it gives full FOV. One should be like wearing ski goggles, and the other should basically be like wearing nothing. But, yeah, vertical FOV needs to increase too. Weight is another thing, but of course it would be great to get that down as low as possible too. I would take the same weight again though if it meant the FOV increased dramatically.

  • Zantetsu

    So is this basically a machine learning-derived upscaler?

    • Christian Schildwaechter

      No, it’s a special type of lens for coherent light that allows increasing the FoV by a factor of eight without significantly reducing image quality. The resolution numbers given are just a (rather bad/confusing) example: if you wanted to reach the same FoV as a (not yet existing) 7680 pixel wide SLM paired with this new “neural étendue expansion”/(active) lens, but without any type of lens, you’d need a display surface eight times as wide, which would equal 61440 pixels if you produced it in the same way.

      Which would be a completely insane approach. If there is no way around creating a larger display area, one would first increase the pixel size instead of using 64 pixels as one. If producing something with 64 times the pixels was even technically possible, one would use these to actually improve the resolution, not just the FoV. And the most reasonable/feasible way is to instead create a compatible lens to magnify a much smaller display area, which is what was done here.

      The listed increase in resolution is completely hypothetical, probably makes for a great headline, but otherwise gives a completely wrong impression of what this is about. The machine learning is only involved in optimizing the lenses usable in holographic displays with very complex light paths; no upscaling/resolution change is involved anywhere.

  • ViRGiN

    Valve Deckard killers?

  • gothicvillas

    every time I use VR, I am reminded of the narrow FOV. Not a game breaker as such, but it needs to be increased ASAP. If the next Quest has the same FOV as Q3, it will be a huge disappointment

  • Christian Schildwaechter

    TL;DR: This solves one specific issue in a display type far from ready for use in HMDs. The rest is about what SLMs are/do, which part they (probably) improved, very long and technical.

    Among the many obstacles needed to make holographic displays viable in an XR headset is the issue of étendue

    Nobody get too excited. This addresses one particular problem with a technology that one day will allow for very slim XR HMDs due to using holographic lenses with the complete light path inside a thin plastic film, instead of requiring a long distance between display and lenses as with aspheric/Fresnel, or a just as long path in a shorter lens module with multiple internal reflections/refractions as in pancake lenses.

    All image technology using anything holographic needs coherent light with synchronized wave phases like we get from lasers. OLED just controls the intensity of the emitted light. LC displays additionally limit the polarization/rotation of the beam by sending everything through liquid crystal filters, which probably helps reduce losses in pancake lenses that use internal refraction/reflection based on polarized surface coatings. SLMs also control the phase/synchronization, making them usable with holographic displays, and the ones electronically addressed/controlled like displays (EASLM) are typically based on LCD or LCOS microdisplays.

    As far as I can tell, current EASLMs can only emit a single color, so like in LCOS based beamers, you’ll need three of them, one each for red, green and blue, and then optically combine the colors. Their resolution is usually around 1080p, with Holoeye claiming their “up to” 4K GAEA-2 has the highest resolution on the market. All their SLMs are “phase-only”, so I’m not sure if they just serve as (very lossy) filters for other displays. SLMs are used in beamers, and Holoeye’s products are purchasable, so they’re not only research products, but with prices only on request, so probably not cheap. And I found an image of an SLM attached to a heat sink roughly 50 times its size (~7*7), so there may be some other limits for using them in HMDs.

    The problem solved here is that light comes out of the holographic lens/display with controlled emission, polarization and phase as a very thin beam that, just like a laser, doesn’t spread, resulting in a very small image/FoV, with the regular optical lenses that would typically be used to solve this working badly or not at all. I haven’t read the paper, so I still have no clue what exactly their SLM is/does, though it sounds more like an (optically-addressed) OASLM, a special type of SLM that doesn’t create the image, but alters the images from an EASLM and is used as a sort of repeater/resolution multiplier for holographic displays. I assume they managed to model this OASLM in a way that allows for an eight-fold increase along each axis in the width of the resulting beam without introducing major distortions.

    So an important puzzle piece, but not something that will bring SLM based displays to consumer HMDs anytime soon. But the statement that it reduces the required resolution from 61,440 × 34,560 to only 7,680 × 4,320 is somewhat sensationalist, as the higher number implies that the display is composed of millions of 8 × 8 pixel groups each showing only one color to get to the higher FoV, something that nobody would seriously consider, as it would be way too hard/expensive to increase the resolution of the SLM 64-fold just for that. You still need the same high resolution SLMs that don’t exist yet in a form usable for HMDs, but can now get a higher FoV thanks to clever compatible lenses that don’t suck.

    • Thanks for the very detailed comment

    • psuedonymous

      The SLM they are using is a COTS LCOS SLM (Holoeye Pluto). Reflective SLMs are a commodity part: any ‘DLP’ projector has between 1 and 3 of them (which may be actual TI DLP SLMs, an off-brand DMD, or LCOS). UHD and 4K SLMs are available for digital cinema projectors, both consumer and professional installation models. There is commercial demand for non-pixel-shift 8K reflective SLMs from the digital cinema realm, so an ‘8K’ SLM is likely to be a COTS part in the near future.

      The SLM is the least interesting portion of this work, as it’s just a generic part.

  • xyzs

    I’ll believe it when I see it…

  • Andrew Jakobs

    126 degrees is ultrawide fov for them? Ultrawide fov is more like 160-180 degrees to me. But I’ll take 126 degrees over the ones we have now. Any major increase is an important step forward. I’ll take wider fov over higher ppd (keeping current Q3/pico4 ppd in mind).

  • Ardra Diva

    don’t agree with Zuck politically but i applaud him for being the money to push VR/MR/XR/whateverR forward. Somebody’s gotta watch a pile of money burn to figure this stuff out, but we’ll enjoy our holodeck they bring us. Maybe not in my lifetime, but I bet i’ll see smart contact lenses. Simple ones already exist.