Facebook published new research today which the company says shows the “thinnest VR display demonstrated to date,” in a proof-of-concept headset based on folded holographic optics.

Facebook Reality Labs, the company’s AR/VR R&D division, today published new research demonstrating an approach which combines two key features: polarization-based optical ‘folding’ and holographic lenses. In the work, researchers Andrew Maimone and Junren Wang say they’ve used the technique to create a functional VR display and lens that together are just 9mm thick. The result is a proof-of-concept VR headset which could truly be called ‘VR glasses’.

The approach has other benefits beyond its incredibly compact size; the researchers say it can also support a significantly wider color gamut than today’s VR displays, and that their display makes progress “toward scaling resolution to the limit of human vision.”

Let’s talk about how it all works.

Why Are Today’s Headsets So Big?

Photo by Road to VR

It’s natural to wonder why even the latest VR headsets are essentially just as bulky as the first generation of headsets that launched back in 2016. The answer is simple: optics. Unfortunately the solution is not so simple.

Every consumer VR headset on the market uses effectively the same optical pipeline: a macro display behind a simple lens. The lens is there to focus the light from the display into your eye. But in order for that to happen the lens needs to be a few inches from the display, otherwise it doesn’t have enough focusing power to focus the light into your eye.

That necessary distance between the display and the lens is the reason why every headset out there looks like a box on your face. The approach is still used today because the lenses and the displays are known quantities; they’re cheap & simple, and although bulky, they achieve a wide field of view and high resolution.
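To see why that gap matters, here’s a minimal sketch using the Gaussian thin-lens equation. The 40mm focal length and 38mm display distance below are illustrative numbers I’ve chosen for a headset-style magnifier setup, not figures from the research.

```python
def virtual_image_distance(f_mm, d_o_mm):
    """Gaussian thin-lens equation: 1/f = 1/d_o + 1/d_i.

    With the display placed inside the focal length (d_o < f), the image
    distance d_i comes out negative: a magnified virtual image on the same
    side as the display, which is what the eye actually views.
    """
    if d_o_mm == f_mm:
        return float("inf")  # collimated light: image at optical infinity
    return 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)

# Illustrative numbers (not from the paper): a 40 mm focal-length lens with
# the display 38 mm away puts the virtual image roughly 0.76 m in front of
# the eye -- but forces ~38 mm of depth between display and lens.
d_i = virtual_image_distance(40.0, 38.0)
print(abs(d_i) / 1000)  # ≈ 0.76 (metres)
```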

Many solutions have been proposed for making VR headsets smaller, and just about all of them include the use of novel displays and lenses.

The new research from Facebook proposes the use of both folded optics and holographic optics.

Folded Optics

What are folded optics? It’s not quite what it sounds like, but once you understand it, you’d be hard pressed to come up with a better name.

While the simple lenses in today’s VR headsets must be a certain distance from the display in order to focus the light into your eye, the concept of folded optics proposes ‘folding’ that distance over on itself, such that the light still traverses the same distance necessary for focusing, but its path is folded into a more compact area.

You can think of it like a piece of paper with an arbitrary width. When you fold the paper in half, the paper itself is still just as wide as when you started, but its width occupies less space because you folded it over on itself.
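The payoff of folding is easy to quantify with toy numbers (my own, purely illustrative): a light path that traverses the same gap several times accumulates focusing distance without adding thickness.

```python
# A light path 'folded' into three passes across a 10 mm gap accumulates
# 30 mm of focusing distance while the module stays 10 mm thick.
gap_mm = 10.0
passes = 3  # display -> partial mirror -> back -> out the front
effective_path_mm = gap_mm * passes
print(effective_path_mm, "mm of optical path in a", gap_mm, "mm package")
```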

But how the hell do you do that with light? Polarization is the key.

Image courtesy Proof of Concept Engineering

It turns out that beams of light have an ‘orientation’. Normally the orientation of light beams is random, but you can use a polarizer to only let light of a specific orientation pass through. You can think of a polarizer like the coin-slot on a vending machine: it will only accept coins in one orientation.

Using polarization, it’s possible to bounce light back and forth multiple times along an optical path before eventually letting it out and into the wearer’s eye. This approach (also known as ‘pancake optics’) allows the lens and the display to move much closer together, resulting in a more compact headset.
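For the curious, the polarization bookkeeping can be sketched with Jones calculus. This is a simplified toy model (real pancake cavities use circularly polarized light and reflections that flip handedness); it just demonstrates the key mechanism: wave plates change the polarization state, and that state is what tells a reflective polarizer whether to bounce the light again or let it out.

```python
import numpy as np

# Jones vectors: horizontal light is [1, 0], vertical is [0, 1].
H = np.array([1.0, 0.0], dtype=complex)

def polarizer(theta):
    """Linear polarizer with its transmission axis at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]], dtype=complex)

def qwp45():
    """Quarter-wave plate with its fast axis at 45 degrees."""
    return 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])

# Coin-slot analogy: a vertical polarizer blocks horizontal light entirely.
blocked = polarizer(np.pi / 2) @ H
print(np.abs(blocked) ** 2)           # intensities ~ [0, 0]

# Two passes through the quarter-wave plate (out and back, as after a
# reflection) rotate horizontal polarization to vertical -- the state
# change that switches a reflective polarizer from 'bounce' to 'exit'.
round_trip = qwp45() @ qwp45() @ H
print(np.abs(round_trip) ** 2)        # intensities ~ [0, 1]: now vertical
```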

But to go even thinner—to shrink the size of the lenses themselves—Facebook researchers have turned to holographic optics.

Holographic Optics

Rather than using a series of typical lenses (like the kind found in a pair of glasses) in the folded optics, the researchers have formed the lenses into… holograms.

If that makes your head hurt, everything is fine. Holograms are nuts, but I’ll do my best to explain.

Unlike a photograph, which is a recording of the light in a plane of space at a given moment, a hologram is a recording of the light in a volume of space at a given moment.

When you look at a photograph, you can only see the information of the light contained in the plane that was captured. When you look at a hologram, you can look around the hologram, because the information of the light in the entire volume is captured (also known as a lightfield).


Now I’m going to blow your mind. What if when you captured a hologram, the scene you captured had a lens in it? It turns out, the lens you see in the hologram will behave just like the lens in the scene. Don’t believe me? Watch this video at 0:19 and look at the magnifying glass in the scene, and watch as it magnifies the rest of the hologram, even though it is part of the hologram itself.

This is the fundamental idea behind Facebook’s holographic lens approach. The researchers effectively ‘captured’ a hologram of a real lens, condensing the optical properties of a real lens into a paper-thin holographic film.
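Physically, such a lens is recorded as the interference pattern between a reference beam and light shaped by the real lens; the result is a Fresnel-zone-style fringe pattern that diffracts incoming light toward a focus. The sketch below uses illustrative recording parameters I’ve assumed (532nm laser, 4cm focal length), not values from the paper.

```python
import numpy as np

# Illustrative recording parameters (my assumptions, not from the paper):
lam = 532e-9  # 532 nm green laser, in metres
f = 0.04      # 4 cm focal length being 'recorded' into the film

def fringe_phase(r):
    """Phase difference at radius r between a plane reference wave and a
    spherical wave converging to a focus at distance f behind the film."""
    return (2 * np.pi / lam) * (np.sqrt(r ** 2 + f ** 2) - f)

# Bright fringes are recorded where the phase difference is a whole number
# of waves (2*pi*n). Solving fringe_phase(r) = 2*pi*n gives the classic
# Fresnel-zone radii: the fringes get finer with radius, which is exactly
# the structure that lets a paper-thin film diffract light to a focus.
for n in range(1, 4):
    r_n = np.sqrt((n * lam + f) ** 2 - f ** 2)  # exact bright-fringe radius
    assert abs(fringe_phase(r_n) - 2 * np.pi * n) < 1e-6
    print(f"fringe {n}: r = {r_n * 1e3:.3f} mm")
```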

So the lens Facebook is employing in this design is, quite literally, a hologram of a lens.

Continue Reading on Page 2: Bringing it All Together



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • VRagoso

    Cool explanation guys. Well done.

  • Ad

Just by the tone I could tell this was Ben. Talking about this as speculative tech is the right way to go; I don’t think there’s much point hoping for the next generation of headsets to be like this. Comfort isn’t necessarily about form factor. A Vive or Quest is night and day compared to an Index not because they’re the wrong form factor but because the weight is distributed wrong, the straps are not nearly good enough, and they try to be as compact as possible even though they just shoved everything towards your face. The “google” prototypes last year were like this. They clearly should have had straps but they didn’t, just because they wanted them to look like glasses. And it seems like that just made them really expensive and uncomfortable, while giving off the impression of being a huge step forward even though the displays were pretty bad as a result of the folding method they used.

  • Rudl Za Vedno

Holographic Optics??? Corteks leaked that NVIDIA’s coming out with an ultra high res 4 panel (2×2 displays like Varjo) HMD with holographic technology and eye tracking with dynamic foveated rendering when it releases RTX Ampere GPUs. I was LMAO discarding the leak immediately, but damn, now I’m not so sure it isn’t true.

    • kontis

      “Corteks leaked” <—- LOL.

      This guy can BS for half an hour about wild assumption based on a single patent picture.

  • Andrew Jakobs

Sounds cool, but note especially what they say: “So it’s going to be a while before this tech makes it out of the lab, and that’s only if a better solution isn’t found by then.”
    I think something like directly projecting the image onto your retina will probably be out before this (as it’s already being shown), and that doesn’t require lenses.
    Personally I don’t care if the headset isn’t as small as a regular pair of sunglasses, as I want to have a set that blocks out all light.

  • Foreign Devil

I was reading recently about another nano technology making VR lenses out of “metaloids” rather than glass. Anyways it was some sort of nano-material and actually looked to be more promising as soon as they can find a way to scale up production.

  • Lucidfeuer

    So how is this different from Nvidia’s Lightfield glasses from a few years back?

    • kontis

      It’s in the paper:

Near-eye light field displays. An alternative method to create a thin virtual reality display is to synthesize a light field near the eye. Lanman and Luebke [2013] show how a microlens array placed over a display panel can create the focal depth cues and has produced the thinnest VR display known to date at 10 mm thick. With further engineering, designs could also likely be made significantly thinner. Although highly innovative, the display sacrifices significant spatial resolution to generate the light field, and published prototypes [Huang and Hua 2018; Lanman and Luebke 2013] have preserved <10% of the resolution of the underlying display panel. The theoretical maximum resolution of the display is also limited by the aperture diffraction of the microlenses. An alternative design, Pinlight Displays [Maimone et al. 2014], creates an image without lenses using a structured backlight consisting of point light sources. The design preserves much of the resolution of the underlying display panel, but has a very low diffraction limited resolution. In contrast to these light field designs, the proposed design is capable of preserving the full resolution of the display panel and has been demonstrated in slightly thinner form factors. Our optical design does not have a practical diffraction limit on resolution and we demonstrate that it scales to the limit of normal human vision. However, unlike light field displays, we will require additional hardware to support the focal depth cues.

      • Lucidfeuer

        Thanks, but I see it doesn’t explain how THEIR solution is different if they don’t use microlenses or pinholes?

    • benz145

      Great question. While the reply from @kontis:disqus explains the difference from a pros/cons standpoint, I’ll try to explain from a technological standpoint.

      Although this Facebook research involves holograms, it is not a light-field display because the viewable output is not a light-field. The display/optics pipeline in the Facebook prototype is functionally identical to what’s in today’s consumer headsets, but it uses a novel approach (which involves holography) to make the whole thing much more compact.

      A light-field display is a class of displays which output a light-field in real-time. There are several methods to create light-fields; the NVIDIA prototype you’re referring to uses a microlens array. The primary benefit of light-field is that they present light to our eyes in the same way that we see the real world.

      That means that light-field displays inherently support ‘accommodation’ (focusing of the light by the lens in your eye), depth-of-field (blurring of objects outside of your focal distance), and per-eye parallax (subtle changes in the view depending on the position of your eye relative to the virtual object). None of today’s VR headsets support these features because they don’t quite correctly model the virtual light as if it was coming from the real world.

      Back to Facebook’s prototype and its use of holograms. Although a hologram is a captured light-field, the hologram as used in the prototype headset is actually a hologram of a traditional lens. The purpose of doing this was to reduce the bulk of the optics from relatively large lenses down to paper-thin films. It was achieved by capturing the light-bending properties of a traditional lens onto a holographic sheet, and then placing that sheet where the real lens would normally be placed in the optical pipeline.

      So the hologram takes the place of a normal lens (only much thinner) in Facebook’s prototype. And just like a normal display/lens setup in today’s VR headsets, the output is a ‘flat’ image which is made stereoscopic by rendering a different view for each eye.

      Please let me know if this made any sense!

      • Lucidfeuer

“Hologram of a traditional lens” my mind is blown. It’s so smart, but I’m not sure I understand: this prototype just displays one fixed image through a screen, and the paper-thin lens is a “pre-rendered optical diffraction” of that same image? Which means you’d need one different paper for each image (although I can imagine the idea is to make it a mechanical/digital one) if I understood well?

        • benz145

          The headset shows rendered imagery just like VR headsets today. I think I see your confusion.

Holograms as most of us know them are the weirdly colored ‘3D photos’ that you can view on a piece of holographic film. These are captures of a specific scene (a hologram is effectively a recording of the light from that scene).

          The hologram used in the Facebook prototype is a recording of how light interacts with a lens (lenses bend light). So instead of recording a scene in the hologram, they recorded a lens.

          The lens is not the display, it’s just the part that bends the light from the display so that it is focused.

          The display itself in these headsets is a fairly typical LCD. The thing that makes the headset so much thinner than normal headsets is purely the novel optics.

          • Lucidfeuer

Ok thanks, I get the “concept,” though I’m still not sure how they “recorded” or reproduced the light-bending properties of a lens unless they simply have micro-lens arrays like in a typical lightfield display (without the need for multiple viewpoint rendering), i.e. what the actual “holographic lens” is made of…

  • mfx

    Dream news

  • lieisacake

    Still going through the paper, but so far one thing is worth noting:
“Speckle reducers” are actually very truthfully named: they reduce laser speckle, but don’t eliminate it. For anyone who doesn’t know what laser speckle is, it’s like an extremely fine grain mura effect. I’ve had a chance to see them in action in a video projection system. The speckle reducers make the grains less fine and more blurry, but they are far from eliminated, and reducers have drawbacks too, like making the speckle appear to “swim” on the image. It’s a pretty “in-your-face” artifact; it would be less noticeable on an AR headset but I’m not sure VR users will tolerate it very well.

  • Miqa

    It wasn’t touched upon in the article. Is this display technology varifocal? To me that seems more important than form factor.

    • wheeler

      Options for complementing this design with varifocal are mentioned in the paper

      • Miqa

        Cool, thanks

    • kontis

      No, it’s not.
      They suggest using mechanical solutions…

  • Jonathan Winters III

    Ummm…..yeah. VR glasses that don’t block out the outside world are absolutely useless for VR gaming. For AR, it’s ok, but VR, no.

    • Octo

      Did you just look at the top image and go straight to commenting? There is also an image of them capped off. Maybe the top image was for showing off the lenses..whatever, you’re jumping to conclusions for no good reason.

  • wait wait wait… virtual lenses? Virtual lenses. Virtual…. lenses? They make lenses virtual now? wait, waht?

    my mind cannot compute.

    • duck

      If its true , the holographic lenses can even be used in place of spectacles.

  • brubble

    Wow, very interesting!

  • As soon as I see these polarized reflective techniques it reminds me of the issues faced with the waveguides in the Hololens/Magic Leap. The devil will be in the details when it comes to colour convergence, brightness, resolution, etc.

  • duck

    Wow wow wow
    A big thumbs up
    This is revolution in optics
    Holographic lens + Polarization + rest of tech == The real thingy, what all want w/o a box on your face.
    Great job

  • duck

    If its true , the holographic lenses can even be used in place of spectacles by designing all parameters digitally, diopter settings, cylindrical aspect etc compute the holographic lens then print it on a thin sheet of glass/perspex/plastic. Hologram may even be stored on a pendrive so that the process can be repeated in case of damage. Finally specs wearers can have designer glasses.
    Great business opportunity that will overturn the optical lenses/spectacles industry upside down.

  • blue5peed

I can’t stop laughing at how insane this sounds. It’s like someone smoked a bowl and said “fu*k it lets just use holograms”.

  • Fascinating. And thanks for the explanation, it was better than the one provided by Facebook. Regarding the timing, Jeri Ellsworth says that she expects this technology to be used “soon-ish”, so it’s probably more like 5 years than 10

    • Rogue Transfer

      Highly unlikely, if you look at the speckle grain and mura over the two prototypes’ images. It’s never going to be refined to an acceptable level in just 5 years.

Just look how slow they’ve been to develop other, less optically challenging tech, and this is a whole new ball game with a number of fundamental, unsolved issues. Not to mention they only managed 69° vertical FOV, and less stereo overlap than that (since they are widely-spaced, square views).

  • Chaven Yenketswamy

    If you can achieve retinal resolution then this technology also has the potential to be used for corrective eyeware and a whole assortment of optical consumer products.

  • MadMax1998

    The article sounds like science fiction… lasers as a light source? Why don’t we use LEDs as we do now? What is a “holographic film” and how do you “capture” a hologram onto it? AFAIK, holograms right now can only be displayed by a “slotted” display that shows different angles of an image per slot, which is crude and low-res.

  • Rogue Transfer

    “In order to focus squarely on the heart of their research (the optics), they left out things that would normally be inside the headset, like the light source (laser in this case), driving electronics, tracking cameras, etc.”

    So, they misrepresent the actual thickness necessary, by omitting the laser source, and show a publicity picture of some dummy black glasses that can’t display anything without it. Makes you wonder why they didn’t/couldn’t attach the laser light source for the press photos.

    • benz145

      Why shrink a laser light source if there isn’t first a reason? This research shows a good reason for future research to focus on shrinking a laser light source.

  • Kim from Texas

They make eye glasses using this same technology. I tried these glasses and they failed miserably since they made me nauseous.

  • Bob

    The bit after you said ‘Now I’m going to blow your mind’ actually blew my mind.

  • allen mchey

It is plain ugly. I would refuse to buy it. I am so surprised at these amazing companies and how their vision of technology is hijacked by a few engineers that lack the focus for high end design. With all that jazz, good and bad reviews. I do understand the build process and technology, but this piece of glass is still way worse than Magic Leap (I am not a Magic Leap fan, I only helped to manufacture it).

  • dk

    sooo to make this work they have a light field display
92°×69° FOV and good angular resolution… hmmm interesting

  • mfx

    Can this technology achieve a 180 degree FOV in theory as well?
    Or there is a physical limitation to a certain amount like 120 degrees ?

    That’s important as the NUMBER 1 request is StarVR One FOV for next generations.

    Also by using so many polarized light system, does this mean that the source light will have to be crazy powerful ?

  • Orcinus

Your explanation of holographic lenses is nonsensical and just plain wrong. A lens “recorded” in a hologram will lens the imagery that’s a part of the hologram, not a light field coming from outside the hologram. The video you use as a demonstration has absolutely nothing to do with holographic optical elements.

HOEs are a form of diffractive (as opposed to refractive) optics. They reshape an incoming light field via a diffractive grating. You can think of it almost as a Fresnel lens at a microscopic level.