Meta Researchers Reveal Compact Ultra-wide Field-of-View VR & MR Headsets

Ahead of an upcoming technical conference, researchers from Meta’s Reality Labs Research group published details on their work toward creating ultra-wide field-of-view VR & MR headsets that use novel optics to maintain a compact goggles-style form-factor.

Published in advance of the Emerging Technologies program at ACM SIGGRAPH 2025, the research article details two headsets, each achieving a horizontal field-of-view of 180 degrees (a huge jump over Meta's existing headsets; Quest 3, for comparison, is around 100 degrees).

The first headset is a pure VR headset which the researchers say uses “high-curvature reflective polarizers” to achieve the wide field of view in a compact form-factor.

Image courtesy Reality Labs Research

The other is an MR headset, which uses the same underlying optics and head-mount but also incorporates four passthrough cameras to provide an ultra-wide passthrough field-of-view to match the headset’s field-of-view. The cameras total 80MP of resolution at 60 FPS.

Image courtesy Reality Labs Research

The researchers compared the field-of-view of their experimental headsets to that of the current Quest 3. In the case of the MR headset, you can clearly see the advantages of the wider field-of-view: the user can easily see someone who is in a chair right next to them, and also has peripheral awareness of a snack in their lap.

Image courtesy Reality Labs Research
Image courtesy Reality Labs Research

Both experimental headsets appear to use something similar to the outside-in ‘Constellation’ tracking system that Meta used on its first consumer headset, the Oculus Rift CV1. We’ve seen Constellation pop up on a number of Reality Labs Research headsets over the years, likely because it’s easier to use for rapid iteration compared to inside-out tracking systems.

The researchers point out that similarly wide field-of-view headsets already exist on the consumer market (for instance, those from Pimax), but the field-of-view often comes at the cost of significant bulk.

A Pimax headset, known for its wide field-of-view. | Photo by Road to VR

The Reality Labs researchers claim that these experimental headsets have a “form-factor comparable to current consumer devices.”

“Together, our prototype headsets establish a new state-of-the-art in immersive virtual and mixed reality experiences, pointing to the user benefits of wider FOVs for entertainment and telepresence applications,” the researchers claim.

For those hoping these experimental headsets point to a future Quest headset with an ultra-wide field-of-view… it’s worth noting that Meta does lots of R&D and has shown off many research prototypes over the years featuring technologies that have yet to make it to market.

For instance, back in 2018, Meta (at the time still called Facebook) showed a research prototype headset with varifocal displays. Nearly 7 years later, the company still hasn’t shipped a headset with varifocal technology.

As the company itself will tell you, it all comes down to tradeoffs; Meta CTO Andrew ‘Boz’ Bosworth explained as recently as late 2024 why he thinks pursuing a wider field-of-view in consumer VR headsets brings too many downsides in terms of price, weight, battery life, etc. But there’s always the chance that this latest research causes him to change his mind.

Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Arashi

    That's pretty cool! I respect Pimax for trying, but honestly their wide-FoV headsets have suboptimal optics, to say it nicely LOL. I didn't like it very much. Awesome to hear that Meta is researching this.

    • Rogue Transfer

      Meta researches everything they can think of; most of it never makes it into a product though, sadly. Nor is there anything to say these prototype optics aren't suboptimal, since no one outside of Meta has tried them to tell.

      • Charles U. Farley

        It is spelled "Meat"

  • Erik Middeldorp

    Typo in the first sentence of the article, Meat instead of Meta.

    I'm looking forward to a wider FoV, but my PC already struggles with all the pixels of the Quest 3, and I wouldn't want the resolution any lower than the Quest 3's. We need dynamic foveated rendering, and for it to deliver the hoped-for performance gains.

  • Charles U. Farley

    "Meat’s Reality Labs Research group" — BEST TYPO EVER!!!!!

    • Jistuce

      Reality Labs Research: Bringing you the finest bridges between cyberspace and meatspace.

  • jiink

    Cool stuff, but I'm surprised how big this still looks compared to Hypervision's high-FOV optics + displays.

    • Andrew Jakobs

      Except Hypervision hasn't really shown a working prototype matching the renders on their website; all they've shown was a handheld prototype, which was pretty large.

      • Rogue Transfer

        Except Meta hasn't really shown working prototypes either; these are just images. No one from the media has tried them to tell if they even work well or not.

        As we've learnt from Norm of Tested, who finally got to test most of Meta's prior prototypes last year, they all turned out to be flawed.

        That's the thing about prototypes, they aren't product-ready, they're just internal test devices of unknown fitness.

  • Christian Schildwaechter

    TL;DR: the FoV is limited by optics, displays, compute power and human vision, and most (standalone) HMD manufacturers will continue to stick to ~100° FoV due to the disadvantages that come with a higher FoV.

    So far a higher FoV on mobile SoCs made no sense, as the required render performance increases exponentially with FoV: you have to cover a lot more geometry per degree when looking at it from the side. And the added graphics are only visible in the peripheral view, where our vision is much worse than when looking straight ahead. So it made a lot more sense to increase the image quality at the center instead of wasting it on a higher FoV.

    Dynamic foveated rendering based on eye tracking will reduce the performance issues, but it still leaves the fundamental problem that our vision simply doesn't work that well beyond just a few degrees. You cannot read any text beyond 45° to the left/right of your view direction, and it is hard even if you turn your eyes, because then the lens focuses the light onto a low-resolution part of the retina. We turn our heads to see clearly beyond 25°, and see sharpest only within about 6° straight ahead. So a high FoV is still sort of a waste, even if the performance problem is solved. Creating lenses that don't introduce a lot of distortion towards the edges is also rather hard, and with current displays with uniform pixel densities, a higher FoV will unavoidably reduce pixel density at the center.
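
    A rough sketch of how eye-tracked foveated rendering exploits this falloff: pick a coarser shading rate as the angle from the gaze point grows. The Python below is illustrative only; the thresholds are assumptions loosely based on the acuity figures above, not values from any shipping ETFR implementation.

        def shading_rate(eccentricity_deg):
            # Coarseness level by angular distance from the gaze point.
            # Thresholds are illustrative, loosely based on the figures above.
            if eccentricity_deg <= 6:     # sharpest vision: shade every pixel
                return 1
            elif eccentricity_deg <= 25:  # comfortable eye-turn range
                return 2
            elif eccentricity_deg <= 45:  # text is already unreadable here
                return 4
            else:                         # far periphery
                return 8

        # Gaze straight ahead, sampling out to the edge of a 180-degree view:
        for ecc in (0, 10, 30, 60, 90):
            r = shading_rate(ecc)
            print(f"{ecc:>2} deg off-gaze -> one shaded sample per {r}x{r} pixels")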

    So small high-FoV optics are a nice research project, and high FoV no doubt adds to immersion. But until we have displays beyond 4K/eye resolution, or special displays/lenses with non-uniform pixel density/magnification plus GPUs that can render in much finer density steps than the ones we have today, most HMDs will still go for a lower FoV. Especially since pretty much everybody is now targeting media use and productivity, where a high PPD is desirable for sharp text, which doesn't need a lot of render performance. The sole exception could be pure gaming HMDs that don't really need to be that sharp, or even couldn't be, as they are more limited by the available GPU power. These might even benefit from a high FoV, because with ETFR the actual number of pixels that has to be rendered will be lower at a lower PPD. But the only company that might even consider a standalone HMD focused primarily on gaming would be Valve; everybody else will follow Apple's AVP template, picking higher pixel density over higher FoV.
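
    To put rough numbers on that density trade-off, a back-of-the-envelope sketch (it assumes uniform pixel density across the panel, which real lenses don't give you, and the Quest-3-like figures are approximate):

        def avg_ppd(h_pixels, h_fov_deg):
            # Average pixels per degree for a uniform-density panel (crude model).
            return h_pixels / h_fov_deg

        print(avg_ppd(2064, 110))  # Quest-3-like panel over ~110 deg: ~18.8 PPD
        print(avg_ppd(2064, 180))  # same panel stretched to 180 deg: ~11.5 PPD
        print(19 * 180)            # px/eye needed to hold ~19 PPD at 180 deg: 3420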

    • Stephen Bard

      I found that I could never get used to the Quest 2 FOV, where I always felt I was looking out of a claustrophobic swim-mask tube, and the Quest 3 at 110º feels barely adequate. I have been hugely disappointed that all of the current expensive micro-OLED headsets have FOVs even narrower than the Quest 2, so I really do wish for headsets like these Meta prototypes, with FOVs that would be very immersive. It would be perfectly acceptable for the pixel density to be lower in the periphery if I can experience less claustrophobia / more immersion.

    • Ondrej

      1. The exponential performance loss is NOT because of added geometry to render, but because the planar projection of a hardware rasterizer is terrible for wide FOV, and the necessary distortion correction for the lenses causes the eye buffer to grow exponentially in resolution if you want to maintain 1:1 pixel quality in the center (a numeric sketch follows at the end of this comment).
      This was already discovered with the DK1, which was only 640 x 800 per eye, but the eye buffer could go close to 2K x 2K if the FOV was pushed far.
      Rendering purely with raytracing (primary rays for geometry) solves this problem, but raytracing is still much slower on current hardware. However, RT also allows MUCH more efficient foveated rendering than rasterization, so there are trade-offs.

      2. Wide-FoV immersion/presence has NOTHING to do with sharp foveal vision for reading text. Up to 270° is needed to cover the maximum human FOV when rotating the eyes far, but doing that for a prolonged time is painful, so it doesn't matter that much. 200° is still worth it, though.
      The new military headset from Palmer Luckey will achieve 360° by doing tricks like virtual mirrors or stretching etc., but that's for awareness, not immersion.
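
      The numeric sketch promised above (a back-of-the-envelope model, assuming a symmetric planar frustum with the pixel density at the image center held fixed; under planar projection a ray at angle θ lands at f·tan(θ) on the image plane, so the buffer width needed is 2·f·tan(FoV/2)):

          import math

          def eye_buffer_width_px(center_ppd, h_fov_deg):
              # f is the focal length in pixels, i.e. pixels per radian at the center.
              f = center_ppd * 180 / math.pi
              return 2 * f * math.tan(math.radians(h_fov_deg / 2))

          for fov in (90, 110, 140, 160, 175):
              print(f"{fov} deg FoV -> {eye_buffer_width_px(20, fov):.0f} px wide at 20 PPD")
          # ~2292, ~3273, ~6297, ~12998, ~52492 px: tan(FoV/2) diverges as the
          # FoV approaches 180 deg, so planar eye buffers explode long before that.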

      • Jistuce

        Raytracing's been much slower than rasterization for decades; I don't see that ever changing. And honestly, if Nvidia or AMD ever ships a chip capable of more than "neat toy" levels of hardware-accelerated raytracing, I'll be extremely surprised.

      • Christian Schildwaechter

        1. Let's do a little practical experiment to see if increased FoV exponentially adds geometry that needs to be rendered, thus increasing the required performance. We'll use a grid of 30*30 standard 2m*2m*2m cubes with 2m spacing in Blender, with a camera at 8m distance and different FoV settings.

        a) At 50° FoV or -25° to +25°, we see one row in +-X and +-Y, or four cubes with 12 faces.
        b) +-2 rows: 90° FoV/+-45°, 16 cubes, 48 faces
        c) +-3 rows: 115° FoV/+-57.5°, 36 cubes, 108 faces
        d) +-4 rows: 130° FoV/+-65°, 64 cubes, 192 faces
        e) +-5 rows: 140° FoV/+-70°, 100 cubes, 300 faces
        f) +-10 rows: 160° FoV/+-80°, 400 cubes, 1200 faces
        g) +-15 rows: 166.5° FoV/+-83.25°, 900 cubes, 2700 faces

        If we now calculate how much extra geometry per degree of added FoV we have to render, we get
        b) 50° -> 90° = +40°: 0.9 faces/°
        c) 90° -> 115° = +25°: 2.4 faces/°
        d) 115° -> 130° = +15°: 5.6 faces/°
        e) 130° -> 140° = +10°: 10.8 faces/°
        f) 140° -> 160° = +20°: 45 faces/°
        g) 160° -> 166.5° = +6.5°: 230.8 faces/°

        I hope you agree that adding FoV increases the number of polygons to be rendered exponentially, and that it gets f*cking expensive at higher FoVs, with diminishing returns. Going from 160° to 166.5° requires rendering roughly 100 times the extra polygons per degree as going from 90° to 115°. And that is before an inefficient rasterizer even comes into play; this is a pure projection issue. (A short script reproducing these counts follows after the image link below.)

        2. I never said that sharp vision was required for immersion; it is required for productivity/media use. As current display technology has only so many pixels available, HMD designers have to choose between high FoV with lower PPD for higher immersion and low FoV with higher PPD for productivity. Most will choose the latter, even those targeting gaming, because of the high render cost a high FoV causes, as described above.

        https://uploads.disquscdn.com/images/1c20d5479edc09273d80e9a12ee521694e002fbc8a5947069a55faf75275b22d.png
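
        The script mentioned above, for anyone who wants to check the arithmetic without opening Blender (it uses the same simplifications: +-n rows in view means (2n)^2 cubes with ~3 visible faces each, and the FoV value per row count is the one measured in the Blender setup):

            # (rows, measured horizontal FoV in deg) pairs from the setup above
            data = [(1, 50), (2, 90), (3, 115), (4, 130), (5, 140), (10, 160), (15, 166.5)]

            prev = None
            for rows, fov in data:
                faces = 3 * (2 * rows) ** 2  # (2n)^2 cubes, ~3 visible faces each
                if prev:
                    d_faces, d_fov = faces - prev[0], fov - prev[1]
                    print(f"{prev[1]} -> {fov} deg: {d_faces / d_fov:.1f} extra faces/deg")
                prev = (faces, fov)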

  • Pretty cool stuff!

  • xyzs

    Ahhhhhhhh. Every 5 years there's this day of R&D hope, and it's today!
    Really cool to see them finally focus on full FOV + a contained form factor.
    Thanks for sharing this great progress.

    • Mike

      FOV increases are great – but only if they aren't trading off any binocular overlap.

      • xyzs

        Agreed. It looks like the overlap is quite similar.
        I also hope that these new lenses don't add worse glare/colour distortion than the Quest 3, which is pretty good but not immaculate.

  • asdf

    They are dead-meta, creating future zombies!

  • ShaneMcGrath

    Weight and battery life aren't an issue if you replace the crap head strap with a proper one like the BoboVR S3 Pro.
    I'd rather spend more to get some extra FOV; it's the biggest drawback for me in VR, like looking through swimming goggles, and it breaks immersion.
    At least have two options: a cheap standard one and a high-end wide-FOV version.

  • Octogod

    All of Meta's prototypes for the past several years have been to deflate the PR of other organizations. We won't see these results in practice for years, if at all.

  • Kara

    "ACM SIGGRAPH 2025 Emerging Technologies conference" – that's not the name of the conference. "Emerging Technologies" is one of the programs at SIGGRAPH, the largest conference on computer graphics and interactive techniques in the world. Other interesting programs for VR enthousiast are the "Immersive Pavilion" (XR installations and demos) and "Spatial Storytelling" (XR performances).

  • Ivan

    To find more of Meta's research, go to the publication website of the abstract or paper. In this case, the paper is called "Wide Field-of-View Mixed Reality," and you should click through the authors, especially the last ones (usually the project leads or supervisors). You can then see the other papers this person has published and find out about other projects Meta is working on.

    Another project that Meta will present at the Emerging Technologies venue at SIGGRAPH 2025 is called "Hyperrealistic VR: A 90-PPD, 1400-Nit, High-Contrast Headset."

    The abstract is as follows:

    "We demonstrate a virtual reality (VR) head-mounted display (HMD) that provides an angular resolution of 90 pixels per degree (PPD), up to 1, 400-nit brightness, and high contrast. Leveraging inside-out tracking from the Meta Quest 2, this HMD utilizes an optical design that achieves diffraction-limited performance at 90 PPD. By utilizing Unreal Engine 5 and Nvidia’s DLSS 3 AI supersampling method, hyperrealistic demos can be supported at interactive frame rates. We present this Emerging Technologies installation to exhibit the combined visual benefits of ultra-high-resolution, high-brightness and high-contrast displays. To our knowledge, this work documents the first VR HMD that achieves this combination of visual benefits in a manner supporting a natural sense of visual immersion — setting a new milestone in terms of how realistic VR experiences can be with state-of-the-art technologies.
    "

    Also on the graphics side of Meta's research is their work on Codec Avatars. They are presenting a paper at SIGGRAPH this year called "Relightable Full-body Gaussian Codec Avatars".

    If you search on YouTube for "SIGGRAPH 2025 Technical Papers Trailer" you can find a short video on this project along with other projects from other companies and universities being presented at the conference this year.

    There are other projects from other journals/conferences from Meta as well if you look through the different authors. For instance a project called: "HoloChrome: Polychromatic Illumination for Speckle Reduction in Holographic Near-Eye Displays".

    If you want to find even more research that normally isn't covered on VR/AR news sites (because there is just so much of it and it can get very technical), look for different conference websites and YouTube channels. Many of them have trailers and short videos presenting projects that you can access for free.

    There are many, but here are just a few conferences you can look up projects and videos for: ACM Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH), ACM SIGGRAPH Asia, ACM Special Interest Group on Computer-Human Interaction (SIGCHI, and CHI PLAY), ACM Symposium on Applied Perception (SAP), ACM Symposium on Spatial User Interaction (SUI), ACM Symposium on User Interface Software and Technology (UIST), Designing Interactive Systems (DIS), HCI International, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE VR, IEEE Visualization and Visual Analytics (VIS), Intelligent Computing and Virtual & Augmented Reality Simulations (ICVARS), International Conference on HCI in Games, International Conference on Multimodal Interfaces (ICMI), International Conference on Tangible, Embedded, and Embodied Interaction (TEI), International Conference on Virtual Reality (ICVR), International Society for Optics and Photonics (SPIE), International Symposium on Mixed and Augmented Reality (ISMAR), Laval Virtual, Ubiquitous Computing (UbiComp), Virtual, Augmented and Mixed Reality for Human-Robot Interaction (VAM-HRI), Virtual, Augmented and Mixed Reality (VAMR), Virtual Reality Software and Technology (VRST).

    • Arthur

      I'd love for there to be a VR version of "2 Minute Papers": just short technical deep dives into these papers and nothing else. How nice would that be. Thanks for this post btw, good info.

      • Ivan A.

        You're welcome.

        Me too. I haven't seen anyone create videos that discuss VR/AR technical papers. Most videos I've seen online just talk about VR news, posted on this and other websites.

        I've thought about creating these types of videos, as I research VR, AR, and robotics, but I, unfortunately, do not have the time.

  • Albert Hartman

    PICO and the AVP and others are competing with Meta on technology chops. To keep developers unhappy with their walled-garden approach from departing, Meta will need to up its technology advantage to attract them. Wide FOV without a huge form factor (e.g. Pimax, VRgineers) would be a big deal.

  • Rupert Jung

    Can't wait not to be able to buy this.

  • JB1968

    This is cool, but it also sucks that it was researched by Meta. We won't see it in real life unless some other company licenses this tech (or develops its own solution).
    Meta is fully focused on MOBILE now (e.g. thin glasses with XR ads and social crap under AI assistant surveillance) and this has nothing to do with their plans.