Oculus is calling its latest rendering tech ‘Stereo Shading Reprojection’, and says that for scenes which are heavy in pixel shading, the approach can save upwards of 20% on GPU performance. The company says the new VR rendering technique can be easily added to Unity-based projects, and that it “shouldn’t be hard to integrate with Unreal and other engines.”

The VR industry has a vested interest in finding ways to make VR rendering more efficient. For every bit of increased efficiency, the (currently quite high) bar for the hardware that’s required to drive VR headsets becomes lower and less expensive, making VR available to more people.

Following last year’s introduction of ‘Asynchronous Spacewarp’, another bit of rendering tech which allowed Oculus to establish a lower ‘Oculus Minimum Spec’, the company has now introduced Stereo Shading Reprojection, a technique which can reduce GPU load by 20% on certain scenes.

The basic idea behind SSR involves using similarities in the perspective of each eye’s view to eliminate redundant rendering work.

Because our eyes are separated by a small distance, each eye sees the world from a slightly different perspective. Though each view is different, there's a lot of visual similarity between the two. SSR aims to capitalize on that similarity by avoiding redundant rendering work: parts of the scene that have already been calculated for one eye are reused to infer what those same parts should look like from the other eye's similar perspective.

The green region of the photo on the right shows artifacts introduced during the SSR process; these are corrected in a later step. | Image courtesy Oculus

More technically, Oculus engineers Jian Zhang and Simon Green write on the Oculus developer blog that SSR uses “information from the depth buffer to reproject the first eye’s rendering result into the second eye’s framebuffer.” It’s a relatively simple idea, but as the pair points out, to work well for VR the solution needs to achieve the following:

  • It is still stereoscopically correct, i.e. the sense of depth should be identical to normal rendering
  • It can recognize pixels not visible in the first eye but visible in the second eye due to slightly different points of view
  • It has a backup solution for specular surfaces that don’t work well under reprojection
  • It does not have obvious visual artifacts

And in order to be practical it should further:

  • Be easy to integrate into your project
  • Fit into the traditional rendering engine well, not interfering with other rendering like transparency passes or post effects
  • Be easy to turn on and off dynamically, so you don’t have to use it when it isn’t a win
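The reprojection at the heart of the technique is straightforward geometry: a pixel's depth value lets you lift it back into 3D eye space, shift it by the eye separation, and project it into the other eye's image. Here's a minimal sketch using a simple pinhole camera model; the names and values (`FOCAL`, `IPD`) are illustrative assumptions, not Oculus' actual code.

```python
# Hypothetical sketch of depth-based stereo reprojection using a simple
# pinhole camera model. FOCAL and IPD are illustrative assumptions, not
# values from the Oculus implementation.

FOCAL = 1.0    # image-plane focal length (assumed units)
IPD = 0.064    # eye separation in meters (a typical adult IPD)

def unproject(u, v, depth, focal=FOCAL):
    """Lift a left-eye image coordinate back into eye space using its depth."""
    return (u * depth / focal, v * depth / focal, depth)

def reproject_to_right_eye(u, v, depth, ipd=IPD, focal=FOCAL):
    """Find where the surface point behind left-eye pixel (u, v) lands in
    the right eye's image. The right eye sits +ipd along x, so the point
    shifts by -ipd in the right eye's frame before being re-projected."""
    x, y, z = unproject(u, v, depth, focal)
    return (focal * (x - ipd) / z, focal * y / z)
```

Note that the horizontal shift between the two images (the disparity) works out to `focal * ipd / z`: it shrinks with distance, which is why distant parts of the scene look nearly identical in both eyes and reproject almost perfectly.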

They share the basic procedure for Stereo Shading Reprojection rendering:

  1. Render left eye: save out the depth and color buffer before the transparency pass.
  2. Render right eye: save out the right eye depth (after the right eye depth-only pass).
  3. Reprojection pass: using the depth values, reproject the left eye color to the right eye’s position, and generate a pixel culling mask.
  4. Continue the right eye opaque and additive lighting passes, fill pixels in those areas which aren’t covered by the pixel culling mask.
  5. Reset the pixel culling mask and finish subsequent passes on the right eye.

The left image shows an edge ghosting artifact; a Conservative Reprojection Filter is used to fix this issue, as seen on the right. | Image courtesy Oculus
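The reprojection and culling steps of the procedure above (steps 3 and 4) can be sketched on the CPU as a toy illustration. The buffer layout and the `reproject` callback are assumptions for clarity; the real technique runs on the GPU, where the culling mask would live in depth/stencil hardware.

```python
# Illustrative CPU sketch of the reprojection pass (steps 3-4 above).
# Buffers are row-major 2D lists; this is a toy, not the GPU implementation.

DEPTH_EPS = 0.01  # tolerance for matching reprojected depth (assumed)

def reprojection_pass(left_color, left_depth, right_depth, reproject):
    """Fill right-eye pixels from the left eye wherever the depths agree.

    `reproject(x, y)` maps a right-eye pixel to its left-eye source pixel,
    or returns None if the point falls outside the left eye's view.
    Returns the partially filled right-eye color buffer and a culling mask
    marking pixels that do NOT need to be re-shaded."""
    h, w = len(right_depth), len(right_depth[0])
    right_color = [[None] * w for _ in range(h)]
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            src = reproject(x, y)
            if src is None:
                continue  # not visible in the left eye: shade normally later
            sx, sy = src
            if abs(left_depth[sy][sx] - right_depth[y][x]) < DEPTH_EPS:
                right_color[y][x] = left_color[sy][sx]
                mask[y][x] = True  # culled from the right-eye opaque pass
    return right_color, mask
```

Pixels the mask leaves uncovered (disocclusions visible only to the right eye, or surfaces whose depths disagree) fall through to the normal right-eye shading passes, which is what keeps the result stereoscopically correct.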

The post on the Oculus developer blog further details how reprojection works in this case, how to detect and correct artifacts, and a number of limitations of the technique, including incompatibility with single-pass stereo rendering (a VR rendering technique which is especially good for CPU-bound scenes).


With SSR enabled, Oculus says their tests using an Nvidia GTX 970 GPU reveal savings of some 20% on rendering time, and that the technique is especially useful for scenes which are heavy in pixel shading (like those with dynamic lights).

As with all rendering performance gains, the extra headroom can be left free to make the scene easier to render on less powerful hardware, or it can be used to add more complex graphics and gameplay on the same target hardware.

Oculus says they plan to soon release Unity sample code demonstrating Stereo Shading Reprojection.




Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • nebošlo

    Good idea. And I actually understood this, which makes me like it even more :D

  • Nimso Ny

    It’s a beautiful rendering technique to completely remove redundant pixels that you’re already rendering for the first eye.

    I’d imagine camera position reliant elements like Specular and maybe Fresnel based edge glows would be added normally, after performing the reprojection on Diffuse.

    I wonder if this would have any impact on Screen Space Effects though? They are using the Depth buffer, so maybe this could be applied to Ambient Occlusion and such?

  • brandon9271

    What happened to Nvidia’s single pass stereo and reprojection mumbo jumbo?

    • Caven

      Doesn’t work on AMD hardware. Oculus’ implementation should allow support on all VR-capable systems, regardless of the GPU used.

  • PrymeFactor

    Yet you still have folks coming up to opine rubbish about how Facebook’s involvement in Oculus is ‘bad for VR’. They fund these studies!

  • kool

    I wonder if these rendering techniques make it to psvr games? Hell isn’t that a move controller in the demo.

  • Rafael

    We need FOV rendering, it should be cheap to implement the hardware for it.

  • Mane Vr

    I wonder if this will once again lower the spec for an Oculus VR-ready PC. FB is not only making the HMD cheaper to buy, they're also making VR-ready PCs cheaper to get, widening the field. More people will see they don't need to upgrade their PC or laptop to jump into VR. It's clear to see Oculus is chasing PSVR, not Vive.