NVIDIA Explains Pascal’s ‘Lens Matched Shading’ for More Efficient VR Rendering

New rendering technique can increase pixel shading throughput by up to 50%, says company


The graphics pipeline in Nvidia’s new ‘Pascal’ GPUs has been rearchitected with features that can significantly improve VR rendering performance. Here the company explains how Simultaneous Multi-Projection and Lens Matched Shading work together to increase VR rendering efficiency.

As the name implies, ‘Simultaneous Multi-Projection’ (hereafter ‘SMP’) allows Pascal-based GPUs to render multiple views from the same origin point with just one geometry pass. Rendering multiple views previously required a separate pass for each projection, but with SMP up to 16 projections can be rendered in a single pass (or up to 32 when rendering from the two viewpoints needed for VR).
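
To give a rough idea of the setup involved, here’s a minimal sketch, assuming an OpenGL context on a GPU exposing the GL_NV_viewport_array2 extension: the host configures several viewports, and a single geometry pass can then broadcast each primitive to any subset of them by writing gl_ViewportMask on the shader side. The function name and layout here are illustrative, not VRWorks API:

```cpp
// Minimal sketch: set up four viewports covering the quadrants of one
// eye's render target, so a single geometry pass can broadcast each
// primitive to all of them instead of running four separate passes.
// Assumes an OpenGL 4.x context (glViewportIndexedf is core since 4.1);
// the GLSL side would write gl_ViewportMask (GL_NV_viewport_array2)
// to select which viewports receive each primitive.
#include <GL/glew.h>

void setupQuadrantViewports(float eyeWidth, float eyeHeight)
{
    const float halfW = eyeWidth / 2.0f;
    const float halfH = eyeHeight / 2.0f;

    // One viewport per quadrant of this eye's render target (indices 0-3).
    glViewportIndexedf(0, 0.0f,  0.0f,  halfW, halfH); // lower-left
    glViewportIndexedf(1, halfW, 0.0f,  halfW, halfH); // lower-right
    glViewportIndexedf(2, 0.0f,  halfH, halfW, halfH); // upper-left
    glViewportIndexedf(3, halfW, halfH, halfW, halfH); // upper-right
}
```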

[Diagram: Simultaneous Multi-Projection on Pascal (GTX 1080)]

SMP can be used specifically for VR to achieve what Nvidia calls ‘Lens Matched Shading’ (hereafter ‘LMS’). The goal of LMS is to avoid shading pixels which end up being discarded during the distortion process, before the final view is sent to the headset’s display.

[Image: traditional pre-distortion rendering]
Traditional distortion squeezes the initial render, and some of the original pixels never make it to the display.

As the company explains, traditionally a single planar view is rendered for each eye and then warped to counteract the opposite distortion imposed by the headset’s lenses. That distortion process, however, ends up discarding a significant number of pixels from the initial render, wasting rendering power that could be used elsewhere.
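
As a rough illustration of that warp, consider a simple two-coefficient radial (‘barrel’) distortion model. The coefficients below are hypothetical, since real headset runtimes supply their own distortion polynomials or meshes; the point is that samples near the edge of the output reach far into the oversized pre-distortion render, and rendered pixels that no output point maps to are wasted work:

```cpp
// Hypothetical sketch of a conventional post-render distortion lookup.
// k1/k2 are made-up coefficients; real HMD runtimes provide their own
// distortion functions or meshes.
struct Vec2 { float x, y; };

// Map a point in the final (lens-corrected) image back to a sample
// position in the pre-distortion render, both in [-1, 1] coordinates
// centered on the lens. Samples falling outside [-1, 1] -- and any
// rendered pixels never sampled by an output point -- are wasted.
Vec2 distortionSample(Vec2 p)
{
    const float k1 = 0.22f, k2 = 0.24f;        // hypothetical values
    const float r2 = p.x * p.x + p.y * p.y;    // squared radius from center
    const float s  = 1.0f + k1 * r2 + k2 * r2 * r2;
    return { p.x * s, p.y * s };
}
```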

[Image: Lens Matched Shading quadrants]
Lens Matched Shading begins with an initial image which better matches the final image sent to the display, resulting in fewer wasted pixels.

LMS aims to avoid rendering excess pixels in the first place by using SMP to break up the view for each eye into four quadrants that better approximate the final distorted view from the get-go. The result is that the GPU renders fewer pixels that are destined to be discarded, ultimately allowing that horsepower to be used elsewhere.
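
Conceptually, each quadrant applies a small linear modification to the clip-space w coordinate, so pixel density falls off toward the periphery much as the lens distortion does. The sketch below illustrates that form under the assumption that LMS uses per-quadrant coefficients of this kind (on Pascal, OpenGL exposes equivalent per-viewport coefficients through the GL_NV_clip_space_w_scaling extension via glViewportPositionWScaleNV); the struct and function names are illustrative:

```cpp
// Rough illustration of the lens-matched warp: each quadrant adds a
// linear term to clip-space w, so after the perspective divide the
// image is compressed toward the periphery, roughly matching the lens.
struct ClipPos { float x, y, z, w; };

// a and b are per-quadrant coefficients; e.g. for the upper-right
// quadrant both would be positive, so w grows -- and pixel density
// shrinks -- as x and y increase away from the lens center.
ClipPos lensMatchedWarp(ClipPos v, float a, float b)
{
    v.w += a * v.x + b * v.y;
    return v;
}
```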

Nvidia says that LMS can result in up to a 50% increase in available pixel shading throughput. However, these gains don’t come automatically just by using a Pascal-based GPU; to take advantage of SMP and LMS, applications need to be built against Nvidia’s VRWorks SDK.

Disclosure: Nvidia paid for travel and accommodation for one Road to VR correspondent to attend an event where information for this article was gathered. Nvidia also provided Road to VR with a GTX 1080 GPU for review.

  • Graham J ⭐️

    I’m sure devs will enjoy using one SDK for one GPU brand and another, or their own code, for others.

    • Myles Walker

      Not necessary if developers are using Unreal Engine 4 or Unity 5. Seems like most if not all of the work will already be done.
      https://blogs.nvidia.com/blog/2015/11/09/gameworks-vr-unreal-engine-4-ue4/
      https://blogs.nvidia.com/blog/2016/03/15/gdc-2016-unity-vr/

      • Graham J ⭐️

        Unity, Unreal and other engine devs are still devs.

        • Myles Walker

That’s true, but they already had to implement different SDKs for different HMDs anyway. A vendor-agnostic solution probably wouldn’t be as efficient as either LiquidVR or VRWorks.

From what I can tell, using either is not actually a requirement for being able to develop on that platform. If you don’t want to put in the work to reap the benefits for that platform (e.g. performance gains from Lens Matched Shading), then don’t. A one-size-fits-all approach would be ideal, but this is the reality of things.

          • Graham J ⭐️

Now they not only have to support multiple HMDs, but multiple GPU SDKs for each, multiplying effort. As you say, they’re probably doing that to some extent already; my point is more that Nvidia is promoting this idea to push devs to make a choice rather than support both. Why not propose a standard for LMS, for example?

          • Myles Walker

            I don’t disagree, it sucks. I couldn’t find anything indicating that LiquidVR was implemented in Unreal or Unity. CryEngine seems to be the only one that uses it. That’s unfortunate.

  • Rogue_Transfer

[Edit: See jos’ reply below; sadly this isn’t going to be as possible as I thought.] So, if SteamVR and the Oculus APIs could incorporate this in place of their current warping functions, all VR apps would automatically gain a massive performance boost on Pascal GPUs, without any developer intervention.

    Either way, since everything big for VR is yet to come, the future is very bright for this technique!

There’s no need for prior native VR games to be updated, as they’re already optimised for a solid 90 FPS on a 980, and the GTX 1080 beats even a 980 Ti.

Vorpx will incorporate it, so existing non-VR games will gain (on Pascal). Future VR games with more detail will also incorporate such a major benefit.

So, all in all, VR games (past & future) that need more performance for graphical detail will benefit from a Pascal card. Let’s hope AMD also has ideas for similar benefits in their new range, though this seems like a home run, being hardware-based.

    • jos

Unfortunately, they can’t just implement it in the SteamVR API. SteamVR only has you submit a frame texture, nothing else. The warping here is done before that frame texture is built, so it has to be implemented in the game’s render pipeline itself.
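
      For reference, the submission path is basically just this (simplified, using field names from recent OpenVR headers; the texture id is just an example):

      ```cpp
      #include <openvr.h>
      #include <cstdint>

      // The app renders a finished eye texture itself, then just hands
      // the pixels over -- SteamVR never touches the render pipeline.
      void submitLeftEye(uint32_t glTextureId)
      {
          vr::Texture_t tex = { reinterpret_cast<void*>(uintptr_t(glTextureId)),
                                vr::TextureType_OpenGL, vr::ColorSpace_Gamma };
          vr::VRCompositor()->Submit(vr::Eye_Left, &tex);
      }
      ```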

      • Rogue_Transfer

        Yes, looking into the API, the current method does just take simple rendered textures.

Fortunately, previous VR apps don’t need it (as they’re designed for a near-solid 90 FPS on older, lower-performing GPUs than the 1080/1070). Hopefully, new apps will take advantage of the big potential this feature seems to offer to deliver greater graphical detail.