Virtual reality is coming, and it's going to need high frame rates and low-latency visuals. In this guest post from NVIDIA graphics programmer Nathan Reed, we take a deep dive into the company's forthcoming GameWorks VR initiative. What is it, and exactly how will it help drive our forthcoming virtual reality experiences?


Nathan Reed is a graphics programmer, amateur physicist, and sci-fi nerd. He teaches computers how to make pretty pictures and is excited by beautiful, immersive, story-driven games and interactive fiction. Nathan works on VR stuff at NVIDIA, and previously worked at Sucker Punch Productions on the Infamous series of games for PS3 and PS4.


We know you’re excited about VR, and we are too! It’s no exaggeration to say it’s the most interesting thing to happen in gaming and entertainment in years. VR is the next frontier, the possibilities are endless, and we’re still just at the beginning.

NVIDIA is getting into VR in a big way. VR games are going to require lots of GPU horsepower—with low latency, stereo rendering, high framerate, high resolution, and high-fidelity visuals all required to bring truly immersive experiences to life. As a GPU company, of course we’re going to rise to meet the new challenges that VR presents, and we’re going to do all we can to help VR game and headset developers use our GPUs to create the best VR experiences possible.


To that end, we’ve built—and are continuing to build—GameWorks VR. GameWorks VR is the name for a suite of technologies we’re developing to tackle the challenges of high-performance VR rendering. It has several different components, some aimed at game engine developers and some aimed at headset developers. GameWorks VR technologies are available under a limited alpha program now, and we’re working closely with Oculus, Valve, Epic, and others to get these technologies road-tested and hopefully soon deployed to our customers around the world.


For Engine Developers

VR SLI

People have two eyes, so a VR game has to perform stereo rendering. That increases both the CPU and GPU cost of rendering a frame quite a bit—in the worst case, almost doubling it. Some operations, such as physics simulations and shadow map rendering, can be shared across both stereo views. However, the actual rendering of the views themselves has to be done separately for each eye, to ensure correct parallax and depth cues that your brain can fuse into a perception of a 3D virtual world.
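To make the split concrete, here is a minimal sketch of such a stereo frame loop. Everything in it (the Matrix4 type, the Scene type, and the engine/runtime helpers) is a hypothetical stand-in, not a real API: the view-independent work runs once per frame, while each eye's pass gets its own view transform, offset by half the interpupillary distance.

```cpp
// Minimal stereo frame loop sketch. All types and helpers here are
// hypothetical stand-ins used only to illustrate the structure.

struct Matrix4 { float m[16]; };
struct Scene;

// Assumed engine/HMD-runtime helpers (declarations only):
void    UpdatePhysics(Scene&);                       // view-independent
void    RenderShadowMaps(Scene&);                    // shared by both eyes
Matrix4 Inverse(const Matrix4&);
Matrix4 Translate(float x, float y, float z);
Matrix4 operator*(const Matrix4&, const Matrix4&);
Matrix4 GetEyeProjection(int eye);                   // from the HMD runtime
void    RenderScene(Scene&, const Matrix4& view, const Matrix4& proj, int eye);

void RenderStereoFrame(Scene& scene, const Matrix4& headPose, float ipdMeters)
{
    // Work both eyes can share runs exactly once per frame.
    UpdatePhysics(scene);
    RenderShadowMaps(scene);

    // The per-eye passes differ only in their view transform: each eye sits
    // half the interpupillary distance (IPD) from the head's center.
    for (int eye = 0; eye < 2; ++eye) {
        float halfIpd = (eye == 0 ? -0.5f : +0.5f) * ipdMeters;
        Matrix4 view  = Translate(-halfIpd, 0.0f, 0.0f) * Inverse(headPose);
        RenderScene(scene, view, GetEyeProjection(eye), eye);
    }
}
```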


It’s intuitively obvious that with two independent views, you can parallelize the rendering work across two GPUs for a massive improvement in performance. In other words, you render one eye on each GPU, then combine the two images into a single frame to send out to the headset. This reduces the amount of work each GPU is doing, and thus improves your framerate—or alternatively, it allows you to use higher graphics settings while still hitting the headset’s 90 Hz refresh rate, without hurting latency at all.

Image: The Oculus Rift VR Headset

That’s the main way we expect people to use VR SLI. But VR SLI is even more than that—it’s really a DirectX extension API that allows engine developers explicit control over how work is distributed across any number of GPUs. So if you’re a developer and you want to support 4 GPUs or even 8 GPUs in a machine, you can do it. The power is in your hands to split up the work however you want, over however many GPUs you choose to support.


As mentioned, VR SLI operates as a DirectX extension. There are two main ways it can be used. First, it enables GPU affinity masking: the ability to mask off which GPUs a set of draw calls will go to. With this feature, if an engine already supports stereo rendering, it’s very easy to enable dual-GPU support. All you have to do is add a few lines of code to send all the left eye’s draw calls to the first GPU, and all the right eye’s draw calls to the second GPU. For things like shadow maps that will be used by both GPUs, you can send those draw calls to both GPUs. It really is that simple, and incredibly easy to integrate into an engine.
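As an illustration, here is a sketch of what an affinity-masked frame might look like. The SetGpuRenderMask call and the draw helpers below are hypothetical stand-ins; the real extension's entry points live in NVIDIA's driver API and differ in name and signature.

```cpp
#include <d3d11.h>

// Hypothetical stand-in for the driver extension's affinity-mask call; the
// real GameWorks VR entry point has a different name and signature.
enum GpuMask : unsigned { GPU_LEFT = 0x1, GPU_RIGHT = 0x2, GPU_ALL = 0x3 };
void SetGpuRenderMask(ID3D11DeviceContext* ctx, unsigned mask);

// Assumed engine draw helpers (declarations only):
enum Eye { LEFT_EYE, RIGHT_EYE };
void DrawShadowMaps(ID3D11DeviceContext* ctx);
void DrawEye(ID3D11DeviceContext* ctx, Eye eye);

void RenderStereoFrameWithAffinity(ID3D11DeviceContext* ctx)
{
    // Shared work goes to both GPUs, so each has its own copy of the results.
    SetGpuRenderMask(ctx, GPU_ALL);
    DrawShadowMaps(ctx);

    // Each eye's draw calls go to one GPU only.
    SetGpuRenderMask(ctx, GPU_LEFT);
    DrawEye(ctx, LEFT_EYE);

    SetGpuRenderMask(ctx, GPU_RIGHT);
    DrawEye(ctx, RIGHT_EYE);

    // Afterward, the right eye's image is transferred to the primary GPU and
    // the two views are composited for the headset (transfer call omitted).
    SetGpuRenderMask(ctx, GPU_ALL);
}
```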


The other way to use VR SLI is GPU broadcasting. This requires deeper integration and more work on the part of developers, but it benefits not only GPU performance but CPU performance as well. CPU performance matters because once you’ve split your game’s rendering work across two GPUs, the CPU becomes the next most likely bottleneck impairing a game’s performance.

GPU broadcasting allows you to render both eye views using a single set of draw calls, rather than submitting entirely separate draw calls for each eye. Thus, it cuts the number of draw calls per frame—and their associated CPU overhead—roughly in half. This works because the draw calls for the two eyes are almost completely identical to begin with: both eyes see the same objects and render the same geometry, with the same shaders, textures, and so on. From the driver’s point of view, it’s just doing the same work twice over. The only difference between the eyes is their view position—just a few numbers in a constant buffer. VR SLI enables you to submit your draw calls once, broadcasting them to both GPUs, while also sending different constant buffers to the two GPUs so that each GPU gets its correct eye position. This lets you render both eyes with hardly any more CPU overhead than it would cost to render a single eye.
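Here is a sketch of the broadcast path under the same caveat as before: SetGpuBroadcast and UploadConstantsPerGpu are hypothetical stand-ins for the extension's actual calls, used only to show the shape of the technique.

```cpp
#include <d3d11.h>

// Hypothetical stand-ins for the broadcast extension; the real GameWorks VR
// API differs in names and details. The key idea: upload different constant
// data to each GPU, then submit the scene's draw calls once.
void SetGpuBroadcast(ID3D11DeviceContext* ctx, bool enable);
void UploadConstantsPerGpu(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                           const void* dataGpu0, const void* dataGpu1,
                           unsigned sizeBytes);

struct Matrix4 { float m[16]; };
struct PerEyeConstants { Matrix4 viewProj; };  // the only per-eye difference

void DrawSceneOnce(ID3D11DeviceContext* ctx);  // assumed engine helper

void RenderStereoFrameBroadcast(ID3D11DeviceContext* ctx, ID3D11Buffer* eyeCB,
                                const PerEyeConstants& leftEye,
                                const PerEyeConstants& rightEye)
{
    // GPU 0 sees the left eye's constants, GPU 1 the right eye's.
    UploadConstantsPerGpu(ctx, eyeCB, &leftEye, &rightEye,
                          sizeof(PerEyeConstants));

    // One set of draw calls, broadcast to both GPUs: roughly half the CPU
    // submission cost of rendering the two eyes separately.
    SetGpuBroadcast(ctx, true);
    DrawSceneOnce(ctx);
    SetGpuBroadcast(ctx, false);
}
```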


I’ve discussed two different ways to use VR SLI—affinity masking and broadcasting—but note that you don’t have to choose between them. Both styles of use can easily be mixed within the same application. So it’s easy for engine developers to start out using affinity masking, then gradually convert portions of their engine over to broadcasting as and when they need to. However, any level of VR SLI support does require active integration into an engine—it’s not possible to automatically enable VR SLI in games that haven’t integrated it.
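A sketch of how the two styles might coexist within a single frame, reusing the hypothetical stand-in calls from the sketches above:

```cpp
#include <d3d11.h>

// Hypothetical stand-ins, as in the earlier sketches.
enum GpuMask : unsigned { GPU_LEFT = 0x1, GPU_RIGHT = 0x2, GPU_ALL = 0x3 };
enum Eye { LEFT_EYE, RIGHT_EYE };
void SetGpuRenderMask(ID3D11DeviceContext* ctx, unsigned mask);
void SetGpuBroadcast(ID3D11DeviceContext* ctx, bool enable);
void DrawSceneOnce(ID3D11DeviceContext* ctx);
void DrawEyeSpecificPass(ID3D11DeviceContext* ctx, Eye eye);

void RenderFrameMixed(ID3D11DeviceContext* ctx)
{
    // Main pass: one submission, broadcast to both GPUs.
    SetGpuBroadcast(ctx, true);
    DrawSceneOnce(ctx);
    SetGpuBroadcast(ctx, false);

    // A pass that genuinely differs per eye falls back to affinity masking.
    SetGpuRenderMask(ctx, GPU_LEFT);
    DrawEyeSpecificPass(ctx, LEFT_EYE);
    SetGpuRenderMask(ctx, GPU_RIGHT);
    DrawEyeSpecificPass(ctx, RIGHT_EYE);
    SetGpuRenderMask(ctx, GPU_ALL);
}
```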

Since VR SLI is a software (driver) feature, it works across a wide variety of NVIDIA GPUs, going all the way back to the GeForce GTX 600 series. Currently it exists as an extension to DX11. DX12 already includes very similar multi-GPU support built in. We hope to eventually expose VR SLI extensions for OpenGL and Vulkan, as well as bringing VR SLI to Linux further down the road.
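For comparison, DX12 expresses the same idea natively through NodeMask fields: each command queue and command list declares which physical GPU of a linked adapter it targets. A minimal sketch using the actual D3D12 API, with error handling omitted:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// DX12 linked-node multi-GPU: create one direct command queue per physical
// GPU, each bound to its node via a one-bit NodeMask.
void CreatePerNodeQueues(ID3D12Device* device,
                         std::vector<ComPtr<ID3D12CommandQueue>>& queues)
{
    UINT nodeCount = device->GetNodeCount();  // physical GPUs in the link
    for (UINT node = 0; node < nodeCount; ++node) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node;           // bind this queue to one GPU
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
        queues.push_back(queue);
    }
}
```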

VR SLI is a great feature to get maximum performance out of a multi-GPU system. But how can we make VR rendering more efficient even on a single GPU? That’s where my next topic comes in.

Page 2 – Multi-Resolution Shading



