NVIDIA’s latest ‘RTX’ cards are not just an incremental step in performance; they represent a significant new direction for NVIDIA’s approach to real-time rendering. Built on the company’s ‘Turing’ architecture, the RTX cards pave the way for graphics infused with accelerated ray-tracing and artificial intelligence, and also bring VR-specific enhancements over NVIDIA’s prior 10-series GPUs, which are based on the ‘Pascal’ architecture.

The Turing architecture introduces two new types of ‘cores’ (processors designed to quickly handle specific tasks) that are not found in prior GeForce cards: RT Cores and Tensor Cores. RT Cores are designed to accelerate ray-tracing operations, the math that simulates how light bounces around a scene and interacts with objects. Tensor Cores are designed to accelerate tensor operations, the math underpinning AI inferencing tasks like those performed by neural networks built with deep learning.
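To make the distinction concrete, here is a minimal Python sketch of the kind of primitive an RT Core accelerates in hardware: testing whether a ray hits an object, and at what distance. This is purely illustrative; real RT Cores test rays against triangles and bounding-volume hierarchy nodes rather than spheres, and none of these names come from NVIDIA’s APIs.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest intersection distance t of a ray with a sphere, or None.

    Solves |o + t*d - c|^2 = r^2 for the smallest t >= 0. This is the
    flavor of intersection math that ray-tracing hardware accelerates;
    a full renderer fires millions of such rays per frame.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                        # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a) # nearest of the two roots
    return t if t >= 0.0 else None
```

For example, a ray fired from the origin down +z hits a unit sphere centered at (0, 0, 5) at distance t = 4.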

NVIDIA’s Turing chip | Image courtesy NVIDIA

This means that in addition to the usual CUDA-based rendering, RTX cards also have the ability to bring accelerated ray-tracing and AI processes into the rendering mix, which can make for some impressive real-time reflections, among other things. But beyond the potential for better graphics thanks to more realistic lighting, what do RTX cards bring to the table for VR? NVIDIA recently broke down some of the highlights.

Multi-view Rendering for Ultra-wide FOV Headsets

Image courtesy NVIDIA

Over the last few years, so-called ‘single-pass stereo’ rendering has become an important part of rendering stereoscopic scenery for VR headsets, given that each eye needs to see a slightly different view of the scene to create an accurate 3D view. Single-pass stereo allows the geometry of the scene to be rendered for both eyes with a single rendering pass, instead of one pass for each eye.
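As a rough sketch of why that saves work, consider how the two eye views relate: each is the head pose offset by half the interpupillary distance (IPD). A naive renderer submits all geometry once per eye transform; single-pass stereo submits it once and lets the GPU project it with both. The 64 mm IPD, head pose, and draw-call names below are illustrative assumptions, not any vendor’s API.

```python
def eye_poses(head_pos, head_right, ipd=0.064):
    """Left/right eye positions, offset from the head position along the
    head's local right vector by half the IPD (meters)."""
    half = ipd / 2.0
    left = tuple(p - half * r for p, r in zip(head_pos, head_right))
    right = tuple(p + half * r for p, r in zip(head_pos, head_right))
    return left, right

# Naive two-pass stereo (hypothetical draw calls):
#   for eye in eye_poses(head_pos, head_right):
#       draw_scene(view_from(eye))          # geometry submitted twice
# Single-pass stereo:
#   draw_scene_multiview(eye_poses(head_pos, head_right))  # submitted once
```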

Upcoming ultra-wide FOV headsets like StarVR One typically use displays which are angled to one another to achieve their wide view. Rendering such a wide field (especially when the content of each eye’s view is quite different given the expanded FOV and angle of each display) without distortion is not a trivial task.

RTX GPUs are now capable of Multi-view Rendering, which NVIDIA says is like a next-generation version of Single-pass Stereo. Multi-view Rendering bumps the number of possible geometry projections which can be achieved with a single pass from two to four, allowing ultra-wide fields of view to be rendered in a single pass for headsets with angled displays. The company also says that all four projections are now position-independent and able to shift along any axis, which should allow for even more complex display layouts in future headsets.
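A toy Python sketch of what those four projections look like for an angled-display headset: an inner (forward-facing) and an outer (canted) view direction per eye, which Multi-view Rendering would let the GPU project geometry against in a single pass. The 25-degree cant angle is an illustrative assumption, not a published spec for any headset.

```python
import math

def multiview_directions(cant_deg=25.0):
    """Four view directions for a dual-panel-per-eye layout:
    [left-inner, left-outer, right-inner, right-outer].

    Outer panels are yawed outward by the cant angle; -z is straight
    ahead in a right-handed camera convention."""
    views = []
    for eye, sign in (("left", -1.0), ("right", +1.0)):
        for panel, yaw in (("inner", 0.0), ("outer", sign * cant_deg)):
            a = math.radians(yaw)
            forward = (math.sin(a), 0.0, -math.cos(a))
            views.append((eye, panel, forward))
    return views
```

With Single-pass Stereo only two such projections fit in a pass, so a four-panel layout would need two passes over the geometry; Multi-view Rendering covers all four at once.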

It appears that each perspective from Multi-view Rendering should also be capable of being segmented for Lens Matched Shading via Simultaneous Multi-projection (a feature introduced with Pascal cards).

Variable-rate Shading for Foveated Rendering

Foveated rendering—reducing detail in your peripheral view where you don’t notice it for more efficient rendering—has been talked about for years now, but practical implementation relies heavily on eye-tracking technology which isn’t available in many current-gen headsets. But with eye-tracking expected in a range of next-generation headsets, the need for an efficient method of foveated rendering is increasingly important.

SEE ALSO
Eye-tracking is a Game Changer for XR That Goes Far Beyond Foveated Rendering

NVIDIA says that their new RTX cards support a feature called Variable-rate Shading which allows for dynamic adjustments to how much shading is done in one part of the scene vs. another. It isn’t clear just yet exactly how this feature works, but it sounds like the next step up from the previous Multi-res Shading feature which worked like a static foveated rendering solution.
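The general idea can be sketched as a shading-rate map: full-rate shading near the gaze point and progressively coarser shading toward the periphery. The radii, rate values, and per-pixel granularity below are illustrative assumptions; the actual Variable-rate Shading feature exposes coarse per-tile control through the graphics APIs, not this toy interface.

```python
def shading_rate_map(width, height, gaze, fine_r, coarse_r):
    """Toy foveated shading-rate map.

    Rate 1 = shade every pixel (fovea), 2 = half rate (mid ring),
    4 = quarter rate (periphery). With eye tracking, `gaze` would be
    updated every frame; without it, a static map centered on the lens
    sweet spot gives fixed foveation."""
    rates = []
    for y in range(height):
        row = []
        for x in range(width):
            d = ((x - gaze[0]) ** 2 + (y - gaze[1]) ** 2) ** 0.5
            row.append(1 if d < fine_r else 2 if d < coarse_r else 4)
        rates.append(row)
    return rates
```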

Accelerated Ray-traced Sound Simulation

An illustration of how VRWorks Audio traces paths through geometry to build an acoustic model of the environment | Image courtesy NVIDIA

As it turns out, ray-tracing doesn’t just have to involve light. Ray-tracing can also be used for simulating the complex interactions of sound waves as they bounce around an environment. As NVIDIA points out, many spatial audio implementations for VR today provide accurate positional sound, but generally don’t account for how that sound would interact with the geometry of the environment around the user, which can have a major impact on the accuracy and immersiveness of the scene.

NVIDIA revealed their VRWorks Audio solution—which simulates sound in real-time on the GPU—back in 2016. With the RT Cores in the new RTX cards, the company says that VRWorks Audio implementations are accelerated up to 6x compared to the prior generation of GPUs. As VRWorks Audio demands a share of a GPU’s processing power, the newly accelerated capability might be more attractive to developers as they can retain more of the GPU’s horsepower for graphical tasks than before.
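A minimal sketch of the underlying idea, using the classic image-source method for a single wall in 2D: reflect the sound source across the wall, and the straight-line distance from that mirrored source to the listener gives the length (and thus the delay) of the first-order echo. Real GPU solutions like VRWorks Audio trace many such paths against full scene geometry with material absorption; this toy model and its names are assumptions for illustration, not NVIDIA’s API.

```python
import math

def first_reflection(src, lis, wall_y, speed_of_sound=343.0):
    """Direct-path and single-bounce echo delays (seconds) for a source
    and listener above an infinite wall at y = wall_y, via the
    image-source method: mirror the source across the wall and measure
    the straight-line distance from the mirror image to the listener."""
    direct = math.dist(src, lis)
    mirrored = (src[0], 2.0 * wall_y - src[1])  # source reflected across wall
    reflected = math.dist(mirrored, lis)        # length of the bounced path
    return {
        "direct_delay_s": direct / speed_of_sound,
        "echo_delay_s": reflected / speed_of_sound,
    }
```

Ray-tracing hardware generalizes this from one analytic wall to arbitrary scene geometry, which is why RT Cores can accelerate audio as well as light.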

VirtualLink

Image courtesy NVIDIA

Then of course there’s VirtualLink, a USB-C based connection standard being pushed by VR’s biggest players. The VirtualLink connector offers four high-speed HBR3 DisplayPort lanes (which are “scalable for future needs”), a USB3.1 data channel for on-board cameras, and up to 27 watts of power, all in a single cable. The standard is said to be “purpose-built for VR,” being optimized for latency and the demands of next-generation headsets.

All Turing cards technically support VirtualLink (including Quadro cards), though ports could vary from card to card depending upon the manufacturer. For NVIDIA’s part, the company’s own ‘Founder’s Edition’ version of the 2080 Ti, 2080, and 2070 are confirmed to include it.

– – — – –

All of the features mentioned above (except for VirtualLink) are part of NVIDIA’s VRWorks package, and must be specifically implemented by developers or game engine creators. The company says that Variable-rate Shading, Multi-view Rendering and VRWorks Audio SDKs will be made available to developers through a VRWorks update in September.




Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Rick

    upcoming ultra-wide FOV headsets like PIMAX 8K you mean!

    • Kev

      Impressions of the Pimax at the meeting today are all really positive. Could be a very cool thing.

    • SomeGuyorAnother

      I think articles are trying to hold back on talking about Pimax, given the delays and silence. Check this thread for some of the sources of the doubts. https://www.reddit.com/r/Pimax/comments/9a5uvz/some_not_so_good_info_from_8k_testers/

      • sfmike

        Also check out the positive info coming from the Berlin Pimax gathering. https://www.youtube.com/watch?v=13etqjaYLKU&t=202s

        • Peter Hansen

          Too bad I am not speaking this language.

          • bud

            Sebastian asked questions in Chinese then answered to the camera in English. He also started in English explaining this.

          • Peter Hansen

            There were long sections in Chinese which got barely translated into English. Just saying.

      • Kev

        Those doubts largely turned out to be false in real testing from the first big gathering of backers.

        • Peter Hansen

          Which ones turned out to be false? I was there yesterday.

          • Kev

            Basically none of the stuff was valid. People said the brightness was low – turned out to be very slight. People said the distortion was horrible – not true at all; at least 20 people at the meet said they didn’t even notice it until they were asked, and even then had to stare to see it. People said the tracking had issues – totally debunked. People said some games wouldn’t work – a big library was tested today and that is totally debunked. People said the FOV wasn’t really high, with some even claiming it was just barely more than a Vive – totally debunked, it’s close to 200 degrees. People said it was far heavier than a Vive – turned out to be the same weight. Dozens of people at the meet today were shocked these things weren’t true.

          • Peter Hansen

            Brightness is no big problem, true. But the black level is rather weak compared to the OLED displays of other headsets.

            20 people not noticing the distortion… Not many experts there, it seems. At my event many noticed; it was widely discussed. Also, the demos were chosen wisely. In the hand-tracking demo the periphery was basically black. Not really sure about Elite Dangerous there. In Assetto Corsa you just have to focus on more or less distant objects in the center of the image. But if you tried Skyrim VR, the distortion was very prominent, because highly structured textures were basically everywhere in the dungeon where the game starts.

            Tracking has improved drastically, obviously. It did have solid issues before. But if a company does not get lighthouse tracking right, they have bigger problems than lens distortion. I was very worried in this regard after the previous reviews I had read and seen.

            Game compatibility is a development over time. Good that this has improved!

            And many people did not believe the FOV? It was known for a long time that the FOV actually is as big as announced. That is strange indeed.

            Anyways, the last iteration of the HMD still had many issues, and even the M2 has some. Lens distortion, black level, and comfort (pressure on the nose, loose fit on some faces) are some. It is by no means a perfect device. And it was totally justified to be highly skeptical after the first rough prototypes that were shown and the massive delay this project developed – for some it will be more than one year after the promised delivery time that they can hold their Pimax 8k in their hands.

            We can all be happy and relieved that the device turned out to be so good. But personally I also see this in relation to it being a Kickstarter. If a company like Oculus or HTC/Valve delivered such a partly unfinished product, they would have a rather big PR problem.

          • Kev

            I was a backer for Oculus and a day 1 order for the Vive and can tell you with zero hesitation they were both rather disastrous on launch. I spent two weeks trying to get my Vive to even consistently operate. They were both laughably late as well.

            The Oculus launch was indeed an unmitigated disaster. Driver issues, defectives, extensive delays, problems installing h

          • Peter Hansen

            The DK1 was a (pre-) dev kit. The Vive was hardware-wise what my Vive is today. Because it is the same device. Partly rough software start, but otherwise without major surprises (except positive ones).

            The base Pimax is technically superior in FOV and visual clarity compared with other headsets of its price point (i.e. except StarVR and XTAL). Not perfectly sure about comparing it with the Vive Pro, though, and Samsung Odyssey (Plus) regarding clarity. Because the latter have their pixels spread over less FOV and in the vertical dimensions they even have more on the absolute count (1600 vs. 1440).

            And yes, the Pimax 8k has more pixels vertically, but still less visual info (and more blur), because of the upscaling from 1440p to 4k per eye.

            I am not saying that I am not a little bit hyped now after the hands-on with the 8k. But still you have to see things realistically.

          • Kev

            The stats:

            Vive Pro: 2880×1600 (1440×1600 per eye), 110-degree FoV (~100 degrees per eye), ~14.4 ppd per eye

            Pimax 5K (also 5K native input): 5120×1440 (2560×1440 per eye), 200-degree FoV (150 degrees per eye), 17.07 ppd per eye

            Pimax 8K (upscaled from 2560×1440): 7680×2160 (3840×2160 per eye), 200-degree FoV (150 degrees per eye), 25.6 ppd per eye

            Also the Pimax is RGB vs. the not-so-good pentile on the Vive Pro.

          • Peter Hansen

            Yes. The Pimax 5k has a little denser pixels horizontally, while the Vive Pro has more pixels vertically. So the Vive Pro either has a higher vertical pixel density _or_ a higher vertical FOV. Both would be beneficial in some way.

            The Pimax 8k clearly has more pixels and a higher pixel density, but this is not backed up well by visual information because of the upscaling process. The result is less SDE, which is very much appreciated, but there is also a somewhat annoying blur which does not correspond well to the actual pixel density and leaves you with “I feel I should be seeing clearer than this”. This was well visible in Assetto Corsa.

            The Pimax 8k X of course is far superior display-wise. But without two GTX 1080 Ti (at least!) you won’t get very far with this HMD.

          • Peter Hansen

            Why are you so eager to see the Pimax as the perfect device? It is always good to maintain a differentiated view.

          • Kev

            Huh?? Wtf I say some positive things about it and I’m suddenly some sort of shill. You are the one who asserted Oculus and Vive had perfect releases. I think all devices have serious flaws and a device that checks all the right boxes to a high degree is years away. However I do think Pimax is at least attempting to push the envelope with an ambitious project and I do indeed respect that. I very much hope Oculus, HTC and others do the same.

          • Peter Hansen

            I did not say that, but your assessment seemed overly positive to me. All doubts were “debunked”? They were not. However, I also think Pimax goes the right way and I hope others will follow. I am looking forward to receiving my 8k.

          • polysix

            pimax is janky, but then so was Vive.

            Rift still > everything even with its lesser spec. Only Valve can save us with their upcoming system or a true rift gen 2, everyone else makes tat.

  • Jordan_c

    Hopefully, OpenXR will abstract away most of the vendor specific code and make my life as a dev easier.

  • Nicholas

    Oh, you mean like the revolutionary “VR works” that was going to give at least twice the performance in VR games when using a Pascal GPU? Oddly enough, it ended up tanking hard and was never heard of except in one racing game or something. It’s not a very uncommon fate for new and exciting features from nvidia XD

    • SomeGuyorAnother

      Yeah, NVIDIA-reliant tech has a habit of fading away quickly, as many devs don’t want to rely on something that can isolate the AMD side of the gamer market, with the exception of those that NVIDIA puts the money into. Not that the tech is bad, but potential sales are what drives what tech is used, not whether or not it’s the shiniest thing.

    • Kev

      Except that pascal only had one type of core so the benefit was proportionately less. 3 types of cores can be used to far greater effect. I suspect people will find all sorts of uses for this with wide fov VR being an obvious one.

    • Lucidfeuer

      Yup, aka Vaporworks. You won’t see it in a single VR experience. Every time we tried fiddling, whether with Flex, NVIDIAWorks, or even VRWorks, their APIs are such a huge, catastrophically unoptimised mess that there’s no point bothering to integrate them.

    • Dave

      I don’t know about VRWorks, but I can say the Pascal architecture in my 1070 has given me fantastic VR performance compared to the 980 before it, so they must be doing something right. I’m expecting the 20xx cards to also perform well in VR.

  • impurekind

    This is all great stuff for VR.

    • R FC

      It certainly is, and for computer graphics in general. Even for the current generation of headsets, the increase in compute (have a look at the difference in CUDA core efficiency between Pascal and Turing) will allow increased SS with higher frame rates.

      RTX2080Ti should be arriving at my office in 2 weeks :)

    • Icebeat

      What exactly is the “great stuff for VR”? The ray-traced sound simulation or the VirtualLink? Because the rest of these technologies were introduced with Pascal and nobody is using them.

      • impurekind

        All of it. Anything that’s about improving any areas of the VR experience is great stuff for VR.

  • Mike

    There’s a typo in the caption for the picture showing the chip.

    • Mike

      Oh good, it’s fixed.

  • Peter Hansen

    Bunch of buzz-words. Seems basically the same as the 10-series GPU buzz-words.

  • mirak

    Aren’t they missing on wireless video transmission ?

  • oompah

    RT is exciting
    Shadows, reflections, refractions, translucence
    time to wake up in Matrix
    (but watchout for deja-vus aka CIA, NSA, etc US cats)

  • Maruvi

    Definitely one of your weakest articles. First off, it’s Turing not Turning. Secondly, Multi-res and multi shading have been around since GTX 1000’s series. They were even shown off in its reveal. Thirdly, ray traced sounds are not new at all! Wtf you made an article about it 2 years ago with AMD! One of the ONLY new things this line of cards brings is the virtual link which saves a whopping 1 cable (sarcasm). I’m a bit grumpy because as nice as RTX cards may be, the tech behind it is stupid expensive for 40fps at 1080p and marketing is very shady with a lack of proper benchmarks (red flag). Nvidia wants to sell ray tracing to you. But it’s also scared of the imperfections.

    • benz145

      Whoops, silly mistake on my part to accidentally write “Turning,” will fix that.

      On the other points, the headline states “new VR rendering features and enhancements,” meaning some new rendering features, and some new enhancements to existing stuff.

      – As far as we know, Variable Rate Shading is new, as stated
      – We didn’t claim Multi-res Shading was new, in fact we specifically pointed out that it was a prior feature:

      It isn’t clear just yet exactly how this feature works, but it sounds like the next step up from the previous Multi-res Shading feature which worked like a static foveated rendering solution.

      – We didn’t claim ray-traced audio was new, only that it’s accelerated with the new cards. We specifically said it was first introduced by NVIDIA in 2016:

      NVIDIA revealed their VRWorks Audio solution—which simulates sound in real-time on the GPU—back in 2016. With the RT Cores in the new RTX cards, the company says that VRWorks Audio implementations are accelerated up to 6x compared to the prior generation of GPUs.

  • JustNiz

    Basically this generation brings pretty much nothing actually new specifically to VR. It’s outright wrong to list RT and Tensor as VR-specific improvements. Even the 2080 Ti won’t have high enough ray-tracing performance for the sustained framerate necessary for VR. Tensor also adds nothing specific to VR. Multi-view rendering is not specific to Turing; your Pascal card will be able to do it too. No headsets exist with a VirtualLink interface yet, and even if they eventually do, it’s just the existing DP 1.4a standard signal + USB going down a different cable, so it doesn’t actually add anything significantly new.

  • David

    Every time I read one of these things I drool over wanting to get my fingers on RTX. I’m playing around with water simulation and there are numerous ways this would make my life easier in the not too distant future. Instead of doing marching cubes over the surface, I’d just check where my ray intersected the first water particle in SPH. Instead of twisting GLSL into knots trying to simulate refraction, or reflection, I’d just create another two rays using snells law and geometric optics. Easy. Peasy. Outside of the complexities of the fluid dynamics, water is one giant ray tracing experience that employs really simplistic math. In fact, I’m rather surprised how much we’ve seen ‘reflections’ demonstrated by RTX and nothing with water, glass or other materials with variable indices of refraction. Enough so that I’m somewhat concerned the current systems can’t handle it (with AI, are they only limited to static surfaces?) – after all, lighting or reflections alone are hardly the coolest part of ray tracing. That said, if I actually started being able to readily program with it, I would. A ray tracing pipeline would be far easier and far superior to a non-ray tracing pipeline.