NVIDIA’s Deep Learning Super Sampling (DLSS) technology uses AI to increase the resolution of rendered frames by intelligently enlarging a smaller source frame. The technique has been lauded for delivering high-resolution games with much greater efficiency than rendering natively at the same resolution. Now the latest version, DLSS 2.1, includes VR support and could bring a huge boost to fidelity in VR content.

DLSS is a technology which increases rendering efficiency, allowing more detail to be crammed into each frame. It leverages hardware-accelerated AI operations in NVIDIA’s RTX GPUs to analyze each rendered frame and reconstruct a new, higher-resolution version.

The goal is to achieve the same level of detail as a frame natively rendered at the target resolution, while doing the whole thing more efficiently. Doing so means more graphics processing power is available for other things, like better lighting, textures, or simply increasing the framerate overall.

Image courtesy NVIDIA

For instance, a game with support for DLSS may render its native frame at 1,920 × 1,080 and then use DLSS to up-res the frame to 3,840 × 2,160. In many cases this is faster and preserves a nearly identical level of detail compared to natively rendering the frame at 3,840 × 2,160 in the first place.
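As a rough sketch of why this saves work, here is the pixel-count arithmetic behind that example (shading cost doesn’t scale perfectly linearly with pixel count, so treat the ratio as an upper bound on the savings; the function name is just for illustration):

```python
# Pixel counts for the example above: native 4K vs. the 1080p frame DLSS starts from.
def pixel_count(width: int, height: int) -> int:
    return width * height

native_4k = pixel_count(3840, 2160)   # 8,294,400 pixels
dlss_input = pixel_count(1920, 1080)  # 2,073,600 pixels

print(f"Native 4K: {native_4k:,} pixels")
print(f"DLSS input: {dlss_input:,} pixels ({native_4k // dlss_input}x fewer to shade)")
```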

Because of the need to keep latency to a minimum, VR already sets a high bar for framerate: PC VR headsets expect games to render consistently at 80 or 90 FPS. Clearing that bar requires a trade-off between visual fidelity and rendering efficiency, because a game rendering at 90 frames per second has half the time to complete each frame compared to one rendering at 45 frames per second.
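To put that budget in concrete numbers, here is a quick back-of-the-envelope calculation (the refresh rates are the ones discussed in this article; the function name is illustrative):

```python
# Per-frame time budget: all CPU and GPU work for a frame must finish in this window.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (45, 80, 90, 144):
    print(f"{fps:>3} FPS -> {frame_budget_ms(fps):5.2f} ms per frame")
# 90 FPS leaves ~11.1 ms per frame, half the ~22.2 ms available at 45 FPS.
```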

NVIDIA has announced that the latest version, DLSS 2.1, now supports VR. This could mean substantially better looking VR games that still achieve the necessary framerate.

For instance, a VR game running on Valve’s Index headset could natively render each frame at 1,440 × 800 and then use DLSS to up-res the frame to meet the headset’s display resolution of 2,880 × 1,600. By initially rendering each frame at a reduced resolution, more of the GPU’s processing power can be spent elsewhere, like on advanced lighting, particle effects, or draw distance. In the case of Index, which supports refresh rates up to 144Hz, the extra efficiency could instead be used to raise the framerate overall, thereby reducing latency further still.

NVIDIA hasn’t shared much about how DLSS 2.1 will work with VR content, or whether there are any caveats to the tech unique to VR. And since DLSS needs to be added on a per-game basis, we don’t yet have any functional examples of it being used with VR.

However, DLSS has received quite a bit of praise in non-VR games. In some cases, DLSS can even look better than a frame natively rendered at the same resolution. The folks at Digital Foundry have an excellent overview of what DLSS means for non-VR content.

It isn’t clear yet when DLSS 2.1 will launch, but it seems likely that it will be released in tandem with NVIDIA’s latest RTX 30-series cards which are set to ship starting with the RTX 3080 on September 17th. Our understanding is that DLSS 2.1 will also be available on the previous generation RTX 20-series cards.

Way back in 2016 NVIDIA released a demo called VR Funhouse which showed off a range of the company’s VR-specific rendering technologies. We’d love to see a re-release of the demo incorporating DLSS 2.1—and how about some ray-tracing while we’re at it?


Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • kontis

    Hopefully this will make Unreal Engine’s main renderer, the deferred renderer, viable for VR with all its features (like raytracing).

    The blurring of temporal AA and the high overhead of all the G-buffers, which grows steeply with resolution, meant that forward shading was hugely preferred for VR. But forward shading is a second-class citizen for Epic Games, and I predict that Nanite and Lumen in UE5 won’t be supported by it, just like raytracing, so it would be great to be able to just go with the main solution for VR too.

    This would also mean much easier Unreal Engine desktop → PC VR ports. For example, Hellblade had to change renderers for VR, and it took a lot of time to do that.

    Keep in mind that the core trick of DLSS is actually temporal AA (the upscaling/supersampling kind, as in the newest Unreal Engine versions), so at its core it has the same blurring problem; however, better accumulation and supersampling plus neural nets mostly solve it. At least on desktop.

    • Ad

      However much they advance mobile VR, PC VR will likely have jumped a fair amount ahead by then. I wasn’t expecting a doubling of GPU performance this year.

    • Trekkie

      DLSS is not temporal AA or upsampling. There is no blur. Instead there is increased resolution and the introduction of more details into the frame.

      This sounds like magic. What’s going on is that the AI is “taught what the game looks like” by feeding it hours of 4K footage rendered offline. A lower-quality version of this footage is also fed in. The AI then learns what this specific game looks like and the relationship between the low- and high-res footage.

      Now, when presented with a low-resolution frame at runtime, it can use its learned knowledge to add in details it has seen before and produce a larger, high-res frame.

      So it’s not simple Photoshop upsampling or stretching.

      • Gonzax

        That’s like fucking magic indeed!

      • Azreal42

        No, DLSS is some sort of TAA; it doesn’t hallucinate new details like SRGAN: https://developer.nvidia.com/gtc/2020/video/s22698

        • Trekkie

          Do some study please.

          • Azreal42

            Did you watch the video I just linked, with Nvidia explaining how it works? It uses the same trick as TAA / checkerboard rendering, which is jittering. The hard part is how to combine the different frames, and they use ML to determine the reconstruction heuristics.

          • Trekkie

            I work with this in my day job so I should know how it works. You are picking up the jargon and making a word salad out of it and appearing to make sense. Sorry.

            Let me attempt to explain in English:

            TAA renders at FULL resolution. So if your screen is 1080p and you’re rendering full screen, the engine creates 1920 × 1080 framebuffer(s) as well as 6 or more G-buffers, also at that resolution. The shaders then operate on 1920 × 1080 = 2,073,600 pixels. Several such full-res frames are analysed (hence “temporal”) and that information is used to produce the antialiased final image that you see. All in full res.

            DLSS requires the engine to render at half resolution, which is 1/4 the number of pixels: (1920/2) × (1080/2) = 518,400 pixels [540p]. So all buffers are at this resolution and hence the shaders do 1/4 of the work. Now this is scaled up to 1080p. Upscaling will blur – that’s a hard fact. Imagine you have the license plate of a car which is far away, the numbers and letters barely readable. If you upscale it, it will still be unreadable and, worse, blurred. DLSS has seen thousands of license plates in the high-res footage that it ingested during the training phase and knows what a license plate should look like. It then adds in the details that it learned. So now we have a full-res image with more detail than the rendered image.

            Hope this helps.

          • Azreal42

            Look, DLSS works using information from the previous frames. And you need to jitter, like in checkerboard rendering. DLSS is *not* a generative model. It’s not a GAN. Can you watch the video I linked, please? Hope this helps ;)

      • kontis

        DLSS is not temporal AA or upsampling

        I suggest watching Nvidia’s own technical presentation.
        DLSS’s fundamental data source is temporal upsampling. There were even rumors of DLSS 3.0 possibly being used as a driver-level hack for any TAA game.

        The main reason why it can sometimes look better than native is that it uses data that doesn’t exist in a single native frame.
        Multiple 1080p frames can often contain more detail than a single 4K frame if the camera moves a bit. Some smartphones already use this technique to increase camera resolution, by exploiting the fact that a user’s hand is never stable.

        This is also the reason why TAAU in the UE5 demo (1440p → 4K) looked so good WITHOUT using neural nets: it’s like 60% of the DLSS magic. It’s already available in UE4; it’s quite decent and much better than traditional TAA.
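The multi-frame accumulation being described can be sketched as a toy experiment: sample a scene at low resolution with a different sub-pixel jitter each frame, and scatter the samples onto a higher-resolution grid. This deliberately leaves out motion vectors and any neural reconstruction; it only shows how jittered low-res frames together carry more detail than any single one. All names and numbers here are illustrative, not from any DLSS implementation.

```python
import math
import random

def scene(x, y):
    # Stand-in for the renderer: a continuous "scene" we can sample anywhere.
    return math.sin(8 * x) * math.cos(8 * y)

HI, LO = 8, 4                            # high-res and low-res grid sizes per axis
acc = [[0.0] * HI for _ in range(HI)]    # accumulated high-res samples
hits = [[0] * HI for _ in range(HI)]     # samples landed in each high-res pixel

rng = random.Random(0)
for frame in range(16):
    jx, jy = rng.random() / LO, rng.random() / LO   # sub-pixel jitter per frame
    for i in range(LO):
        for j in range(LO):
            x, y = i / LO + jx, j / LO + jy          # jittered sample position
            u, v = min(int(x * HI), HI - 1), min(int(y * HI), HI - 1)
            acc[u][v] += scene(x, y)
            hits[u][v] += 1

# A single low-res frame can cover at most LO*LO of the HI*HI high-res pixels;
# accumulating jittered frames fills in more of the grid over time.
covered = sum(1 for row in hits for h in row if h > 0)
print(f"High-res pixels covered after 16 jittered low-res frames: {covered}/{HI * HI}")
```

Real TAA-style upscalers additionally reproject old samples along motion vectors so accumulation survives camera and object movement; this sketch only handles the static case.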

      • Zerofool

        You are describing DLSS 1.0. In contrast, DLSS 2.0 does in fact have a temporal component – together with the current lo-res frame, it uses the previous final frame, as well as a motion vector map to generate the new final frame. It also uses pixel jitter offset between frames to get more sample data in static situations. It’s basically a mixture of TAA and DLSS 1.0.
        That’s why you can see some temporal artifacts, like in Death Stranding as demonstrated by Digital Foundry.

        • Azreal42

          Well, from my perspective and current understanding, that’s not quite right either.

          Even though Nvidia wasn’t really crystal clear on how it worked back in 2018, you can see in the white paper (link below) that it was already some kind of supersampling filter over multiple accumulated frames.
          And to make things more confusing, DLSS 1.x had two modes, DLSS and DLSS 2X, with the second being the same algo used to produce good-quality anti-aliasing without relying on manual heuristics to mitigate artifacts (and that’s why it’s called DLSS and not DLSR – it is not a SuperResolution algo).
          And I think the key difference between DLSS 1.x and 2.x is the use of motion vectors, and I suspect they reduced the number of accumulated frames.

          From the ML perspective, DLSS relies more on how to best combine frames given texture patterns / high-order features (frequency, neighboring colors, etc.) and motion vector patterns than on texture knowledge (i.e. remembering having seen a “5” and adding missing pixels).
          And there is a reason for that: when an ML model learns something, the knowledge is embedded in the network parameters, and being able to recall specific details of textures would require a huge number of parameters, meaning a big model and a lot of VRAM usage.
          Such networks exist: SRGAN hallucinates pixels that way, and Facebook’s super-resolution paper also works roughly that way, but temporal coherency is still an open problem, even though researchers are making progress on it.


    • Andrew Jakobs

      Yeah, I would hope so too for UE4, but now I see there is a special UE4 RTX branch, so I wonder if you really need that to have the options available, or if it’s also available in the ‘default’ UE4 which you download directly through the Unreal launcher.

      • Zerofool

        Unfortunately, if you don’t want to integrate the functionality into UE4 on your own, you’re forced to use the Nvidia forks of UE4. When talking about their exclusive technologies, their track record of supporting these forks is quite bad. The VRWorks fork was last updated for version 4.18.2 of UE4, almost 3 years ago. So if you relied on it, you’re stuck with that old version. An example of a developer that supported MRS and LMS is Cyan in their game Obduction. At some point they switched to the main UE4 branch and all these Nvidia VR technologies are no longer supported. I won’t be surprised if the same thing happens with Nvidia’s RTX UE4 fork at some point.
        I really hope such technologies become part of DirectX and all GPU vendors support them. Only then would Epic consider adding support in their main branch of the engine. But Nvidia is known for their desire to have exclusive technologies, even if this means that just a handful of games will ever use them. Everything they do isn’t in the best interest of gamers; they only serve their investors and their dear leader :/

  • Ad

    The best thing they could do is have their in-house RTX team go help an upcoming game that’s perfect for RTX, like Lo Fi. That game would be a perfect showcase for both RTX and DLSS 2.1 with its endless-night setting, neon aesthetic, and lighting-heavy style.

    I hope this takes off and becomes a major driver for VR headset resolution. I wonder if this could also push visual requirements in VR down a little. Honestly I want this to become an open source or DX12 type thing so all games can make use of it, and on all graphics cards with the necessary hardware.

    • marcandrdsilets

      Totally agree about Lo Fi, it would be amazing to experience a fully raytraced VR game.

      I can’t believe how GPU-optimized a game could be with dynamic foveated rendering + DLSS 2.1 on. This is a game changer.

      Sadly, from what I understand, DLSS is calculated by the Tensor cores, the AI hardware only available on Nvidia cards :( (I haven’t seen anything about Big Navi yet, so maybe they have something equivalent)

      • Ad

        DLSS is also something that NVIDIA has an SDK for, and I think you need to apply for it. At the very least it’s not something that would come to too many games. The ideal solution would be for an open alternative to emerge, for NVIDIA to let the engines implement it in a way that AMD could build compatible hardware for, or even to have it integrated into SteamVR itself, the way ASW draws information from games.

        • Steve

          Nvidia’s proprietary crap always fails. They had to admit failure on Gsync and it will be the same with DLSS. If AMD implements a version of it on consoles and their PC cards, game developers will migrate to it. Right now Nvidia basically has to bribe developers with free software development support to get them to include it at all.

  • Brettyboy01

    I understand that my statement here may be pointless, but until a PC headset has the comfort of the PSVR, fidelity still won’t keep my attention for long.

    • Ad

      The Index and Reverb have really high comfort, usually called the highest, but they’re not halo style if you like that.

    • Gonzax

      I couldn’t disagree more, to be honest. PSVR for me is very very uncomfortable, by far the worst headset I’ve ever tried. My Index on the contrary is very comfortable, same as my CV1.

      Everyone’s head is different but I would never use PSVR as an example of comfort, even less if you need glasses. You can’t even use prescription lens inserts with it as far as I know. That alone makes it a big no in comfort.

      That’s just me, of course, it will be different for others.

      • Andrew Jakobs

        And so you see, different people, different opinions..

    • 3872Orcs

      Seems like you’ve not tried the Index. It’s the most comfortable headset out there, and I’ve owned and played a bunch of them. Index is very adjustable, has a uniquely comfortable fabric facemask, and offers the highest refresh rate range on the market (80–144Hz). Reverb also seems similar comfort-wise to the Index, except it has a lower refresh rate. Refresh rate is important for comfort.

    • Erik

      Rift S has a similar halo strap to the PSVR.

    • crim3

      I agree, comfort is a pillar of VR, although the best design depends on each one’s particular anatomy. In my experience so far, the best design is the halo type (since it removes pressure from the face; I don’t like too much pressure there), complemented with head straps, preferably running ear to ear rather than front to back. That extra support removes a lot of pressure from the halo around your head.

  • Cool!

  • Valentin Remy

    To clarify, the DLSS 2.1 SDK is already available! There’s just no title using it.

  • Patrick Hogenboom

    I’d think that the upscaling algorithm will have different artifacts for each eye, so depth perception will be less precise, especially on small details.

    Another thing that bothers me is that I’d like to know how much energy is spent on training the AI. It’s a carbon footprint and should be something that people who care about the climate are made aware of.

    • Bob

      DLSS on flatscreen games is perfectly fine because you’re usually over a foot away from the screen, so artifacts aren’t generally seen. With VR it’s a completely different story; visual artifacts can be seen more easily, and we are very good at noticing these sorts of things.

      DLSS for VR sounds great on paper but could it really reconstruct the image so as not to introduce any visible artifacts? Bearing in mind the screen is essentially a few inches away from your eyeballs.

      • kontis

        On the other hand VR also has a huge advantage flat screens don’t have.

        The temporal multi-frame accumulation used to get more data into the frame requires camera movement. If you don’t move the mouse in a flat game, the algorithm will struggle to get native-like details.

        This is not a problem in VR, because our heads always move even when trying to be completely stationary (micro movements).

        Super resolution zoom in smartphones works in a similar way – getting more by sampling pixels at different positions of the hand. However it cannot use motion vectors and all the buffers that a game engine has, so it’s harder to achieve nice results.

    • crim3

      Motion reprojection already introduces quite nasty visual artifacts, and we deal with them because, when the game can’t reach native fps, the net result is a better experience than orientation-only reprojection. I guess this would be the same.
      Also, how can you worry about a drop in the ocean when we live in a world where daily military actions keep wasting resources and hurting the environment at an alarming, wholly different scale than our regular lives as common citizens?

      • Andrew Jakobs

        Yeah, it’s a problem with a lot of VR games that use resolution scaling; it brings awful artifacts around stuff that moves, like your ‘hands’. At least in the older games.

    • yeso126

      This could happen; however, AI-based machine learning could enforce coherence between both images.

      • Patrick Hogenboom

        Good point, hadn’t thought of that. Use stereoscopic coherence as a training criterion.

  • Vic

    Would 20xx cards also get DLSS 2.1 or is it exclusive to 30xx?

    • benz145

      I haven’t heard anything about it being 30-series only, and 20-series got upgraded to 2.0 recently, so AFAIK 2.1 should be available on both.

  • Raphael

    Already doomed to fail. All of these proprietary systems are doomed, largely because of the laziness or resistance of developers to incorporate them. Nvidia has had good hardware VR acceleration available since the 900-series GPUs: single-pass stereo, lens-matched shading, etc. Features that have doubled FPS in some cases for the games that utilised them. But here we are some years down the line and hardly any games support VRWorks features.

    DCS World has dire VR performance. Eagle Dynamics, to date, haven’t introduced even one VR acceleration feature. DCS VR users go to insane lengths even to reach 45 fps. Eagle either lack the development skills or are simply hostile towards VRWorks. DCS VR users are thus forced to buy massively overpowered GPUs even though DCS doesn’t come anywhere near getting the full performance out of a GTX 1070.

    DLSS will make it into a percentage of games. It won’t become standard and will probably only reach mediocre levels of support. This is generally what happens when you rely on the smart thinking of developers. I also think Nvidia have failed over many years to get these proprietary systems adopted en masse by game developers. Hand out free or greatly reduced-cost latest-gen GPUs to established development studios.

    • kontis

      True, however there is a huge difference in hype here. When your customers are so well aware of a great feature, it gets harder to ignore, because more and more people will be asking and nagging. Small dev studios also like to use low-hanging fruit in marketing, and DLSS might be that case, unlike those other tricks not known by the average user.

      • Raphael

        Yes, this kind of tech always makes it into Unreal Engine. It’s great tech, as are the existing VRWorks features. Smart developers will make use of it. The not-so-smart ones will never bother. :)

        I hope this one has more success than the existing VRWorks acceleration. It’s a shame these systems always require coding support from the devs. That’s really where they fail, and Nvidia doesn’t really know how to push hard to build that support infrastructure. They should learn from the Palmer Luckey-era Oculus hardware promotion/education infrastructure building.

        • Zerofool

          Yes, this kind of tech always makes it into Unreal Engine. It’s great tech, as are the existing VRWorks features. Smart developers will make use of it. The not-so-smart ones will never bother. :)

          Unfortunately, if you don’t want to integrate the functionality into UE4 on your own (which is not trivial), you’re forced to use the Nvidia forks of UE4. When talking about their exclusive technologies, their track record of supporting these forks is quite bad. The VRWorks fork was last updated for version 4.18.2 of UE4, almost 3 years ago. So if you relied on it, you’re stuck with that old version. An example of a developer that supported MRS and LMS is Cyan in their game Obduction. At some point they switched to the main UE4 branch and all these Nvidia VR technologies are no longer supported. I won’t be surprised if the same thing happens with Nvidia’s RTX UE4 fork at some point.
          Our only hope is Epic to add support in the main UE4 branch, but I don’t see this happening unless the tech in question is GPU vendor agnostic :/

          • Andrew Jakobs

            Ah, that’s a shame, as I wouldn’t want to use an alternate branch for exactly the reasons you’re describing (only available for a specific version).

    • crim3

      Eagle Dynamics has repeatedly said that they are not going to implement proprietary solutions in their software. I guess no developer is willing to tell their customers that their software will be better or worse depending on the brand of their hardware.

      • Rosko

        Yes, it has nothing to do with laziness or ability; they said that VR performance was a top priority, but they are probably hesitant to support such technologies.

    • Andrew Jakobs

      As more and more games are created in ‘standard’ game engines like Unreal Engine, Unity, CryEngine, etc., it would be up to the creators of those engines to support the feature. And as far as I know, UE has already supported the Nvidia SDKs for a long time.

  • Trip

    Needing to be added per-game is the problem. The thing I most need to improve frame-rates in is DCS World and they’ve stated that they can’t justify the dev time to implement DLSS when it will only help a fraction of their user base. =(

    • Steve

      And this is the reason it will fail. Why would I as a game developer waste any time or money on something that no consoles, no AMD cards, and no older Nvidia cards support. That is like 75%+ of the gaming market. And since most games are just console ports anyway, it is a nonstarter.

  • notRobot2

    wow cool. ultimate
    Ray tracing in VR, mmmm, that’d be awesome. I presume some people’d start living in the VR world (similar to lonely guys in Japan who marry dolls).

  • BlockyBlender

    Thank you for this useless information which says nothing about the technicalities of DLSS 2.1 despite its title. Given the lack of information, you could’ve at least shared more of your prediction on how it will optimize resolution (probably using eye tracking), but instead you talked about the significance of DLSS in general, which has nothing to do with DLSS 2.1 for VR, given the title.

  • meh

    The tech future is a wide ditch, a very wide ditch. We’ll see what happens; enough with the futurist fortune-teller banter.

  • Jim P

    While everyone argues on here, Unity needs to get with it so I can have sweet eye candy in my VR.