ILMxLAB, the immersive entertainment division of Lucasfilm’s VFX house Industrial Light & Magic, is experimenting with large GPU arrays of up to eight cards in an effort to bring movie-quality graphics to virtual reality.

While the graphical capabilities of real-time game engines are becoming increasingly impressive in VR, they still can’t render, in real time, imagery at the quality of the pre-rendered CGI we’re used to seeing in major film productions. By throwing more GPUs at the problem, ILMxLAB hopes to narrow that gap.

See Also: PresenZ Upgrades CGI-Quality VR Rendering Tech with More Realistic Reflections

According to the official NVIDIA Blog, ILMxLAB Principal Engineer and Technology Development Lead Lutz Latta will be speaking at SIGGRAPH 2016 this week to explore how pre-rendered “movie-quality” assets can be re-used in real-time virtual reality experiences. Latta notes the group’s experimentation with large GPU arrays for rendering high fidelity VR.

“We’re experimenting with using four to eight NVIDIA graphics cards working together for rendering,” Latta said. “We’re also closing the gap between creating movie assets and VR assets with an eye towards continually increasing frame rates.”

Part of ILMxLAB’s charter is to explore the future of immersive entertainment. The division recently released Trials on Tatooine on Steam, a short VR “experiment” for the HTC Vive set in the Star Wars universe.

“There is a fine line between a VR video game and an interactive cinematic experience that engages users in the story,” Latta told the NVIDIA Blog. “We want engagement with the story and the world it plays in, but less of the competitive nature of a video game. Trials on Tatooine was our first step in creating something meaningful.”

See Also: New Darth Vader VR Experience Coming from ILMxLAB, Here’s a Quick Peek

Trials on Tatooine supports up to two GPUs through NVIDIA’s VR SLI stack (part of VRWorks); utilizing more than two GPUs will likely involve the use of VRWorks’ ‘GPU Affinity’ API which “provides dramatic performance improvements by managing the placement of graphics and rendering workloads across multiple GPU’s. This provides developers fine grain control to pin OGL contexts to specific GPU’s,” according to NVIDIA.
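For illustration, pinning OpenGL work to a specific card via the GPU Affinity path looks roughly like the sketch below, based on NVIDIA’s `WGL_NV_gpu_affinity` extension. This is a Windows-only, NVIDIA-only mechanism; the function pointers are assumed to have been loaded via `wglGetProcAddress`, pixel-format setup and error handling are omitted, and the exact way VRWorks wires this up may differ:

```cpp
// Sketch: create an OpenGL context pinned to one specific NVIDIA GPU using
// the WGL_NV_gpu_affinity extension (Windows only). Illustrative, untested.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // HGPUNV, PFNWGLENUMGPUSNVPROC, etc.

HGLRC createContextOnGpu(UINT gpuIndex,
                         PFNWGLENUMGPUSNVPROC wglEnumGpusNV,
                         PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV)
{
    HGPUNV gpu = nullptr;
    if (!wglEnumGpusNV(gpuIndex, &gpu))    // enumerate the Nth NVIDIA GPU
        return nullptr;

    HGPUNV gpuList[2] = { gpu, nullptr };  // null-terminated GPU list
    HDC affinityDC = wglCreateAffinityDCNV(gpuList);
    if (!affinityDC)
        return nullptr;

    // Pixel format selection on the affinity DC omitted for brevity.
    // All GL commands issued on this context execute on 'gpu'.
    return wglCreateContext(affinityDC);
}
```

In a multi-GPU VR renderer, each rendering thread would make its own pinned context current (for example, one per eye or one per GPU in the array), with the finished frames copied back and composited for submission to the headset.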

These sorts of GPU array rendering implementations are unlikely to provide much benefit to your average VR gamer, who is likely to have one (or maybe two) GPUs, but could make sense for ultra-high-end VR activations like brand experiences or ‘out-of-home’ use-cases.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.

  • Pistol Pete

    Awesome!!! More VR SLI compatible games and applications devs PLEASE!!

    • Raphael

      Would be great, but since VR SLI relies on individual developer adoption it will be as hit-or-miss as conventional SLI. At least Unreal and Unity support would help to some extent.

      • Chris Malone

        SLI seems like a bit of a cul-de-sac; the faster we can get Vulkan integrated into graphics engines, the better. I believe that the SteamVR GPU test uses Vulkan under the hood, which is why you can see scaling on multi-GPU setups.

  • Badelhas

    How is this even possible if SLI doesn’t even work yet?

  • Marco –

    Rather, this makes me think that in a few years we’ll have the same rendering quality on a single-GPU setup.