Google has revealed a new ‘surface light-field’ rendering technology it’s calling ‘Seurat’ (after the Pointillist painter Georges Seurat). The company says the tech will not only bring CGI-quality visuals to mobile VR, but will do so at a minuscule file size, a hurdle that other light-field approaches have struggled to surmount.

Today at I/O 2017 Google introduced Seurat, a new rendering technology designed to take ultra-high-quality CGI assets that couldn’t be rendered in real time even on the highest-performance desktop hardware, and format them in a way that retains their visual fidelity while allowing them to run on mobile VR hardware. Now, that wouldn’t be very impressive if we were just talking about 360 videos, but Google’s Seurat approach actually generates sharp, properly rendered geometry, which means it retains real volumetric data, allowing players to walk around in a room-scale space rather than having their head stuck in one static point. This also means that developers can composite traditional real-time assets into the scene to create interactive gameplay within these high-fidelity environments.

So how does it work? Google says Seurat makes use of something called surface light-fields, a process which involves taking the original ultra-high-quality assets, defining a viewing area for the player, then taking a sample of possible perspectives within that area to determine everything that could possibly be viewed from within it. The high-quality assets are then reduced to a significantly smaller number of polygons—few enough that the scene can run on mobile VR hardware—while maintaining the look of the original assets, including perspective-correct specular lighting.
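Google hasn’t published implementation details, but the sampling step described above can be sketched roughly: define a box the player’s head may occupy, enumerate candidate positions inside it, and keep only the geometry visible from at least one of them. The function names and the uniform-grid strategy below are illustrative assumptions, not Seurat’s actual algorithm.

```python
import itertools

def sample_view_positions(box_min, box_max, steps):
    """Uniformly sample candidate head positions inside a viewing box.

    box_min/box_max are (x, y, z) corners of the box the player may
    occupy; steps is the number of samples along each axis (>= 2).
    Hypothetical helper: Seurat's real sampling strategy is unpublished.
    """
    def lerp(lo, hi, i):
        return lo + (hi - lo) * i / (steps - 1)

    return [tuple(lerp(lo, hi, i) for i, lo, hi in zip(idx, box_min, box_max))
            for idx in itertools.product(range(steps), repeat=3)]

def visible_set(positions, visible_from):
    """Union of everything seen from any sampled position.

    visible_from(pos) -> set of surface IDs seen from pos (a stand-in
    for a real renderer's visibility query).
    """
    seen = set()
    for pos in positions:
        seen |= visible_from(pos)
    return seen
```

Everything in the resulting set would then be kept and simplified; geometry never visible from inside the box can be discarded entirely, which is presumably one reason the output can be so small.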

While other light-field approaches that we’ve seen are fundamentally constrained by the huge volumes of data they take up (making them hard to deliver to users), Google says that individual room-scale view boxes made with Seurat can be as small as just a few megabytes, and that a complex app containing many view boxes, along with interactive real-time assets, would not be larger than a typical mobile app.

That’s a huge deal because it means developers can create mobile VR games that approximate the graphical quality that users might expect from a high-end desktop VR headset—which may be an important part of convincing people to drop nearly the same amount of money on a standalone Daydream headset.

Google still seems to be in the early phases of the Seurat rendering tech, and we’re still waiting for a deeper technical explanation; it’s possible that potential pitfalls have yet to be revealed, and there’s no word yet on when developers will be able to use the tech, or how much time/cost it takes to render such environments. If it all works as Google says, though, this could be a breakthrough for graphics on mobile VR devices.



  • Pedro Kayatt

    It just looks like an Unreal scene…

    • Jim Cherry

      Injustice 2 looks better in-engine than most CGI movies you’ll see these days. The real advantage of this tech is file size, which will allow OEMs to continue producing phones with lackluster storage capacity.

      • Pedro Kayatt

        Of course it makes sense to have small files for “mobile VR” (which is not what they’re selling; it’s actually a standalone device), but I still don’t get what they are doing. In the end it’s Unreal that compresses the assets to ETC2 or whatever..

      • kontis

        The materials used in real-time games, including Injustice 2, are primitive and absolutely cannot compare to path-traced CG movies (especially when it comes to flexibility; in some specific cases they can effectively look identical, but that doesn’t prove parity).

        Current game engines with deferred renderers use so many awful fake screen-space techniques that break down so often it’s just sad (devs and artists often have to fight the renderer to fake things). Hair shading also went downhill because of that (the death of anisotropic shading and alpha blending in games).

        What this Google tech does is bake not just to flat textures and lightmaps, but to light fields. Instead of using PS4-level or better hardware to compute shading (how the light behaves on the materials) in real time, the engine looks up the current camera perspective among pre-baked solutions held in memory, so the hardware can be a low-power smartphone and the material can theoretically be even CG-level quality.

        • Pedro Kayatt

          But doesn’t the baked lighting from Unreal Engine do exactly that? It can even use other kinds of renderers (like forward rendering). Do you know how this “baking” is different from the baking that UE4 does? Does it use other software to generate the built lightmaps? (https://docs.unrealengine.com/latest/INT/Engine/Rendering/LightingAndShadows/Lightmass/index.html)

          • RSIlluminator

            The article mentions this: “including perspective-correct specular lighting.” This is a biggie, because regular baking would “freeze” that information, whereas in reality it’s all view-dependent. Lightfields are quite amazing, and I hope that we’ll be able to use this tech soon.

      • RSIlluminator

        Jim, can you post a comparison where Injustice 2 looks better in-engine than most VFX movies of today? Are we looking at the same thing?

  • William Wallace

    So I was just talking with Joe Ludwig, and he mentioned some new IEEE VR standard whose “strength” is its ability to STORE your biometric data. Below someone who I don’t think is a mark zuck “dumbfuck” asks a good question. Will /u/RoadtoVR_Ben get these answers for us? (probably not, a search at road to VR for privacy has ZERO articles authored by Ben Lang, pathetic, /u/kentbye is k3wl though) http://www.roadtovr.com/?s=privacy
    Anyways Jaron Lanier asks – who owns the future. https://youtu.be/cCvf2DZzKX0
    https://np.reddit.com/r/oculus/comments/6bxjn4/handson_googles_standalone_daydream_headset/dhqq3bd/
    The WorldSense data that is stored and improves over time, who has access to that? Is it shared or uploaded? How is it refined? Is it refined locally or in the cloud? It’s a little like Google Glass in that it captures what it sees, but this captures so much more information. It’s totally amazing and I’m in for sure, but there are privacy and security implications of this tech. It would suck to suffer a backlash like glass did because Google didn’t anticipate some of these things.

  • PacoBell

    “there’s no word yet on […] how much time/cost it takes to render such environments”

    13 milliseconds per frame on mobile hardware. Someone wasn’t paying attention during the presentation =P

  • VRgameDevGirl

    Wouldn’t this also help desktop PCs?? I mean, imagine what we could play! I have issues with smaller scenes that don’t even contain that many objects, and still have trouble optimizing enough to get to 90 fps. When I’m working with a lot of trees and foliage it’s even worse.

    • Brent

      they used the same technique on Trials on Tatooine they just called it by a different name ^_~

  • WyrdestGeek

    Sounds like occlusion culling… maybe I’m missing something… probably I am missing something.

  • I would like to see a comparison between this and Lytro…
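The pre-baked lookup described in the comments above, replacing real-time shading with a search over stored view-dependent samples, can be sketched like this. The function names and the nearest-direction scheme are illustrative assumptions; Seurat’s actual light-field representation hasn’t been published.

```python
def bake_surface_samples(shade, view_dirs):
    """Offline step: run an expensive shading function once per stored
    view direction and keep only the results (the 'baked' light field).
    Hypothetical API for illustration only."""
    return {d: shade(d) for d in view_dirs}

def lookup_shaded(baked, view_dir):
    """Runtime step: no shading math, just pick the baked sample whose
    stored direction best matches the current camera's view direction."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    best = max(baked, key=lambda d: dot(d, view_dir))
    return baked[best]
```

Because specular highlights shift with the viewer, storing several samples per surface point rather than a single flat lightmap value is what lets the baked result remain perspective-correct.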