Google has revealed a new ‘surface light-field’ rendering technology that it’s calling ‘Seurat’ (after the famous Pointillism painter). The company says that the tech will not only bring CGI-quality visuals to mobile VR, but it will do so at a minuscule file size—a hurdle that other light-field approaches have struggled to surmount.

Today at I/O 2017 Google introduced Seurat, a new rendering technology that’s designed to take ultra high-quality CGI assets that couldn’t be run in real-time even on the highest performance desktop hardware, and format them in a way that retains their visual fidelity while allowing them to run on mobile VR hardware. Now, that wouldn’t be very impressive if we were just talking about 360 videos, but Google’s Seurat approach actually generates sharp, properly rendered geometry, which means it retains real volumetric data, allowing players to walk around in a room-scale space rather than having their head stuck at one static point. This also means that developers can composite traditional real-time assets into the scene to create interactive gameplay within these high-fidelity environments.

So how does it work? Google says Seurat makes use of something called surface light-fields, a process which involves taking the original ultra high-quality assets, defining a viewing area for the player, then taking a sample of possible perspectives within that area to determine everything that could possibly be viewed from within it. The high-quality assets are then reduced to a significantly smaller number of polygons—few enough that the scene can run on mobile VR hardware—while maintaining the look of the original assets, including perspective-correct specular lighting.
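Google hasn’t published the algorithm itself, but the sampling step it describes can be sketched roughly as follows. This is purely illustrative Python (the function names, box dimensions, and surface point are all made up for the example): sample candidate head positions inside an axis-aligned ‘view box’, then compute the set of view directions from which a given surface point could be seen—the perspectives a Seurat-style bake would need to account for.

```python
import itertools
import math

def sample_view_box(center, half_extent, steps):
    """Uniformly sample candidate head positions inside an
    axis-aligned 'view box' (the area the player may move in)."""
    axes = []
    for c, h in zip(center, half_extent):
        if steps == 1:
            axes.append([c])
        else:
            axes.append([c - h + 2 * h * i / (steps - 1) for i in range(steps)])
    return list(itertools.product(*axes))

def view_directions(samples, surface_point):
    """For one surface point, collect the unit view direction from
    each sampled head position. A surface light-field bake would
    store appearance as a function of these directions."""
    dirs = []
    for p in samples:
        v = [s - q for s, q in zip(surface_point, p)]
        n = math.sqrt(sum(x * x for x in v))
        dirs.append(tuple(x / n for x in v))
    return dirs

# Hypothetical room-scale view box: 1m x 0.6m x 1m around head height.
samples = sample_view_box(center=(0.0, 1.6, 0.0),
                          half_extent=(0.5, 0.3, 0.5), steps=3)
dirs = view_directions(samples, surface_point=(2.0, 1.5, -1.0))
```

Anything visible from every sampled position (and anything visible from none) can then be simplified aggressively, which is presumably part of how the polygon count drops so far.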


While other light-field approaches that we’ve seen are fundamentally constrained by the huge volumes of data they take up (making them hard to deliver to users), Google says that individual room-scale view boxes made with Seurat can be as small as just a few megabytes, and that a complex app containing many view boxes, along with interactive real-time assets, would not be larger than a typical mobile app.

That’s a huge deal because it means developers can create mobile VR games that approximate the graphical quality that users might expect from a high-end desktop VR headset—which may be an important part of convincing people to drop nearly the same amount of money on a standalone Daydream headset.

Google seems to still be in the early phases of the Seurat rendering tech, and we’re still waiting for a deeper technical explanation; it’s possible that potential pitfalls are yet to be revealed, and there’s no word yet on when developers will be able to use the tech, or how much time/cost it takes to render such environments. If it all works as Google says though, this could be a breakthrough for graphics on mobile VR devices.



    • Injustice 2 looks better in-engine than most CGI movies you’ll see these days. The real advantage of this tech is file size, which will allow OEMs to continue producing phones with lackluster storage capacity.

      • Of course it makes sense to have small files for “mobile VR” (which is not what they’re selling; it’s actually a standalone device), but I still don’t get what they’re doing. In the end it’s Unreal that compresses the assets to ETC2 or whatever..

      • The materials used in real-time games, including Injustice 2, are primitive and they absolutely cannot compare to path-traced CG movies (especially when it comes to flexibility; in some specific cases they can effectively look identical, but that doesn’t prove parity).

        Current game engines with deferred renderers use so many awful fake screen-space techniques that break down so often it’s just sad. Hair shading also went downhill because of that (the death of anisotropic shading and alpha blending in games).

        What this Google tech does is bake not just to flat textures and lightmaps, but to light fields: instead of using PS4-level or better hardware to compute shading (how the light behaves on the materials) in real time, the engine looks up the current camera perspective in pre-baked solutions available in memory. That means the hardware can be a low-power smartphone, while the materials can theoretically be even CG-level quality.
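The lookup the commenter describes can be sketched in a few lines. This is a toy illustration, not Seurat’s actual data structure: a hypothetical pre-baked table maps a handful of view directions to the colors an offline renderer produced, and runtime “shading” is just picking the best-matching direction for the current camera.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

# Hypothetical pre-baked table for one surface point: a few view
# directions with the shaded color the offline renderer produced
# for each (stand-in values, not real data).
baked = {
    normalize((0.0, 0.0, 1.0)): (0.9, 0.8, 0.7),
    normalize((1.0, 0.0, 1.0)): (0.5, 0.5, 0.6),
    normalize((-1.0, 0.0, 1.0)): (0.3, 0.3, 0.4),
}

def shade(view_dir):
    """Runtime 'shading' is just a lookup: pick the pre-baked sample
    whose direction best matches (highest dot product with) the
    current camera's view direction."""
    v = normalize(view_dir)
    best = max(baked, key=lambda d: sum(a * b for a, b in zip(d, v)))
    return baked[best]

color = shade((0.1, 0.0, 1.0))  # near the straight-on sample
```

A real implementation would interpolate between samples rather than snap to the nearest one, but the cheapness of the runtime step is the point: no lighting is computed on-device.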

  1. So I was just talking with Joe Ludwig, and he mentioned some new IEEE VR standard whose “strength” is its ability to STORE your biometric data. Below someone who I don’t think is a mark zuck “dumbfuck” asks a good question. Will /u/RoadtoVR_Ben get these answers for us? (probably not, a search at road to VR for privacy has ZERO articles authored by Ben Lang, pathetic, /u/kentbye is k3wl though)
    Anyway, Jaron Lanier asks: who owns the future?
    The WorldSense data that is stored and improves over time, who has access to that? Is it shared or uploaded? How is it refined? Is it refined locally or in the cloud? It’s a little like Google Glass in that it captures what it sees, but this captures so much more information. It’s totally amazing and I’m in for sure, but there are privacy and security implications of this tech. It would suck to suffer a backlash like glass did because Google didn’t anticipate some of these things.

  2. “there’s no word yet on […] how much time/cost it takes to render such environments”

    13 milliseconds per frame on mobile hardware. Someone wasn’t paying attention during the presentation =P

  3. Wouldn’t this also help desktop PCs? I mean, imagine what we could play! I have issues with smaller scenes that don’t even contain that many objects, and still have trouble optimizing enough to hit 90 fps. When I’m working with a lot of trees and foliage it’s even worse.