Google unveiled a method of capturing and streaming volumetric video, something Google researchers say can be compressed into a lightweight format that can even be rendered on standalone VR/AR headsets.

Both monoscopic and stereoscopic 360 video are flawed insofar as they don’t allow the VR user to move their head freely within a 3D space; you can rotate to look up, down, left, and right, and tilt your head side to side (3DOF), but you can’t lean backward or forward, stand up or sit down, or shift your head’s position to peer around an object (6DOF). Even seated, you’d be surprised at how often you shift in your chair or make micro-adjustments with your neck, movements that, paired with a standard 360 video, make you feel like you’re ‘pulling’ the world along with your head. Not exactly ideal.

Volumetric video instead captures how light exists in the physical world and displays it so VR users can move their heads around naturally. That means you can look around an object in a video, because the extra light (and geometry) data has been captured from multiple viewpoints. While Google didn’t invent the idea (we saw something similar from NextVR before it was acquired by Apple), the company is certainly making strides to reduce overall cost and finally make volumetric video practical.

In a paper published ahead of SIGGRAPH 2020, Google researchers accomplish this with a custom array of 46 time-synchronized action cams mounted on a 92cm-diameter dome. The rig gives the user an 80cm area of positional movement while delivering 10 pixels per degree of angular resolution, a field of view exceeding 220 degrees, and 30fps video capture. Check out the results below.


The researchers say the system can reconstruct objects as close as 20cm to the camera rig, thanks to a recently introduced interpolation algorithm in DeepView, Google’s deep learning-based view synthesis system.

They do this by replacing DeepView’s underlying multi-plane image (MPI) scene representation with a collection of concentric spherical shells, which the researchers say are better suited to representing panoramic light field content.
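To make the plane-versus-shell distinction concrete: MPI-style layered representations conventionally space their layers uniformly in inverse depth (disparity), so that nearby content, where parallax changes fastest, gets the most layers. A shell-based variant would apply the same spacing to shell radii. This is a hypothetical sketch of that spacing, not code from the paper:

```python
import numpy as np

def shell_radii(near, far, n):
    # Radii for n concentric shells spaced uniformly in inverse depth
    # (disparity), the conventional spacing for MPI-style layers:
    # dense near the viewer, sparse far away.
    inv = np.linspace(1.0 / near, 1.0 / far, n)
    return 1.0 / inv

# Shells from the 20cm near limit mentioned above out to 100m.
radii = shell_radii(0.2, 100.0, 8)
```

Note how the gaps between consecutive radii grow with distance, which is exactly the behavior you want for parallax-dominated scenes.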

“We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser,” Google researchers conclude.
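The rendering step for such a layered RGBA representation reduces, at its core, to back-to-front alpha compositing of the decoded layers at the target viewpoint. Below is a minimal NumPy sketch of that final step; the function name, and the assumption that layers have already been warped to the viewpoint, are illustrative rather than taken from Google’s implementation:

```python
import numpy as np

def composite_layers(layers):
    # Back-to-front "over" compositing of straight-alpha RGBA layers,
    # ordered far to near, as in an MPI/spherical-shell representation.
    # Each layer is an (H, W, 4) float array with values in [0, 1].
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)
    return out

# Two 1x1 layers: an opaque red far shell behind a half-transparent green one.
far = np.array([[[1.0, 0.0, 0.0, 1.0]]])
near = np.array([[[0.0, 1.0, 0.0, 0.5]]])
result = composite_layers([far, near])  # blends to half red, half green
```

Keeping the layer count small and fixed, as the quote describes, is what bounds the per-frame cost of this loop on mobile GPUs.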

In practice, Google is introducing a more cost-effective solution here, one that may eventually prompt the company to build its own volumetric immersive video team, much as it did with its 2015-era Google Jump 360 rig project before that was shuttered last year. That is, of course, provided Google continues to support the project by, say, adding volumetric video support to YouTube and releasing an open source plan for the camera array itself. Whatever the case, volumetric video, or what Google’s paper calls light field video, is starting to look like a viable step forward for storytellers looking to drive the next chapter of immersive video.

If you’re looking for more examples of Google’s volumetric video, you can check them out here.



Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 3,500 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • Thomas Hall

    Has anyone worked out if Google’s volumetric videos can be watched in VR? This is the future of film for me- very exciting!

  • Fear Monkey

    It would be amazing to be able to watch a movie and move through it someday. The adult industry would certainly be at the doors wanting that kind of tech lol.

  • Greyl

    Heh, you know what everyone wants this tech for ;)

    • Martijn Valk

      Of course.. Nature documentaries, right?

      • brandon9271

        I LOVE nature :)

      • Bob

        No just documentaries about various types of bushes :)

        • Jonathan Winters III

          and mountains. Hope Cas and Chary are not reading this thread ;)

  • “Welcome to Light Fields” by Google, free for PC VR on Steam, has stunning samples of this future technology

    https://uploads.disquscdn.com/images/aac57e7772df132c1aeaed316e9af8eb281a717495884d0bc3353a89f36e7815.jpg

  • Surprised the article didn’t mention that Mr Doob has built a WebGL viewer for stills in this format.

    https://deepview-ar.glitch.me

    and

    https://deepview-vr.glitch.me

  • Erik Romão

    Wasn’t Nvidia also experimenting with similar tech a few years ago?

  • dk

    hmmm obviously this has to happen ….but u know what’s weird is ….they haven’t even made this for digital rendered scenes…. there should already be lightfield videos on youtube of stuff like Henry