Digital lightfields are a cutting-edge technology that can render photorealistic VR scenes, and OTOY has been pioneering the rendering and compression techniques needed to handle the massive amounts of data required to create them.
Their OctaneRender is a GPU-based, physically correct renderer that has been integrated into 24 special effects industry tools, with support for Unity and Unreal Engine on the way. They’ve been pioneering cloud-based compression techniques that allow them to stream volumetric lightfield video to a mobile headset like the Gear VR, which they demonstrated for the first time at SIGGRAPH 2016 last week.
Jules Urbach is the CEO and cofounder of OTOY, and I had a chance to sit down with him at SIGGRAPH in order to understand what new capabilities digital lightfield technologies present, some of the new emerging file formats, the future of volumetric lightfield capture mixed with photogrammetry techniques, capturing an 8D reflectance field, and his thoughts on swapping out realities once we’re able to realistically render out the metaverse.
LISTEN TO THE VOICES OF VR PODCAST
OTOY is building their technology stack on top of open standards so that they can convert lightfields rendered with Octane into an interchange format like glTF, which can be used in all of the industry-standard graphics processing tools. They also hope to eventually deliver their physically correct renders directly to the web using WebGL.
In the Khronos Group press release about glTF momentum, Jules said, “OTOY believes glTF will become the industry standard for compact and efficient 3D mesh transmission, much as JPEG has been for images. To that end, glTF, in tandem with Open Shader Language, will become core components in the ORBX scene interchange format, and fully supported in over 24 content creation tools and game engines powered by OctaneRender.”
Jules told me that they’re working on OctaneRender support for Unity and Unreal Engine, so users will soon be able to integrate digital lightfields within interactive gaming environments. This means that you’ll be able to change the lighting conditions of whatever you shot once you get it into a game engine, which sets it apart from other volumetric capture approaches. The challenge is that there aren’t any commercially available lightfield cameras yet, and Lytro’s Immerge lightfield camera is not going to be within the price range of the average consumer.
Last year, OTOY released a demonstration video of the first-ever light field capture for VR:
Jules says that this capture process takes about an hour, which means that it’s primarily suited for static scenes, though he says they’re working on much faster techniques. However, they’re not interested in becoming a hardware manufacturer; they’re creating 8D reflectance field capture prototypes with the hope that others will build the hardware needed to utilize their cloud-based OctaneRender pipeline.
Jules says that compressed video is not a viable solution for delivering the pixel density that next-generation screens require, and that their cloud-based lightfield streaming can achieve up to 2000fps. Most 360 photos and videos are also limited to stereo cubemaps, which don’t account for positional tracking. But lightfield cameras like Lytro’s do a volumetric capture that preserves parallax and could create navigable room-scale experiences.
@OTOY added support for rendering stereo cube maps in the Octane renderer. Their test is the highest quality scene I have seen in an HMD.
— John Carmack (@ID_AA_Carmack) February 6, 2015
Jules expects that the future of volumetric video will be a combination of super high-quality photogrammetry environment capture with a foveated-rendered lightfield video stream. He said that the third-place winner of the Render the Metaverse contest used this type of photogrammetry blending. If Riccardo Minervino’s Fushimi Inari Forest scene were converted into a mesh, it would be over a trillion triangles. He says that the OctaneRender output is much more efficient, so this “volumetric synthetic lightfield” can be rendered within a mobile VR headset.
Overall, OTOY has an impressive suite of digital lightfield technologies that are being integrated with nearly all of the industry-standard tools, with game engine integration on the way. Their holographic rendering yields the most photorealistic results that I’ve seen so far in VR, but the bottleneck for producing live-action volumetric video is the lack of commercially available lightfield capture hardware. Lightfields solve many of the open problems around the lack of positional tracking in 360 video, and so will inevitably become a key component of the future of storytelling in VR. And with the game engine integration of OctaneRender, we’ll be able to move beyond passive narratives to truly interactive storytelling experiences, and toward the ultimate potential of a photorealistic metaverse that’s indistinguishable from reality.