Lytro’s Immerge light-field camera is meant for professional high-end VR productions. It may be a beast of a rig, but it’s capable of capturing some of the best-looking volumetric video I’ve laid eyes on yet. The company has revealed a major update to the camera, the Immerge 2.0, which, through a few smart tweaks, makes for much more efficient production and higher-quality output.

Immerge 2.0 | Image courtesy Lytro

Light-field specialist Lytro, which picked up a $60 million Series D investment earlier this year, is making impressive strides in its light-field capture and playback technology. The company is approaching light-field from both the live-action and synthetic ends; last month Lytro announced Volume Tracer, software which generates light-fields from pre-rendered CG content, enabling ultra-high-fidelity VR imagery that retains immersive 6DOF viewing.

Immerge 2.0

On the live-action end, the company has been building a high-end light-field camera which they call Immerge. Designed for professional productions, the camera is actually a huge array of individual lenses which all work in unison to capture light-fields of the real world.

Photo by Road to VR

At a recent visit to the company’s Silicon Valley office, Lytro exclusively revealed to Road to VR the latest iteration of the camera, which they’re calling Immerge 2.0. The form-factor is largely the same as before—an array of lenses all working together to capture the scene from many simultaneous viewpoints—but you’ll note an important difference if you look closely: the Immerge 2.0 has alternating rows of cameras pointed off-axis in opposite directions.

With the change to the camera angles and tweaks to the underlying software, the lenses on Immerge 2.0 effectively act as one giant camera with a wider field of view than any of the individual lenses: now 120 degrees, compared to 90 degrees on the Immerge 1.0.

Image courtesy Lytro

In practice, this makes a big difference to the camera’s efficiency: a wider field of view lets the camera capture more of the scene at once, which means fewer rotations are needed to capture a complete 360-degree shot (now as few as three spins, instead of five), and it provides larger areas for actors to perform in. A new automatic calibration process further speeds things up.
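To put rough numbers on that, the spin count falls out of simple coverage arithmetic. Below is a minimal back-of-the-envelope sketch in Python; the ~18-degree stitching overlap used for the 90-degree case is our assumption chosen to reproduce the cited figures, not a published Lytro spec.

    import math

    def min_spins(fov_deg, overlap_deg=0.0):
        # Each rotation contributes (fov - overlap) degrees of new coverage;
        # round up so the rotations fully close the 360-degree loop.
        return math.ceil(360.0 / (fov_deg - overlap_deg))

    # Immerge 2.0's 120-degree effective FOV divides 360 evenly:
    print(min_spins(120))     # 3 -> the "as few as three spins" figure
    # An assumed ~18-degree seam overlap reproduces the five spins
    # cited for the 90-degree Immerge 1.0:
    print(min_spins(90, 18))  # ceil(360 / 72) = 5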

This section of the camera’s interface shows the view through each individual lens | Image courtesy Lytro

All of this means increased production efficiency, faster iteration time, and more creative flexibility—all the right notes to hit if the goal is to one day make live-action light-field capture easy enough to achieve widespread traction in professional VR content production.

Ever-Increasing Quality

Lytro has also been refining the software stack that allows them to pull increasingly higher-quality imagery from the light-field data. I saw a remastered version of the Hallelujah experience which I had seen earlier this year, this time output at 5K per-eye (up from 3.5K, roughly double the pixel count if both dimensions scale) and employing a new anti-aliasing-like technique. Viewing the old and new versions side by side revealed a much cleaner outline around the main character, sharper textures, and greater stereo depth in distant details (especially thin objects like ropes and bars) that were previously muddled.

Lytro CEO Jason Rosenthal demonstrates the camera’s touchscreen interface | Photo by Road to VR

What’s more, Lytro says they’re ready to bump the quality up to 10K per-eye, but are waiting for headsets that can take advantage of such pixel density. Interestingly, many of the quality-enhancing changes Lytro has made to its software can be applied retroactively to light-field data captured before those changes were made, which suggests a certain amount of future-proofing built into the company’s light-field captures.

– – — – –

Lytro appears to be making steady progress on both live-action and synthetic light-field capture & playback technologies, but one thing that has continually irked those following the company’s story is that none of its light-field content has been available to the public—at least not in a proper volumetric video format. On that front, the company promises a remedy is coming soon, and has teased that a new piece of content is in the works, slated for a public Q1 release across all classes of immersive headsets. With a bit of luck, it shouldn’t be too much longer until you can see what the Immerge 2.0 camera can do through your own VR headset.

  • With this, along with their light-field tracer software for the CG elements, we will get some incredible VR films at some point. Not just a 360 VR film, but one with room scale in it. Imagine walking around real footage of the main actor, or having CG bullets fly through your room.

  • Zerofool

    I can’t wait for Q1 (among other reasons, like the alleged next-gen Nvidia consumer card reveal). I’ve been hyped to test real-world light-field captures since that OTOY demo in 2015.

    • Mo Last

      what’s that demo? can anyone try it with a vive/rift? and what even is lightfield? i always hear about it but don’t quite understand it

      • Steve Cooper

        Here’s a primer on Light Fields http://blog.lytro.com/what-is-light-field/

      • Zerofool

        Hi, sorry for the late reply. This is the demo I was referring to, and unfortunately it never became publicly available (to my knowledge).

        https://www.youtube.com/watch?v=pyJUg-ja0cg

        By today’s standards it isn’t really impressive, but just thinking about the possibilities this tech would offer over 180/360 stereo video or even photogrammetrically captured environments makes you want the future to come faster :)

        As for what light-fields are… it’s hard to explain and comprehend, but the primer Steve linked above provides a thorough explanation.
        I’d also urge anyone interested in more detail and a visual explanation to watch this great 30-minute talk by Paul Debevec from 2015:

        https://youtu.be/Raw-VVmaXbg?t=227

  • I can’t wait to try something recorded by them on my Rift!