Back toward the end of 2015, light field camera company Lytro announced a major turn toward the VR market with the introduction of ‘Immerge’, a light field camera built to capture data that can be played back as VR video with positional tracking. Now the company is showing the first footage shot with the camera.

Lytro has made point-and-shoot consumer light field cameras since 2012. And while the company has had some success in the still photo market, the potential applications of light field capture have pulled the company into VR in a big way.

See Also: Lytro’s ‘Immerge’ 360 3D Light-field Pipeline is Poised to Redefine VR Video

Immerge, the 360-degree light field camera in the works at Lytro, captures incoming light from all directions, recording not only the color of the light but also its direction. That means the camera captures data representing a stitch-free snippet of the real world, and (uniquely among 360-degree cameras) the captured data allows for positional tracking of the user’s head: the ability to move your head through 3D space (‘parallax’) and have the scene react accurately.
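To make the concept concrete, here is a minimal sketch (in Python, and emphatically not Lytro’s proprietary format or renderer) of how a viewer might re-render a light field for a tracked head position. Because every captured ray stores a direction as well as a color, moving the eye simply selects different rays:

```python
import numpy as np

# A toy light field: each captured ray stores where it crossed the
# capture volume, which direction it was traveling, and its color.
# (Hypothetical layout -- Lytro's actual format is proprietary.)
N = 100_000
rays_origin = np.random.rand(N, 3)                           # ray origins (meters)
rays_dir = np.random.randn(N, 3)
rays_dir /= np.linalg.norm(rays_dir, axis=1, keepdims=True)  # unit directions
rays_rgb = np.random.rand(N, 3)                              # color carried by each ray

def render_pixel(eye_pos, view_dir):
    """Approximate what an eye at eye_pos sees along view_dir by picking
    the captured ray that best matches it (nearest origin, most similar
    direction). A real renderer would interpolate among many rays."""
    d = view_dir / np.linalg.norm(view_dir)
    pos_cost = np.linalg.norm(rays_origin - eye_pos, axis=1)
    dir_cost = 1.0 - rays_dir @ d                            # 0 when perfectly aligned
    return rays_rgb[np.argmin(pos_cost + 0.5 * dir_cost)]

# Because rays are indexed by position *and* direction, moving the eye
# (positional tracking / parallax) naturally selects different rays:
print(render_pixel(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
print(render_pixel(np.array([0.05, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))  # head moved 5 cm
```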

This ability is one of the major advantages over standard film capture, and is seen as critical for immersion and comfort in VR. Now Lytro is showing off the first light field footage shot with its Immerge camera; the company says it’s the “first piece of 6DOF 360 live action VR content ever produced.”

Light field captures from Lytro’s camera also have a few other tricks, like the ability to change the IPD (the distance between the stereo viewpoints, to match each user’s eyes) and to adjust focus as needed in post-production.
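Because the stereo pair is synthesized from the ray data rather than fixed at capture time, IPD becomes a playback parameter rather than a property of the footage. A minimal sketch of the idea, where render_view is a hypothetical stand-in for the player’s view-synthesis call:

```python
import numpy as np

def render_view(eye_pos):
    """Hypothetical stand-in for light field view synthesis: returns the
    image seen by a virtual camera placed at eye_pos (see earlier sketch)."""
    return np.zeros((1200, 1080, 3))  # placeholder per-eye frame buffer

def render_stereo(head_pos, right_axis, ipd_m=0.063):
    """Render a stereo pair with the baseline chosen at playback time.
    right_axis: unit vector pointing from the viewer's left toward the right."""
    offset = 0.5 * ipd_m * right_axis
    return render_view(head_pos - offset), render_view(head_pos + offset)

# The same master capture can serve viewers with different eye spacings:
left, right = render_stereo(np.zeros(3), np.array([1.0, 0.0, 0.0]), ipd_m=0.058)
```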


The company says that Immerge’s light field data captures scenes not only with parallax, but also with view-dependent lighting (reflections that move correctly based on your head position) and truly correct stereo regardless of head orientation. Traditional 360-degree camera systems have trouble presenting stereoscopic content when the viewer tilts their head in certain directions; Immerge’s light field captures, Lytro says, retain proper stereo no matter which way the head is turned.
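The orientation claim follows from the same property: a fixed stereo rig bakes a horizontal baseline into its footage, while a light field renderer can place the two virtual eyes wherever the current head pose puts them. A rough illustration, assuming the pose arrives as a position plus a 3×3 rotation matrix:

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd_m=0.063):
    """Place the two virtual eyes along the head's *current* interaural
    axis. head_rot is a 3x3 rotation matrix whose first column is the
    head's local right axis -- valid for any tilt or roll, unlike a rig
    with a baseline fixed horizontally at capture time."""
    offset = 0.5 * ipd_m * head_rot[:, 0]
    return head_pos - offset, head_pos + offset

# Head rolled 90 degrees (e.g. lying on one side): the eyes end up
# stacked vertically, and the renderer synthesizes views from there.
roll_90 = np.array([[0.0, -1.0, 0.0],
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
print(eye_positions(np.zeros(3), roll_90))
```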

See Also: 8i are Generating Light Fields from Standard Video Cameras for VR Video

According to Lytro’s VP of Engineering, Tim Milliron, Immerge can render up to 8K-per-eye resolution, synthesizing the view from hundreds of constituent sub-cameras. Milliron says the company expects content creators to treat Immerge’s light field captures like a high-quality master file, from which a high-end 6DOF-capable experience could be distributed in app form to desktop VR headsets, or more basic 360 video files could be rendered for upload and playback through traditional means.
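In very rough form, that ‘master file’ workflow might look like the sketch below: the same ray data can be queried freely at runtime for a 6DOF app, or baked from one locked vantage point into an ordinary equirectangular 360 video (render_pixel is the hypothetical ray query from the first sketch; the resolution is kept tiny here so the loop actually runs):

```python
import numpy as np

def render_pixel(eye_pos, view_dir):
    """Hypothetical light field ray query (see the first sketch)."""
    return np.zeros(3)

def bake_equirect_frame(center, width=64, height=32):
    """The 'basic 360 video' path: bake one monoscopic equirectangular
    frame from a single locked vantage point, discarding the 6DOF data.
    (Tiny resolution so the loop runs quickly; a production target
    would be on the order of 8192x4096.)"""
    frame = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            lon = (x / width) * 2 * np.pi - np.pi       # longitude: -pi..pi
            lat = np.pi / 2 - (y / height) * np.pi      # latitude: pi/2..-pi/2
            d = np.array([np.cos(lat) * np.sin(lon),    # direction on the sphere
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)])
            frame[y, x] = render_pixel(center, d)
    return frame

flat_360 = bake_equirect_frame(np.zeros(3))
# The 6DOF app path instead ships the ray data itself and calls
# render_pixel at runtime with eye_pos driven by the headset's tracking.
```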

Last year, Lytro raised a $50 million investment to pursue its VR ambitions. While the company initially expected to have Immerge ready in the first half of 2016, it’s only now, in Q3, that we’re seeing the first test footage shot with the device. Felix & Paul Studios, Within (formerly ‘Vrse’), and Wevr were initially said to be among the first companies outside of Lytro to get access to the camera to begin prototyping content. The company is also accepting applications for access to the prototype camera on the official Immerge website.




  • Bryan Ischo

    Hm, I wonder if Lytro has finally found a use for their tech.

  • Daemon Hunt

    Been following Lytro for the few years they have been around, and I knew it was only a matter of time before they realised VR was a perfect fit for their product. The fact that in traditional ‘2D/3D’ film-making Lytro was showing great potential in terms of vastly superior post-processing capabilities was impressive enough. Once ‘360 degree videos’ have true depth perception, parallax, a sense of immersion, presence, etc., I’d be quite happy to finally call them VR :) We’ll need to come up with another name though; saying ‘360 degree videos’ will be like saying ‘phonograph’ very soon.

    • Tehen

      “VR videos” maybe? :)

  • kalqlate

    By the sounds of it, the camera will be SUPER expensive. The capture files will be so GARGANTUAN that in full depth with parallax, they will have to be streamed. Perhaps they should team up with Realities and use their software to transform their captures into texturized 3D environments with the same origin and parallax limit: http://www.roadtovr.com/realities-photogrammetry-virtual-reality-htc-vive/. Maybe that would decrease the file sizes some.

    • Zach Mauch

      I fully expect that that is the end goal with true 6DOF capture tech. We will just be getting varying levels of 3D rendering. It will be big files that require a lot of processing, but that is the world of VR.

      • kalqlate

        On closer review, what Realities is doing is stereo-photogrammetry from multiple camera perspectives. While VERY impressive, it is not the same as doing object recognition/extraction and texturizing.

        It is more likely that an AR headset creator like Microsoft (HoloLens), Intel (RealSense/Project Alloy), Google (Project Tango), or Magic Leap will reach the Holy Grail of 3-D scene decomposition into texturized objects and 3-D lighting. I say this because they are wanting/NEEDING their software objects to be interactive with objects in the real world, so they are already recognizing and delineating basic objects. Google has demonstrated some degree of this (as you have stated) with Project Tango. I suspect, though, that Magic Leap is the company going whole hog on this, judging by the requested qualifications in some of their job ads. Even in some of their early talks and tech mock-up videos, they have alluded to being able to virtually manipulate and interact with objects detected in the real world, with full respect for light sources, reflections, and shadows. Getting there will likely take deep neural networks trained on petabytes of photos and videos with objects, textures, and scene-lighting characteristics labeled (supervised learning) and unlabeled (unsupervised learning).

        • Neel Bedekar

          Very well said @kalqlate. Realities is a lot more about taking a static scene and applying computer vision techniques to transform it into a general, rough model that a game engine can interpret. Lytro’s job is much tougher, because it requires exact reconstructions, but it appears to be a lot less about training neural networks to better classify objects, and a lot more about finding a way to represent the copious amounts of captured data much more compactly.

    • Tony Snow

      Realities is a static environment; it’s a totally different thing from video footage.

      • kalqlate

        Isn’t video FRAME-based? And isn’t each SINGLE FRAME independently static? Aren’t even 3-D VR games FRAME-based? What would be true if you paused and froze a 3-D VR game on any given frame? You’d still be able to look around, but nothing in the scene would be moving, except your perspective, right?

        Not only is it possible that REALITIES’ tech could reduce data on each frame independently, but by doing frame-to-frame diff calculations, they could possibly compress the data even further by storing only a few full key frames every nth frame, then computing the frames in between from the compressed diff information.
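        That keyframe-plus-delta scheme is essentially what conventional video codecs already do with I-frames and P-frames; a toy version over raw frame arrays (all names hypothetical) might look like:

        ```python
        import numpy as np

        def encode(frames, keyframe_interval=8):
            """Store a full frame every Nth frame ('key'); for the rest,
            store only the difference from the previous frame, which
            compresses well when little changes between frames."""
            out = []
            for i, f in enumerate(frames):
                if i % keyframe_interval == 0:
                    out.append(("key", f.copy()))
                else:
                    out.append(("delta", f - frames[i - 1]))
            return out

        def decode(encoded):
            frames, prev = [], None
            for kind, data in encoded:
                prev = data.copy() if kind == "key" else prev + data
                frames.append(prev)
            return frames

        # Round-trip check on a toy "video" of 16 nearly static frames.
        video = [np.full((4, 4), float(i // 8)) for i in range(16)]
        assert all(np.array_equal(a, b) for a, b in zip(video, decode(encode(video))))
        ```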

        • David Herrington

          I don’t think you understand Tony. What he’s saying is that REALITIES is static in time to take all the necessary shots of the scene. That means that to form a static scene in REALITIES, the photographer needs to take hundreds of photos from every possible angle to create a texture map which the software can apply to generated surfaces.
          To replicate this through time in a video, you would have to move the camera to all areas and perspectives so that the camera(s) could capture all perspectives as the scene changes. You would need an infinite number of cameras moving throughout the scene all at the same time to capture every possible perspective and reproduce the same effect that is done in REALITIES. I’m not saying this is an impossible task, but we do not have the tech to do it now.
          The best way to replicate what you are saying is to place a couple of Immerge cameras in a scene and use interpolation to fill in the gaps between them in post-processing.

          • kalqlate

            Yes, I understand exactly what Tony is saying. But we are all misunderstanding what Realities tech is. They are not extracting 3D object and texture information. They are doing stereo-photogrammetry from multiple positions. They use math (photogrammetry) to compute interpolated STEREO IMAGES (not 3D objects and textures) for any position that was not directly captured.

            Given a SINGLE FRAME from a Lytro light-field movie, Realities could process that frame from the same SINGLE 360 perspective that the Lytro camera gets. Please look at the top photo on this page. The Lytro camera only captures from ONE position, not “every possible angle”. The difference between the way Realities would capture the SINGLE-FRAME scene and how Lytro captures it is that Lytro is able to capture light angle information, which allows it to accurately produce lighting and shadows that adjust according to the parallax desired with head movement. If a SINGLE FRAME from a Lytro light-field movie can be processed with Realities, then so could TWO FRAMES, THREE FRAMES, … ALL FRAMES of the Lytro light-field movie.

            But forget about Realities for now.

            The actual tech desired is one that does indeed do object, texture, and light extraction. Then, the scene becomes a set of 3D texturized objects with lighting sources. This would be true for a single frame as it would be for a video COMPOSED of many single frames. (Please look at my answer to Zach Mauch below.)

    • Chris Keath

      It is incredibly expensive, and part of that cost is the integrated capture/storage system they sell it with – it basically comes with a specialized SAN to capture…

      That does indicate that the file sizes are too big to be practical for most content producers, but it’s also kind of awesome – that device is capturing an obscene amount of data.

      It might take 20 years, but light field capture is the future…

  • John Horn

    Incredible tech… it must be monstrously expensive in storage requirements. But then again, I have no idea how this data is actually stored. It could be that it’s stored in a different way than I imagine.

  • Surykaty

    I used to criticize all of Lytro’s previous attempts at cameras, which were simply gimmicky, underthought, and desperate devices, and I easily called it that they would never become a commercial success – but when Immerge was introduced it was an easy guess that this product is a true, proper application of the tech, and that it will be the single most important capture tech for VR when it comes to capturing reality and playing it back in VR.

    As far as I understand the underlying tech, the file size is massive, but you could have playback at reasonable speeds even on a Gear VR.

    Also.. this is the true proper device to capture VR porn.

  • OgreTactics

    The technology Lytro has developed is amazing, is the future, and is 10 years ahead of its time. But not a single engine or 3D manufacturer is capable of doing their fucking job, especially Otoy. Do you remember Orbx/Lightfields or the Brigade Engine? Neither does anybody, fuck them.

  • Chuckles

    Lytro regularly lies about what their products do. They lied about the Illum, they lied about their cinema camera, and they’re lying about this. In their pitch video they showed someone with a VR headset leaning his head forward in a workshop, supposedly using the VR camera, but it wasn’t shot with the VR camera. They shot it with a cinema camera to make it look as pretty as humanly possible, and then just had someone move the camera forward. They then converted it to a stereoscopic video, and at the point in the video where the videographer moved the cinema camera forward, they had the guy who was wearing the headset move his head forward. It wasn’t actually doing anything; they just pretended it was.