Lytro have announced another hefty wedge of funding, with a $60 million series D round led by Blue Pool. What’s more, they’ve formed a content partnership with Within, and the first 360 degree 3D light field content is now set to arrive in Q2 of this year.

The advent of accessible virtual reality has encouraged so many technologies to evolve, but few excite me as much as the potential for light field ‘video’. As a movie enthusiast, the idea that motion pictures can now be captured in 3D and allow the viewer to ‘peek’ in, out and around the scene blows my mind. Lytro promise to deliver VR film with six degrees of freedom and parallax at a potential resolution “greater than 6K per eye”.

SEE ALSO
Lytro's 'Immerge' 360 3D Light-field Pipeline is Poised to Redefine VR Video

We wrote in 2015 about Lytro‘s potentially groundbreaking Immerge system, then a gargantuan domed array of light field sensor slices that captures absurd amounts of data about the light it sees, all in 360 degrees. And, as the angle and source of the light is captured, the data recorded can be used to recreate the camera’s surroundings in three dimensions too. Clearly the potential for immersive movie making with Lytro’s new kit is immense and a perfect fit for virtual reality viewing.

Now, Lytro have announced that, in addition to the $50M in funding they acquired in 2015 to develop Immerge, they’ve just received a further $60M in series D funding, in a round led by Blue Pool Capital, to continue refining Immerge and, perhaps as importantly, producing content with it.

“We believe that Asia in general and China in particular represent hugely important markets for VR and cinematic content over the next five years,” said Jason Rosenthal, CEO of Lytro. “A key goal of this capital raise was to assemble a group of trusted capital partners to help us best understand and navigate this new market.”

On the content front, Lytro are also announcing today that they’ve formed a partnership with creative house and content platform Within (formerly Vrse), co-founded by one of the few directors out there to have already made a name for themselves in the embryonic medium of VR film, Chris Milk. The first production from the new partnership has already wrapped, and is currently in post-production. According to a press release from Lytro, they’re planning to launch this new content at some point in Q2 2017 – that’s not long at all.

SEE ALSO
Within and Fox Partner to Create VR Content, Spike Jonze to Co-Produce Original VR Film

So what else has the Lytro team been up to since we last heard from them? Well there’s been a fairly major change to the form factor and nature of the Immerge camera. Instead of the incredibly ambitious 360 ‘capture all angles at once’ system featured previously, the company have instead pivoted to a ‘planar’ (in other words, front-facing only) camera system. Lytro claim this change was made in response to feedback from their creative partners, allowing for more traditional ‘behind the camera’ (not an option with 360 filming) director / talent collaboration and tighter control over the filmed volume. To be clear though, the system will still offer 360 capture, but instead of capturing all at once, the system can be rotated, filming those different angles one at a time.


Whilst this does sound like an almighty pain, because the Immerge is dealing with light fields, it should be much easier to seamlessly blend each of those views than with a conventional spherical camera array. So, whilst it’s not quite as neat and impressive as the company’s original vision, we still get high resolution light field films which can be adjusted for different IPDs, offering parallax and freedom of movement within the captured volume. In short, it’s still pretty bloody cool!

There is still a question however, and quite an important one. Even with ‘downscaled’ versions of the assembled films, there’s a lot of data required to deliver these experiences to a VR headset in the home. Lytro previously spoke about proprietary streaming software which downloaded data only for the portion of the movie you were looking at, but there were no further details on this kind of viewer in their latest press release. We’ll be following up on this.

The long wait for Lytro’s potentially groundbreaking form of VR video capture seems almost to be over, and although there are still questions about how they’ll get it all to our faces, I’m more excited than ever to see the results in action for myself.

Introduction to Light-fields

Light-field photography differs from traditional photography in that it captures much more information about the light passing through its volume (i.e. the lens or sensor). Whereas a standard digital camera will capture light as it hits the sensor, statically and entirely in two dimensions, a light-field camera also captures data about which direction the light emanated from and from what distance.

The practical upshot of this is that, as a light-field camera captures information on all light passing into its volume (the size of the camera sensor itself), once captured you can refocus to any depth within that scene (within certain limits). Make the camera’s volume large enough, and you have enough information about that scene to allow for positional tracking in the view; that is, you can manipulate your view within the captured scene left or right, up or down, allowing you to ‘peek’ behind objects in the scene.
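To make the refocusing idea concrete, here’s a minimal sketch of the classic ‘shift-and-sum’ technique often used to explain light-field refocusing. This is an illustration of the general principle only, not Lytro’s actual pipeline; the 4D array layout and the `alpha` focus parameter are assumptions for the example.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetically refocus a 4D light field via shift-and-sum.

    lightfield: array of shape (U, V, H, W) -- a grid of sub-aperture
        views indexed by lens position (u, v).
    alpha: focus parameter; each view is shifted in proportion to its
        offset from the central view, then all views are averaged.
        Different alpha values bring different depths into focus.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer-pixel shift proportional to this view's offset
            # from the centre of the synthetic aperture.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Points in the scene whose shifted copies line up across the views come out sharp; everything else is averaged into blur — which is why the focal plane can be chosen after capture.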


This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.


  • Lucidfer

    “We believe that Asia in general and China in particular represent hugely important markets for VR”

    This kills me. Why hasn’t any corporation bought Lytro, wtf? Google, Apple and the likes are wasting hundreds of millions on shit, vapor-buying companies and patents they never used, and nobody is buying Lytro?

    Granted their cameras are extremely expensive and processing intensive, but why hasn’t anybody in Hollywood made a true virtual movie with these yet?

    • J-2

      Hollywood uses a lot of CGI, and a PC is multi-purpose.

  • Ian Shook

    I wonder if their technology uses (excuse the Photoshop term) “content aware fill” so that shadow areas can be filled in with data that mimics surrounding data. So a person talking to you on a wood floor would cast a data shadow of what the camera can’t see, and then the software would go “Okay, let’s fill in this dark area with more wood floor stuff”

    • Lucidfer

      Wondering that too. I know there are a few 3D object-filling algos but nothing as contextually advanced and clean as Photoshop’s “content-aware” system.

  • iUserProfile

    Even 3D Stereoscopic VR Movies and Clips are somewhat of a let down in my experience. Most feel wrong in a lot of ways (scale, distortion) and don’t really offer a good sense of presence. This could be a good step forward.