HypeVR is working to bring live-action volumetric 360 video to VR. The company’s crazy camera rig, built to capture the necessary data, is a mashup of high-end cameras and laser scanning tech.

HypeVR recently shot a brief demonstration of the output of their rig, which they say can capture ‘volumetric’ VR video that allows users to move around in a limited space within the video, similar to Lytro’s light field capture which we saw the other day. Traditional 360 video capture solutions don’t allow any movement, effectively locking the user’s head to one point in 3D space, reducing comfort and immersion.

The HypeVR rig used to capture the footage appears almost impractically large: it consists of 14 high-end RED cameras and a Velodyne LiDAR scanner, and HypeVR says it can “simultaneously capture all fourteen 6K [RED cameras] at up to 90fps and a 360 degree point cloud at 700,000 points per second.”

See Also: Inside ‘Realities’ Jaw-droppingly Detailed Photogrammetric VR Environments

As with similar capture approaches we’ve seen in the past, the video data is used to ‘texture’ the point cloud data, essentially creating a 3D model of the scene. With that 3D data piped into an engine and played back frame-by-frame, users can not only see live-action motion around them, but also move their head through 3D space within the scene, allowing for much more natural and comfortable viewing.
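The core of this texturing step can be sketched in a few lines. The snippet below is a minimal illustration, not HypeVR’s actual (patent pending) method: it assumes a simple pinhole camera model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`) and assigns each LiDAR point a color by projecting it into one camera frame. A real rig would fuse all fourteen cameras and handle calibration, occlusion, and blending.

```python
import numpy as np

def texture_point_cloud(points, image, fx, fy, cx, cy):
    """Assign each 3D point a color by projecting it into a camera image.

    Simplified single-camera sketch using a pinhole model; the
    intrinsics (fx, fy, cx, cy) are hypothetical example parameters.
    `points` is an (N, 3) array in camera coordinates; `image` is
    an (H, W, 3) color frame from the same viewpoint.
    """
    z = points[:, 2]
    # Perspective projection onto the image plane (integer pixel coords).
    u = (fx * points[:, 0] / z + cx).astype(int)
    v = (fy * points[:, 1] / z + cy).astype(int)
    h, w, _ = image.shape
    # Keep only points in front of the camera that land inside the frame.
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((points.shape[0], 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]
    return colors, valid
```

Repeating this for every video frame yields a sequence of colored point clouds; a game engine can then render each one in turn, which is what lets the viewer’s head move freely through the captured volume.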

See Also: Watch HypeVR’s Impressive Volumetric 360 3D Video In Action

Fortunately, HypeVR says that this massive camera platform is not the only option for capturing volumetric VR video. The company’s purportedly patent-pending capture method is camera agnostic and can be applied to smaller and more affordable rigs, which HypeVR says are in development.


  • Scott C

    I’m curious to see how much range of perspective motion they can capture with this. While it’s great that they get a lot of volumetric and “texture” data… It still sounds like it’s essentially a fixed-perspective camera capture. If the perspective of the viewer is offset from the camera’s initial capture point, it’s going to reveal space that the camera didn’t/couldn’t capture.

  • Rafael Lino

    I’ve only seen LiDAR work at around 10–20 Hz, way too slow for even average film capture, and incredibly slow for 60fps or 120fps 360 video. Curious how they make this work.

  • Augure

    Ridiculous overkill. FFS some people really are bad at drawing links and conceiving stuff, especially with light field capture around.