HypeVR is working to bring live-action volumetric 360 video to VR. The company’s crazy camera rig, built to capture the necessary data, is a mashup of high-end cameras and laser scanning tech.

HypeVR recently shot a brief demonstration of the output of their rig, which they say can capture 'volumetric' VR video that allows users to move around in a limited space within the video, similar to Lytro's light field capture, which we saw the other day. Traditional 360 video capture solutions don't allow any movement, effectively locking the user's head to one point in 3D space, reducing comfort and immersion.

The HypeVR rig used to capture the footage appears almost impractically large. It consists of 14 high-end Red cameras and a Velodyne LiDAR scanner, and HypeVR says the rig can "simultaneously capture all fourteen 6K [Red cameras] at up to 90fps and a 360 degree point cloud at 700,000 points per second."

See Also: Inside ‘Realities’ Jaw-droppingly Detailed Photogrammetric VR Environments

As with similar capture approaches we've seen in the past, the video data is used to 'texture' the point cloud data, essentially creating a 3D model of the scene. With that 3D data piped into an engine and played back frame-by-frame, users can not only see live-action motion around them, but also move their head through 3D space within the scene, allowing for much more natural and comfortable viewing.
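HypeVR hasn't published the details of its pipeline, but the basic idea of 'texturing' a point cloud is straightforward: project each LiDAR point into a camera's image and sample the pixel color there. A minimal sketch of that step (assuming a simple pinhole camera model and points already expressed in the camera's coordinate frame; the function name and parameters are illustrative, not HypeVR's):

```python
import numpy as np

def color_point_cloud(points, image, fx, fy, cx, cy):
    """Toy 'texturing' step: project 3D points (camera frame, z forward)
    into an image via a pinhole model and sample a color for each point.

    points: (N, 3) array of XYZ positions
    image:  (H, W, 3) color frame from one camera
    fx, fy: focal lengths in pixels; cx, cy: principal point
    Returns (colors, valid) where valid marks points that land in-frame.
    """
    z = points[:, 2]
    # Perspective projection: pixel = focal * (x/z) + principal point
    u = (fx * points[:, 0] / z + cx).astype(int)
    v = (fy * points[:, 1] / z + cy).astype(int)
    h, w, _ = image.shape
    # Only points in front of the camera and inside the image bounds
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]
    return colors, valid
```

In a real multi-camera rig this would run per camera per frame, with calibrated extrinsics transforming LiDAR points into each camera's frame and some blending policy where cameras overlap; playing the resulting colored clouds (or meshes built from them) back frame-by-frame in an engine is what lets the viewer's head move freely within the scene.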

Fortunately, HypeVR says that this massive camera platform is not the only option for capturing volumetric VR video. The company's purportedly patent-pending capture method is camera agnostic and can be applied to smaller, more affordable rigs, which HypeVR says are in development.



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Scott C

    I’m curious to see how much range of perspective motion they can capture with this. While it’s great that they get a lot of volumetric and “texture” data… It still sounds like it’s essentially a fixed-perspective camera capture. If the perspective of the viewer is offset from the camera’s initial capture point, it’s going to reveal space that the camera didn’t/couldn’t capture.

  • Rafael Lino

    I’ve only seen LiDAR work at around 10-20Hz, way too slow for even average film capture, and incredibly slow for 60fps or 120fps 360 video. Curious how they make this work.

    • brandon9271

      As long as the head tracking worked at 90+ fps it really wouldn’t matter if the “film” ran that fast. Imagine watching a 2d, 24fps film in virtual theatre. You can move your head and the position updates at 90hz. Now imagine it’s a 360 degree film. The motion of the “world” and the motion of your head in VR video are asynchronous. The world could be completely static like in “Realities.”

  • OgreTactics

    Ridiculous overkill. FFS some people really are bad at drawing links and conceiving stuff, especially with lightfield around.