Watch HypeVR’s Impressive Volumetric 360 3D Video In Action

HypeVR may be poised to alter the 360 video landscape with its depth-mapped, volumetric video system that lets VR users move in, out, and around the captured scene. Watch 20th Century Fox futurist Ted Schilowitz give one of the first real-time demonstrations of the technology.

We’ve followed HypeVR for some time now, first reporting on its incredible-looking, LiDAR-powered depth-mapping camera rig back in early 2015, and again just recently, after the company released the first-ever look at footage captured with its technology.

HypeVR’s proprietary system uses 14 rig-mounted RED Dragon 6K video cameras to capture a 360-degree field of view. Recording currently at 60Hz (with 90Hz planned), the definition of the resulting footage, once stitched, would probably be impressive enough in and of itself, but there’s more. HypeVR’s rig also carries a Velodyne LiDAR scanner, capable of capturing up to 700,000 points of 3D depth information every second, at a range of up to 100 m.

The practical upshot is that the captured data allows any recorded scene to be reassembled and ‘played’ back in a way that responds in real time to a viewer’s movements – meaning parallax within a video, and even the ability to move in and out of the scene.
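To get a feel for why per-pixel depth is what enables this parallax, here is a toy sketch – not HypeVR’s actual pipeline; the pinhole camera model and every dimension in it are illustrative assumptions – that lifts a frame’s pixels to 3D using a depth map, then re-projects them from a shifted eye position:

```python
import numpy as np

# Toy illustration: color video treated as a texture on per-pixel depth,
# re-rendered from a translated eye to produce parallax. All numbers are
# made up for the example; this is not HypeVR's actual method.

H, W, F = 120, 160, 100.0            # image size and focal length (pixels)

def unproject(depth):
    """Lift each pixel (u, v) with depth d to a 3D point via a pinhole model."""
    v, u = np.mgrid[0:H, 0:W]
    x = (u - W / 2) * depth / F
    y = (v - H / 2) * depth / F
    return np.stack([x, y, depth], axis=-1)   # shape (H, W, 3)

def project_u(points, eye_offset):
    """Horizontal image coordinate of each point, seen from a translated eye."""
    p = points - eye_offset                   # move the camera, not the world
    return p[..., 0] * F / p[..., 2] + W / 2

depth = np.full((H, W), 10.0)
depth[40:80, 60:100] = 4.0                    # a near object in front of a far wall

pts = unproject(depth)
u0 = project_u(pts, np.array([0.0, 0.0, 0.0]))
u1 = project_u(pts, np.array([0.5, 0.0, 0.0]))  # eye shifted 0.5 units right

# Near pixels shift more than far ones: that differential shift is parallax,
# and the region newly exposed behind the near object is where holes appear.
near_shift = np.abs(u1 - u0)[60, 80]          # a pixel on the near object
far_shift = np.abs(u1 - u0)[10, 10]           # a pixel on the far wall
print(near_shift, far_shift)
```

The gap this leaves behind the near object is exactly the occlusion-hole problem raised in the comments below the article.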

The HypeVR team has just released a video featuring 20th Century Fox futurist Ted Schilowitz who, as it happens, co-founded RED, the company that builds the cameras on HypeVR’s rig. Schilowitz holds a small tablet playing a scene apparently captured with HypeVR’s technology. As he moves around, the video (a looping coastal scene) can be seen responding to his shifts in position, showing both parallax and movement in and out of the scene.


It’s impressive stuff, and the applications for virtual reality video are blindingly obvious. However, as with every apparent breakthrough, especially one still largely unseen by the media or public, questions remain. How are HypeVR’s likely vast quantities of data reassembled in such a way as to be transferable and rendered on consumer devices? Is the scene ultimately distilled to a series of simplified geometric surfaces, extrapolated from the LiDAR depth information, and will it therefore look poor under close inspection?
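To see why the data question matters, here is a back-of-envelope sketch of the uncompressed capture rate. The 14 cameras, 60Hz, and 700,000 LiDAR points per second come from the figures above; the 6K sensor resolution, RAW bit depth, and bytes per LiDAR point are our assumptions:

```python
# Rough, assumption-laden estimate of raw capture bandwidth.
CAMERAS = 14
WIDTH, HEIGHT = 6144, 3160        # assumed 6K frame dimensions
FPS = 60
BITS_PER_PIXEL = 10               # assumed RAW bit depth

video_bits_per_s = CAMERAS * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
video_gbytes_per_s = video_bits_per_s / 8 / 1e9

LIDAR_POINTS_PER_S = 700_000
BYTES_PER_POINT = 16              # assumed: xyz floats plus intensity

lidar_mbytes_per_s = LIDAR_POINTS_PER_S * BYTES_PER_POINT / 1e6

print(f"raw video: ~{video_gbytes_per_s:.0f} GB/s")
print(f"LiDAR:     ~{lidar_mbytes_per_s:.1f} MB/s")
```

Even under these rough assumptions the raw video runs to tens of gigabytes per second – the LiDAR stream is a rounding error by comparison – so whatever HypeVR ships to consumer devices must be a drastically reduced representation of the capture.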


We’ll have to wait to find out, but it does seem as if HypeVR – up until now perhaps a victim of its own choice of company name – is nearly ready to show the world what it can really do.



  • Me

Looks impressive, but it would be even more impressive with proper positional tracking: how about showing this video on a Vive?

    • I’ve added the company’s original video release which shows them demo’ing it on an Oculus Rift. Hope that’ll do for you. :)

      • Me

No it doesn’t! I’m a Vive user!… Well, at least you’ve proved my point: it’s awesome with proper tracking :)

  • Jack H

Am I correct to think that movements of the viewer beyond the circumference of the rig create the opportunity to witness occluded regions which weren’t captured, and which therefore appear as holes?

    Also is the transfer of visual data from a high quality captured point cloud to a detailed texture on a lower quality mesh a viable strategy for volumetric and free-movement video playback?

    • RaulC

I believe this is the correct assumption. You can see this “hole” effect when he pans to the back of the surfboard, where no data can be captured.

      • In the second video they clearly show off the back of the surfboard. They show a scene with no “holes”.

    • Nate Vander Plas

I also expected to see more holes. For instance, anything behind the surfboard should be a hole for both the lidar geometry and the image. I wonder if they did two captures and stitched them together, one on each side of the surfboard. This demo is all well and good for a scene with basically no movement (just the waves), but what about people walking around? I imagine them looking like cheap game characters with weird mesh artifacts on their hair, etc. I also don’t know if the lidar can keep up with the 60-90fps of the cameras, so any moderately fast motion could get really wonky. All that to say, I’d love to experience what they captured in VR. It could be really awesome even if it’s limited to mostly-static scenes.

      • kalqlate

        I’m sure the math would be a bit more involved, but I wonder what could be done with three or more such camera setups capturing the scene simultaneously in an equilateral or even non-equilateral polygon. Each camera could be digitally removed from the view of the other(s) in post processing. What you would then have is a fully navigable world with few to no holes on the interior of the polygon and extended navigation beyond the interior of the polygon with smaller (narrower) holes than with just one camera.

        Actually, if there is already considerable post processing, an AI trained to “imagine” and fill in the holes (as our imaginations can from any given vantage point), even a single such camera setup could give almost unlimited navigation with holes filled in with “imagined” information. I’m sure this will be part of the evolution of this kind of video capture and processing. Time to get DeepMind churning on this right away.

  • Most amazing vr video development I’ve seen. Need this for vr porn as well.

    • brandon9271

That’s a pretty damn expensive rig for porn, but it would take things to a whole different level and likely make somebody a lot of money. The only problem I see is this may not be able to scale down to a mobile device like Gear VR… then again, it doesn’t have positional tracking anyhow.

  • TTman

Very simple concept – it’s just projection mapping onto the extracted depth map, and it has been done for years. It still has significant bandwidth and high-frequency-detail issues.

If all of that looping volumetric video was supposed to be captured in just one instance, from a single point, how did they get the back of the surfboard? I think the 3D scene they’re showing off has been edited quite a bit, assembled from several consecutive scans, and isn’t the single-capture scene they suggest it is.

  • James Friedman

    Wow this will make 360 porn pretty crazy. Instead of being the guy/girl you can be super creep like watching a couple bang in the same room.

  • DennisonBertram

Um, didn’t they admit in one Facebook group that this is actually 360 video plus CGI blended in?

  • Augure

    This + this (www.roadtovr.com/pixvana-reveals-10k-video-player-publishing-platform-spin/) fuck yeah, THESE are the people developing and assuring the future of VR, not fucking Vive, Oculus, PSVR, Unreal and whatever studio is busy NOT doing what VR needs to be a finished product.

    • Augure

Also, the point of this technology is of course that they’re using a regular camera rig with LiDAR, and have developed convincing and detailed enough video-mapping and stitching software. In other terms, this is scalable and optimisable.

  • DC

    Unless they’re stitching/mapping from 2 camera rigs and painting out the cameras at the same time, this can’t be live footage. Live action, yes.

SO MUCH DATA. Live volumetric has a long way to go, but I’ll take 6K live action edited for now. Delivered on mobile VR, please. Compressed to 100 MB/min. Delivered on Pixvana by Christmas?