HypeVR Uses ARKit as a Portal Into Volumetric Video Content


HypeVR is developing volumetric video tech which we called "a glimpse at the future of VR video" when we got to see it back in January. Now the company has adapted its system for Apple's ARKit, showing how phone-based augmented reality can be used to view volumetric footage and let users step through a portal into it.

Volumetric video is video that contains spatial data, allowing users to physically move around inside of it, similar to real-time rendered VR games. It's far more immersive than the 360 video footage you can readily find today, because the perspective isn't locked to one static location.

I was impressed with the level of immersion and quality when I got to see some of HypeVR's volumetric video footage running on a VR headset, but with new AR tracking tools coming from Apple (ARKit), Google (ARCore), and others, there's an opportunity to bring a huge audience part way into these experiences.

HypeVR is experimenting on that front, and recently integrated their tech with ARKit’s tracking, giving any capable iPhone the ability to view the company’s volumetric video footage on the phone’s display.

The video above shows a volumetric video scene from HypeVR being played back on an iPhone SE (which is of the '6S' generation). The company is employing a 'portal' as a way of navigating from the real world into the volumetric space; it's a neat trick and definitely a cool demo, if a bit of a novelty, though a similar mechanism might be an interesting way to transition between volumetric video scenes in the future, or serve as a central hub for 'browsing' from one volumetric video to the next.

The AR tracking is cool to see in action, but what's happening under the hood is equally interesting. Since rendering volumetric video can be challenging for a mobile device (not to mention take up a ton of storage), CEO Tonaci Tran says the company has devised a streaming scheme: the phone relays its movements to a cloud server, which renders the appropriate video frame and sends it back to the phone, all fast enough for a hand-held AR experience.
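The article doesn't detail HypeVR's protocol, but the round-trip it describes — phone uploads its pose, server renders the matching view, phone displays the result — can be sketched roughly as below. All names here are hypothetical, and the cloud renderer is replaced with a local stub; a real system would rasterize the volumetric data set on a GPU and stream compressed frames back.

```python
import json

def encode_pose(position, rotation):
    """Serialize the device's 6DoF pose (position + quaternion) for upload.

    Hypothetical wire format; the real service's protocol is not public.
    """
    return json.dumps({"pos": position, "rot": rotation})

def render_frame_on_server(pose_msg):
    """Stand-in for the cloud renderer described in the article.

    Decodes the uploaded pose and returns a 'frame' for that viewpoint.
    A real server would render the full volumetric data set here.
    """
    pose = json.loads(pose_msg)
    return {"viewpoint": pose["pos"], "pixels": b"<rendered frame bytes>"}

# Client loop: one pose upload and one rendered frame per displayed frame.
pose_msg = encode_pose([0.0, 1.6, 0.0], [0.0, 0.0, 0.0, 1.0])
frame = render_frame_on_server(pose_msg)
```

The key design point the article highlights is that all the heavy rendering stays server-side: the phone only tracks, uploads a tiny pose message, and decodes video.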

One of the crazy camera rigs that HypeVR uses to capture volumetric video. | Image courtesy HypeVR

That means the output is drawn from the same source data set that would play back on a high-end VR headset and require a beefy GPU. This not only lowers the computational bar enough that even a last-gen iPhone can play back the volumetric video, it also means users don't need to download a massive file.

Tran tells me that the company also plans to support volumetric video playback via Google's ARCore. Between ARKit and ARCore, hundreds of millions of devices are soon expected to be capable of this sort of tracking, and HypeVR intends to launch an ARKit app in early 2018.

Tran says that the ultimate vision for HypeVR revolves around distribution and monetization of volumetric video. The company is in the process of raising a Series A investment and encourages inquiries to be sent through their website.


  • Ian Shook

    This is kind of fucking awesome

  • VRphotogram

    There is nothing volumetric about this in any way. All they are showing are static 3D models in the foreground, and greenscreen 2D billboards of people far enough away that you can’t tell it’s being faked. HypeVR is a funding play, they don’t have any novel tech, please stop falling for their BS.

    • Andres Velasco

Interesting. I take it you are a software dev? You worked with them?

      • Tonaci Tran

Hey guys! I can assure you that our environments are running at 60 FPS in motion (Ben Lang, the author of this article, can verify this as he has seen it first hand). It is true that near-ground elements in the scene are static, but they are static in nature. We have the capability of delivering near-ground objects in motion (we'll have a new demo in the coming months). The players are not 2D; they have volume. If they were flat it would be really apparent with the amount of 6DoF happening in the scene. We have many exciting projects in the works. Last but not least, we were thrilled to have the Intel CEO himself announce our technology on their main stage earlier this year:
        I don’t blame you guys for being skeptical…really hard to tell until you see it first hand.


        • VRphotogram

At that distance there is no way to tell if the actors have any depth to them. They can be "3D" in the sense that they are 2D cutouts placed in the scene, and as you move around you can see slight parallax against the background, but that is far from "volumetric".

What makes me skeptical is that in all of your demos, actors and live video are always in the distance, where it doesn't matter if there is any depth or volume. Why only have pre-scanned static objects in the foreground? If you can really do volumetric video, why not show it off in motion?

          Right now do you have a working demo of live 4D actors in the foreground that you can move around? Something like this? https://www.youtube.com/watch?v=TBtwElmFbLA

          I don’t want to be super negative about your demos. They are great examples of combining 360 video and pre-scanned objects. But there is a difference between presenting a “proof-of-concept” of what you want the company to eventually be able to achieve, and claiming that you already have that technology working and figured out.

          But please prove me wrong. These are extremely challenging problems that need to be figured out, and it would be amazing if you can actually do what you say. I want you to prove me wrong.

    • brandon9271

That's why it's called HYPE vr.. lol

  • Lucidfeuer

This all seems dubious. They could've called it volumetric video if the scene were actually a video, but this is a static volumetric scene (which in itself is already impressive) with added flat alpha video sequences (the NFL players).

This is a smart way to fake lightfield-like scenes on today's smartphones/computers, but certainly not "volumetric video". And that rig…wow, not exactly an accessible creative tool.

  • Cool, but these kinds of videos make sense in VR, not in AR.