Overcoming Limitations

What HypeVR has today is a promising look at what VR video could really be, but there are clear production and distribution limitations that will need to be solved before it can be used to make the sort of cinematic VR content audiences want.

Volumetric Shadows & Panning

You can think about this like normal shadows: when an object moves past you and is illuminated by a light from the front, a shadow is cast behind the object because the light can't pass through it. The location of the shadow depends on the locations of the light and the object.

Since HypeVR captures the depth of the scene using LiDAR (which bounces lasers off of objects to determine how far away they are), objects block the lasers and cast 'volumetric shadows'. Instead of a dark area, these volumetric shadows leave holes in the geometry behind the object, because the camera is unable to see behind it (just like the light can't shine behind the object).

If you stood completely still at the exact capture point, this wouldn't matter at all, because you'd never see the volumetric shadows behind the captured objects. But since users can move around within the space, they can potentially see behind those objects, exposing the volumetric shadows.
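
To make this concrete, here is a minimal sketch (illustrative data only, not HypeVR's pipeline) of how a single-viewpoint depth scan classifies space, and why everything behind a captured surface ends up as a hole:

```python
import numpy as np

# A single-viewpoint depth scan only knows the first surface hit along each ray.
# Space nearer than the hit is known-empty, the hit itself is a known surface,
# and everything beyond it is unknown: the 'volumetric shadow'.

depth = np.array([
    [5.0, 5.0, 2.0, 5.0],   # LiDAR ranges (meters) for a small patch of rays;
    [5.0, 2.0, 2.0, 5.0],   # the 2.0 m returns are a nearby object
])

samples = np.linspace(0.5, 6.0, 12)  # depths along each ray to classify

known_empty = samples[None, None, :] < depth[..., None]          # before the hit
surface = np.isclose(samples[None, None, :], depth[..., None], atol=0.3)
shadow = ~known_empty & ~surface  # behind the hit: holes a roaming viewer sees

print(f"{shadow.mean():.0%} of the sampled volume is in volumetric shadow")
```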

The HypeVR content I saw had no visible volumetric shadows, which means the company has found effective ways of dealing with them, at least for these specific pieces of content.

Perhaps the easiest way is hand-fixing the geometry to fill the holes in post-production. That could be relatively simple (in the case of a person walking by a few yards away from the camera) or extremely difficult (in the case of putting the camera in the middle of a dense forest, with nearby trees and plants casting all manner of volumetric shadows across the scene).
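
As a toy version of that simple case (my illustration, not HypeVR's actual tooling), a narrow shadow can be patched by interpolating depth from valid neighboring samples; the dense-forest case is hard precisely because there is little valid data nearby to borrow from:

```python
import numpy as np

# Fill missing depth samples (volumetric shadows) from their valid neighbors.
depth = np.array([1.0, 1.1, np.nan, np.nan, 1.4, 1.5])  # NaN = hole in the scan
valid = np.flatnonzero(~np.isnan(depth))                # indices of good samples

# Linear interpolation across the gap, anchored on the surrounding known values.
filled = np.interp(np.arange(depth.size), valid, depth[valid])
print(filled)  # [1.  1.1  1.2  1.3  1.4  1.5]
```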


Another way is to insert a 3D model (which could itself be captured from the real world with photogrammetry) into the space after capture, so that there's no shadow in the first place. That's relatively easy because the entire scene is already rendered as 3D geometry, much like a videogame environment, so inserting more 3D objects is straightforward.
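
As a minimal sketch of why that insertion is straightforward (the function and data here are made up for illustration), placing a photogrammetry model in the captured scene is just a transform of its vertices into scene coordinates:

```python
import numpy as np

def place_model(vertices, position, yaw_deg, scale=1.0):
    """Rotate about the vertical axis, scale, and translate a model's vertices."""
    t = np.radians(yaw_deg)
    rot_y = np.array([[ np.cos(t), 0.0, np.sin(t)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(t), 0.0, np.cos(t)]])
    return vertices @ rot_y.T * scale + position

tree = np.array([[0.0, 0.0, 0.0], [0.0, 3.0, 0.0], [1.0, 1.0, 0.0]])  # stand-in
placed = place_model(tree, position=np.array([4.0, 0.0, -2.0]), yaw_deg=90.0)
print(placed)  # vertices now sit in scene space, covering where the hole was
```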

And that last approach is probably where volumetric video capture will ultimately go in the future: a fusion of live-action scene captures combined seamlessly with CGI models in heavy post-production (similar to what we see in today's blockbuster film production).

Moving the capture rig is likely to complicate things further, as it will cause volumetric shadows to pan across the scene. HypeVR tells me they haven’t yet shot moving-camera tests to see how their tech handles panning. If it turns out that the rig can’t be moved while filming, it may or may not matter, depending upon how important moving cameras are to the language of VR filmmaking; presently, many 360 video productions for VR are shot with static cameras.

Download Sizes and Data Handling

HypeVR's capture rig uses 14 RED cameras, each shooting at 6K and 60 FPS, plus the LiDAR data captured for every frame. It takes time to process and reconstruct all of that data into its final form (currently around 6 minutes per frame, processed locally). After it's all properly rendered, the experience needs to get from the web to the user somehow, but at 2GB per 30 seconds of capture, that's going to be tough. Still, 2GB per 30 seconds of video is a massive improvement over where HypeVR was not long ago, when 30 seconds of capture clocked in at a whopping 5.4 terabytes.
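
For a sense of scale, here is the arithmetic implied by those figures (approximate, and using only the numbers quoted above):

```python
FPS, CLIP_SECONDS, MIN_PER_FRAME = 60, 30, 6          # figures quoted above

frames = FPS * CLIP_SECONDS                           # 1,800 frames per clip
print(f"processing: {frames * MIN_PER_FRAME / 60:.0f} hours per 30 s clip")  # 180

raw_tb, final_gb = 5.4, 2.0                           # before vs. after
print(f"size reduction so far: {raw_tb * 1000 / final_gb:.0f}x")             # 2700
```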


Making volumetric capture technologies like HypeVR's work at scale will require still better compression to achieve practical file sizes. Faster consumer internet connections will also likely play a key role in making this sort of VR video streamable, so it can start right away without the wait for a huge download.
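
For reference, here is roughly what the current 2GB-per-30-seconds figure would demand as a sustained stream (a back-of-the-envelope estimate that ignores protocol overhead):

```python
gb_per_clip, clip_seconds = 2.0, 30
mbps = gb_per_clip * 8 * 1000 / clip_seconds  # decimal units for simplicity
print(f"~{mbps:.0f} Mbps sustained")  # ~533 Mbps, far beyond typical broadband
```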

– – — – –

There are definitely challenges to solve when it comes to volumetric video capture, but that's normal for any new film production technology. HypeVR has demonstrated that it's possible to achieve the sort of compelling live-action VR video that everyone wants while mitigating or eliminating its limitations. Employing the technology in increasingly complex productions will require smart direction, and probably the development of new techniques, to achieve practical production times and a seamless end product; but if the company's demo scenes are any indication, the results may very well be worth the effort.




  • Ian Shook

    Super exciting!

  • Sponge Bob

    And how much does it cost?

    50K, 100K, more?

    Not for consumer or anything close to consumer VR

    Mark my words:
    General public VR usage will explode only when consumers are able to transfer their own surroundings into VR experiences which they can edit or augment in any way, and share them on, e.g., a YouTube-type of online service

    Photogrammetry is a good start and the right direction to follow – they just do it all wrong…

    Why?

    Not gonna tell :-)

    • Daniel Gochez

      350k in cameras alone… well, it depends on the model they used. A good RED camera with lens can set you back 25k, and they used 14

      • Sponge Bob

        they use LiDAR too (that thing on top – much like on Google's driverless cars), and a good LiDAR is much more expensive than a camera by definition

        • user

          Google said its LiDAR costs $8,000

          • Sponge Bob

            it was 50K originally

            a good camera should cost less anyway

  • Foreign Devil

    Is there anywhere we can download those 30 second clips? Also could a computer meeting Oculus minimum specs have any chance of playing it back smoothly?

  • Grinch

    Ridiculous hype. Looking at the rig, it's obvious that there will be massive holes between the camera positions unless every subject stays a considerable distance away (100′ or more). Next, the interaxial distance is way too big for effective stereoscopy, leading to serious miniaturization – but if everything has to be far enough away to avoid the huge holes, humans can't perceive depth at that range anyway. And being able to move around a couple of feet in each direction is hardly "volumetric". Once again, hype and exaggerated reality…

    • Sponge Bob

      I would put my money on Matterport's capture system as opposed to this monstrosity…

      At least it costs 10 times less and does the job well with non-moving surroundings (except for the running photographer – you have to hide behind a corner to not end up in the captured VR scene – lots of hassle :):):)

      Still DOA for the general public :)

      • Matterport? That's different tech targeted at a different audience. You cannot, for example, look under or around an object from your current position; in fact, you cannot move within it at all, other than selecting a predefined hotspot which it morph-blends to. From what I could tell, anyway.

        • Sponge Bob

          the tech is similar in part – based on cameras (and also a depth camera or LiDAR in this contraption), rotating or not.
          If rotating (moving), then you need fewer cameras (less $$$) but can only capture fixed scenes like house interiors – that's what Matterport does best.
          Even with fixed scenes, with a few predefined camera locations there will be lots of gaps in the VR reconstruction.

          You really need freely moving camera(s) to look at each and every object from different angles

          Coming up :-)

      • Matterport isn't anywhere close to Hype's solution. Matterport stitches stereoscopic 360 images – the only use of 3D is a very low-res dollhouse view to establish spatial relationships between the panoramas and for teleporting. Look at a Matterport mesh in Unity or 3D modeling software and you will quickly see you fell for a parlor trick

    • muchrockness

      This system scans and builds a 3D model of the scene. You can view the scene from any perspective, and therefore you can model the virtual eyes to be any interaxial distance you want.

    • Thomas Jergel

      Maybe if they get the cost down enough, they could use multiple cameras to fill enough of the gaps.

      The amount of data would be HUGE though and might require even more specialized software to handle such a workflow and files.

      • WthLee

        I read somewhere it's 8 GB of unprocessed data per frame…

  • Becca

    It captures at 60 FPS and it takes 6 minutes to process a frame, so 6 hours to process a single second of video…

    • jonas wahlbärj

      Do you know how long it takes for Disney's movies to process one frame? Hours.

      • OgreTactics

        not true for at least 10 years now.

        • Zach Gray

          Actually more true now than ever. Final frame rendering can easily climb 30+ hours at the feature level.

          http://io9.gizmodo.com/one-animal-in-zootopia-has-more-individual-hairs-than-e-1761542252

          • Daniel Gochez

            I've been in 3D for 25 years, and even with Moore's law, rendered frames still take about the same amount of time as they used to. Sounds crazy, especially if you consider that I started with a machine with an 8 MHz processor and I am now using a machine totalling about 48,000 MHz (dual Xeon, 16 cores at 3 GHz each). The easy explanation is that we are doing much higher resolutions (DV resolution = 345,600 pixels, while 4K = 9,437,184 pixels) and much better quality than we used to, using more complex models, shaders, and computationally intensive calculations that simulate realistic reflection, refraction, global illumination, caustics, light scattering, etc.

          • Raphael

            I use Octane and Cinema 4D… GPU rendering. Previews are very fast, but any rendering higher than basic DL (direct lighting) takes a long time. Still, I believe I'm getting more done in the time I've been using Octane, but even GPU rendering isn't that fast. Then again, when I see what's possible in realtime with Unity or Unreal, I wonder why I even bother?

          • Raphael, don't forget that many textures in high-quality realtime games/engines have previously been rendered/baked in CAD, so keep bothering :)

          • OgreTactics

            Hybrid path-tracing and ray-marching have existed for 10 years, and the only established engine that managed to approach real-time rendering was Brigade, until it was vaporwaved by Octane (god I hate Octane).

            I ask myself the same thing when I see that not just the technology but even software like C4D or Octane hasn't evolved in 20 years, which is huge.

          • Daniel Gochez

            You can achieve really good results with Unreal, using it as a render engine; your final frames will be almost instantaneous, but the setup process is more involved than traditional 3D rendering. So in the end it's more man-hours vs. less render time, and most of us still prefer to have a render farm and let it render overnight rather than putting in extra hours to have it render quickly.

          • OgreTactics

            Not just crazy, but baffling. I see so many young people wanting to get into 3D like my generation got into Photoshop or Unity or Ableton… but a never-before-seen mass of young people are just giving up or preferring to go for simple motion work or even code.

            3D CG is by far the WORST computing technological domain of all in terms of conception, evolution, accessibility, sense…

          • Daniel Gochez

            I have also seen many 3D artists give up. And I don't blame them: the hours are long; the software complex, expensive, and unstable; the quality bar is very high; and the pay is meh. How did it become this way? This generation grew up with endless 3D shows and movies, and it's the new and exciting art form, so everyone wanted to be part of it, even if schools are horribly expensive.

          • OgreTactics

            Well, if the VR market says anything… maybe because there's a lack of critical step-back and rational practicality in what the companies creating tools or hardware are doing.

            The fact that C4D or 3ds Max look, feel, and are used almost EXACTLY like in the '90s, because of how little the interface has evolved and how crazily complicated it still is, says a lot about the lack of conception and sense in how companies are creating tools.

            Which is disappointing, because research, on the other hand, is doing a tremendous job, constantly iterating and evolving; but it's like nobody integrates their algorithms or even wants to be competitive.

  • ra51

    Demo available so that we can “believe the hype”?

  • OgreTactics

    Impractical but nice experimental rig. Since processing is so heavy, I wonder which is best: a Lytro Immerge or this.

  • Actually watched it at CES, impressive.

  • HopscotchInteractive

    HypeVR’s demo at CES was great and I liked crouching down and looking under the water buffalo to see the water behind it. 360° volumetric video is an impressive experience, but I was like, “Can’t I teleport?” I already wanted to go further. Having explored virtual tours in the Vive (Realities.io, The Lab, Google Earth) I can see where this is going and how it might blend down the road with other media. Even though it’s not scalable at present, it should get there so at least experiencing it becomes more mainstream. I know consumers and even most pros can’t afford this rig, so yes @disqus_PDyszClMXc:disqus is right from a content creation perspective, Matterport is a good way to go to get .OBJ and point-cloud to play with, or experimenting with multiple POV 360°…video really changes the experience.

  • Jolly

    Sounds great. I want one of those cameras! But I will never have one. Too costly.

  • That's an awesome project, one of the most interesting things I've seen come out of CES. Anyway, the scenes made with Intel were 3GB per frame, so there are still lots of problems to face…

  • Tomas Sandven

    OK I am HYPED!!!

  • user

    Thank god. That's what I want to see. Keep pushing this tech instead of 360 video.

  • I see a lot of interest in the tech behind it. I'm certain it will make an interesting demo. I look forward to the results, which I expect to be like Kinect-captured video on steroids.

    But as far as the future of 360 videos in general, I still don't see it.

    Is there a huge demand for 360 videos? How many do you watch in a day? I've checked out a few, here and there, out of curiosity and desperation for VR content. The number of them I thought were interesting enough to see more of was small. Even the best didn't really warrant the effort.

    Video is a passive medium, and all attempts to make it interactive have failed in the past simply because people like it to remain passive. It's something you do while eating, relaxing, or even entertaining a date. It's a story you take in, not an event. It's told through direction, framing, and focus. 360 video is good for events (sports, travel, concerts), but lousy for storytelling. And what it demands from the audience to get that experience is too high. Even if we get our VR sunglasses in the future, it's still mentally and physically taxing to look around constantly. I can't be the only person who has a hard time turning 180 degrees around on a couch.

    This isn't the future of cinema; it's a curiosity and a tech demo.

    • Mo Last

      check out 360 3D videos, not 360 only

  • Tony a

    lols – we have been doing this for a while now… i.e., the last couple of years. Want a demo? https://www.youtube.com/watch?v=4uYkbXlgUCw
    The production and distribution limitations we have taken care of (we happily stream unlimited detail over a low-bandwidth net easily); however, we also still have volumetric shadows, and we have to work on that ourselves.

  • Moris974

    The same technology exists for CG only:
    – PresenZ: http://www.nozon.com/presenz
    – Dragonfly: https://www.suprawings.com/

    This removes the cost/constraint of the camera.

  • Wyatt Rappa

    Maybe the answer to solving the shadow problem is combining this with a circular ring of LiDAR-carrying drones which could shoot the scene from the outside, looking toward the camera rig.

    • Sponge Bob

      dude, this scene sucks and hardly justifies the expense

      the only scene justifying a ring of cameras around it would be a high-quality porn movie where you can be inside the movie :)

  • SunnytheVV

    I started using the EF EVE (http://www.ef-eve.com) volumetric video platform a month ago, and although the capture has some rough edges (some people might need better quality, but for me it's perfect for the cost of $39.99), I am very impressed. I make volumetric video within seconds, upload the capture into my own Unity environments, and stream live. It is a huge breakthrough: render time is 0, and anyone who has 2 Kinect cameras can do it. So to sum up: cheap volumetric capture, portable, streams live volumetric video, and anyone can make volumetric content. That's a real change, not a big expensive rig with insane render time.

  • dk

    it's the same as HoloLens' HoloTours