Jaunt VR CTO Arthur van Hoff presented at last week’s 10th Silicon Valley Virtual Reality meetup, sharing technical details on the company’s approach to cinematic virtual reality. The presentation marks a milestone in the progress of virtual reality: it’s the first SVVR talk recorded in high-definition 3D 360-degree video.
You’ll recall that Jaunt VR came out of stealth mode a month ago, sharing some initial product details and announcing that it had raised $6.8 million in funding. This evening, the company was ready to go deeper into the technology.
The Jaunt camera rig features a custom 3D-printed assembly housing 14 GoPro cameras. Each camera captures an HD (1080p) stream at 60 frames per second; combined, that translates into a whopping 1.7 billion pixels of capture per second.
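That throughput figure checks out with a quick back-of-the-envelope calculation (assuming standard 1920×1080 frames):

```python
# Back-of-the-envelope check of the rig's capture throughput.
cameras = 14
width, height = 1920, 1080   # 1080p frame
fps = 60

pixels_per_second = cameras * width * height * fps
print(f"{pixels_per_second:,} pixels/s")  # 1,741,824,000 -- about 1.7 billion
```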
The outward-facing cameras are all mounted vertically to maximize the vertical field of view.
Processing that data presents its own challenges: not only the sheer volume of data, but also the post-processing required to render a 3D 360-degree image from an array of 14 2D video streams. Each image combined into a single frame of video needs to be color matched and have its gain and white balance adjusted, lens distortion needs to be corrected, and then each image set needs to be blended and stitched.
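Jaunt hasn’t published its pipeline, but the color-matching step is easy to illustrate. A minimal sketch of gray-world-style white-balance matching, where each camera’s channel means are scaled toward a reference camera’s (the function name and approach are my own illustration, not Jaunt’s code):

```python
import numpy as np

def match_white_balance(img, ref_means):
    """Scale each color channel so its mean matches a reference camera's
    channel means -- a simple gray-world-style white-balance match."""
    img = img.astype(np.float64)
    gains = ref_means / img.mean(axis=(0, 1))  # per-channel gain
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Two synthetic frames: a neutral reference camera and one with a color cast.
ref = np.full((4, 4, 3), (100, 100, 100), dtype=np.uint8)
cast = np.full((4, 4, 3), (80, 100, 120), dtype=np.uint8)

matched = match_white_balance(cast, ref.mean(axis=(0, 1)))
print(matched[0, 0])  # each channel pulled toward the reference's 100
```

The real pipeline also has to correct lens distortion and blend seams, which are considerably harder problems than channel gain.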
This post-processing is accelerated using CUDA (NVIDIA’s parallel computing platform for graphics cards), but at this point each second of footage takes approximately 20 seconds to fully render into 3D 360-degree video. This will improve over time, of course.
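At that 20:1 ratio, turnaround time for even a short piece adds up quickly:

```python
# Rendering runs roughly 20x slower than real time, per the talk.
render_ratio = 20     # seconds of processing per second of footage
clip_minutes = 5      # length of a short-form experience

render_minutes = clip_minutes * render_ratio
print(f"{render_minutes} minutes")  # 100 minutes to render a 5-minute clip
```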
The resulting video is encoded with H.264 in a spherical stereoscopic layout. When asked whether the specific format was Jaunt-proprietary, van Hoff answered in the affirmative, but clarified that it’s proprietary by necessity: no other capable format exists at this point. He added that Jaunt would be willing to work with others to define a standard format.
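Jaunt’s exact layout is proprietary and not public, but spherical video is most commonly stored as an equirectangular projection, where longitude and latitude map linearly to pixel coordinates. A hypothetical sketch of that mapping (the frame size and conventions here are illustrative assumptions, not Jaunt’s spec):

```python
import math

def equirect_pixel(yaw, pitch, width, height):
    """Map a view direction (yaw, pitch in radians) to pixel coordinates
    in an equirectangular frame. Yaw in [-pi, pi), pitch in [-pi/2, pi/2]."""
    u = (yaw / (2 * math.pi) + 0.5) * width   # longitude -> horizontal
    v = (0.5 - pitch / math.pi) * height      # latitude  -> vertical
    return u, v

# Looking straight ahead lands in the center of a 4096x2048 frame:
print(equirect_pixel(0.0, 0.0, 4096, 2048))  # (2048.0, 1024.0)
```

A stereoscopic layout would typically pack two such projections, one per eye, into a single encoded frame.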
With a target of 4K resolution per eye (and enough data captured to generate 8K per eye), the quality bottleneck at this point is clearly the display.
Raw video still needs to be edited into a polished cinematic experience, and Jaunt’s goal is to let professionals use the tools they already know to add cuts, transitions, and special effects, perform color correction, and so on. Audio, too, needs to be positioned within the scene: Jaunt uses ambisonic recording to capture sound, then maps it to the appropriate place in the scene. The Jaunt player takes care of rotating the audio as the viewer turns their head.
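One appealing property of ambisonics is that rotating the whole sound field for head yaw is cheap: in first-order B-format, the X/Y components transform as a simple 2D rotation, while W (omnidirectional) and Z (vertical) are untouched. A minimal sketch of the textbook case (Jaunt’s player internals and sign conventions are unknown; this is the standard formulation):

```python
import math

def rotate_bformat_yaw(w, x, y, z, yaw_rad):
    """Counter-rotate a first-order B-format sample to compensate for
    head yaw. W (omni) and Z (vertical) are unaffected by pure yaw."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (w, c * x + s * y, -s * x + c * y, z)

# A source dead ahead (all directional energy in X); the listener
# turns their head 90 degrees to the left:
w, x, y, z = rotate_bformat_yaw(1.0, 1.0, 0.0, 0.0, math.pi / 2)
print(round(x, 6), round(y, 6))  # energy moves from X into -Y
```

After the rotation the source sits to the listener’s right (negative Y in the usual B-format convention), which is exactly what you’d expect after turning left.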
By building on industry-standard tools, Jaunt reduces the amount of additional technical training required, which should accelerate adoption of its solution.
That still leaves other considerations when producing in 360 degrees. There’s no “behind the camera”, so where do you put the crew? How do you cue viewers which direction to look so they don’t miss a key event, such as something happening behind them? And since camera movement is severely limited (moving it easily triggers VR sickness), how do you adapt your storytelling process? These production challenges will have to be worked out along the way.
So far the company is staying tight-lipped about when content will be available to the general public. The VR viewer will work on the Rift, of course, and Jaunt says it will support other HMDs as well. An iOS viewer will provide a “window on the world” where you pan around the scene by rotating your iPad, similar to what Occipital does with its 360 app for static images. Jaunt is aiming for short-form, five-minute experiences initially, presumably to introduce audiences to the medium.
As mentioned at the top of the article, Silicon Valley VR meetup #10 was recorded using the Jaunt camera cluster. I’ll post a new article once it’s available for viewing.