Facebook F8, the company’s annual developer conference, is currently in full swing on its second and final day. Today’s big VR announcement from the social platform giant? Facebook is releasing the 360 Capture SDK, a tool that developers can integrate into VR apps so you can capture and share your VR experiences as 360 photos and videos. The SDK can be integrated into Unity and Unreal Engine titles, works across NVIDIA and AMD GPUs, and is headset agnostic. The 360 Capture SDK is available today on GitHub.
This hopefully also means we’re getting something Oculus promised as a feature back when Home was initially announced for the consumer Rift in the summer of 2015: 360 video previews of games. While this is playful speculation, and would greatly improve the buying experience for a store almost entirely bereft of demos, the 360 Capture SDK will at the very least let users show off their in-game exploits via Facebook’s News Feed or directly in a VR headset, so that the next Road to VR review can feel more immersive to prospective customers.
According to Homin Lee and Chetan Gupta, who helped develop the 360 Capture SDK, fleshing out the functionality wasn’t as simple as capturing a 360 video and stitching it together, though.
“We solved the problem by rethinking the way 360 content is created,” write Lee and Gupta. “Typically, the process starts by capturing various photos, stitching them together, and then finally encoding them. Previously, we needed to capture the content within a game engine, while ensuring we could produce a high-quality image quickly and on baseline hardware for VR. Now, all that’s possible with the 360 Capture SDK.”
Instead of traditional capture followed by post-process stitching, the approach 360 cameras take and one that demands serious specs, Facebook’s 360 Capture SDK uses cube mapping, a technique which promises to run “on baseline recommended hardware for VR without compromising the experience,” meaning you can still hit the critical 90 frames per second for VR while capturing 360 video and photos.
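To make the idea concrete, here is a minimal sketch of how a cube map covers the full sphere: six virtual 90-degree perspective cameras, one per cube face, each rendering what the game engine already knows how to draw. This is not Facebook’s actual implementation; the face layout and the `face_direction` helper are illustrative assumptions.

```python
import numpy as np

# The six cube-map faces, each notionally captured by a 90-degree
# perspective camera. Each face is a (forward, up) orientation pair.
CUBE_FACES = {
    "+x": (np.array([1.0, 0, 0]), np.array([0, 1.0, 0])),
    "-x": (np.array([-1.0, 0, 0]), np.array([0, 1.0, 0])),
    "+y": (np.array([0, 1.0, 0]), np.array([0, 0, -1.0])),
    "-y": (np.array([0, -1.0, 0]), np.array([0, 0, 1.0])),
    "+z": (np.array([0, 0, 1.0]), np.array([0, 1.0, 0])),
    "-z": (np.array([0, 0, -1.0]), np.array([0, 1.0, 0])),
}

def face_direction(face, u, v):
    """Map face-local coordinates u, v in [-1, 1] to a unit view ray.
    With a 90-degree field of view, the image plane sits one unit in
    front of the camera, so the ray is forward + u*right + v*up."""
    forward, up = CUBE_FACES[face]
    right = np.cross(forward, up)
    d = forward + u * right + v * up
    return d / np.linalg.norm(d)
```

Because every direction on the sphere falls onto exactly one face at a modest angle, the six renders tile the sphere with no overlap to blend and no seams to stitch, which is where the speed advantage over camera-style stitching comes from.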
Facebook is touting video resolution at 1080p for News Feed viewing, and 4K for in-headset VR viewing. Video is, however, capped at 30 fps (though developers with especially well-optimized games could choose to crank that up a notch), but considering conventional stitching methods take anywhere from 20 to 40 seconds just to capture and stitch a 360 photo, Facebook’s cube mapping technique looks best in class.
The SDK plugin is said to be compatible with multiple game engines including Unity and Unreal, and an API allows for integration into any engine. NVIDIA and AMD GPUs are also natively supported, meaning most, if not all, of the desktop VR ecosystem can capture 360 photo/video and share it with the rest of the VR community.
According to Lee and Gupta, there are a few inherent benefits of using cube maps besides speed of capture:
- They don’t have geometry distortion within the faces, so each face looks exactly as if you were viewing the scene head-on with a perspective camera, without the warping of an object and its surrounding area. This is important because video codecs assume motion vectors are straight lines, so cube maps encode better than the curved motion paths of the equirectangular format.
- Their pixels are well-distributed—each face is equally important. There are no poles as in equirectangular projection, which contains redundant information.
- They’re easier to project. Each face is mapped only on the corresponding face of the cube. We realized we could skip the stitching process and instead use the game engine to natively capture a cube map—saving us on performance and speeding up our efforts. As an added bonus, the cube map content was actually higher quality compared to stitched content, because we didn’t lose quality in stitching and converting to equirectangular.
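The “no poles” point above can be quantified with a back-of-the-envelope calculation. In an equirectangular image every pixel row spans the full 360 degrees of longitude, but the circle of latitude it represents shrinks as you approach a pole, so high-latitude rows store heavily oversampled, redundant pixels; cube-map faces avoid this. The sketch below is our illustration, not from the SDK:

```python
import math

def equirect_stretch(latitude_deg):
    """In an equirectangular projection, each pixel row covers 360
    degrees of longitude, but the circle of latitude it represents
    shrinks by cos(latitude). The horizontal oversampling factor
    relative to the equator is therefore 1 / cos(latitude)."""
    return 1.0 / math.cos(math.radians(latitude_deg))

for lat in (0, 45, 80, 89):
    print(f"{lat:2d} deg latitude: {equirect_stretch(lat):5.1f}x oversampled")
```

At the equator there is no stretch, but at 80 degrees latitude each pixel is repeated roughly 5.8 times over, which is exactly the redundancy Lee and Gupta describe and which a codec must still spend bits encoding.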
Check out the Facebook blog post for more info on how Facebook creates cube maps for 360 video.