We can’t believe our collective foveae (yes, that’s plural), because in only 24 hours FOVE’s Kickstarter campaign has blown past its halfway mark and beyond. If they haven’t sorted out stretch goals, maybe now is a good time.
At the time of this writing (approx. 24 hours after the campaign began), FOVE is sitting at almost $177k, leaving only around $73k until its crowdfunding goal of $250k is reached. No small feat, even in the face of Oculus’ 2012 triumph, which saw 230 percent of its goal reached within just 10 hours of launch.
But let’s not get bogged down in numbers here, because we’re dealing with two very different animals.
The team at FOVE has had a while to mature their eye-tracking VR headset, evidenced by the numerous meetups, expos, and conferences where they showed the device nearly every step of the way up until its crowdfunding stage. They’ve likely had thousands of heads (and double that in eyeballs) scrutinizing the FOVE—which is all well and good for consumer confidence. We like transparency. But some of you may be asking yourselves, “What’s all the hubbub about eye-tracking?”
Eye-tracking in VR is poised to do so much more than simply offer the world an excuse to build another ‘gaze shooter’ (shoot where you look). Road to VR’s Executive Editor Ben Lang put it best in a round-up of the many uses of gaze detection in VR, a field that FOVE has single-handedly ushered into the consumer space.
What excites me about eye-tracking are the more abstract and enabling possibilities like eye-based interface and contextual feedback, simulated depth-of-field, foveated rendering, and avatar eye-mapping. Let’s go through those one by one real quick:
- Eye-based interface and contextual feedback: Imagine a developer wants to make a monster pop out of a closet, but only when the user is actually looking at it. Or perhaps an interface menu that expands around any object you’re looking at but collapses automatically when you look away.
- Simulated depth-of-field: While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (the movement of the eyes to converge on objects at varying depths), it cannot simulate depth of field (the blurring of out-of-focus objects), because the flat panel means all light from the scene comes from the same depth. If you know where the user is looking, you can simulate depth of field by applying a fake blur to the scene at the appropriate depths.
- Foveated rendering: This is a rendering technique aimed at reducing the rendering workload, hopefully making it easier for applications to achieve the higher framerates that are much desired for an ideal VR experience. It works by rendering at high quality only at the very center of the user’s gaze (only a small region at the retina’s center, the fovea, picks up high detail) and rendering the surrounding areas at lower resolutions. If done right, foveated rendering can significantly reduce the computational workload while the scene looks nearly identical to the user. Microsoft Research has a great technical explanation of foveated rendering in which they were able to demonstrate a 5–6x acceleration in rendering speed.
- Avatar eye-mapping: This is one I almost never hear discussed, but it excites me just as much as the others. It’s amazing how much body language can be conveyed with just headtracking alone, but the next step up in realism will come by mapping mouth and eye movements of the player onto their avatar.
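To make the first use case above a bit more concrete, here’s a minimal sketch of the “trigger only when the user is looking” idea: checking whether an object falls within a small cone around the gaze direction. All names (`is_gazed_at`, the threshold value) are hypothetical illustrations, not part of FOVE’s actual SDK.

```python
import math

def is_gazed_at(gaze_dir, eye_pos, obj_pos, threshold_deg=5.0):
    """Return True if obj_pos lies within threshold_deg of the gaze ray.

    gaze_dir: (x, y, z) direction the eye is looking (need not be normalized)
    eye_pos, obj_pos: (x, y, z) world positions
    """
    # Vector from the eye to the object
    to_obj = tuple(o - e for o, e in zip(obj_pos, eye_pos))
    obj_mag = math.sqrt(sum(c * c for c in to_obj))
    if obj_mag == 0:
        return True  # object is at the eye itself
    gaze_mag = math.sqrt(sum(c * c for c in gaze_dir))
    # Angle between the gaze direction and the eye-to-object vector
    cos_angle = sum(g * t for g, t in zip(gaze_dir, to_obj)) / (gaze_mag * obj_mag)
    cos_angle = max(-1.0, min(1.0, cos_angle))  # clamp for float safety
    return math.degrees(math.acos(cos_angle)) <= threshold_deg
```

A game loop could call a check like this each frame and only fire the monster’s pop-out animation (or expand a gaze menu) while it returns True, collapsing again when the player looks away.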
Check back at Road to VR for more news, updates and stretch goals coming to the FOVE Kickstarter campaign.
Disclosure: At the time of publishing, FOVE is running advertisements on Road to VR.