
FOVE, a VR headset prototype in the works by a Japan-based team, is quickly closing the experience gap between itself and the Oculus Rift. If they continue at this pace, they could catch up, and with a trick up their sleeve—eye-tracking.

When I first took a look at FOVE’s VR headset back in November, my experience was less than stellar. The eye-tracking tech worked, but the overall experience left much to be desired (the presentation on inadequate hardware didn’t help the case). When I got in touch with FOVE’s CTO, Lochlainn Wilson, he told me he was dismayed that I ended up seeing that old prototype, which he said was “barely more than a mock up and is very far removed from the final product in terms of quality, accuracy and experience.” After seeing the company’s latest prototype at CES 2015, it’s clear that Wilson was not bending the truth—the latest version is a huge step in the right direction.

While the latest FOVE prototype looks similar on the outside, it’s using totally different display tech: currently a single 1440p LCD panel with a field of view that felt like it matched that of the Oculus Rift DK2. Though the current prototype lacks positional tracking (the ability to track the head through 3D space), the rotational tracking felt better than that of every other new VR headset I tried at CES, save for the Oculus Rift Crescent Bay prototype. It’s good to see that they’re focused on latency, because it’s the keystone that many entrants to the consumer VR headset market are currently lacking.


On top of a 1440p display, solid headtracking, and a decent field of view, FOVE’s real trick is its ability to track the wearer’s eyes.


Inside the headset, each lens is surrounded on the top, bottom, left, and right with IR LEDs which illuminate the eye, allowing cameras inside to detect the orientation of each eye. The view through the lenses looks just like you’d expect from a VR headset without eye-tracking; Wilson told me that this is different from other VR headset eye-tracking solutions, which can have components that obscure parts of the field of view.

The calibration process has been streamlined from the last time I saw it; now you follow a green animated dot around the edges of your gaze, pausing as it does to capture calibration points at nine or so discrete locations on the display. The whole thing takes probably no more than 20 seconds.
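The flow is simple enough to sketch. Assuming a 3×3 grid of targets in normalized display coordinates and hypothetical `show_dot`/`read_gaze` callbacks (these names are illustrative, not FOVE’s API), a nine-point calibration pass might look roughly like this:

```python
import time

# Nine calibration targets in normalized display coordinates:
# a 3x3 grid covering corners, edges, and the center of the view.
CALIBRATION_POINTS = [(x, y) for y in (0.1, 0.5, 0.9) for x in (0.1, 0.5, 0.9)]

def run_calibration(show_dot, read_gaze, dwell_s=2.0):
    """Move the dot to each target, dwell, and record a raw gaze sample."""
    samples = {}
    for point in CALIBRATION_POINTS:
        show_dot(point)               # render the animated dot at this target
        time.sleep(dwell_s)           # give the eye time to settle on it
        samples[point] = read_gaze()  # raw camera-space gaze reading
    # A real system would then fit a mapping from raw gaze readings
    # to display coordinates using these sample pairs.
    return samples
```

With a two-second dwell per point, nine points lands right around the ~20 second figure above.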

As it stands, FOVE is putting a fair amount of emphasis on the ability to aim with your eyes (probably because it’s easy to show and easy to understand), but to me that’s a red herring; what excites me about eye-tracking are the more abstract and enabling possibilities like eye-based interface and contextual feedback, simulated depth-of-field, foveated rendering, and avatar eye-mapping. Let’s go through those one by one real quick:

  • Eye-based interface and contextual feedback: Imagine a developer wants to make a monster pop out of a closet, but only when the user is actually looking at it. Or perhaps an interface menu that expands around any object you’re looking at but collapses automatically when you look away.
  • Simulated depth-of-field: While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (movement of the eyes to converge the image of objects at varying depths), it cannot simulate depth of field (the blurring of out-of-focus objects) because the flat panel means that all light from the scene is coming from the same depth. If you know where the user is looking, you can simulate depth of field by applying a fake blur to the scene at appropriate depths.
  • Foveated rendering: This is a rendering technique that aims to reduce the rendering workload, hopefully making it easier for applications to achieve the higher framerates which are much desired for an ideal VR experience. This works by rendering in high quality only at the very center of the user’s gaze (only a small region at the retina’s center, the fovea, picks up high detail) and rendering the surrounding areas at lower resolutions. If done right, foveated rendering can significantly reduce the computational workload while the scene will look nearly identical to the user. Microsoft Research has a great technical explanation of foveated rendering in which they were able to demonstrate 5-6x acceleration in rendering speed.
  • Avatar eye-mapping: This is one I almost never hear discussed, but it excites me just as much as the others. It’s amazing how much body language can be conveyed with headtracking alone, but the next step up in realism will come by mapping the mouth and eye movements of the player onto their avatar.
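To make the foveated rendering idea concrete, here’s a minimal sketch (my own illustration, not FOVE’s or Microsoft’s implementation) of how a renderer could pick a quality tier from a pixel’s angular distance to the tracked gaze point, assuming a simple linear degrees-per-pixel approximation:

```python
import math

def shading_rate(pixel, gaze, fovea_deg=5.0, mid_deg=15.0, deg_per_px=0.05):
    """Pick a render-quality tier from a pixel's angular distance to the gaze.

    The fovea only covers a few degrees of the visual field, so full
    resolution is needed only in a small region around the gaze point;
    the thresholds and degrees-per-pixel factor here are made-up values.
    """
    dist_px = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    ecc_deg = dist_px * deg_per_px   # approximate eccentricity in degrees
    if ecc_deg <= fovea_deg:
        return 1.0    # full resolution at the fovea
    if ecc_deg <= mid_deg:
        return 0.5    # half resolution in the near periphery
    return 0.25       # quarter resolution in the far periphery
```

Since most of a 1440p frame falls well outside the foveal region, shading the periphery at a quarter of full resolution is where the large savings come from.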


When you break it down like this, eye-tracking actually stands to add a lot to the VR experience in both performance and functionality—it could represent a major competitive advantage.

Lochlainn Wilson, CTO, told me that FOVE’s eye-tracking system will be fast and accurate enough to pull off everything listed above. Some of it I actually saw in action.




  • Druss

    Foveated rendering + 4K sounds like a match made in heaven. Today’s (or at least this year’s) hardware would suddenly be powerful enough for 4K at 90fps. Sadly I think it’s highly unlikely Oculus will suddenly surprise us (and more importantly, the devs) with eye-tracking at this point in time. Even if someone could hack it in at a later point in time, game engines would have to be modded… Damn. Seems like this Japanese company could actually beat Oculus, at least potentially, with this tech. Positional tracking is easily added in later, but eye tracking has so much potential… Wish they could just merge or collaborate or something! xD

    • Kennyancat

      I also doubt CV1 would be implemented with eye tracking this late in development, but if someone manages to get it just right in time, who knows… I just really like where this is going; simulated depth of field would be a total game changer imho.

      • George

        I wonder what the cost is right now though.

  • Eye tracking does sound like one of the key techniques to make VR all the more interesting. Perhaps not so much for interaction, but for human interaction. Foveated rendering is also something passive that can just benefit anything as it ups performance. Other than that, you do mention mouth tracking, I imagine a headset which has all kinds of sensors built in eventually, eye tracking, mouth tracking, sweat, heat, pulse, breathing… so much which can be used to alter the experience or just translate your emotions. Mhmm…

    • Dan

      The brain wave trackers would be a great inclusion. They can also generally sense the temple/jaw muscles – e.g. if you clench your teeth. I think Fove plan to integrate mouth tracking. This will make such a difference to virtual reality spaces – like Second Life etc. Being able to communicate in a virtual world with your expressions and eyes tracked as well as your head and movement will be a game changer. You’d think Oculus would be really pushing that, with Facebook etc. wanting it to be used for social applications. Surely that’s an obvious direction for the Rift under Facebook ownership? A VR social space.

  • brantlew

    Foveated distortion correction is potentially another important use for low latency eye tracking. Distortion correction today is fundamentally flawed because you can truly only correct for a single pupil position with a static function. Every eye position across the lens offers a unique distortion field. Currently the way to combat this is to improve the optics so that the distortion changes minimally within the eye-box and to create distortion correction that “averages” pupil positions. But it’s an imperfect solution and distortion flaws are evident in all headsets. In principle, eye tracking could be used for exact distortion correction at every pupil position – creating a much more “solid” world, reducing distortion constraints on lens design, and reducing sim sickness. But as usual with VR, the devil is in the details. Latency and tracking accuracy must be near-flawless for both saccadic and VOR motion.

    • That is actually very interesting. I have noticed that I see different amounts of the peripheral in the Rift when looking straight ahead and to the sides, so definitely a noticeable effect. Well, at least some people are working on eye-tracking, let’s hope they get it right :)

  • BradNewman

    “While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (movement of the eyes to converge the image of objects at varying depths)” Human eyes do two basic things: the eyes rotate in the sockets and the lenses focus. The rotation is known as vergence and is used to fuse binocular pairs of images. Focus is known as accommodation and is used to focus the images at different distances. Consumer HMD virtual cameras don’t converge but rather render looking directly forward, so as near field objects get closer, at a certain point the eyes can’t use vergence to create a fused binocular image. The light from the screen is collimated (or mostly collimated at a closer distance than infinity) by the HMD lenses, so the eye lenses are always focused at a fixed far distance, meaning any software simulation of accommodation is simply a trick on the eyes and not optically accurate. Out of all the potential solutions FOVE could provide I think vergence is the most valuable to allow near field convergence of images. The zSpace display (which does a form of convergence) is a great example of beautiful near field rendering of small objects.

    • Ben Lang

      Hey Brad,

      Thanks for your comment. I wanted to make sure I understood correctly — are you saying that the split screen view of the Rift and others don’t function in a way that allows for vergence? Or did you mean that it breaks down only with particularly close objects?

      • brantlew

        Currently the virtual cameras are represented at the pupil position of a person looking straight ahead (zero vergence). So when people point their eyes inwards the view of the pupils and virtual cameras do not agree. With eye tracking you would move the cameras closer together as vergence increases – or put more simply you just always place the camera on the pupil at all times.

      • BradNewman

        Both. The cameras point straight forward at all times. Your ability to converge on mid field objects is an illusion of vergence, but once objects enter the near field the disparity between the stereo images becomes so large that the brain can’t fuse them.

  • brandon9271

    now they can create a game to see how long you can make eye contact with a female before glancing at her breasts.. :-p

  • Jacob Pederson

    Is Foveated rendering even possible? Wouldn’t the latency in the eye-tracking (even if it is very small) make it look like the world is always gaining detail just as you look at it? I find texture pop-in and detail fade-in pretty annoying as it stands in current engines, but to have something similar happening everywhere I look would seem pretty immersion breaking.

    • Jacob Pederson

      Another thought: the prediction algorithms Oculus uses to reduce perceived latency aren’t going to work with eye tracking, because eye movement is much less predictable than head movement.

  • Dejan Kacurov

    There is also a release date for the new FOVE VR… Do you think it’s worth it? Here is what I found online and it seems legit https://www.futuregamereleases.com/fove-vr-eye-track-ultimate-experience-in-gaming/