
One demo had me floating amidst a futuristic cityscape. When I was ready to start the action, I looked down at a button below me, which was instantly ‘selected’ as soon as I looked at it, causing an army of hovering drones to pop up in front of me. As I looked at them, lasers fired immediately, destroying each one in rapid succession as my eyes saccaded across the scene. It felt fairly accurate and fast, and worked well as a proof of concept, but aiming and firing with your eyes alone is not exactly natural, so it also felt a little strange.
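
The gaze-select interaction described above is, at its core, a ray cast from the eye along the tracked gaze direction. A minimal sketch of the idea (illustrative names only, not FOVE's SDK):

```python
# Hypothetical sketch of gaze-based selection: cast a ray from the eye
# along the tracked gaze direction and report the nearest target it hits.

def gaze_select(gaze_origin, gaze_dir, targets):
    """Return the first target sphere hit by the gaze ray, or None.
    Each target is (center, radius); vectors are (x, y, z) tuples and
    gaze_dir is assumed normalized."""
    best, best_t = None, float("inf")
    for center, radius in targets:
        # Vector from the ray origin to the sphere center.
        oc = tuple(c - o for c, o in zip(center, gaze_origin))
        t = sum(a * b for a, b in zip(oc, gaze_dir))  # projection onto the ray
        if t < 0:
            continue  # target is behind the viewer
        closest = tuple(o + t * d for o, d in zip(gaze_origin, gaze_dir))
        dist2 = sum((a - b) ** 2 for a, b in zip(center, closest))
        if dist2 <= radius ** 2 and t < best_t:
            best, best_t = (center, radius), t
    return best

# Looking straight down +z at a drone 5 m ahead, slightly above center.
hit = gaze_select((0, 0, 0), (0, 0, 1), [((0, 0.1, 5), 0.5)])
```

In the demo, each frame's hit result would trigger the laser, which is why targets fall as quickly as the eye can saccade between them.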

Another demo showed simulated depth-of-field. I entered a room and gazed at two soldiers to gun them down. From there, Wilson had me alternate my gaze between one of the corpses and the wall behind it. The scene blurred appropriately based on the depth of my gaze, bringing the corpse into focus while blurring the background, or vice versa. It was hard to tell whether the effect was fast and accurate enough to pass as true depth-of-field, especially as Wilson had exaggerated the blurring for demonstration purposes. However, if FOVE proves fast and accurate enough to pull off foveated rendering, as Wilson asserts, depth-of-field should pose no issue either, though it may require some tuning to feel right.
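
Conceptually, the effect works by reading the scene depth under the gaze point to set a focal distance, then blurring each pixel in proportion to its distance from that focal plane. A rough sketch (the function and parameters are assumptions for illustration, not FOVE's implementation):

```python
# Hypothetical sketch of gaze-driven depth-of-field: the eye tracker gives
# a gaze point, the depth buffer gives the distance of the surface under
# it (the focal depth), and each pixel blurs by a circle-of-confusion
# that grows with its offset from the focal plane.

def circle_of_confusion(depth, focal_depth, aperture=0.1, strength=1.0):
    """Blur radius (arbitrary units) for a pixel at `depth` when the
    viewer is focused at `focal_depth`. Pixels on the focal plane get 0;
    `strength` is the kind of exaggeration knob Wilson appears to have
    turned up for the demo."""
    return strength * aperture * abs(depth - focal_depth) / depth

# Example: focused on a corpse 2 m away; surfaces further back blur more.
focal = 2.0
for d in (2.0, 4.0, 8.0):
    print(d, circle_of_confusion(d, focal))
```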

Wilson even told me that measurement of pupil dilation could be possible with FOVE’s eye-tracking system, though he called it an “inexact science” for the time being.


The language of utilizing eye-tracking input effectively (especially with regard to user interaction) still has a long way to go, but FOVE’s latest prototype serves as a solid proof of concept of what eye-tracking can add to virtual reality, and they’ve so far got an impressive headset to boot. Assuming FOVE stays on this trajectory, they’re definitely worth keeping an eye on.



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Druss

    Foveated rendering + 4K sounds like a match made in heaven. Today’s (or at least this year’s) hardware would suddenly be powerful enough for 4K at 90fps. Sadly I think it’s highly unlikely Oculus will suddenly surprise us (and more importantly, the devs) with eye-tracking at this point in time. Even if someone could hack it in at a later point in time, game engines would have to be modded… Damn. Seems like this Japanese company could actually beat Oculus, at least potentially, with this tech. Positional tracking is easy to add in later, but eye tracking has so much potential… Wish they could just merge or collaborate or something! xD

    • Kennyancat

      I also doubt CV1 would be implemented with eye tracking this late in development, but if someone manages to get it just right in time, who knows… I just really like where this is going; simulated depth of field would be a total game changer imho.

      • George

        I wonder what the cost is right now though.

  • Eye tracking does sound like one of the key techniques to make VR all the more interesting. Perhaps not so much for interaction with the game, but for human interaction. Foveated rendering is also something passive that can just benefit anything, as it ups performance. Other than that, you do mention mouth tracking; I imagine a headset which eventually has all kinds of sensors built in: eye tracking, mouth tracking, sweat, heat, pulse, breathing… so much which can be used to alter the experience or just translate your emotions. Mhmm…

    • Dan

      The brain wave trackers would be a great inclusion. They can also generally sense the temple/jaw muscles – e.g. if you clench your teeth. I think Fove plans to integrate mouth tracking. This will make such a difference to virtual reality spaces – like Second Life etc. Being able to communicate in a virtual world with your expressions and eyes tracked as well as your head and movement will be a game changer. You’d think Oculus would be really pushing that, with Facebook etc. wanting it to be used for social applications. Surely that’s an obvious direction for the Rift under Facebook ownership? A VR social space.

  • brantlew

    Foveated distortion correction is potentially another important use for low latency eye tracking. Distortion correction today is fundamentally flawed because you can only truly correct for a single pupil position with a static function. Every eye position across the lens offers a unique distortion field. Currently the way to combat this is to improve the optics so that the distortion changes minimally within the eye-box and to create distortion correction that “averages” pupil positions. But it’s an imperfect solution and distortion flaws are evident in all headsets. In principle, eye tracking could be used for exact distortion correction at every pupil position – creating a much more “solid” world around you, reducing distortion constraints on lens design, and reducing sim sickness. But as usual with VR, the devil is in the details. Latency and tracking accuracy must be near-flawless for both saccadic and VOR motion.
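
    One way to picture what brantlew describes: instead of a single static correction, calibrate the distortion at several pupil positions across the eye-box and interpolate between them at runtime using the tracked pupil. A toy sketch with assumed names and made-up coefficients (not any headset's actual calibration):

    ```python
    # Hypothetical per-pupil distortion correction: bilinearly interpolate
    # radial-distortion coefficients calibrated at the four eye-box corners,
    # selected each frame by the tracked pupil position.

    def lerp(a, b, t):
        return a + (b - a) * t

    def distortion_coeffs(pupil_x, pupil_y, corner_coeffs):
        """Interpolate (k1, k2) radial coefficients calibrated at the four
        eye-box corners. pupil_x/pupil_y are normalized to [0, 1];
        corner_coeffs is [[bottom-left, bottom-right], [top-left, top-right]]."""
        (c00, c10), (c01, c11) = corner_coeffs
        bottom = tuple(lerp(a, b, pupil_x) for a, b in zip(c00, c10))
        top = tuple(lerp(a, b, pupil_x) for a, b in zip(c01, c11))
        return tuple(lerp(a, b, pupil_y) for a, b in zip(bottom, top))

    def undistort_radius(r, k1, k2):
        """Standard polynomial radial model: r' = r * (1 + k1*r^2 + k2*r^4)."""
        return r * (1 + k1 * r**2 + k2 * r**4)

    # A centered pupil blends all four calibrations equally.
    coeffs = [[(0.20, 0.05), (0.22, 0.06)], [(0.21, 0.05), (0.23, 0.07)]]
    k1, k2 = distortion_coeffs(0.5, 0.5, coeffs)
    ```

    The real problem is as hard as the comment says: the interpolated correction is only as good as the tracking latency and the density of the calibration grid.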

    • That is actually very interesting. I have noticed that I see different amounts of the periphery in the Rift when looking straight ahead versus to the sides, so it’s definitely a noticeable effect. Well, at least some people are working on eye-tracking, let’s hope they get it right :)

  • BradNewman

    “While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (movement of the eyes to converge the image of objects at varying depths)” Human eyes do two basic things: the eyes rotate in the sockets and the lenses focus. The rotation is known as vergence and is used to fuse binocular pairs of images. Focus is known as accommodation and is used to focus the images at different distances. Consumer HMD virtual cameras don’t converge but rather render looking directly forward, so as near field objects get closer, at a certain point the eyes can’t use vergence to create a fused binocular image. The light from the screen is collimated (or mostly collimated at a closer distance than infinity) by the HMD lenses, so the eye lenses are always focused at a fixed far distance; any software simulation of accommodation is simply a trick on the eyes and not optically accurate. Out of all the potential solutions FOVE could provide I think vergence is the most valuable to allow near field convergence of images. The zSpace display (which does a form of convergence) is a great example of beautiful near field rendering of small objects.

    • Ben Lang

      Hey Brad,

      Thanks for your comment. I wanted to make sure I understood correctly — are you saying that the split screen view of the Rift and others don’t function in a way that allows for vergence? Or did you mean that it breaks down only with particularly close objects?

      • brantlew

        Currently the virtual cameras are represented at the pupil position of a person looking straight ahead (zero vergence). So when people point their eyes inwards the view of the pupils and virtual cameras do not agree. With eye tracking you would move the cameras closer together as vergence increases – or put more simply you just always place the camera on the pupil at all times.
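
        A toy sketch of that idea: given the interpupillary distance and the depth of the fixation point, each virtual camera toes in by the vergence half-angle so it sits on the line of sight of its pupil. Names and conventions are illustrative, not FOVE's API:

        ```python
        # Hypothetical vergence-driven camera placement: compute the angle
        # each eye rotates inward when both eyes fixate a point ahead, and
        # apply the same yaw to the virtual cameras.

        import math

        def vergence_half_angle(ipd, fixation_depth):
            """Inward rotation of each eye (radians) when both eyes converge
            on a point `fixation_depth` metres straight ahead; `ipd` is the
            interpupillary distance in metres."""
            return math.atan2(ipd / 2, fixation_depth)

        def camera_yaw_angles(ipd, fixation_depth):
            """Yaw for the left and right virtual cameras so each looks at
            the fixation point (sign convention: positive = toward the nose
            for the left eye)."""
            half = vergence_half_angle(ipd, fixation_depth)
            return half, -half

        # 64 mm IPD, fixating 0.5 m away: each camera toes in ~3.7 degrees.
        left, right = camera_yaw_angles(0.064, 0.5)
        ```

        At far fixation depths the half-angle approaches zero, which is why the current fixed straight-ahead cameras only break down for near-field objects.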

      • BradNewman

        Both. The cameras point straight forward at all times. Your ability to converge on mid-field objects is an illusion of vergence, but once objects enter the near field the disparity between the stereo images becomes so large that the brain can’t fuse them.

  • brandon9271

    Now they can create a game to see how long you can make eye contact with a female before glancing at her breasts.. :-p

  • Jacob Pederson

    Is Foveated rendering even possible? Wouldn’t the latency in the eye-tracking (even if it is very small) make it look like the world is always gaining detail just as you look at it? I find texture pop-in and detail fade-in pretty annoying as it stands in current engines, but to have something similar happening everywhere I look would seem pretty immersion breaking.

    • Jacob Pederson

      Another thought: the prediction algorithms Oculus uses to reduce perceived latency aren’t going to work with eye tracking, because eye movement is much less predictable than head movement.

  • Dejan Kacurov

    There is also a release date for the new FOVE VR… Do you think it’s worth it? Here is what I found online and it seems legit