Just what Google has brewing in their skunkworks, we can’t say for sure, but with their most recent acquisition of Eyefluence, a company that builds eye-tracking technology for VR headsets, it seems Google is getting ever deeper into what’s largely considered ‘the next generation’ of dedicated VR hardware.

A report from Engadget published yesterday maintains that Google’s secret standalone VR headset “will integrate eye-tracking and use sensors and algorithms to map out the real-world space in front of a user.”

According to unnamed sources cited by Engadget writer Aaron Souppouris, visual processing company Movidius is providing chips to Google, and the new project, “separate from the company’s Daydream VR platform, will not require a computer or phone to power it.”


Daydream VR is a platform devised by Google that works with a select number of flagship smartphones from various manufacturers, the first of which is the Google Pixel. Along with Pixel’s unveiling earlier this month, the company also revealed the ‘View’, the first Daydream headset.

SEE ALSO
Google Goes on VR Hiring Spree Amid Daydream Launch

Engadget’s report, however, was published hours before Eyefluence quietly announced it would be joining Google for an undisclosed sum, a move first spotted by Mattermark.

In our hands-on with FOVE, currently the only purpose-built eye-tracking VR headset on the market, Executive Editor Ben Lang laid out a number of use cases where augmented and virtual reality could benefit from eye-tracking:

  • Eye-based interface and contextual feedback: Imagine a developer wants to make a monster pop out of a closet, but only when the user is actually looking at it. Or perhaps an interface menu that expands around any object you’re looking at but collapses automatically when you look away (a minimal gaze-trigger sketch follows this list).
  • Simulated depth-of-field: While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (movement of the eyes to converge the image of objects at varying depths), it cannot simulate depth of field (the blurring of out-of-focus objects) because the flat panel means that all light from the scene is coming from the same depth. If you know where the user is looking, you can simulate depth of field by applying a fake blur to the scene at appropriate depths (see the depth-of-field sketch after this list).
  • Foveated rendering: This is a rendering technique aimed at reducing the rendering workload, hopefully making it easier for applications to achieve the higher framerates that are much desired for an ideal VR experience. It works by rendering in high quality only at the very center of the user’s gaze (only a small region at the retina’s center, the fovea, picks up high detail) and rendering the surrounding areas at lower resolutions (see the shading-rate sketch after this list). If done right, foveated rendering can significantly reduce the computational workload while the scene looks nearly identical to the user. Microsoft Research has a great technical explanation of foveated rendering in which they were able to demonstrate 5-6x acceleration in rendering speed.
  • Avatar eye-mapping: This is one I almost never hear discussed, but it excites me just as much as the others. It’s amazing how much body language can be conveyed with headtracking alone, but the next step up in realism will come from mapping the player’s mouth and eye movements onto their avatar.
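To make the first of those concrete, here is a minimal gaze-trigger sketch in Python. It is only an illustration of the idea, not anyone’s actual SDK: the gaze ray values and the sphere test stand in for whatever a real eye tracker and engine would provide.

```python
# Minimal sketch of a gaze-triggered event, assuming a hypothetical eye tracker
# that reports a gaze ray (origin + unit direction) in world space each frame.
from dataclasses import dataclass
import math

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def sub(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def dot(self, o): return self.x * o.x + self.y * o.y + self.z * o.z
    def length(self): return math.sqrt(self.dot(self))

def gaze_hits_sphere(origin: Vec3, direction: Vec3, center: Vec3, radius: float) -> bool:
    """True if the gaze ray passes within `radius` of `center` (direction assumed unit length)."""
    to_center = center.sub(origin)
    t = max(to_center.dot(direction), 0.0)   # distance along the ray to the closest approach
    closest = Vec3(origin.x + direction.x * t,
                   origin.y + direction.y * t,
                   origin.z + direction.z * t)
    return center.sub(closest).length() <= radius

# Example: only release the monster when the player is actually looking at the closet.
closet_center, closet_radius = Vec3(0.0, 1.5, 3.0), 0.6           # illustrative scene values
gaze_origin, gaze_dir = Vec3(0.0, 1.6, 0.0), Vec3(0.0, 0.0, 1.0)  # stand-in for tracker output

if gaze_hits_sphere(gaze_origin, gaze_dir, closet_center, closet_radius):
    print("Player is looking at the closet: trigger the monster.")
```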
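The depth-of-field item can be sketched in the same spirit: blur each object in proportion to how far its depth sits from the depth under the user’s gaze. The falloff constant and per-object blur radii below are illustrative assumptions, not a production depth-of-field pass.

```python
# Gaze-driven depth of field, sketched per object: more blur the further an object
# is from the focal depth reported under the user's gaze.

def blur_radius(object_depth_m: float, focal_depth_m: float,
                max_blur_px: float = 8.0, falloff_px_per_m: float = 1.5) -> float:
    """Blur radius in pixels grows with distance (in meters) from the focal plane."""
    defocus = abs(object_depth_m - focal_depth_m)
    return min(max_blur_px, falloff_px_per_m * defocus)

# Example: the eye tracker says the user is fixating on something about 2 m away.
focal_depth = 2.0
for name, depth in [("menu panel", 0.8), ("table", 2.1), ("far wall", 6.0)]:
    print(f"{name}: blur ~{blur_radius(depth, focal_depth):.1f} px")
```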
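And foveated rendering, at its core, is a decision about how much detail to spend on each region of the frame based on its angular distance from the gaze point. The zone boundaries below are illustrative assumptions; shipping systems tune them per headset and lean on hardware features such as variable-rate shading.

```python
# Sketch of the foveated rendering idea: pick a resolution/shading scale for a
# screen tile based on how far it is (in degrees) from where the user is looking.
import math

def resolution_scale(angle_from_gaze_deg: float) -> float:
    """Full detail near the fovea, progressively coarser toward the periphery."""
    if angle_from_gaze_deg < 5.0:      # foveal region: render at full resolution
        return 1.0
    elif angle_from_gaze_deg < 15.0:   # near periphery: half resolution
        return 0.5
    else:                              # far periphery: quarter resolution
        return 0.25

def angle_between_deg(a, b) -> float:
    """Angle between two unit direction vectors, in degrees."""
    dot = sum(x * y for x, y in zip(a, b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

# Example: gaze straight ahead; a tile 20 degrees off-axis gets quarter-scale rendering.
gaze = (0.0, 0.0, 1.0)
tile = (math.sin(math.radians(20.0)), 0.0, math.cos(math.radians(20.0)))
print(resolution_scale(angle_between_deg(gaze, tile)))   # -> 0.25
```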



Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 4,000 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • DougP

    Really interesting article & topics.
    Glad roadtovr is keeping on top of developments so well!

    Eye tracking / Foveated rendering /etc –

    Re: “wants to make a monster pop out of a closet, but only when the user is actually looking at it” / “eye based interface”
    I understand this & see this as having usefulness in the future.
    However, with current gen design of optics, there is such a tiny “sweet spot” of focus that I don’t believe the eye is actually darting around enough to matter.
    We find ourselves in VR moving our head more to view objects & achieve focus than we do in the real world.
    At present, I suspect that just choosing the *center* of the render target screen as the assumed point of focus will achieve this.
    Once HMD optics improve & our eyes “dart around” more, this idea may have more benefit.

    Re: “Avatar eye-mapping”
    I think that this will eventually be very important.
    So right that eyes communicate SO MUCH! As well, you feel much more *engaged* with a virtual actor when eye contact is made.
    TheBlu – a simple example here is when people play this they often mention something like “the whale looked right at me!”
    The Lab (The Secret Shop) – the little creature on the ground winds up *following* the glowing orb with its eyes. I find demo’ing this experience, people are attracted to this & watching the eyes move/follow the player.
    Basically, you can communicate more information & create a deeper sense of presence when interacting in a virtual world.

    Exciting days ahead for VR!
    Very interested in seeing where Google, as well as others working on eye tracking, goes with this.

    • David Herrington

      “I don’t believe the eye is actually darting around enough to matter.”

      I think you would be surprised at how much your eyes move to see interesting parts of a scene without moving the head, regardless of the quality of the screen you are viewing. It’s just human nature.

      But you are correct, we do need better optics.

      Excited as well for the future!!

      • DougP

        The optics were the thing I was primarily thinking about: that once optics are improved (heck, even wider FOV), we’ll be more inclined to move our eyes vs moving our head to view/focus.

        I agree & understand our eyes move quite a lot, even within the current limited “sweet spot”, but when you consider the entire “render target” that’s sent to the display (1080×1200 per eye per frame), even with the eye moving there’s still a limited area (1/3 or 1/4? not certain) in the center of that frame that the eye is actually “looking at”.
        So that was my point about “eye tracking UI”, that an object would need to be fairly small/far away to matter much for eye-focus context.
        I realize that the benefit & need for this will improve greatly as:
        1) greater FOV
        2) higher res display
        3) better optics & larger area of “sweet spot” where there’s focus
        When these things come to our HMDs, eye-tracking UI advantages can be realized.

        Eye tracking is very exciting… and more so, I think it will be crucial to next-gen VR features & what we’ll one day consider a ubiquitous capability.

  • Get Schwifty!

    Exciting stuff….

  • OgreTactics

    Boom goes another vaporware wasted by VR companies.