SMI, a company that has worked in the field of gaze detection for over 20 years, hit CES this year with an application of their latest 250Hz eye tracking solution coupled with a holy grail for second generation virtual reality – foveated rendering.

SMI's (SensoMotoric Instruments) history with eye tracking is lengthy, the company having been on the cutting edge of the field for over two decades now. Up until now, however, their technologies have been used in any number of fields, from consumer research – gauging how people's eyes are drawn to particular products in a supermarket aisle – all the way to informing the optimal design of eyewear for sporting professionals.


At CES this year, SMI demonstrated their latest 250Hz eye tracking solution integrated with a VR headset. More importantly however, they demonstrated this eye tracking coupled with foveated rendering, a technique that is generally regarded as vital for next generation VR experiences.

Foveated rendering is an image rendering technique born from the way we look at and process the world around us. Although human vision has a very wide total field of view, we really only focus on a very small segment of that view at any one time. Our eyes rapidly dart from point to point, drawing information from those focal points. In the world of VR that means that, at least in theory, most of the pixels used to render the virtual world to a headset at a constant resolution are largely wasted; there's not a lot of point drawing all of those pixels at full fidelity when we only perceive a small percentage of them at any one time.


Foveated rendering aims to reduce VR rendering load by using gaze detection to tell the VR application where the user is looking, and therefore which area of the view to construct at high visual fidelity. The rest of the image, which falls into our peripheral vision, can then be drawn at progressively lower resolutions the further it is from the current focal point. The technique is largely accepted as necessary if we are to achieve an image indistinguishable from reality to the human eye – an image that requires a resolution in the region of 16K per eye for a 180 degree field of view, according to Oculus chief scientist Michael Abrash. That's a lot of potentially wasted pixels.
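To get a rough sense of the potential savings, here's a back-of-envelope calculation (my own illustrative figures, not SMI's): if only a small circular region around the gaze point needs full resolution, the fraction of pixels drawn at full fidelity is tiny.

```python
import math

def full_detail_fraction(fov_deg: float, fovea_deg: float) -> float:
    """Approximate the fraction of a square viewport covered by a
    circular full-detail region centred on the gaze point.

    Treats angular area as flat, which is fine for a rough estimate.
    """
    viewport_area = fov_deg ** 2
    fovea_area = math.pi * (fovea_deg / 2) ** 2
    return fovea_area / viewport_area

# Assumed figures: a 110-degree per-eye field of view, with full detail
# kept inside a generous 20-degree circle around the gaze point.
frac = full_detail_fraction(fov_deg=110, fovea_deg=20)
print(f"{frac:.1%} of pixels need full detail")  # roughly 2.6%
```

Even with a far more conservative full-detail region than this, the share of pixels that must be rendered at maximum fidelity stays in the single digits.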

I met with SMI's Christian Villwock, Director of OEM Solutions, who showed me their latest technology integrated into a modified Oculus Rift DK2. SMI had replaced the lens assembly inside the headset with a custom one incorporating the tech needed to see where you're looking. (We'll have a deep dive on exactly how this works at a later date.)

Firstly, Villwock showed me SMI's eye tracking solution, demonstrating its speed and accuracy. After calibration (a simple 'look at the circles' procedure), your profile information is stored for any future application use, so this is a one-time requirement.

The first demo comprised a scene with piles of wooden boxes in front of you. A red dot highlights your current gaze point, with individual boxes highlighting when looked at. This was very quick and extremely accurate; I could easily target and follow the edges of the boxes in question with precision. The fun bit? Once you have a box highlighted, hitting the right gamepad trigger causes that box to explode high into the air. What impressed though was that, as the boxes rose higher, I could almost unconsciously and near-instantly target them again and continue the same trick, blasting the box higher into the air. The system was so accurate that, even when a box was a mere few pixels across at hundreds of feet in the air, I was still able to hit it as a target and continue blasting it higher. Seriously impressive stuff.



The best was yet to come though, as Villwock moved me on to their pièce de résistance: foveated rendering. Integrated into the by now well-worn Tuscany tech demo from the Oculus SDK, SMI's version renders defined portions of the scene at varying levels of detail, arranged as concentric circles around the current gaze point. Think of it like an archery target, with the bullseye representing your focal point, rendered at 100% detail, the next ring at 60% detail, and the final ring at 20% detail.
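The archery-target scheme can be sketched in a few lines. The angular radii below are illustrative guesses of my own – the article only gives the three detail percentages, not the ring sizes SMI actually uses.

```python
import math

# Illustrative (radius in degrees from gaze point, detail fraction) tiers.
# The 100% / 60% / 20% detail levels follow the demo described above;
# the angular radii are assumed for the sake of the sketch.
TIERS = [(10.0, 1.0), (25.0, 0.6), (float("inf"), 0.2)]

def detail_level(pixel_deg: tuple[float, float],
                 gaze_deg: tuple[float, float]) -> float:
    """Return the render-detail fraction for a pixel, given the current
    gaze point (both expressed in degrees of visual angle)."""
    dist = math.hypot(pixel_deg[0] - gaze_deg[0],
                      pixel_deg[1] - gaze_deg[1])
    for radius, detail in TIERS:
        if dist <= radius:
            return detail
    return TIERS[-1][1]

# A pixel right at the gaze point renders at full detail...
assert detail_level((0.0, 0.0), (0.0, 0.0)) == 1.0
# ...while one 40 degrees out in the periphery gets 20%.
assert detail_level((40.0, 0.0), (0.0, 0.0)) == 0.2
```

In a real renderer this mapping would drive per-region resolution or shading rate rather than being evaluated per pixel on the CPU, but the bullseye logic is the same.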

There were a couple of questions that I had going into this demo.

One: Is the eye tracking to foveated rendering pipeline quick enough to track my eye, shifting that bullseye and its concentric circles of lower detail fast enough for me not to notice? The answer is 'yes', it is – and it really does work well.

Two: Can I detect when foveated rendering is on or off? The answer is 'yes', but it's something you really need to look for (or, as it happens, look away for). With SMI's current system, the lower detail portions of the image sit in your peripheral vision, and for me they caused a slight shimmering at the very edge of my vision. Bear in mind however that this is entirely related to the field of view of the image itself and how aggressively that outer region is reduced in detail; i.e. it's probably a solvable issue, and one that may not even be noticed by many – especially during a more action-packed VR experience.


The one thing I could not gauge is of course the very thing this technology is designed to address – how much performance is gained with foveated rendering engaged versus rendering 100% of the pixels at full fidelity. That will have to wait for another time, but it of course can't be ignored – so I wanted to be clear about it.

So, much to my surprise, foveated rendering looks to already be within the grasp of second generation headsets. Villwock told me that they're discussing implementations with hardware vendors right now. It does seem clear that eye tracking is a must for the second generation of VR headsets if we ever hope to reach resolutions at which display artefacts become imperceptible. SMI seem to have a solution that works right now, which puts them in a strong position as R&D ramps up for VR's next generation a couple of years from now.


  • Sebastien Mathieu

    our freshly pre-ordered almost acquired CV1 are already obsolete……

    • sintheticreality2

      I have no issues at all being an early adopter. In 2-3 years I’ll sell my CV1 and buy CV2.

    • JoeD

      every fresh piece of tech is already obsolete the minute you buy it. That’s the nature of technology. Things move much too fast to have the best at any given time.

    • polysix

      cv1 is already obsolete even without new advances thanks to sticking with a crappy gamepad and seated experience (welcome to 2 years ago – DK2 owner here bored of that now). Vive at least is starting down the right path from DAY 1.

      Anyway, yes can’t wait for eye tracking and wireless in gen 2/3 (along with more FOV and high res of course)

      • yag

        Actually Valve is going too fast; we need wireless headsets before even considering roomscale, or it will be only a gimmick.

  • Tom VR

    Incredible. Wonder if it tracks your blinking, and if it does, can blinking be used as a tool in-game…

    • sintheticreality2

      Imagine horror games that know when your eyes are closed because you’re too afraid to look? NPCs could taunt you and then when you open your eyes, BAM! there could be some monster in your face.

  • VR Sverige

    Yeah, just love this tech. I saw an early prototype 2 years ago from Tobii and think/hope they are pretty much on the same path as SMI. They were very secretive when asked about future tech at that time.

  • Jeri Haapavuo

    FOVE has done this already. Google it.

    • Thiago Braga

      Has FOVE already demonstrated foveated rendering? I thought it was more like a depth thing, like crossing eyes for close objects.

      • user

        i think they said something about aiming a gun with your eyes. they didn't say anything about controlling the camera with the eyes. i guess that's because their eye tracking isn't fast enough.

        • Thiago Braga

          Correct me if I'm wrong, but foveated rendering does not control camera settings, only rendering specifications. Yes, the device must be pretty fast to follow saccades (if they follow them). According to a quick search (not sure if correct!), it must reach a speed of 900Hz to follow saccades of one degree at full speed (angular velocity of 900°/s).

          Edit: I’m aware of the 120hz limit of the perception in changes but not sure if just unconscious movement is made/impacts the subject

      • Jeri Haapavuo

        From FOVE website: “FOVE can focus representation based on realistic depth of field, adding foveated rendering, and even allow users to make eye contact with virtual characters, which greatly enhance the power of expression in VR.”

        • Thiago Braga

          In one part it says it's a work in progress, in another it's likely done (your citation)… I'm a bit confused. Also there's no video in the foveated rendering section, only a "what it would likely be" video. Maybe the site is missing some updates – could you link me to the proof (demonstration)? Thanks a lot!!

    • yag

      No foveated rendering with the FOVE.

  • hobel

    I don’t think it’s written “FOVeated” with capital letters – “fovea” is the medical term for the part of the retina with the highest resolution, not related to FOV (field of view).

    Precise eye tracking might become a big deal for social experiences using avatars (for eye contact)

  • francoislaberge

    Is foveated rendering really the best use of eye tracking? What about controlling the virtual directions of your eyes for proper convergence when focusing on closer things, did they have demos of that too?

    • Thiago Braga

      The problem is that you are not used to controlling things with your eye movement, like moving direction/speed. How would you do that? Would you feel sick? Stop by blinking? How would you go backwards? Blink one eye? Wouldn't you be unable to walk in one direction and look in another? The use in the article is damn fine on the horizon, as much as they can optimize the algorithm. You could probably use the data acquired by this system to project an avatar's eye movement just like FOVE too. AFAIK the eye is an input peripheral to the brain and not output (aside from expressions, but those are made w/ surroundings). Pupil dilation could be measured to monitor levels of anxiety, adrenaline or stress of the user, btw.

      • francoislaberge

        Sorry, I meant controlling the cameras so the eyes of the virtual head representing you aren't set at a fixed gaze angle, but instead match your eyes.

        Yes, yes, and yes to reading subtle reactions of pupils. There is already some interesting work on guessing facial expressions just from what an eye tracker can see of your eyes and surrounding skin shape.

        • francoislaberge

          For example currently if you stare at a butterfly sitting on a flower just inches from your virtual face you will see double vision. In reality you would have crossed your eyes to make the butterfly become focused. The headset could make the virtual cameras show you converged images too if it took your direction into consideration.

          • pixelblot

            Yeah, a subtle lens shift would be cool. The mechanism for that would add bulk to the headset though. When we have direct light projection to the eye, they should be able to adjust the angle more easily. One day!

        • Thiago Braga

          I apologize for the wrong answer, I misunderstood your question completely. If was stack I would be already f**ed lol. What you are saying I believe FOVE **is trying** to do (this and more).

          • francoislaberge

            Hahahah. Ok, good.

    • brandon9271

      This is a stretch, but they could also move the lenses with servo motors so that the focal length would match the distance of the object you are focusing on. However, it would be complicated and probably not worth the trouble. Nevertheless I'd be interested to see somebody experiment with it.

  • yag

    250Hz is a great improvement compared to the current cheap eye trackers @60Hz.
    But I always read that you need at least 1000Hz to do foveated rendering?…

  • Jonny Cook

    Question for the author of this article: Do you know if the Tuscany demo had auto-aiming enabled? You mentioned that you were able to hit the barrel even when it was only represented by a few pixels on the screen. Is it possible that this was due to some sort of auto-aiming mechanism, and not a good indication of the technology's accuracy?

    • He says he could very accurately follow the edges of the boxes with the red dot representing his gaze. That doesn’t sound like auto aim.

      • Thiago Braga

        If he had auto aim: instantly buzz killington :|

  • Matthew Lynch

    I think we will see this tech in a consumer HMD this year. It might take 2-3 years to be in every HMD, but no more as it makes such a big difference in so many areas. VR needs it to the point that it will be a standard feature by the end of this year, CV2 will have it I bet.

    • Andrew Jakobs

      Don't count on it. This year you'll only see the Vive, the CV1 and the PSVR, and maybe some Chinese knockoffs, but no other real consumer headsets.

  • bar10dr

    What if you move your eye during a frame render? For me this only makes sense with a super high frame rate.

    • pixelblot

      That's like saying "what if you move your head during a frame render". Obviously it is a low latency system – 240Hz, dude. Our monitors today refresh at 60-76Hz standard, 100-120Hz high end, 140Hz top of the line.

  • T.O.

    I'm not sold on the implications of this. Peripheral vision is sometimes necessary for watching multiple entry points in Counter-Strike, for example (obviously not the target for VR, but still). VR is obviously a great tool and has a lot of developmental practicality, but stuff becomes genuinely hard to run when it's interactable. This technology is not necessary unless the games are hard to run, which means it's targeted towards very detailed games, even though that market is the one that uses peripheral view the most. – "especially during a more action-packed VR experience." I definitely have a problem with this statement. It seems disingenuous when talking about first person mediums.

    • pixelblot

      Huge implications actually. When you only have to render 10-15% of your (tracked) viewing area at full resolution and the subsequent outer edges at lower resolution, you gain so much more performance. Better quality games, more framerate = win, not to mention direct eye contact interactivity.