
Sadly, there were no VR-related surprises from the big console manufacturers at E3 this year. However, Microsoft’s Kinect 2.0 could become a key virtual reality input device thanks to a rich set of features and substantially improved performance.

Back when Kinect launched in 2010, Microsoft positioned it as a high-tech input device that would revolutionize the way we play games. Unfortunately, it fell well short of that lofty goal. Part of the problem was that the Kinect didn’t have the technical chops to deliver the experience Microsoft was touting.

James Iliff, producer of Project Holodeck, told me back in 2012 that his team was attempting to use a quad-Kinect setup for skeletal avatar tracking, but had to abandon it for performance reasons.

“…the Kinect hardware is extremely lacking in fidelity. Every point the Kinect tracks is filled with unmanageable jitter, rendering the data useless for anything other than the most simple of interactions. We tried very hard to get around this with several software algorithms we wrote, to get multiple Kinects to communicate with each other, however this did not really make anything more accurate unfortunately.”

While the original Kinect might not have been up to the task, the Kinect 2.0 might finally deliver on the technical promise that Microsoft made.

Wired has a great preview of the Kinect 2.0 showing off its impressive capabilities:

Thanks to greatly increased fidelity, the Kinect 2.0 can do much more than its predecessor. Here’s a short list of some of the things the unit can sense and tools that developers can make use of (a sketch of how a game might consume this data follows the list):

  • 1080p color camera
  • Active IR for light filtering
  • Improved skeletal tracking
    • Joint rotation
    • Muscle/force
  • Expressions Platform:
    • Heart rate estimation
    • Expression
    • Engagement
    • Looking Away (yes/no)
    • Talking
    • Mouth Moved
    • Mouth (open/closed)
    • Glasses (yes/no)
    • Independent right/left eye (open/closed)
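
To make the list more concrete, here is a minimal Python sketch of how a game might react to this kind of per-player data. The ExpressionResult class and the callback below are hypothetical stand-ins (the real Kinect for Windows SDK exposes this data through its own C++/C# interfaces); they only illustrate the sort of logic the Expressions Platform could enable.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExpressionResult:
    """Hypothetical per-player result mirroring the feature list above."""
    heart_rate_bpm: Optional[float]  # estimate; may be unavailable
    engaged: bool
    looking_away: bool
    talking: bool
    mouth_open: bool
    wearing_glasses: bool
    left_eye_open: bool
    right_eye_open: bool

def on_expressions_frame(result: ExpressionResult) -> None:
    """Example game logic driven by expression data (illustrative only)."""
    if result.looking_away:
        pause_game()                 # don't punish a player who isn't watching
    if result.talking:
        duck_music_volume()          # make the player's voice easier to hear
    if result.heart_rate_bpm is not None and result.heart_rate_bpm > 110:
        ease_difficulty()            # back off if the player seems stressed

# Stubs so the sketch runs standalone:
def pause_game():        print("pausing")
def duck_music_volume(): print("ducking music")
def ease_difficulty():   print("easing difficulty")

if __name__ == "__main__":
    on_expressions_frame(ExpressionResult(118.0, True, False, True, False, False, True, True))
```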

Limitations of Original Kinect for Virtual Reality


For virtual reality, putting the player inside the game (known as ‘avatar embodiment’) is key, and full avatar embodiment can’t be achieved without proper skeletal tracking. The original Kinect’s low fidelity makes reliable skeletal tracking difficult.

Oliver Kreylos is a PhD virtual reality researcher who works at the Institute for Data Analysis and Visualization, and the W.M. Keck Center for Active Visualization, at the University of California, Davis. He maintains a blog on his VR research at Doc-Ok.org.

Kreylos, who claims to own six Kinects (and has done his fair share of hacking), talks about the limitations of the original Kinect in a recent post on his blog.

The bottom line is that in Kinect1, the depth camera’s nominal resolution is a poor indicator of its effective resolution. Roughly estimating, only around 1 in every 20 pixels has a real depth measurement in typical situations. This is the reason Kinect1 has trouble detecting small objects, such as finger tips pointing directly at the camera. There’s a good chance a small object will fall entirely between light dots, and therefore not contribute anything to the final depth image.

Other systems capable of skeletal tracking are cost-prohibitive. Currently, many Oculus Rift/VR developers track hands with the Razer Hydra, but that’s a long way from full skeletal tracking.

Kinect 2.0 for Virtual Reality Input

From what we’ve seen so far, the Kinect 2.0 has the fidelity to provide proper skeletal tracking. In the Wired video (embedded above), the Kinect 2.0 could easily pick out the wrinkles on the player’s shirt (and even the buttons, when close enough). Microsoft has been quiet about many of the Kinect 2.0’s technical specifications, but the company claims to be using “proprietary Time-of-Flight technology, which measures the time it takes individual photons to rebound off an object or person to create unprecedented accuracy and precision.”
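
Microsoft hasn’t published specifics, but the basic time-of-flight relationship is simple: depth is half the distance light covers on its round trip to the object and back. A quick back-of-envelope sketch in Python (an illustration of the principle, not a description of the actual sensor) shows why the timing involved is so demanding:

```python
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Depth = (speed of light * round-trip time) / 2."""
    return C * t_seconds / 2.0

# Light bouncing off an object 2 m away returns in roughly 13 nanoseconds...
print(2 * 2.0 / C)                     # ~1.33e-08 s

# ...and a 1 mm change in depth shifts the round trip by only ~6.7 picoseconds,
# which is why ToF cameras typically measure phase shifts of modulated light
# over many samples rather than timing individual photons.
print(2 * 0.001 / C)                   # ~6.67e-12 s

print(depth_from_round_trip(13.3e-9))  # ~1.99 m
```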


Kreylos considers the implications:

In a time-of-flight depth camera, the depth camera is a real camera (with a single real lens), with every pixel containing a real depth measurement. This means that, while the nominal resolution of Kinect2’s depth camera is lower than Kinect1’s, its effective resolution is likely much higher, potentially by a factor of ten or so. Time-of-flight depth cameras have their own set of issues, so I’ll have to hold off on making an absolute statement until I can test a Kinect2, but I am expecting much more detailed depth images…
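
To put rough numbers on that comparison: using the commonly cited depth-map resolutions (640×480 for the Kinect1, 512×424 for the Kinect2) and Kreylos’s “around 1 in every 20 pixels” estimate from the earlier quote, the back-of-envelope arithmetic looks like this (treat the figures as estimates, not official specs):

```python
# Back-of-envelope only: resolutions are the commonly cited depth-map sizes,
# and the 1-in-20 figure is Kreylos's rough estimate for the Kinect1.
kinect1_nominal   = 640 * 480          # structured-light depth map
kinect1_effective = kinect1_nominal / 20

kinect2_nominal   = 512 * 424          # time-of-flight depth map
kinect2_effective = kinect2_nominal    # every pixel carries a real measurement

print(kinect1_nominal, kinect1_effective)      # 307200 15360.0
print(kinect2_nominal)                         # 217088
print(kinect2_effective / kinect1_effective)   # ~14x more usable depth samples
```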

The ‘Expressions Platform’, which analyzes each player in the scene, should be very useful for developers. Heart rate estimation is particularly intriguing: imagine knowing the heart rate of the player in a horror game and changing the tempo of the music accordingly. The possibilities are plentiful!
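
As a toy illustration of that idea (purely hypothetical game code, not anything Microsoft or a developer has actually shown), a horror game could map the estimated heart rate onto the soundtrack’s tempo like this:

```python
def music_tempo_for_heart_rate(bpm_estimate: float,
                               resting_bpm: float = 70.0,
                               base_tempo: float = 90.0,
                               max_tempo: float = 150.0) -> float:
    """Scale soundtrack tempo with how far the player's heart rate sits above resting.

    Purely illustrative: clamp the "excitement" level to [0, 1] and interpolate
    linearly between a base and a maximum tempo.
    """
    excitement = (bpm_estimate - resting_bpm) / 60.0   # ~1.0 at resting + 60 bpm
    excitement = max(0.0, min(1.0, excitement))
    return base_tempo + excitement * (max_tempo - base_tempo)

print(music_tempo_for_heart_rate(72))    # ~92 BPM, calm
print(music_tempo_for_heart_rate(120))   # ~140 BPM, panicking
```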

One unfortunate road bump for the Expressions Platform is that it won’t work if the player’s face is obscured. Much of the information comes from the player’s facial expressions (even the heart rate estimation is done by looking at micro-fluctuations of the skin on the player’s face). This means it’ll work well for CAVEs and some other systems, but not for head-mounted displays like the Oculus Rift.
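
Microsoft hasn’t said exactly how its estimate works, but the general technique (known as remote photoplethysmography) looks for the tiny periodic brightness change in skin as blood volume rises and falls with each heartbeat. Here is a rough sketch of that kind of signal processing, assuming you already have a per-frame average green value for a patch of the player’s cheek or forehead; it is a generic illustration, not Microsoft’s algorithm:

```python
import numpy as np

def estimate_heart_rate(green_means: np.ndarray, fps: float = 30.0) -> float:
    """Estimate pulse (BPM) from per-frame mean green intensity of a skin patch.

    Rough sketch of remote photoplethysmography: remove the mean, take the
    spectrum, and pick the strongest peak in the plausible heart-rate band
    (0.7-3.0 Hz, i.e. 42-180 BPM).
    """
    signal = green_means - green_means.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic test: a 1.2 Hz (72 BPM) pulse buried in noise, 10 seconds at 30 fps.
t = np.arange(0, 10, 1 / 30.0)
fake_signal = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.2, t.size)
print(estimate_heart_rate(fake_signal))   # ~72
```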

Even without the Expressions Platform, high-fidelity, lag-free skeletal tracking at a commodity price will allow players to fully embody their avatars, raising the bar of immersion that much more.
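
Mechanically, embodying an avatar mostly means copying the tracked skeleton onto the in-game character’s rig every frame. A heavily simplified sketch follows; the joint names, quaternion format, and rig interface are hypothetical, not the Kinect SDK’s actual types:

```python
# Simplified retargeting: drive the avatar's bones with tracked joint rotations.
# Joint names, quaternion format, and the rig interface are all hypothetical.
TRACKED_TO_RIG = {
    "ShoulderLeft":  "upper_arm_L",
    "ElbowLeft":     "forearm_L",
    "ShoulderRight": "upper_arm_R",
    "ElbowRight":    "forearm_R",
    "HipLeft":       "thigh_L",
    "KneeLeft":      "shin_L",
    "HipRight":      "thigh_R",
    "KneeRight":     "shin_R",
}

def apply_skeleton_to_avatar(tracked_rotations: dict, rig) -> None:
    """Per-frame update: copy each tracked joint rotation to its avatar bone."""
    for joint, bone in TRACKED_TO_RIG.items():
        rotation = tracked_rotations.get(joint)
        if rotation is not None:       # skip joints the sensor lost this frame
            rig.set_bone_rotation(bone, rotation)

class PrintRig:
    """Stub rig that just logs what a real engine would apply."""
    def set_bone_rotation(self, bone, rotation):
        print(f"{bone} <- {rotation}")

apply_skeleton_to_avatar({"ElbowLeft": (0.0, 0.0, 0.31, 0.95)}, PrintRig())  # (x, y, z, w)
```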

Kinect 2.0 for Windows Developer Kit Program Accepting Applications

For the developers among you, Microsoft has announced a developer kit program to get your hands on the Kinect 2.0 for Windows and the upcoming SDK. You have until July 31st to apply. $399 gets you pre-release Kinect 2.0 for Windows hardware in November, along with a final version of the hardware once it’s released.




  • seanlumly

    Kinect for VR would be about as useful as a Razer Hydra with no buttons. Think about it: regardless of the fidelity of the tracking, how are you supposed to move through the game world? With gestures? What about manipulating objects? Also with gestures? What about shooting guns? More gestures still?

    Outside of skeletal tracking, Kinect has yet to prove itself for serious games. In fact, if you look through the library of XB games that use the Kinect, the best examples are mobile-grade casual titles with better visuals.

    Sony’s PlayStation Eye has it bang on, and would be PERFECT for VR. First, it is *extremely* accurate, with true one-to-one mapping as well as object orientation. Second, it has buttons. Buttons. This is the point that cannot be overstated. Not only can you aim and fire, but you can also move and manipulate objects. Lastly, there is a rich variety of accessories turning the Eye controller into a pistol, a rifle, a steering wheel, or wearable objects.

    It should be noted that skeletal tracking is also possible with the Eye’s stereo camera, and Sony has mentioned that this is a capability of the system. Though I doubt the tracking would compare to the Kinect 2, I would imagine it would be more than enough for in-world VR interaction. Facial tracking is possible with today’s smartphones, so it should be short work for a PS4. Even determining pulse is a computer vision problem and can be done with a standard camera.

    For Kinect to have a useful place in VR, I suspect it needs an object that can be manipulated in meat-space.

    Oh, and there is certainly lag with the new Kinect. This comes from many things, including the frame rate of capture, the user-display latency, and the time it takes to route and process the results. Take a look at any YouTube video where the Kinect is being filmed off a screen with a user in the same frame, and notice that lag indeed exists. Such is the case with all inputs. The only way to get rid of this (today) is to use prediction based on temporal data.

  • Morgan

    While I think it’s reasonable to be skeptical of the Kinect 2 for latency issues, there’s still tons of stuff a 3D depth camera can do – how about making an accurate map of the surfaces of your room so you can interact with it in VR? Or taking a daily snapshot of you to match with your avatar’s clothing? Voice commands and broad gestural stuff, too. I can’t see it as the primary interface, at least not from the demos we’ve seen so far, but as a component in the larger VR ecosystem, I’m sure something cool can be done with it.

  • seanlumly

    I also think there’s a ton of stuff the Kinect 2 can do, though as you suggest, those might be best suited for research and fun hacking projects rather than a key part of an actual game. As an addition that can make a game more immersive, sure. But I am very sceptical about it being a key part of a VR system due to its omission of basic controls.

    Now if you couple the Kinect 2 with a one-handed wireless controller, it could be a very potent mix.

  • eyeandeye

    I’m more excited for the wireless Sixense device formerly known as the Hydra, especially if it has increased range and if that STEM device is for providing positional tracking for the Rift.

  • kevin williams

    I know that many want to paint the Gen-8 consoles as the big opportunity for VR – some sources have even attempted to claim that Oculus Rift HMD support is around the corner with the XBone and PS4 – but with the recent admission of the true performance of these platforms, those claims have lost their luster.

    Now we see certain sources (unknown where their alliances lie) claim that the KINECT 2.0 is “perfect” to run the Rift and VR. Let’s deal with some of the facts: the original KINECT depth-sensing technology was invented in 2005 by another company whose assets were acquired by Microsoft and folded into what became “Project Natal”. This 3-D sensing tech had been used in various VR research before the acquisition (and some arcade games!), but the majority of new development had moved from visual to magnetic positional tracking.

    The KINECT 2.0 will offer a great opportunity for motion tracking and gesture recognition experiences on the XBone – but as for dedicated VR applications on the XBone, the software SDK for this new platform is too draconian for most homebrew. The old KINECT 1.1 offers a great homebrew environment able to run on a PC, but is limited in performance. The 2.0 also has some limitations – but more importantly, it is an XBone-only architecture with those restrictions.

    I know this has been a bit of a slow month for VR news, but bigging-up the XBone, and indirectly the KINECT, could send out a mixed message and start a “claims” battle between Sony and MS fanboys concerned by the lackluster interest in their respective new consoles!

  • Psuedonymous

    “…but the majority of new development had moved from visual to magnetic positional tracking.”

    Utter nonsense. Almost ALL tracking research for the last decade has been purely visual (high-speed structured light, compact LIDAR, time-of-flight threshold and time-of-flight pulse-LIDAR, etc.), with some very recent advances in wideband RF tracking. There have been NO significant changes to magnetic positional tracking for a good 3 decades. Sixense are miniaturising and commoditising existing technology using smaller, cheaper signal electronics (off the back of mobile phone development), but aren’t using any new techniques or technologies. Magnetic tracking technology is mature, stagnant, and ripe for miniaturisation for the consumer market.

    I’m hoping that, beyond just being a wireless Hydra with two extra buttons and a buttonless head tracker, Sixense are working on a semi-automated calibration system to compensate for the local magnetic field shaping caused by metal objects near the tracking space. This is one area where the current Hydra is deficient.

  • kevin williams

    @Psuedonymous
    The name of this page and thread is RoadToVR – we are discussing VR matters, and the majority of VR presence representation (motion tracking) is done using wireless, all-attitude 6-Degree-Of-Freedom (6DOF) tracking systems (based on electromagnetic technology and conjoined GPS MEMS motion-processing gyros) [a good example being the INVENSENSE triple-axis gyro chip] – not such nonsense, but a lack of knowledge on your part, methinks.

    Systems such as Polhemus are leaders in the electromagnetic approach (Polhemus is also the maker of one of the remaining “visual” tracking systems), while 3D laser scanning is gathering strong accreditation. Logitech and their 3D 6DOF input devices using ultrasonic trackers are another approach, while the aforementioned INVENSENSE chip (used in phones and new tracking systems) is growing in dominance. Then there is the Sixense Entertainment Razer Hydra, a magnetic-field positional tracker (and others based around gyroscopes and microelectromechanical systems (MEMS)).

    A little knowledge is a dangerous thing, and I feel it’s best to have an open mind to people’s opinions rather than trying to force your own on us.

  • VAL

    The Kinect 2.0 will have amazing control features for gaming – yes walking around, yes interacting with the environment, and yes, they’ve even talked about shooting games which could use your own hand or a “toy gun” you could hold. I’m very excited for what they’ll be able to do alongside the Oculus Rift. For now, though, it does seem that the Razer Hydra, or even better the wireless Sixense, will be the way to go until the Kinect 2.0 can be perfected (which I believe it will be) to better suit our gaming needs.