This Co-op HTC Vive Game Uses Kinect to Project Players into VR


This experimental game not only lets two players, each immersed in VR via an HTC Vive headset, share the same virtual space; it also projects their real-world likenesses into the application. And it all works over an ordinary Internet connection.

The surface of possibilities afforded by single-player virtual reality experiences has barely been scratched, and yet developers are already working on ways we can all share the same virtual space – even when we're miles apart.

When Jasper Brekelmans and collaborator Jeroen de Mooij got access to a set of HTC Vive gear, they wanted to build toward an application addressing their belief that "the Virtual Reality revolution needs a social element in order to really take off". So they set out to create a multiplayer, cooperative virtual reality game that not only allows a physical space to be shared and mapped into VR, but also projects the players' bodies into it. The result is a cool-looking shared experience, only possible in VR.


The video above shows the two developers working within the same physical space, with a point cloud of each person's body – captured by a Microsoft Kinect for Windows depth camera – rendered in its correct position in VR. The benefit is that your in-game compatriot can see where you are while you're blind to the real world, which is especially handy when you're sharing the same physical space.
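The core of this kind of capture is back-projecting each depth pixel into 3D using the camera's pinhole model. The sketch below is a minimal illustration of that step, not the developers' actual pipeline; the intrinsics are rough ballpark values for a Kinect v2-class sensor, assumed for the example.

```python
import numpy as np

# Illustrative back-projection of a Kinect-style depth frame into a point
# cloud. The intrinsics below are rough Kinect v2 ballpark values, not an
# actual calibration.
FX, FY = 366.0, 366.0        # focal lengths in pixels (assumed)
CX, CY = 256.0, 212.0        # principal point (assumed)

def depth_to_point_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """Convert an HxW depth image (millimetres) to an Nx3 array of
    camera-space points in metres, dropping invalid (zero) pixels."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0          # mm -> metres
    x = (u - CX) * z / FX                             # pinhole model
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                   # keep valid depths

# A synthetic 424x512 frame at a constant 2 m yields one point per pixel.
cloud = depth_to_point_cloud(np.full((424, 512), 2000, dtype=np.uint16))
```

In a real pipeline the cloud would then be transformed by the Kinect-to-Vive calibration so the points land in the tracked play space, and background pixels would be culled before rendering.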


However, the real power of the technique comes when the two players are physically separated. The team purposely built the system to be networkable, estimating that streaming each player's projected point cloud data should be possible over an Internet connection with around 5 Mb/s of bandwidth – pretty impressive.
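A quick back-of-envelope check makes that 5 Mb/s figure plausible. All the parameters below are assumptions for illustration (the article doesn't state point counts, quantization, or frame rates), but they show the rough scale involved.

```python
# Back-of-envelope check of the article's ~5 Mb/s figure. All parameters
# here are assumptions for illustration, not the developers' actual numbers.
POINTS_PER_FRAME = 10_000      # body-only cloud after background removal
BYTES_PER_POINT = 6            # x, y, z quantized to 16 bits each
FPS = 30                       # Kinect depth stream rate

raw_bits_per_sec = POINTS_PER_FRAME * BYTES_PER_POINT * 8 * FPS
raw_mbps = raw_bits_per_sec / 1_000_000
print(f"raw stream: {raw_mbps:.1f} Mb/s")            # 14.4 Mb/s

# Roughly 3x compression (or coarser quantization / a lower point count)
# would bring this under the quoted 5 Mb/s budget.
needed_ratio = raw_mbps / 5.0
print(f"compression needed: {needed_ratio:.1f}x")    # ~2.9x
```

In other words, a body-only cloud is orders of magnitude lighter than raw video, and modest compression or quantization would fit it in a typical home uplink.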

See Also: New Oculus Touch ‘Toybox’ Videos Show Gestures, Sock Puppets, Shrink Rays, and More

There is something bizarrely fun about simply messing around in virtual spaces, and adding another person to the mix – as the recent Oculus Touch Toybox videos demonstrated so well – turns everything up to 11. The possibility of geographically separated families, for example, sharing a multiplayer space to spend time together seems particularly compelling.

Whether this experiment will evolve into a full-blown project isn't clear, but the concept is certainly one worth developing, in my opinion.


  • Steve B

    Very cool. Latency would be a serious issue running through the internet. Also, we’re going to need more defined 3D point cloud generation if this is going to be in the consumer market.

    • Janosch Obenauer

      True, but latency of the user's own actions is what matters most here. Playing a round of Battlefield, for instance, can feel latency-free. Lower ping is always desirable, but latency of, say, 35 ms to the server shouldn't be too bad, as long as the latency of your head and hand tracking is still very low. Motion-to-photon latency is the one that needs to be super-low. So I think this is more realistic than, for example, a telepresence robot.

  • Rémi Rousseau

    Very cool: room-scale is made for collaborative experiences (local or remote!). As soon as we got a Vive, we tried our remote telepresence app and it was very cool; the tracking makes a ton of difference.

    • user

      why share all the video data when you can share tracked facial-expression data and apply it to an avatar that looks like you?

      • Rémi Rousseau

        That’s definitely an interesting lead. Research is going in that direction, but you need some very good conditions and it’s not real-time (yet!)

        • user

          Thanks for the link

        • Brekel

          Well actually I come from a professional motion capture background and we have streamed tracked body & facial data before.

          Note that tracking faces with current gen VR helmets is another unsolved problem in itself :)

          We noticed that a rendered avatar feels very disconnected from reality, at most it only somewhat looks like you and the other user.

          Also tracking with Kinect has occlusion glitches and generally needs smoothing which adds lag, a pointcloud is less laggy.

          Other tracking methods mean you have to wear special cumbersome suits, sensors can shift and data can be susceptible to magnetic interference.

          With a pointcloud/mesh you do get artifacts and gaps but your brain perceives it as more acceptable since it has texture, lighting, clothing folds and is all in general more recognizable and personal.

          I think there is definitely a place for both scenarios in the future though.

  • Full Name

    cool, but the music???wtf

    • Brekel

      Hehehehe, fully agree.

      Any tips for royalty free music for youtube? :)

      • Derfmi

        Tons of high quality, royalty free music. Awesome resource

        • Brekel

          Thanks for the tip!
          Actually I kinda used up most of the good songs from that site on my other Youtube clips :)

      • Full Name

        hehe, not “free royalty free”, sorry :) I use something called videoblocks for footage, which is good, and they have a sister site called audioblocks with royalty-free music, but you have to pay a membership at 99/yr. I haven’t tried it, so I don’t know how good it is, but their videoblocks and graphicstock sites are good (same concept, but video clips and graphics/clipart etc).

        • Brekel

          Thanks for the tip, will definitely check that out!
          Oh I don’t mind paying (at all), I just meant licensed so it can be used for things like YouTube. (not sure what the exact term is for that)

  • 刘嘉信

    I suppose the Kinect and HTC Vive would interfere with each other.