At CES 2014, SoftKinetic, the company whose sensors power the Creative Depth Camera, was showing off a pretty cool Oculus Rift demo that used the depth camera mounted to the headset. The demo lets users bring their own hands into the virtual world and build basic structures. The camera mount can be 3D printed.

Getting hands into a virtual world is a huge step up in immersion over keyboard and mouse. The most widespread implementation of virtual hands that we’ve seen so far comes from Oculus Rift demos that use the Razer Hydra motion controller for hand input. And while this adds significantly to intuitive control and immersion, it still shows the user mere avatar hands and relies on unnatural button presses to articulate those hands. The next step up is showing users their own hands and fingers, not those of an avatar, and letting them use their hands just as they would in real life to reach out and grasp objects within the world.

And that’s exactly what SoftKinetic’s technology is enabling. Take a Creative Depth Camera, which is powered by SoftKinetic’s sensors, mount it to the Oculus Rift, and you can get your own hands and fingers into the virtual world.
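The general idea can be illustrated with a few lines of code. This is only a minimal sketch of how a head-mounted depth camera can isolate the user's hands (segment everything within arm's reach and composite it over the rendered scene); it is not SoftKinetic's actual pipeline, and the depth range and frame sizes here are hypothetical. A synthetic depth frame stands in for real camera input:

```python
import numpy as np

# Synthetic 240x320 depth frame in millimeters: a far background (~2000 mm)
# with a "hand" blob at arm's length (~450 mm). A real application would read
# frames from the depth camera's SDK instead.
depth = np.full((240, 320), 2000, dtype=np.uint16)
depth[80:160, 120:200] = 450

# Segment anything within a hypothetical arm's-reach band (150-700 mm).
# Depth 0 typically means "no reading" on time-of-flight sensors, so the
# lower bound also excludes invalid pixels.
hand_mask = (depth > 150) & (depth < 700)

# Composite: wherever the mask is set, show the camera's view of the user's
# real hands on top of the rendered VR frame (flat placeholder colors here).
scene = np.zeros((240, 320, 3), dtype=np.uint8)           # rendered VR frame
camera_rgb = np.full((240, 320, 3), 200, dtype=np.uint8)  # camera image
composite = np.where(hand_mask[..., None], camera_rgb, scene)

print(hand_mask.sum())  # number of pixels classified as "hand"
```

Run per eye (with the camera image reprojected to each eye's viewpoint), this is roughly what lets you look down and see your own fingers instead of an avatar's.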

When I checked out SoftKinetic’s cube building demo, I noticed immediately an extra sense of presence when greeted by my own virtual hands moving like a mirror of the real world. While not perfect for this application, the Creative Depth Camera worked fairly well.

The latency of the Creative Depth Camera was relatively impressive, but there’s still room for improvement. As with most computer-vision-based approaches to scene tracking, there’s still some jumpiness, and the camera’s field of view doesn’t match that of the Oculus Rift, not to mention that the camera loses your hands if you look away from them.

Still, the demo worked as a great proof of concept. Having used the Razer Hydra extensively, I can say that there is just something different about using your own hand and fingers to grab and manipulate objects, rather than holding a controller, which adds a layer of abstraction between you and your virtual input.

While this type of natural hand input will work great for casual VR experiences where intuitive, no-training input is important, controllers are unlikely to go away anytime soon. The responsiveness and accuracy that serious gamers crave can only be handled by controllers for now, and on that front, Sixense’s STEM system is confidently paving the way.

Previously we’ve seen similar implementations with the Leap Motion sensor, though developers have expressed frustration about using Leap Motion for VR input.

Print Your Own Creative Depth Camera Mount for the Oculus Rift DK1

If you happen to own an Oculus Rift and Creative Depth Camera, you can download and print your own mount for the Oculus Rift DK1 from Thingiverse.

The download files also include a Unity demo similar to what I saw at CES 2014. Source files are also included if you’re looking to get started with some development.

At CES, SoftKinetic told me that the cube building demo was available for public download from the company’s website, but I haven’t been able to track it down. We’re in touch with the company to try to make that available to you; we’ll update this article with developments.


  • eyeandeye

    Looks pretty cool. I’m curious how their camera avoids the occlusion issues that the Leap apparently has. It seemed to handle your hands very well when flipping them over and around. I can’t imagine they’re just “green-screening” your hands onto the Rift’s screen; how would they make it look right?

    How well would this work in various lighting conditions? How difficult would it be to overcome the two problems you illustrated in the video?

    It seems if the Oculus team could make their positional tracking solution flexible enough to track your hands as well as your head, then that would be a better way to avoid the above problems and also not require more equipment and weight to be attached to your head. The system in Crystal Cove would require us to put on tracking gloves with embedded LEDs, I imagine, but didn’t they also say that they were exploring multiple tracking solutions? Hopefully they’ll end up with an all-in-one solution that can track the important parts of our upper body.

    • crim3

It’s a depth camera. Maybe if you think of it that way, many of your doubts will at least diminish.

      • eyeandeye

        I wanted to know how this is succeeding as a potential VR interface where Leap apparently did not. They both seem to rely on visual information gathered from a single direction and are therefore subject to occlusion problems as I understand it, such as one hand being hidden behind your other hand or a physical prop you might be holding.

Also, depth cameras aren’t somehow magically immune to lighting conditions; my original question still seems valid and unanswered to me. The fact that they’re depth cameras doesn’t answer the problems Ben mentioned either, like what happens when you need to do something with your hands while not looking directly at them. I don’t think it’s a very good tracking solution if the tracker is constantly losing sight of what it’s tracking. One advantage of this setup, though, is that it would work in 360 degrees.

        I suppose a tracking system that relies on one stationary camera isn’t a perfect solution either, since you can’t turn too far away from it, you’re confined to the volume that it’s capable of tracking, and it’s probably harder to track fingers from a distance. But it seems like those flaws would be easier to improve without adding a bunch of extra sensors, batteries, wires and weight to your head/body.

        I didn’t mean to come across as doubtful or negative. I’m just curious about the technologies despite my lack of technical knowledge about them.

  • Curtrock

    I wonder if SIXENSE could weave magnetic fibres into a pair of gloves, and use them with the STEM system….

    • Druss

I never got the Sixense hype. Buttons in VR are, to me, highly undesirable. Accurate hand tracking is much more immersive. As a rule of thumb, if you would not do it in real life, you should not do it in VR. You would not push a button on a controller to open a door in real life, so you should not use a controller with buttons in VR. Maybe if they focused on weapon-type peripherals (guns, bows, axes) I could see their place. I know you can buy the STEM system without the controllers, but as long as the majority of the VR community keeps holding on to a legacy interface, I fear we might not give superior solutions a chance.

      • eyeandeye

I will be interested to try both Sixense and controller-less systems. I think they each have their pros and cons. Personally I think certain actions would be sloppy and difficult with no controller, or would make certain types of gameplay too tedious; a button push is much simpler and faster, and currently more reliable than some elaborate hand motion. But I haven’t actually had the chance to use such a system yet, so we’ll see.

        Once high precision hand tracking can provide high quality tactile and haptic feedback, and we can feel as well as see what we’re doing, then I will definitely no longer see the need for physical controllers.

  • Branden Bates

I think this is the coolest peripheral for the Oculus Rift. I believe Oculus needs at least one fundamental type of VR input. What better way than to embed a couple of wide-angle cameras on the Rift? This gives the Rift:

    1. A basic input that is part of every Rift.
    2. A reduced disconnect between physically moving a hand or arm and not seeing it in the Rift.
    3. The ability to “see” peripherals such as keyboards, mice, or joysticks. Think about it: the Rift has a program where you rotate the object of interest in front of it, and the Rift then saves it as an object it can recognize, which *appears in the Oculus view.

    *This could appear lightly outlined or perhaps only show when you shake/nod the Oculus.

    I would love to play a Total War game where I am looking at the battlefield and giving orders with my hands. Or be completely surrounded in a space sim, Ender’s Game style.