There’s an intuitive appeal to controller-free hand-tracking input like Leap Motion’s; there’s nothing quite like seeing virtual hands and fingers move just like your own without the need to pick up and learn a controller. But reaching out to touch and interact in this way can be jarring because there’s no physical feedback from the virtual world. When your expectation of feedback isn’t met, it can be unclear how best to interact with this new non-physical world. In a series of experiments, Leap Motion has been exploring how visual design can make controller-free hand input more intuitive and immersive. Leap Motion’s Barrett Fox and Martin Schubert explain:

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tool and workflow building, and a user-driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Exploring the Hand-Object Boundary in VR

When you reach out and grab a virtual object or surface, there’s nothing stopping your physical hand in the real world. This is usually handled by letting the virtual hand penetrate the geometry of the object or surface, resulting in visual clipping. To make physical interactions in VR feel compelling and natural, we have to play with some fundamental assumptions about how digital objects should behave. How can we take these interactions to the next level?

In our interaction sprints at Leap Motion, our team identifies areas of interaction that developers and users often encounter and sets specific design challenges. After prototyping possible solutions, we share our results to help developers tackle similar challenges in their own projects.

For our latest sprint, we asked ourselves: could the penetration of virtual surfaces feel more coherent and create a greater sense of presence? To answer this question, we experimented with three approaches to the hand-object boundary.

Experiment #1: Intersection and Depth Highlights for Any Mesh Penetration

Image courtesy Leap Motion

For our first experiment, we proposed that when a hand intersects another mesh, the intersection should be visually acknowledged. A shallow portion of the occluded hand should still be visible, but with a color change and a fade to transparency.
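As a sketch of the idea, the highlight can be driven by how deep each occluded hand fragment sits below the surface. In Unity this would live in a shader, but the falloff itself is simple; the function and parameter names below are illustrative, not Leap Motion's API:

```python
def intersection_highlight(depth, max_depth=0.02, glow=(0.2, 0.8, 1.0)):
    """Return (r, g, b, alpha) for a hand fragment `depth` meters inside a mesh.

    Hypothetical sketch: a shallow band below the surface stays visible,
    tinted with `glow` and fading to fully transparent at `max_depth`.
    """
    if depth <= 0.0:                 # fragment is outside the mesh: no highlight
        return (*glow, 0.0)
    t = min(depth / max_depth, 1.0)  # 0 at the surface, 1 at max_depth and beyond
    alpha = 1.0 - t                  # fade to transparency with depth
    return (*glow, alpha)
```

Turning `max_depth` down gives the subtle, universally applicable version of the effect described below.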

This execution felt really good across the board. When the glow strength and depth were turned down to a minimum, it seemed like an effect that could be applied universally across an application without being overpowering.

Experiment #2: Fingertip Gradients for Proximity to Interactive Objects and UI Elements

For our second experiment, we made the fingertips gradually change color to match an interactive object’s surface as they approach it. This should make it easier to judge the distance between fingertip and surface, making us less likely to overshoot and penetrate the surface. Further, if we do penetrate the mesh, the intersection clipping will appear less abrupt – since the fingertip and surface will be the same color.
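A minimal sketch of the proximity-driven gradient, with our own names and ranges rather than the actual shader parameters: the tip's color is linearly blended toward the surface color as the gap closes.

```python
def fingertip_tint(distance, finger_color, surface_color, max_range=0.1):
    """Blend the fingertip color toward the surface color as the tip approaches.

    distance: fingertip-to-surface distance in meters. At contact (distance 0)
    the two colors match, so any clipping that follows looks less abrupt.
    Colors are (r, g, b) tuples; `max_range` is a hypothetical falloff distance.
    """
    t = max(0.0, min(1.0, 1.0 - distance / max_range))  # 1 at contact, 0 at max_range
    return tuple(f + (s - f) * t for f, s in zip(finger_color, surface_color))
```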

This experiment definitely helped us judge the distance between our fingertips and interactive surfaces more accurately. In addition, it made it easier to know which object we were closest to touching. Combining this with the effects from Experiment #1 made the interactive stages (approach, contact, and grasp vs. intersect) even clearer.

Experiment #3: Reactive Affordances for Unpredictable Grabs

Image courtesy Leap Motion

How do you grab a virtual object? You might make a fist, pinch it, or clasp it. Previously we’ve experimented with affordances – like handles or hand grips – hoping these would guide users in how to grasp virtual objects.

In Weightless: Training Room, the projectiles have indentations that afford visually coherent grasping. This also makes it easier for users to reliably release the objects in a throw.

While this helped many people rediscover how to use their hands in VR, some users still ignored these affordances and clipped their fingers through the mesh. So we thought: what if, instead of modeling static affordances, we created reactive affordances that appeared dynamically wherever and however the user chose to grip an object?

Three raycasts per finger (and two for the thumb) that check for hits on the sphere.

Bloop! The dimple follows the finger wherever it intersects the sphere.
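The per-finger raycasts boil down to a standard ray–sphere intersection test. In a Unity project this would be `Physics.Raycast`; the sketch below, with our own names, shows the math for the sphere case:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a ray to a sphere's surface, or None on a miss.

    `direction` must be normalized. This is the standard quadratic
    ray-sphere test (a == 1 for a unit direction vector).
    """
    ox = [o - c for o, c in zip(origin, center)]      # ray origin relative to center
    b = 2.0 * sum(d * o for d, o in zip(direction, ox))
    c = sum(o * o for o in ox) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                                   # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0                  # nearest intersection
    return t if t >= 0.0 else None                    # ignore hits behind the origin
```

In the sprint, each finger fires three such rays (the thumb two) from its segments; whichever ray hits drives the dimple's position on the surface.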

In a variation on this concept, we tried adding a fingertip color gradient. This time, instead of being driven by proximity to an object, the gradient was driven by the finger depth inside the object.

Pushing the concept of reactive affordances even further, we wondered: what if, instead of making the object deform in response to hand and finger penetration, the object could anticipate your hand and carve out finger holds before you even touched the surface?

Basically, we wanted to create virtual ACME holes.

To do this, we increased the length of the fingertip raycast so that a hit would be registered well before the finger made contact with the surface. Using two different meshes and a rendering rule, we created the illusion of a movable ACME-style hole.
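A sketch of the lengthened-raycast logic (the state names and threshold values are ours, not from the sprint): extending the ray well past the fingertip lets the hole enter an anticipatory state before actual contact.

```python
def acme_hole_state(hit_distance, contact_length=0.01, extended_length=0.08):
    """Classify a fingertip ray hit (distances in meters; None = no hit).

    Hypothetical thresholds: `contact_length` is roughly the fingertip's
    reach, while `extended_length` registers hits well ahead of contact.
    """
    if hit_distance is None or hit_distance > extended_length:
        return "none"        # surface too far away: no hole
    if hit_distance > contact_length:
        return "anticipate"  # carve the hole ahead of contact
    return "contact"         # fingertip at or inside the surface
```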

These effects made grabbing an object feel much more coherent, as though our fingers were being invited to intersect the mesh. Clearly this approach would need a more complex system to handle objects other than spheres, parts of the hand that are not fingers, and merging ACME holes when fingers come very close together. Nonetheless, the concept of reactive affordances holds promise for resolving unpredictable grabs.

Hand-centric design for VR is a vast possibility space—from truly 3D user interfaces to virtual object manipulation to locomotion and beyond. As creators, we all have the opportunity to combine the best parts of familiar physical metaphors with the unbounded potential offered by the digital world. Next time, we’ll really bend the laws of physics with the power to magically summon objects at a distance!



  • Flavio Rodriguez

    Cool stuff!

  • Firestorm185

    This is some really amazing work guys, well done! Can’t wait to see how this all helps us when we finally get to full hand presence VR.

    • dk

      it’s already possible …..the problems left to work out r pretty small

      • Firestorm185

        Yeah, I own a Leap Motion, and the stuff you can do with it is really cool! The only problem is there’s still a very limited field of view for tracking, so doing many of the things you can do with external sensors and controllers is very hard with one front-facing Leap. But once you gain the ability to use multiple Leaps at once, or they make headsets with them built in on multiple sides, it’ll be amazing to see what we can do. ^^

        • dk

          the sensor is irrelevant …..the software is the real issue. before the Orion update it was just unreliable and a bit random ……and this hardware is old, it was never meant for VR, that’s why it’s not optimised

          ….a few months back they announced a smaller, lighter board version with great FOV and low energy consumption that can be easily implemented in all-in-one units, and it was integrated in the Qualcomm reference design………also they have been talking about another, different Leap sensor with even more capabilities

          the tough part was figuring out the software

  • Lucidfeuer

    These are always addenda to the existing APIs, but I wish they would instead work on simulated/predicted collisions and grabs – by that I mean the virtual hand would not go through the table; the virtual fingers would actually start twisting when the virtual hand lies flat on the table, which could then enable different push/pull/swipe/grab interactions depending on the meshes. As for actual hardware and software updates to Leap Motion, I always regretted that they never released their own custom hand-tracking add-on for Oculus, Vive or Gear rather than just putting out a reference model which wasn’t implemented anywhere (except OSVR).

    The agnostic surface graphic in Lone Echo is amazing (even if it suffers from the same lack of virtual hand/finger warping). Because this happens in zero G, when touching then pushing against a wall the result, as expected, is not that the hand goes through, but that it “pushes you” away from the wall… I think this is the best way to go, because even if this is not a physical limit, it pushes the user to “virtually” act according to expectations of what a real wall, table or object would do.

    • dk

      if I have to guess, the things that r stopping the implementation
      are the lack of variable focus …. holding things close to your face is a bit flat looking and exhausting when u r trying to focus on it for a long time

      and the waving your arms in the air without feedback from a controller is not really engaging

      and the exact point of engagement when u pinch is not super clear ….if u had gloves the exact moment of pinching will be really exact

      but since the Orion update the Leap Motion tracking is super close to being perfect

      • Lucidfeuer

        I actually stumbled on very interesting concepts of nano-pneumatic mechanisms, which alongside nano-haptic surfaces and magnetic-haptic engines made me reconsider the practicality of gloves, if the reduced and optimised ergonomics work well.

  • 楊子晴

    Really looking forward to it!!

  • uidesignguide

    This is by far still my favorite device to work with in tandem with web vr, vive and beyond.

    • Michael

      I think for on-the-go and just casual VR and AR in general, Leap Motion will take point. It’s simple and requires nothing but the sensor built into the HMD.

  • Johnatan Blogins

    As shown in the design study “Gaze + Pinch Interaction in Virtual Reality” (Ken Pfeuffer, Benedikt Mayer, Diako Mardanbegi, and Hans Gellersen, SUI 2017), eye tracking plus gesture is very promising. With the additional feedback described above, things are getting really interesting.

  • pkiriakou

    This is really interesting. Did you have people try your solutions?

  • Hivemind9000

    Is it me or does the dimple mesh look a little bit… um… naughty?

    It’s just me isn’t it…

  • WyrdestGeek

    It’s kind of a shame “Leap Motion” and “Magic Leap” have “Leap” in common ‘cuz sometimes I get the one confused with the other even though they’re completely separate.

  • Very cool… all the studies of Leap Motion on UX are very interesting…

  • Josh Green

    Any way these scripts could be made accessible? I’d love to be able to tinker around with them and experiment with different reactive mesh interactions.

  • Sponge Bob

    the touch screen never replaced the good old computer mouse;
    likewise, hand tracking for VR will never replace the hand-held controller

    • Christian Schildwaechter

      Take another look at the last ten years of mobile phone development. It is still possible to use Android phones with Bluetooth mice, giving you much higher accuracy. But it is extremely inconvenient, requiring another device and a flat surface, so everybody besides some Galaxy Note owners uses fingers on touch screens. Notebooks use touch pads with non-linear mouse speeds, achieving somewhat lower accuracy, but at least without the extra device/surface. Touch pads work better here than touch screens, where your hand covers whatever you want to click and switching between keyboard and screen slows you down. Mice still thrive only on (hard to move) desktop PCs, despite being the most accurate option everywhere. And most people today use mobile phones and/or notebooks rather than desktop PCs. It boils down to a use-case-dependent balance between convenience and power.

      We can assume something similar for VR. Currently the Vive controller/Vive tracker accuracy is superior, the Oculus Touch finger tracking very neat, and there are already interesting custom controllers with force feedback etc. It will be pretty much impossible to match this with something like the Leap Motion. So hand tracking will never completely replace hand-held controllers, but we already have a VR market that favors convenience over power. Gear VR sold > 5M units, PSVR > 2M, and all the others combined probably around 1M units. And at the lowest end the Google passed 10M downloads almost 15 months ago, making simple mobile phones the largest VR platform by far. There will be use cases and a market for high-end hand-held controllers, but with next year’s iPhone (and probably a lot of other devices) getting back-side depth-sensing cameras by default, the majority of users will interact in VR/AR by reaching with their hands into thin air.

      • Sponge Bob

        It depends on use cases.
        Hand tracking is not suitable at all for e.g. simple drawing in the air
        or some drafting like 3D AutoCAD.
        I tried Leap Motion for stuff like that and it did not work too well
        beyond being a cool novelty and such…
        For most office productivity applications on a desktop or notebook PC a mouse is a must.
        And btw no flat surface is needed for VR controllers.

      • Sponge Bob

        And BTW all this cool Leap Motion stuff does not work outside in direct sunlight,
        so you can forget using it for AR on the go – some Google Glass type of application walking down the street.

        Infrared, dude – too much infrared coming from the Sun.