Experiment #3: Extendo Hands!

For Leap Motion’s third and final experiment, rather than simply gesturing to command a distant object to come within reach, we explored giving the user longer arms to grab the faraway object.

The idea was to project the virtual hands out to the distant object to allow a grab that functions the same as when the user is standing right next to an object. Then, once the object was held, the user would be in full control of it, able to pull it back to within their normal reach or even relocate and release it.

This approach touches on some interesting neuroscience concepts, such as body schema and peripersonal space. Our brains constantly use incoming sensory information to model both the position of our bodies and their various parts in space, and the empty space around us in which we can take immediate action. When we use tools, our body schema expands to encompass the tool and our peripersonal space grows to match our reach with the tool in hand. When we use a rake or drive a car, those tools literally become part of our body, as far as our brain is concerned.

Image courtesy Sandra Blakeslee & Matthew Blakeslee

Our body schema is a highly adaptable mental model that evolved to incorporate physical tools. It seems likely that extending our body schema through virtual means would feel almost as natural.

For this experiment, our approach centered on the idea of remapping the space of our physical reach onto a scaled-up projective space, effectively lengthening our arms out to a distant object. Again, the overall set of interactions could be described in stages: selecting/hovering, grabbing, and holding.

1. Selecting/Hovering

To select an object, we were able to reuse much of the logic from the previous executions: raycasting from the shoulder through the palm to hit a larger proxy collider around each distant object. Once the raycast hit an object’s proxy collider, we projected out to the object’s location both a new blue graphic hand and the underlying virtual hand, which contains the physics colliders that do the actual grabbing. We reused the snapping logic from the previous executions, so that when an object was hovered, the blue projected hand snapped to a position slightly in front of it, ready to grab.

Raycasting against a distant object’s proxy collider, sending a projected hand out and snapping it to a position directly in front of the object ready to grab it.
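As a minimal sketch, the hover logic might look something like the Python below. The helper names, proxy radius, and snap offset are illustrative assumptions, not Leap Motion’s actual code:

    import numpy as np

    PROXY_RADIUS = 0.4  # enlarged proxy collider radius around a distant object (m); assumed
    SNAP_OFFSET = 0.15  # how far in front of the object the hand snaps (m); assumed

    def shoulder_palm_ray(shoulder_pos, palm_pos):
        """Cast a ray from the shoulder through the palm, as in the earlier experiments."""
        direction = palm_pos - shoulder_pos
        return shoulder_pos, direction / np.linalg.norm(direction)

    def ray_hits_proxy(origin, direction, obj_pos, radius=PROXY_RADIUS):
        """Ray/sphere test against the object's enlarged proxy collider."""
        t = np.dot(obj_pos - origin, direction)  # closest approach along the ray
        if t < 0:
            return False  # object is behind the ray origin
        closest_point = origin + t * direction
        return np.linalg.norm(obj_pos - closest_point) <= radius

    def snap_pose(obj_pos, shoulder_pos, offset=SNAP_OFFSET):
        """Place the projected hand slightly in front of the object, between it and the user."""
        toward_user = shoulder_pos - obj_pos
        toward_user /= np.linalg.norm(toward_user)
        return obj_pos + toward_user * offset

Each frame, hovering then amounts to testing ray_hits_proxy against every distant object and, on a hit, placing the projected hand at snap_pose.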

As an alternative, we looked at an approach without snapping: hovering over a distant object would project out a green hand while the user retained full control of it. The idea was to allow the full freedom of manipulation, including soft contact, for distant objects.

To do this, we remapped the reachable range of the hands onto a preset projective space large enough to encompass the farthest object. Then, whenever the raycast hit an object’s proxy collider, we would simply send the projected hand out to its corresponding position within the projective space, letting it move freely for as long as the raycast kept hitting the proxy collider. This created a small bubble around each object in which free hovering with a projected hand was possible.

Free hovering distant objects to allow soft contact for distant objects.
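The remapping itself can be sketched as a uniform scaling of the palm’s offset from the shoulder. The reach and projective range values here are illustrative guesses; the source doesn’t give the real parameters:

    def project_palm(shoulder_pos, palm_pos, reach=0.8, projective_range=6.0):
        """Remap the palm's offset from the shoulder onto a scaled-up projective space.

        reach is roughly the user's arm length; projective_range is a preset radius
        large enough to encompass the farthest object. Both values are assumptions.
        """
        offset = palm_pos - shoulder_pos
        scale = projective_range / reach
        return shoulder_pos + offset * scale

With these example numbers the gain is 7.5x, so a 2 cm twitch of the real hand moves the projected hand 15 cm, which is exactly the magnification problem described next.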

Regular soft contact interaction with virtual objects takes a bit of getting used to, since from the perspective of a game engine, the hands are immovable objects with unstoppable force. This effect is multiplied when operating in a projectively scaled-up space: a small movement of the real hand is magnified in proportion to how far away the projected hand is. Often, when trying to roll our hand over a distant object, we instead ended up forcefully slapping it away. After some tests we removed free hover and stuck with snap hovering.

2. Grabbing

Grabbing in this execution was as simple as grabbing any other virtual object using our underlying virtual hand model. Since we’d snapped the blue projected hand to a convenient transform directly in front of the distant object, all we had to do was grab it. Once the object was held, we moved into the projective-space holding behavior that was the real core of this execution.

3. Holding

At the moment a distant object is grabbed, the distance to that object is used to create a projective space onto which the actual reach is remapped. Within this projective space, moving the real hand along an arc moves the projected hand along the same arc, scaled up.

However, as one brings their hand back toward their body, the projective space dynamically resizes: as the distance between the actual hand and a reference shoulder position approaches zero, the projective space approaches the actual reach space.

Once the hand has been pulled back almost as far as it can go, and the projective space is almost equal to the actual reach space, the projected hand fuses into the actual hand and the user is left holding the object directly.
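One way to realize this behavior is to fix the projective scale at grab time and blend it back toward 1 as the hand retracts. The Python sketch below assumes a linear blend; the actual interpolation Leap Motion used isn’t specified:

    import numpy as np

    class ExtendoHold:
        """Dynamically resizing projective space while an object is held (assumed mechanics)."""

        def __init__(self, shoulder_pos, palm_pos, obj_pos):
            # At the moment of the grab, the object's distance fixes the projective
            # scale relative to the hand's current extension.
            self.grab_extension = np.linalg.norm(palm_pos - shoulder_pos)
            object_distance = np.linalg.norm(obj_pos - shoulder_pos)
            self.grab_scale = object_distance / self.grab_extension

        def projected_palm(self, shoulder_pos, palm_pos):
            offset = palm_pos - shoulder_pos
            extension = np.linalg.norm(offset)
            # Full extension reproduces the grab-time scale; pulling the hand in
            # toward the shoulder blends the scale back to 1 (actual reach space).
            t = min(extension / self.grab_extension, 1.0)
            scale = 1.0 + (self.grab_scale - 1.0) * t
            return shoulder_pos + offset * scale

When the scale reaches 1, the projected hand coincides with the actual hand, which is the fusing described above: the user is simply holding the object directly.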

The logic behind this dynamically resizing projective space sounds quite complicated, but in practice the held behavior feels like a very natural extension of one’s hands.

After a bit of practice, these Extendo Hands begin to feel almost like regular hands, just with a longer reach. They make distant throwing, catching, and relocation possible, and fun!

Extendo Hands feel quite magical. After a few minutes adjusting to the activation heuristics, one could really feel one’s sense of peripersonal space expand to encompass the whole environment. Objects previously out of reach were now merely delayed by the extra second it took for the projected hand to travel over to them. More so than the other two executions, this one felt like a true virtual augmentation of the body’s capabilities. We look forward to seeing more work in this area as our body schemas are stretched to their limits and infused with superpowers.


  • Great interaction methods here. I also think that more needs to be done to reduce arm fatigue, so all of the above could be done with arms at rest and just finger/wrist flicks, with no forearm movement. In addition, both hands could work together simultaneously on a single interaction, or multi-task with separate but simultaneous actions.

  • SoftBody

    Leap Motion should go into software development instead. Their hand-tracking/interaction research is going to waste if nobody can use it because it’s tied to their hardware.

    Right now, hand-tracking is a novelty. But soon it will be an assumed feature. People will go with whatever is cheapest and most accessible. Their tech will simply be left behind.

    • They already are, in part. Their SDKs work even without Leap Motion hardware, they told me. You can use their input system even with your regular Vive or Rift.

  • This series is damn interesting

  • Lucidfeuer

    Are Leap Motion the only hardware company whose work makes sense? This is exactly the work to be done in VR/AR, and a hundred times more of it.

    The whole industry’s setup and future was compromised from the day Oculus vaporized all their common-sense plans to integrate NimbleBit hand-tracking and 13th Lab inside-out tracking into Gen 1, when they were bought out by the mediocre, greedy cost-and-risk managers of Facebook, who decided that making a crap, limited product was a good idea to make money…

  • Yoan Conet

    Glad someone finally did it! Thanks Leap Motion.
    Imagine grabbing your enemies at a distance, releasing them in mid-air, and then suddenly stopping them like puppets!
    To feel real when grabbing them, at the start everything but their center of gravity (legs, arms, head) would have to try to keep its original position, like in real life.
    And when the flying enemy is suddenly stopped, everything but their center of gravity would keep its momentum, so the enemy would move like a puppet! Haha

  • No Spam

    How much do you want to bet that there’s an internal demo at Leap that uses extendo-hands to simulate Vader’s Force choke?

    “VR is a fad? I find your lack of faith…disturbing.”

  • M Stauffer

    This is very cool! Is the code available? I’d like to try it in an upcoming academic data visualization project, rather than spending the time coding it myself, of course.