Leap Motion continues to refine its hand-tracking tech for intuitive, controller-free interactivity. The company’s latest focus has been an ‘Interaction Engine’ which takes over from standard physics engines when it comes to defining interactions between user input and virtual objects.
Leap Motion’s Caleb Kruse demonstrated the company’s work on the Interaction Engine, which is said to form the foundation of intuitive and accurate interactions with a variety of objects. With that foundation in place, developers can focus on creating useful experiences rather than having to figure out the best way to program interactions from the ground up. The Interaction Engine is still internal at Leap Motion, but the company tells us that they plan to release it widely to developers in the near future.
The Interaction Engine is a sort of intermediary between the user’s input and the physics engine, Kruse told us. Left to physics alone, grabbing an object too tightly might cause it to fly out of your hand as your fingers phase through it. The Interaction Engine, on the other hand, tries to establish your intent (like grabbing, throwing, or pushing) based on what the Leap Motion tracker knows about your hand movements, rather than treating your hand in VR like any other object in the physics simulation.
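To make the idea concrete, here is a minimal sketch of what an intent layer like this might look like. This is purely illustrative and not Leap Motion’s actual implementation; the function name, features, and thresholds are all assumptions invented for this example. The point is that coarse hand features (pinch distance, palm speed, whether an object was already held) drive a discrete intent decision, instead of the physics engine resolving raw finger/object collisions.

```python
def classify_intent(pinch_distance_mm, palm_speed_mm_s, was_holding):
    """Map coarse hand-tracking features to an interaction intent.

    Hypothetical heuristic: thresholds and feature choices are
    illustrative, not from Leap Motion's Interaction Engine.
    """
    if pinch_distance_mm < 25:
        # Thumb and index fingertips nearly touching: treat as a grab,
        # even if the fingers visually phase through the virtual object.
        return "grab"
    if was_holding and palm_speed_mm_s > 800:
        # Object released mid-swing at speed: interpret as a throw.
        return "throw"
    if palm_speed_mm_s > 300:
        # Fast open-handed motion toward an object: interpret as a push.
        return "push"
    return "none"
```

Once an intent like “grab” is recognized, the engine can keep the object attached to the hand and suppress the collision response that would otherwise send it flying.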
The result is more intuitive and consistent control when interacting with objects in VR—something that’s been a major hurdle for Leap Motion’s computer-vision-based input. Now it’s easier and more predictable to grab, throw, and push objects.
While developing the Interaction Engine, Leap wanted to be able to quantify the efficacy of their hand input, so they created a simple demo task in VR where users reach out to grab a highlighted ball and place it in a randomly indicated position. Through testing hundreds of users, Kruse said the company found people to be around 96% accurate in this task when using the Interaction Engine.
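The accuracy metric described above amounts to a simple hit rate: the fraction of trials in which the ball ends up close enough to the indicated target. A minimal sketch of that computation, assuming each trial records a final placement, a target position, and a fixed distance tolerance (all details hypothetical, not from Leap Motion’s test methodology):

```python
import math

def success_rate(placements, targets, tolerance=0.05):
    """Fraction of trials where the object landed within `tolerance`
    (same units as the coordinates) of its target position.

    placements, targets: parallel lists of (x, y, z) tuples.
    """
    hits = sum(
        1 for placed, target in zip(placements, targets)
        if math.dist(placed, target) <= tolerance
    )
    return hits / len(placements)
```

Aggregated over hundreds of users, a metric like this is what a figure such as the reported ~96% accuracy would summarize.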
Another demo which utilized the Interaction Engine allows you to create cubes of varying sizes by pinching your thumb and index finger together to form a recognizable gesture. Then, when moving your hands close together, the outline of a cube forms and you can move your hands back and forth (like a pinch zoom) to set your desired scale.
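The cube demo’s two building blocks—recognizing a pinch, then mapping the distance between the two pinch points to a cube size—can be sketched as follows. This is an assumed reconstruction for illustration only; the thresholds, clamping range, and function names are invented, not taken from the demo’s code.

```python
import math

# Assumed threshold: fingertips closer than this count as a pinch.
PINCH_THRESHOLD_MM = 25

def is_pinching(thumb_tip, index_tip):
    """True when thumb and index fingertips are close enough to pinch."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

def cube_edge_length(left_pinch_pos, right_pinch_pos,
                     min_size=20, max_size=500):
    """Cube edge length (mm) driven by the distance between the two
    pinch points, clamped to a usable range—like a pinch-zoom."""
    d = math.dist(left_pinch_pos, right_pinch_pos)
    return max(min_size, min(max_size, d))
```

Moving the hands apart while both pinches are held grows the outlined cube; bringing them together shrinks it, until the pinches are released to confirm the size.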
When I tried these demos myself, I noted how the system was impressively able to understand that I was still holding objects even when I occluded my fingers with the back of my hand. The cube demo was fun and easy to use (especially with gravity turned off), and while I wasn’t quite as adept as Kruse at manipulating objects, his skill demonstrates that it’s possible to get better at using the system over time (which, by necessity, means the system has a vital degree of consistency).
Grasping virtual objects which have no physical representation is still a strange affair, but the Interaction Engine definitely enhances predictability and consistency in object interactions, which is incredibly important for the practicality of any input method.