For new computing technologies to realize their full potential, they need new user interfaces. The most essential interactions in virtual spaces are grounded in direct physical manipulations like pinching and grabbing, as these are universally accessible. However, the team at Leap Motion has also investigated more exotic and exciting interface paradigms, from arm HUDs and digital wearables to deployable widgets containing buttons, sliders, and even 3D trackballs and color pickers.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tool and workflow building, and a user-driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

As we move from casual VR applications to deeper and longer sessions, design priorities naturally shift toward productivity and ergonomics. One of the most critical areas of interaction design that comes up is mode switching and shortcuts.

Today we use keyboard shortcuts so often that it’s difficult to imagine using a computer without them. Ctrl+Z, Ctrl+C, and Ctrl+V are foundational to the efficiency of keyboard and mouse input. Most of you reading this have committed these to muscle memory.

In VR we’ve seen controller inputs adopt this shortcut paradigm relatively easily by remapping commands to buttons, triggers, trackpads, and analog sticks. To increase or decrease the brush size in Tilt Brush you swipe right or left on the trackpad of your brush hand.

But what happens when we think about one-handed rapid selections for bare-handed input? This requires a different kind of thinking, as we don’t have buttons or other mechanical inputs to lean on. In our previous work, we’ve mapped these kinds of commands to either world-space user interfaces (e.g. control panels) or wearable interfaces that use the palette paradigm, where one hand acts as a collection of options while the other acts as a picker.

But if we could mode switch or modify a currently active tool with just one hand instead of two, we would see gains in speed, focus, and comfort that would add up over time. We could even design an embodied and spatial shortcut system without the need to look at our hands, freeing our gaze and increasing productivity further.

Direct Manipulation vs. Abstract Gestures

One way to activate a shortcut with a single hand would be to define an abstract gesture as a trigger. Essentially this would be a hand pose or a movement of a hand over time. This is an exception to a general rule at Leap Motion, where we typically favor direct physical manipulation of virtual objects as an interaction paradigm over using abstract gestures. There are a few reasons for this:

  • Abstract gestures are often ambiguous. How do we define an abstract gesture like ‘swipe up’ in three-dimensional space? When and where does a swipe begin or end? How quickly must it be completed? How many fingers must be involved?
  • Less abstract interactions reduce the learning curve for users. Everyone can tap into a lifetime of experience with directly manipulating physical objects in the real world. Trying to teach a user specific movements so they can perform commands reliably is a significant challenge.
  • Shortcuts need to be quickly and easily accessible but hard to trigger accidentally. These design goals seem at odds! Ease of accessibility means expanding the range of valid poses/movements, but this makes us more likely to trigger the shortcut unintentionally.

To move beyond this issue, we decided that instead of using a single gesture to trigger a shortcut, we would gate the action into two sequential stages.

The First Gateway: Palm Up

Our interaction design philosophy always looks to build on existing conventions and metaphors. One major precedent that we’ve set over time in our digital wearables explorations is that hand-mounted menus are triggered by rotating the palm to face the user.

This works well in segmenting interactions based on which direction your hands are facing. Palms turned away from yourself and toward the rest of the scene imply interaction with the external world. Palms turned toward yourself imply interactions in the near field with internal user interfaces. Palm direction seemed like a suitable first condition, acting as a gate between normal hand movement and a user’s intention to activate a shortcut.

The Second Gateway: Pinch

Now that your palm is facing toward you, we looked for a second action that would be easy to trigger, well defined, and deliberate. A pinch checks all these boxes:

  • It’s low-effort. Just move your index finger and thumb!
  • It’s well defined. You get self-haptic feedback when your fingers make contact, and the action can be defined and represented by the tracking system as reaching a minimum distance between the tracked index and thumb tips (sketched after this list).
  • It’s deliberate. You’re not likely to absent-mindedly pinch your fingers with your palm up.

Performing these two actions, one after the other, is quick and easy, yet difficult to do unintentionally. This sequence seemed like a solid foundation for our single-handed shortcuts exploration. The next challenge was how we would afford the movement, or in other words, how someone would know that this was what they needed to do.

Thinking back on the benefits of direct manipulation versus abstract gestures we wondered if we could blend the two paradigms. By using a virtual object to guide a user through the interaction, could we make them feel like they were directly manipulating something while in fact performing an action closer to an abstract gesture?

The Powerball

Our solution was to create an object attached to the back of your hand which acts as a visual indicator of your progress through the interaction as well as a target for pinching. If your palm faces away, the object stays locked to the back of your hand. As your palm rotates toward yourself the object animates up off your hand towards a transform offset that is above but still relative to your hand.

Once your palm fully faces toward yourself and the object has animated to its end position, pinching the object – a direct manipulation – will trigger the shortcut. We dubbed this object the Powerball. After some experimentation, we had it animate into the pinch point (a constantly updating position defined as the midpoint between the index finger and thumb tips).

This blend of graphic affordance, pseudo-direct manipulation, gestural movement, and embodied action proved easy to learn and ripe with potential for extension. Now it was time to look at what kinds of shortcut interface systems would be ergonomic and reliably tracked from this palm-up-pinched-fingers position.

Continued on Page 2: Spatial Interface Selection »




  • Lucas Rizzotto

    Wonderful piece. Looking forward to more innovations from the lab! :)

  • Amazing amazing amazing! I love these UX posts by Leap Motion

  • Nice, looking forward to seeing some of this stuff in the wild one day.

  • Alex Butera

    The last question seems the most important for this kind of organic UI. There really needs to be a “function key” equivalent to tell the system that you are about to use a shortcut, otherwise I can see this activating involuntarily all the time.

  • Jason Hunter

    We need a one handed sign language to interact with the system, to be able to control and input text. Hinging at the wrist and elbows is using too much energy, so it’s best to avoid that.

  • david panzoli

    Nice work!
    Yet, it seems to me that there is one important drawback with the translation rail that you failed to acknowledge… VR interactions don’t necessarily imply standing up. In a seated position, with your elbows resting on a table, the translation rail will not be easily performed, whereas the arc rail (despite its issues) allowed for it.

  • MatBrady

    I didn’t read this article, I only looked at the animated gifs, haha, but wow, this is terrific work. Very impressed.

  • Ria

    Thanks for sharing this, it’s really amazing. AR/VR developers should also read this. Very impressive.

  • Allen Bernard

    In this era, where mobile app development is increasing day by day, there is no doubt that AR/VR will be the next change.

