Exclusive: Scaffolding in VR – Interaction Design for Easy & Intuitive Building


There’s something magical about building in VR. Imagine being able to assemble weightless car engines, arrange dynamic virtual workspaces, or create imaginary castles with infinite bricks. Arranging or assembling virtual objects is a common scenario across a range of experiences, particularly in education, enterprise, and industrial training—not to mention tabletop and real-time strategy gaming.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tool and workflow building, and a user-driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Update (3/18/18): Leap Motion has released the Scaffolding demo for anyone with a Leap Motion peripheral to download and try for themselves. They’ve also published a video showing what the finished prototype looks like.

For our latest interaction sprint, we explored how building and stacking interactions could feel seamless, responsive, and stable. How could we place, stack, and assemble virtual objects quickly and accurately while preserving the nuance and richness of a proper physics simulation?

The Challenge

Manipulating physically simulated virtual objects with your bare hands is an incredibly complex task. This is one of the reasons we developed the Leap Motion Interaction Engine, whose purpose is to make the foundational elements of grabbing and releasing virtual objects feel natural.

Nonetheless, the precise rotation, placement, and stacking of physics-enabled objects—while very much possible—takes a deft touch. Stacking in particular is a good example.

Stacking in VR shouldn’t feel like bomb defusal.

When we stack objects in the physical world, we keep track of many aspects of the tower’s stability through our sense of touch. Placing a block onto a tower of objects, we feel when and where the held block makes contact with the structure. In that instant we feel actual physical resistance. In VR, that tactile feedback is missing, so it’s easy to misjudge contact and accidentally nudge the whole structure over.

The easiest way to counteract these issues in VR is to disable physics and simply move the objects around. This successfully eliminates unintended collisions and accidental nudges.

With gravity and inertia disabled, we can assemble the blocks however we want, but the interaction lacks the realistic physics-based behavior that is an important part of how we would do the same task in the real world.

However, this solution is far from ideal, as precise rotation, placement, and alignment are still challenging. Moreover, disabling physics on virtual objects makes interacting with them far less compelling. There’s an innate richness to physically simulated virtual interactions in VR/AR that’s only amplified when you can use your bare hands.

A Deployable Scaffold

The best VR/AR interaction design often combines cues from the real world with the unique possibilities of the medium. Investigating how we make assembling things in the physical world easier, we looked at things like rulers and measuring tapes for alignment, and at the concept of scaffolding: a temporary structure used to support materials and workers during construction.

Snappable grids are a common feature of flat-screen 3D applications. Even in VR we see early examples like the very nice implementation in Google Blocks.

However, rather than covering the whole world in a grid, we proposed the idea of using them as discrete volumetric tools. This would be a temporary, resizable three-dimensional grid which would help create assemblies of virtual objects—a deployable scaffold! As objects are placed into the grid, they would snap into position and be held by a physics spring, maintaining physical simulation throughout the interaction. Once a user was done assembling, they could deactivate the grid. This releases the springs and returns the objects to unconstrained physics simulation.
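
To make the core idea concrete, here’s a minimal sketch (not Leap Motion’s shipped code) of the two pieces that define a Scaffold’s behavior: rounding a position to the nearest grid point, and holding a placed block there with a Unity SpringJoint so it stays physically simulated. All class and field names here are illustrative.

```csharp
using UnityEngine;

// A minimal sketch of the scaffold concept: snap a released block to the
// nearest grid point, then hold it there with a spring so it remains a
// simulated rigidbody while the scaffold is active.
public class ScaffoldSnapSketch : MonoBehaviour
{
    public Transform gridOrigin;   // corner of the scaffold volume
    public float cellSize = 0.1f;  // spacing between grid points, in meters

    // Round a world-space position to the nearest grid point.
    public Vector3 SnapToGrid(Vector3 worldPos)
    {
        Vector3 local = gridOrigin.InverseTransformPoint(worldPos);
        local.x = Mathf.Round(local.x / cellSize) * cellSize;
        local.y = Mathf.Round(local.y / cellSize) * cellSize;
        local.z = Mathf.Round(local.z / cellSize) * cellSize;
        return gridOrigin.TransformPoint(local);
    }

    // Hold a block at its snapped position with a spring joint so it
    // keeps reacting to physics. Deactivating the scaffold would simply
    // Destroy() these joints, returning the blocks to free simulation.
    public SpringJoint HoldWithSpring(Rigidbody block)
    {
        SpringJoint joint = block.gameObject.AddComponent<SpringJoint>();
        joint.autoConfigureConnectedAnchor = false;
        joint.connectedAnchor = SnapToGrid(block.position); // world space
        joint.spring = 500f; // stiffness: tune to taste
        joint.damper = 10f;  // damping to kill oscillation
        return joint;
    }
}
```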

To create this scaffolding system we needed to build two components: (1) a deployable, resizable, and snappable 3D grid, and (2) an example set of objects to assemble.

Generating A 3D Grid

Building the visual grid around which Scaffold interactions are centered is straightforward. But since we want to be able to change the dimensions of a Scaffold dynamically, we may have many grid points per Scaffold (and potentially multiple Scaffolds per scene). To optimize, we created a custom GPU-instanced shader to render the points in our Scaffold grid. This kind of repetitive rendering of identical objects is a great fit for the GPU, because it saves CPU cycles and keeps our framerate high.
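
As a rough illustration of the approach (not the actual Leap Motion renderer or shader), the CPU side of instanced grid rendering in Unity might look like the sketch below. Graphics.DrawMeshInstanced batches up to 1023 point transforms into a single draw call; the material just needs GPU instancing enabled. The class and field names are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: draw a dynamically resizable grid of points in one instanced
// call instead of one GameObject per point.
public class InstancedGridSketch : MonoBehaviour
{
    public Mesh pointMesh;          // e.g. a small sphere or cube
    public Material pointMaterial;  // must have GPU instancing enabled
    public Vector3Int dimensions = new Vector3Int(5, 5, 5);
    public float cellSize = 0.1f;

    readonly List<Matrix4x4> matrices = new List<Matrix4x4>();

    void RebuildMatrices()
    {
        matrices.Clear();
        for (int x = 0; x < dimensions.x; x++)
            for (int y = 0; y < dimensions.y; y++)
                for (int z = 0; z < dimensions.z; z++)
                {
                    Vector3 p = transform.TransformPoint(
                        new Vector3(x, y, z) * cellSize);
                    matrices.Add(Matrix4x4.TRS(
                        p, transform.rotation, Vector3.one * cellSize * 0.2f));
                }
    }

    void Update()
    {
        RebuildMatrices(); // cheap enough at prototype-scale grid sizes
        // DrawMeshInstanced accepts at most 1023 instances per call.
        Graphics.DrawMeshInstanced(pointMesh, 0, pointMaterial,
            matrices.GetRange(0, Mathf.Min(matrices.Count, 1023)));
    }
}
```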

In the early stages of development it was helpful to color-code the dots. Since the grid will be dynamically resized, colors help identify which points we’re destroying and recreating, and whether our dot ordering is correct (also it was pretty and we like rainbow things).

Shader-Based Grid Hover Affordance

In our work we strive to make things reactive to our actions—heightening the sense of presence and magic that makes VR such a wonderful medium. VR lacks many of the depth cues that we rely on in the physical world, so reactivity is also important in boosting proprioception (our sense of the relative positions of different parts of our body).

With that in mind, we didn’t stop at simply making a grid of cubes. Since we render our grid points with a custom shader, we could add features to the shader that help users better understand the position and depth of their hands. Our grid points grow and glow when your hand is near, making the grid more responsive and easier to use.
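
The CPU side of such an affordance is simple to sketch: feed the tracked hand position into the grid material every frame, and let the shader scale and brighten each point by its distance to the hand. The _HandPosition property name below is hypothetical, not a Leap Motion API.

```csharp
using UnityEngine;

// Sketch of the hover affordance's CPU side: pass the palm position to the
// grid material so the instanced shader can react per point.
public class GridHoverAffordanceSketch : MonoBehaviour
{
    public Material gridMaterial;
    public Transform handAnchor; // e.g. driven by the tracked palm position

    static readonly int HandPosId = Shader.PropertyToID("_HandPosition");

    void Update()
    {
        gridMaterial.SetVector(HandPosId, handAnchor.position);
        // In the shader, each instance would do something like:
        //   float d = distance(worldPos, _HandPosition);
        //   scale and emissive glow ~ saturate(1 - d / _HoverRadius)
    }
}
```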

Making Scaffold-Reactive Blocks & Their Ghosts

Creating objects that can be placed within (and aligned to) our new grid starts with adding an InteractionBehaviour component to one of our block models. Combined with the Interaction Engine, this takes care of the important task of making the object graspable. To empower the block to interact with the grid, we created and added another MonoBehaviour component that we called ScaffoldBehaviour. This behavior handles as much of the block-specific logic as possible, so the grid classes stay less complicated and remain wieldy (yes, it’s a word).

As with the grid itself, we’ve learned to think about the affordances for our interactions right along with the interactions themselves. We designed interaction logic to create and manage a ghost of the block when it’s within the grid, so you can easily tell where the block will go when you release it.
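
Here’s a hedged sketch of that ghost logic, reusing the hypothetical ScaffoldSnapSketch helper from earlier. In the real prototype the grasp notifications would come from the Interaction Engine’s grasp callbacks rather than the plain methods shown here.

```csharp
using UnityEngine;

// Sketch: while a block is held inside the scaffold volume, show a
// translucent copy at the grid point it would snap to on release.
public class BlockGhostSketch : MonoBehaviour
{
    public ScaffoldSnapSketch scaffold;
    public Renderer ghostPrefab;  // translucent copy of the block mesh

    Renderer ghost;
    bool isGrasped;

    // Wire these to the Interaction Engine's grasp begin/end events.
    public void OnGraspBegin() { isGrasped = true; }

    public void OnGraspEnd()
    {
        isGrasped = false;
        if (ghost != null)
        {
            // Snap the real block to where the ghost was; the prototype
            // would also attach a spring here (see HoldWithSpring earlier).
            transform.position = ghost.transform.position;
            Destroy(ghost.gameObject);
            ghost = null;
        }
    }

    void Update()
    {
        if (!isGrasped) return; // a bounds check against the grid belongs here too
        Vector3 snapped = scaffold.SnapToGrid(transform.position);
        if (ghost == null) ghost = Instantiate(ghostPrefab);
        ghost.transform.SetPositionAndRotation(snapped, transform.rotation);
    }
}
```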

Resizing The Grid with Interaction Engine Handles

By building handles to grasp and drag, a user can resize the Scaffold to fit within a specific area. We created spherical handles with Interaction Engine behaviors, which we constrained to move only along the axis they control. If the user places blocks in the Scaffold and then drags the handles to make the grid smaller, any blocks at the removed grid points are released and drop. Conversely, if the handles are dragged to make the grid larger again, blocks that had been placed at those grid points snap back into place!
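
A minimal sketch of such an axis constraint (illustrative names, not the prototype’s code): after each frame, project the handle’s position back onto the single local axis it controls and clamp it to a valid range.

```csharp
using UnityEngine;

// Sketch: constrain a grabbable resize handle to a single local axis of
// the scaffold, so dragging it can only stretch or shrink one dimension.
public class AxisConstrainedHandleSketch : MonoBehaviour
{
    public Transform scaffoldOrigin;
    public Vector3 localAxis = Vector3.right; // the one axis this handle moves on
    public float minOffset = 0.1f, maxOffset = 1.0f;

    void LateUpdate()
    {
        Vector3 local = scaffoldOrigin.InverseTransformPoint(transform.position);
        float along = Mathf.Clamp(Vector3.Dot(local, localAxis), minOffset, maxOffset);
        transform.position = scaffoldOrigin.TransformPoint(localAxis * along);
        // The scaffold would then rebuild its grid dimensions from `along`,
        // releasing blocks at removed grid points and re-snapping blocks at
        // re-added ones, as described above.
    }
}
```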

Continued on Page 2: Widget Stages, States, and Shapes »




  • Johnatan Blogins

    Giant Steps for immersive interaction, and a DEMO! Thanks for sharing such insights…

  • I think this is quite interesting and covers a few aspects, but I would like to see these studies more “focused” on a subject goal and then broken down into smaller research studies.

    So in VR today we have limited feedback when we interact. We currently have two ways to achieve this:

    1. Simulated feedback such as sound cues, visual snap points, gravity
    2. Hardware feedback like Vibration, dynamic weight distribution in the controller, gyro forces etc. Of course with bare hand tracking that is limited even further.

    My first questions would be “How far do we need to go” and “Who needs it the most”.

    For business applications, immersion and feedback are more crucial to gaining mass adoption, so I would pick something in the real world and work on converting that into a VR study, breaking down each aspect into mini goals.

    I propose a more natural example, such as: you have a stack of household bricks and you have to build a wall. That’s it. This would gain more external interest and thus more feedback, in my opinion. It feels like I am jumping the gun with this suggestion, but let’s see.

    Once a wall is built, you can then use software to validate the wall’s integrity and accuracy and grade it for that “extra” that computing brings. This will help promote R&D, as the benefits are now visible on a broader spectrum. It also sets a sort of benchmark of what’s really important vs. what’s doable in a virtual environment: how far one should go towards realism vs. what computing can bring that offsets the shortcomings of VR/AR.

    So, the “build a wall” R&D would have many aspects to study (and No, nothing to do with Trump) such as:

    How are the bricks interacted with (one hand, two hands, dragged etc)
    How are they placed (what this article is about)
    What are the brick properties (weight, brittleness, roughness, volume etc)
    How does that affect the placement
    Is inertia important? Should it be calculated?
    What is the surface that the bricks are being placed on: is it flat, smooth, solid, etc.?
    Is a bond used to glue bricks together (e.g. springs like in this article)
    What are the properties of the bond
    What happens if you smack a brick into another brick
    What happens if you apply a huge force to the bricks at a specific point
    Is friction calculated optimally?
    How does it “feel” compared to the real thing (this is the biggie)
    Is audio feedback dynamic enough (e.g. friction grind sound)
    What are real world problems with wall building, could/should they be simulated too?

    I think this, as a focused study, would help refine “interaction” going beyond just “feedback”, and provide valuable and reusable data.

    Anyway, always love reading research on this.
    Cheers



  • JJ

    I don’t really see the excitement in this project. Most of these functions and capabilities are normal interactions that most developers have spent time on.

    This is just a simple mechanic of grids and Leap Motion physics-based hands. Maybe if this could efficiently scale to 10X the size or 100X the number of items and still run well, then it’d be something. As it is right now, this is something any dev could make in a few days, because these interactions are what we deal with every day developing in VR/AR.

    • I think these articles are studies on natural interaction with the goal of realism. Grids and snapping are just quick means to an end, and lots of developers use them because there is no demand for realism. And it is not natural.

      You need to go through these processes as described in the article, making tools and refining them so that something as simple as placing blocks starts to feel more natural and familiar: trial and error, what works, what doesn’t. That is all this is. Showing the ground work.

      The grids and gizmos created are just bespoke development tools to show their process. I would not expect these gizmos to be visible in a final build of something, it would just appear to work well when placing objects. That is my take on it anyway.

      Natural interaction is actually quite a complex area, take this challenge for example. You have three objects on the floor, a 10mm threaded bolt, a washer and a nut. How would you go about attaching the washer and the nut to the bolt in the most natural way possible?

      • jj

        Well looks like i need to start sending RVR some of my prototypes and testing.

        As for the nut, washer and bolt, they would act as independent rigid bodies until they were oriented and positioned on the end of the bolt, at which point they would be attached to the bolt, restricted from movement relative to the parent bolt, and only allowed to move along the one axis the bolt is on, which we will call the Z axis.

        Just like in real life, the washer will only be able to travel along that axis via physical overlaps and AddForce calls, and will disconnect if it goes past the end of the bolt.

        For the nut, once it’s near the end of the bolt it can be added on and parented to the bolt (or just via script, to keep physics separate). Now that the nut is on, it can be restricted from all movement aside from rotating around the Z axis. So you can have it rotate from physical force applied by the player’s overlapping hand, or, if the player’s hand is overlapping, have the nut follow the hand’s rotation around that Z axis. Obviously if it rotates one way it moves down the Z axis, and if it rotates the other way it’ll move up the Z axis.

        This is more pseudo code, but I’ve done many things like this and I’m not far off.
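
        In Unity terms, a rough, untested sketch of that rotation-to-translation coupling might look like this (names are illustrative):

```csharp
using UnityEngine;

// Sketch of the coupling described above: the nut, once on the bolt, may
// only rotate about the bolt's local Z axis, and its accumulated angle is
// converted into travel along that axis by the thread pitch.
public class ThreadedNutSketch : MonoBehaviour
{
    public Transform bolt;              // nut moves along bolt's local Z
    public float threadPitch = 0.0015f; // meters of travel per full turn
    public float startOffset = 0f;      // Z position where the nut engaged

    float totalAngle; // accumulated rotation in degrees

    // Called with the rotation the user's hand applied this frame
    // (hand-driven input is assumed to be computed elsewhere).
    public void ApplyTwist(float degrees)
    {
        totalAngle += degrees;
        float travel = startOffset + (totalAngle / 360f) * threadPitch;
        transform.position = bolt.TransformPoint(new Vector3(0f, 0f, travel));
        transform.rotation = bolt.rotation *
                             Quaternion.AngleAxis(totalAngle, Vector3.forward);
        // If travel goes past the bolt end, detach and restore free physics.
    }
}
```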

        • Nice. You pretty much described it how I see it too.

          Here are a few other considerations:

          When the nut is placed over the bolt end, it gently attracts (spring constraint?) to the bolt head; physics is then disabled on the nut as the spring holds it. The spring joint can break if the user flicks the nut away from the bolt, though, and gravity would need to be re-enabled.

          When the user rotates the nut, you then apply assisted orientation correction as the user turns it clockwise, so the nut orients with the bolt’s local Z axis as it goes down that initial first thread. The balance of getting this effect right, so the nut doesn’t appear to rotate drastically on its own within the fingers, would be an interesting test.

          The washer would need a tiny collision threshold for physics as it slides up and down the bolt while the user rotates the objects about. Computationally this would be quite expensive even on one axis, and would cause the most issues with breaking the natural feel to it all, I think. The user’s fingers could also be in the way at any point, so collision detection needs to be enabled at all times on the washer. Also, if the washer is in the middle of the bolt and the nut comes down to it, then you have the issue that the nut pushes the washer down. The nut has collision disabled, so things would need to be managed manually and not so automated.

          To be an actual effective bolt-and-nut (which acts like a clamp in the real world), the nut, while rotating down the thread, would also need to detect when a surface nears its underside, so a ray would need to be fired. Any existing free rotation (e.g. the user spins the nut with a flick of their finger) needs to stop when it hits that surface. Then, if the user continues to try to turn the nut with their fingers, a force needs to be applied to the other object to push it away until clamping forces are at their max for finger-tightened torque.

          Then the user picks up a spanner….. :D

          In this there are still many ways that “feelings” need to be simulated back to the user and this is where (in my opinion) the R&D is most helpful.

          e.g.

          * Nut at too much of an angle on the stud, do the assists break?
          * How to describe frictional forces back to the user
          * Can the stud/bolt be cross-threaded due to an off-angle nut
          * How to avoid dropping things during the process, and having to pick them up off the floor again. Basically keeping frustration low but also keeping it as realistic as possible.
          * How to give feedback when the nut can not be turned clockwise any more but could be turned counter-clockwise. Rotational damping would help here.

          This is what I said in an earlier post: how far do you need to go to make it better than what exists already, without ending up with an unmanageable set of complicated states?

          Fun stuff.

          • JJ

            Wow, that’s a fun read!!

            Thanks for elaborating, I really enjoyed that, and I am so intrigued that if I get the time I’ll see if this works and share the results with you!

  • Lucidfeuer

    Leap has great underlying software, minus adaptive physical hand interactions (fingers and hands bending on contact with virtual objects, independently of physical hand tracking). But they shouldn’t count on implementation: they have to realise their own hardware.

    Where are Leap Motion 2 and specific Oculus, Vive or Gear VR add-ons?

  • dk

    for some reason roadtovr.com is not updating for me… I don’t see the new articles… I have to go to Twitter to check for articles

    • FireAndTheVoid

      The same thing happened to me. Clearing the browser’s cache solved the issue.

      • dk

        hmm weird


  • Standing ovation for this article. Nothing else to say

  • Will you release the Unity project as well as the executable?

    (one small thing – try and avoid doing this: ” visit the Leap Motion blog later this week. ” – the chances are I won’t remember “later this week”. Offer a way to sign up for an email notification at the very least)