Exclusive: Scaffolding in VR – Interaction Design for Easy & Intuitive Building


Widget Stages, States, and Shapes

Now that we have a resizable 3D grid with the ability to show ghosted object positions before snapping them into place, it’s time to bundle this functionality into a widget. We wanted to support multiple Scaffolds, and to be able to let go of a Scaffold widget, have it animate to the nearest surface, auto-align, and auto-expand its handles on landing (phew!). To manage all of the state changes that come with this higher-level functionality, we created a Scaffold class to sit at the top of the hierarchy and control the other classes.

For this functionality, we have a simple state machine with four states:

  • Anchored: All of the Scaffold’s features are hidden except for its graspable icon.
  • Held: The Scaffold’s grid and handles are shown. We run logic for finding a suitable surface.
  • Landing: When the Scaffold is let go, it animates and aligns to the closest surface.
  • Deployed: This is the main, active state for the Scaffold grid and its handles.
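
As a minimal sketch of how such a state machine might be structured (the demo itself was built in Unity; this Python is purely illustrative, and all names below are ours, not the actual code):

```python
from enum import Enum, auto

class ScaffoldState(Enum):
    ANCHORED = auto()  # contracted to the graspable icon
    HELD = auto()      # grid and handles shown; raycasting for a surface
    LANDING = auto()   # animating and aligning to the chosen surface
    DEPLOYED = auto()  # fully active grid with resizable handles

# Legal transitions over the widget's lifecycle.
TRANSITIONS = {
    ScaffoldState.ANCHORED: {ScaffoldState.HELD},
    ScaffoldState.HELD:     {ScaffoldState.LANDING, ScaffoldState.ANCHORED},
    ScaffoldState.LANDING:  {ScaffoldState.DEPLOYED},
    ScaffoldState.DEPLOYED: {ScaffoldState.HELD, ScaffoldState.ANCHORED},
}

class Scaffold:
    def __init__(self):
        self.state = ScaffoldState.ANCHORED

    def transition(self, new_state: ScaffoldState) -> bool:
        """Apply new_state if the move is legal; report success."""
        if new_state in TRANSITIONS[self.state]:
            self.state = new_state
            return True
        return False
```

Note that Deployed can return to Held (grabbing the anchor handle to relocate) or to Anchored (retracting the axes), per the behaviors described below.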

The pre-deployment anchor stage is the fully contracted state of the grid when it might be attached to a floating hand menu slot or placed somewhere in the environment, ready to be picked up. In this state we reduced the widget to a 3D icon, just three colored spheres and a larger white anchor sphere.

Once you pick up the icon widget, we move into the holding/placing state. The icon becomes the full-featured widget, with its red, green, and blue axis handles retracted. While holding it, we raycast out from the widget looking for a suitable placement surface. Rotating the widget lets you aim the raycast.

When a hit is registered, we show a ghosted version of the expanded widget, aligned to the target surface. Letting go of the widget while pointed toward a viable surface animates the widget to its target position and then automatically expands the axes, generating a 3D scaffold.
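
Setting engine specifics aside, the alignment step boils down to building a pose from the raycast hit point and surface normal while preserving the user’s aimed heading. A framework-neutral sketch, with all names ours rather than the demo’s:

```python
import numpy as np

def align_to_surface(hit_point, surface_normal, widget_forward):
    """Pose the ghost widget on the surface: 'up' follows the surface
    normal, and the aimed heading is kept by projecting the widget's
    forward vector onto the surface plane."""
    up = np.asarray(surface_normal, dtype=float)
    up /= np.linalg.norm(up)
    forward = np.asarray(widget_forward, dtype=float)
    forward -= np.dot(forward, up) * up  # project onto the surface plane
    if np.linalg.norm(forward) < 1e-6:
        # Aimed straight at the surface: fall back to any tangent.
        fallback = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(up, fallback)) > 0.99:
            fallback = np.array([0.0, 0.0, 1.0])
        forward = np.cross(up, fallback)
    forward /= np.linalg.norm(forward)
    right = np.cross(up, forward)
    # Columns are the ghost's local right/up/forward axes in world space.
    return np.asarray(hit_point), np.column_stack((right, up, forward))
```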

The deployed widget needed a few features: the ability to resize each axis by pushing or grabbing the axis handles, a way to pick up the whole scaffold and place it somewhere else, and the ability to deactivate/reactivate the scaffold.

The shape of the widget itself went through a couple of iterations, drawing inspiration from measuring tapes and other handheld construction aids as well as software-based transform gizmos. We homed in on the important direct-interaction affordances: the axis handles (red, green, and blue), the anchor handle (white), and the implied directionality of the white housing.

The colored axis handles can be pushed around or grabbed and dragged:

The whole widget and scaffold can be picked up and relocated by grabbing the larger white anchor handle. This temporarily returns the widget to the holding/placing state and raycasts for new viable target positions.

And with a flick of a switch the axes can be retracted and the whole scaffold deactivated:

Now we finally get to the fun part: stacking things up and knocking them down! The grid unit size is configurable and was scaled to feel nice and manageable for hands—larger than Lego blocks, smaller than bricks. We modeled some simple shapes and created a little sloped environment to set up and knock down assemblies. Then we worked towards a balance of affordances and visual cues that would help a user quickly and accurately create an assembly without feeling overwhelmed.

When your hand approaches any block, its color lightens slightly, driven by proximity. When you pick one up it will glow brightly with an emissive highlight, making the ‘grabbed’ state very clear:
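
The proximity highlight is essentially a distance-driven blend toward a lighter tint, with a separate emissive boost on grab. A small illustrative sketch (the radius and blend amounts here are made up, not the tuned values from the demo):

```python
def block_tint(base_color, hand_distance, grabbed, highlight_radius=0.15):
    """Lighten a block's color as a hand nears; glow when grabbed.
    base_color is an (r, g, b) tuple in [0, 1]; distances in meters."""
    if grabbed:
        # Emissive-style pop: push every channel toward white.
        return tuple(min(1.0, c + 0.6) for c in base_color)
    # t runs from 0 at/beyond the radius to 1 when the hand touches.
    t = max(0.0, min(1.0, 1.0 - hand_distance / highlight_radius))
    return tuple(c + (1.0 - c) * 0.25 * t for c in base_color)
```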

As you bring a held block into the grid, a white ghosted version of it appears, showing the closest viable position and rotation. Releasing the block when the ghost is white will snap it into place. If the ghost intersects with an occupied space, the ghost turns red. Releasing the block when the ghost is red simply won’t snap the block into the grid, letting it drop from your hand.
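
Under the hood, that ghost logic reduces to quantizing the held block’s pose to the nearest grid cell and rotation step, then checking an occupancy set. A rough sketch, assuming a uniform cell size and 90° yaw steps (both are assumptions; the article only says the unit size is configurable):

```python
CELL = 0.1        # illustrative grid unit (m); the real size is configurable
YAW_STEP = 90.0   # assumed rotation snap; steps aren't specified above

def nearest_cell(position):
    """Quantize a world-space position to integer grid coordinates."""
    return tuple(round(p / CELL) for p in position)

def ghost_for(position, yaw_degrees, occupied):
    """Return (cell, snapped yaw, viable). viable=False is the red ghost:
    the nearest cell is already taken, so releasing won't snap."""
    cell = nearest_cell(position)
    yaw = (round(yaw_degrees / YAW_STEP) * YAW_STEP) % 360.0
    return cell, yaw, cell not in occupied
```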

Once a block is snapped into the grid, notches animate in on its corners to emphasize the feeling that it’s being held in place by the scaffold:

The last piece, and perhaps the most important, was tuning the feeling of physicality throughout the entire interaction. For reference, here’s what it looks like when we disable physics on a block once it’s snapped into the scaffold.

Interaction (or lack thereof) with the block immediately feels hollow and unsatisfying. Switching the rules of interactivity from colliding to non-colliding mid-interaction feels inconsistent. Perhaps if blocks became ghosted when placed in the grid, this change wouldn’t be as jarring… but what would happen if we added springs and maintained the block’s collidability?

Much better! Now it feels more like the grid is a structured force field that holds the blocks in position. However, since the blocks also still collide with each other, when the assembly is strongly disturbed the blocks can fight each other as their springs try to push them back into position.
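
Conceptually, each snapped block gets a damped spring pulling it back to its cell center every physics step. Here is a minimal sketch of that restoring force; the constants are illustrative, not tuned values from the demo:

```python
import numpy as np

STIFFNESS = 400.0  # illustrative spring constant
DAMPING = 40.0     # illustrative damping, so blocks settle rather than ring

def restoring_force(block_pos, block_vel, cell_center):
    """Damped Hooke's-law pull applied each physics step, dragging a
    disturbed block back toward its scaffold cell."""
    displacement = np.asarray(cell_center) - np.asarray(block_pos)
    return STIFFNESS * displacement - DAMPING * np.asarray(block_vel)
```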

Luckily, because we’re in VR, we can simply use collision layers to make blocks in the grid collide only with hands and not with each other.
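
Conceptually that’s just a pairwise collision matrix: snapped blocks collide with hands and free blocks, but not with each other. A tiny sketch with illustrative layer names (in Unity this lives in the physics Layer Collision Matrix settings rather than in code):

```python
# Illustrative layer names for this sketch.
HAND, FREE_BLOCK, GRID_BLOCK, ENVIRONMENT = range(4)

COLLIDES = {
    frozenset({HAND, GRID_BLOCK}),       # hands can still push snapped blocks
    frozenset({HAND, FREE_BLOCK}),
    frozenset({HAND, ENVIRONMENT}),
    frozenset({FREE_BLOCK}),             # free blocks collide with each other
    frozenset({FREE_BLOCK, GRID_BLOCK}),
    frozenset({FREE_BLOCK, ENVIRONMENT}),
}

def should_collide(layer_a, layer_b):
    """Grid blocks ignore each other, avoiding spring fights on disturbance."""
    return frozenset({layer_a, layer_b}) in COLLIDES
```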

This feels like the right balance of maintaining physicality throughout the interaction without sacrificing speed or accuracy due to collision chaos. Now it’s time to play with our blocks!

What do you think about this concept for stacking and assembly in VR? What applications would you like to see with this functionality? Let us know in the comments! If you want to try your hand at the demo, visit the Leap Motion blog later this week. We’ll share the demo along with the full deep dive into how it was built.

Photo credits: Leap Motion, CanStock, Medium, Google, Sunghoon Jung, Epic Games




  • Johnatan Blogins

    Giant Steps for immersive interaction, and a DEMO! Thanks for sharing such insights…

  • I think this is quite interesting and covers a few aspects, but I would like to see these studies more “focused” on a subject goal and then broken down into smaller research studies.

    So in VR today we have limited feedback when we interact. We currently have two ways to provide it:

    1. Simulated feedback such as sound cues, visual snap points, and gravity
    2. Hardware feedback like vibration, dynamic weight distribution in the controller, gyro forces, etc. Of course, with bare-hand tracking that is limited even further.

    My first questions would be “How far do we need to go?” and “Who needs it the most?”

    For business applications, immersion and feedback are more crucial to gaining mass adoption, so I would pick something in the real world and work on converting it into a VR study, breaking down each aspect into mini goals.

    I propose a more natural example: you have a stack of household bricks and you have to build a wall. That’s it. This would gain more external interest, and thus more feedback, in my opinion. It feels like I am jumping the gun with this suggestion, but let’s see.

    Once a wall is built, you can use software to validate the wall’s integrity and accuracy and grade it, for that “extra” that computing brings. This will help promote R&D, as the benefits become visible on a broader spectrum. It also sets a benchmark of what’s really important vs. what’s doable in a virtual environment: how far one should go towards realism vs. what computing can bring to offset the shortcomings of VR/AR.

    So, the “build a wall” R&D would have many aspects to study (and no, nothing to do with Trump), such as:

    How are the bricks interacted with (one hand, two hands, dragged, etc.)?
    How are they placed (what this article is about)?
    What are the brick properties (weight, brittleness, roughness, volume, etc.)?
    How do those properties affect the placement?
    Is inertia important? Should it be calculated?
    What is the surface the bricks are being placed on: is it flat, smooth, solid, etc.?
    Is a bond used to glue bricks together (e.g. springs, like in this article)?
    What are the properties of the bond?
    What happens if you smack a brick into another brick?
    What happens if you apply a huge force to the bricks at a specific point?
    Is friction calculated optimally?
    How does it “feel” compared to the real thing (this is the biggie)?
    Is audio feedback dynamic enough (e.g. a friction-grind sound)?
    What are the real-world problems with wall building, and could/should they be simulated too?

    I think this, as a focused study, would help refine “interaction” beyond just “feedback” and provide valuable, reusable data.

    Anyway, always love reading research on this.
    Cheers


  • JJ

    I don’t really see the excitement in this project. Most of these functions and capabilities are normal interactions that most developers have spent time on.

    This is just a simple mechanic of grids and Leap Motion physics-based hands. Maybe if this could efficiently scale to 10X the size or 100X the number of items and still run well, then it’d be something. As it is right now, this is something any dev could make in a few days, because these interactions are what we deal with every day developing in VR/AR.

    • I think these articles are studies on natural interaction with the goal of realism. Grids and snapping are just a quick means to an end, and lots of developers use them because there is no demand for realism. And they are not natural.

      You need to go through these processes as described in the article, making tools and refining them so that something as simple as placing blocks starts to feel more natural and familiar: trial and error, what works and what doesn’t. That is all this is: showing the groundwork.

      The grids and gizmos created are just bespoke development tools to show their process. I would not expect these gizmos to be visible in a final build of something; placing objects would simply appear to work well. That is my take on it, anyway.

      Natural interaction is actually quite a complex area. Take this challenge, for example: you have three objects on the floor, a 10mm threaded bolt, a washer, and a nut. How would you go about attaching the washer and the nut to the bolt in the most natural way possible?

      • jj

        Well, looks like I need to start sending RVR some of my prototypes and testing.

        As for the nut, washer, and bolt: they would act as independent rigid bodies until they were oriented and positioned on the end of the bolt, at which point they would be attached to the bolt, with movement restricted relative to the parent bolt and only allowed along the single axis the bolt lies on, which we will call the Z axis.

        Just like in real life, the washer will only be able to move along that axis via physical overlaps and applied forces (AddForce), and will disconnect if it goes past the end of the bolt.

        For the nut, once it’s near the end of the bolt it can be added on and childed to the bolt (or just linked via script to keep physics separate). Now that the nut is on, it can be restricted from all movement aside from rotation around the Z axis. So you can have it rotate from physical force applied by the player’s overlapping hand, or, if the player’s hand is overlapping, have the nut follow the hand’s rotation along that Z axis. Obviously, if it rotates one way it moves down the Z axis, and if it rotates the other way it’ll move up the Z axis.

        This is more pseudocode, but I’ve done many things like this and I’m not far off.
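
        A rough sketch of that rotation-to-travel coupling in Python rather than engine code (the thread pitch, names, and clamping here are illustrative):

        ```python
        THREAD_PITCH = 0.0015  # meters of travel per full turn (M10 coarse ~1.5 mm)

        def nut_travel(delta_rotation_deg, current_offset, bolt_length):
            """Map rotation about the bolt's Z axis to travel along it,
            clamped to the threaded length."""
            new_offset = current_offset + (delta_rotation_deg / 360.0) * THREAD_PITCH
            if new_offset < 0.0:
                # Spun off the end: detach and hand the nut back to physics.
                return None
            return min(new_offset, bolt_length)
        ```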

        • Nice. You pretty much described it the way I see it too.

          Here are a few other considerations:

          When the nut is placed over the bolt end, it gently attracts (spring constraint?) to the bolt head; physics is then disabled on the nut as the spring holds it. The spring joint can break if the user flicks the nut away from the bolt, though, and gravity would need to be re-enabled.

          When the user rotates the nut, you then apply assisted orientation correction as the user turns it clockwise, so the nut orients with the bolt’s local Z axis as it goes down that initial first thread. Getting the balance of this effect right, so the nut doesn’t appear to rotate drastically on its own within the fingers, would be an interesting test.
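
          A rough sketch of that assisted correction, nudging the nut’s axis toward the bolt’s local Z axis as thread engagement deepens (the blend limit is arbitrary, just for illustration):

          ```python
          import numpy as np

          def assist_orientation(nut_axis, bolt_axis, engagement, max_blend=0.2):
              """Blend the nut's axis toward the bolt's axis. 'engagement'
              grows from 0 to 1 down the first thread, so early correction
              stays too gentle to visibly fight the user's fingers."""
              t = max_blend * min(max(engagement, 0.0), 1.0)
              blended = ((1.0 - t) * np.asarray(nut_axis, dtype=float)
                         + t * np.asarray(bolt_axis, dtype=float))
              return blended / np.linalg.norm(blended)
          ```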

          The washer would need a tiny collision threshold for physics as it slides up and down the bolt while the user rotates the objects about. Computationally this would be quite expensive even on one axis, and would cause the most issues with breaking the natural feel of it all, I think. The user’s fingers could also be in the way at any point, so collision detection needs to be enabled at all times on the washer. Also, if the washer is in the middle of the bolt and the nut comes down to it, you have the issue that the nut pushes the washer down; the nut has collision disabled, so things would need to be managed manually rather than automatically.

          To be an actually effective bolt+nut (which acts like a clamp in the real world), the nut, while rotating down the thread, would also need to detect when a surface nears its underside, so a ray would need to be fired. Any existing free rotation (e.g. the user spins the nut with a flick of their finger) needs to stop when it hits that surface; then, if the user continues trying to turn the nut with their fingers, a force needs to be applied to the other object, pushing it away until clamping forces reach their max for finger-tightened torque.

          Then the user picks up a spanner….. :D

          In this there are still many ways that “feelings” need to be simulated back to the user, and this is where (in my opinion) the R&D is most helpful.

          e.g.

          * Nut at too much of an angle on the stud: do the assists break?
          * How to convey frictional forces back to the user?
          * Can the stud/bolt be cross-threaded due to an off-angle nut?
          * How to avoid dropping things during the process and having to pick them up off the floor again? Basically, keeping frustration low while keeping it as realistic as possible.
          * How to give feedback when the nut cannot be turned clockwise any more but could be turned counter-clockwise? Rotational damping would help here.

          This is what I said in an earlier post: how far do you need to go to make it better than what already exists, without ending up with an unmanageable set of complicated states?

          Fun stuff.

          • JJ

            Wow, that’s a fun read!!

            Thanks for elaborating, I really enjoyed that, and I am so intrigued that if I get the time I’ll see if this works and share the results with you!

  • Lucidfeuer

    Leap has great underlying software, minus adaptive physical hand interactions (fingers and hands bending on contact with a virtual object, independently of physical hand tracking). But they shouldn’t count on implementation: they have to realise their own hardware.

    Where are Leap Motion 2 and specific Oculus, Vive or Gear VR add-ons?

  • dk

    For some reason roadtovr.com is not updating for me… I don’t see the new articles… I have to go to Twitter to check for articles.

    • FireAndTheVoid

      The same thing happened to me. Clearing the browser’s cache solved the issue.

      • dk

        hmm weird


  • Standing ovation for this article. Nothing else to say

  • Will you release the Unity project as well as the executable?

    (One small thing – try to avoid doing this: “visit the Leap Motion blog later this week” – the chances are I won’t remember “later this week”. Offer a way to sign up for an email notification, at the very least.)