Augmented and Mixed Reality technologies are rapidly evolving, with consumer devices on the horizon. But how will people interact with their new digitally enhanced lives? Designer Ben Frankforter visualises several of his ideas for bringing about the arrival of what he calls the “iPhone of mixed reality”.


Guest Article by Ben Frankforter

Ben Frankforter is a designer passionate about connecting consumers and services via positive experiences. In the past 10 years, he’s designed and led small teams creating brands, furniture, interiors, and apps. He recently finished a position as Head of Product Design at BillGuard and is now researching user interfaces for mixed reality.


While virtual and mixed reality experiences are trending right now (we’ve seen a lot of cool examples in movies), I feel that there’s a lack of convergence on practical interaction patterns. We haven’t seen the iPhone of mixed reality yet, so I decided to explore the user experience and interface aesthetics of mixed reality and share my ideas with the community. My goal is to encourage other designers to think about, and publish, ideas on MR interfaces.

As technology becomes invisible at all such levels, from a perceptual and cognitive point of view, interaction becomes completely natural and spontaneous. It is a kind of magic.
– Alessandro Valli

Over our lifetimes, we acquire skills that empower us to interact with our environment. As Bret Victor explains, by manipulating tools that answer our needs, we can amplify our capabilities. We perform thousands of these manipulations every day, to the point that most of them feel natural. One of the attributes of good interaction design is that it allows for Natural User Interfaces: interfaces that are invisible to the user and remain invisible as we learn them. Some examples of these interfaces are speech recognition, direct manipulation, and gestures.


Apps as Objects

I started by looking into an interaction that felt very natural: browsing records.

I found this interaction interesting for the following reasons:

  • Direct manipulation of the catalog
  • Perception of progress while browsing
  • Full visual of selected item
  • Minimal footprint of scrolled items

I was thinking of a way to apply these principles to an interaction for browsing and launching apps in a mixed reality environment.

Apps as Cards

In this case, the app cards are arranged in a stack placed below the user’s point of view, within comfortable reach. The perspective allows a full view of the apps in the stack: just browse through the cards and pick up the app you want to launch.
Being virtual, the app cards can grow to various sizes, from a handheld virtual device up to a floating virtual display.

[Image: Manipulating virtual devices and displays]
[Image: Going from app to device to display]
[Image: Mockup of apps and virtual devices]
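
To make the card concept a little more concrete, here is a minimal TypeScript sketch of the stack → handheld → display progression described above. Everything in it (the AppCard shape, the sizes, the promote and pickUp helpers) is a hypothetical illustration, not the API of any real MR platform.

```typescript
// Hypothetical sketch of the "apps as cards" idea: each card lives in one of
// three presentation states and grows as the user pulls it out of the stack.

type CardState = "stacked" | "handheld" | "display";

interface AppCard {
  appId: string;
  state: CardState;
  width: number; // width in metres; grows as the card is promoted
}

// Assumed sizes: card in the stack, handheld virtual device, floating virtual display.
const CARD_WIDTH: Record<CardState, number> = {
  stacked: 0.09,
  handheld: 0.16,
  display: 0.80,
};

/** Promote a card one step: stack -> handheld -> display. */
function promote(card: AppCard): AppCard {
  const next: CardState =
    card.state === "stacked" ? "handheld" :
    card.state === "handheld" ? "display" : "display";
  return { ...card, state: next, width: CARD_WIDTH[next] };
}

/** Picking a card out of the stack launches the app as a handheld virtual device. */
function pickUp(stack: AppCard[], index: number): { stack: AppCard[]; active: AppCard } {
  const active = promote(stack[index]);
  return { stack: stack.filter((_, i) => i !== index), active };
}

// Example: browse the stack, pull out the photos app, then grow it into a display.
const stack: AppCard[] = [
  { appId: "photos", state: "stacked", width: CARD_WIDTH.stacked },
  { appId: "camera", state: "stacked", width: CARD_WIDTH.stacked },
];
const { active } = pickUp(stack, 0);   // handheld virtual device
const asDisplay = promote(active);     // floating virtual display
console.log(asDisplay.state, asDisplay.width);
```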

Switching Between Apps

It’s an interesting way to open and close apps, but what about switching between them?
Inspired by Chris Harrison’s research, I explored a system that uses simple thumb gestures to navigate between apps and views. We can easily perform these operations, even with our eyes closed, thanks to two factors: proprioception (awareness of the position and weight of our body parts) and tactile feedback (the contact and friction applied to the skin).

[Image: Thumb gestures occur against the fingers]

Thanks to the friction of the thumb sliding along the index finger, we perceive continuous tactile feedback.

Proprioception, combined with tactile and visual feedback, makes it easy to switch between views.
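
As a rough illustration of how a tracking system might turn this thumb-on-index slide into discrete view switching, here is a short TypeScript sketch. The normalized thumb position, the number of views, and the hysteresis threshold are all assumed values; the point is only that a small dead zone around each boundary keeps the selection stable against tracking jitter, which matters when the user relies on proprioception rather than looking at their hand.

```typescript
// Illustrative only: maps the thumb's position along the index finger to a view index.

const VIEW_COUNT = 4;     // assumed number of switchable views
const HYSTERESIS = 0.08;  // fraction of the finger the thumb must travel past a boundary

let currentView = 0;

/**
 * @param thumbOnIndex normalized thumb contact position along the index finger,
 *                     0 = fingertip, 1 = base of the finger (assumed output of
 *                     some hand-tracking pipeline).
 */
function updateView(thumbOnIndex: number): number {
  const slot = 1 / VIEW_COUNT;
  const target = Math.min(VIEW_COUNT - 1, Math.floor(thumbOnIndex / slot));
  if (target !== currentView) {
    // Only switch once the thumb has moved clearly past the boundary.
    const boundary = (target > currentView ? currentView + 1 : currentView) * slot;
    if (Math.abs(thumbOnIndex - boundary) > HYSTERESIS) {
      currentView = target;
    }
  }
  return currentView;
}

// Example: sliding the thumb from fingertip to base steps through views 0..3.
[0.1, 0.3, 0.45, 0.6, 0.9].forEach(p => console.log(p, "->", updateView(p)));
```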

Tools and Controls

While the left hand controls basic navigation, the right hand is free to execute other operations using virtual tools. The results of these operations are shown on a virtual display in front of the user.
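
One way to picture this two-handed split is as simple event routing: gestures from the left hand go to navigation, gestures from the right hand go to the active virtual tool, and the results land on the virtual display. The TypeScript sketch below is purely illustrative; the event shape, gesture names, and class are invented for the example.

```typescript
// Hypothetical event routing for the two-handed interaction model.

type Handedness = "left" | "right";

interface GestureEvent {
  hand: Handedness;
  gesture: "thumb-slide" | "grab" | "pinch" | "release";
  value?: number;                 // e.g. normalized slide position
}

interface VirtualDisplay {
  render(content: string): void;  // stand-in for whatever the display shows
}

class MixedRealityShell {
  constructor(private display: VirtualDisplay) {}

  handle(event: GestureEvent): void {
    if (event.hand === "left") {
      this.navigate(event);       // app/view switching (previous section)
    } else {
      this.useTool(event);        // e.g. scroll, crop, paint with a virtual tool
    }
  }

  private navigate(event: GestureEvent): void {
    if (event.gesture === "thumb-slide" && event.value !== undefined) {
      this.display.render(`switched to view ${Math.round(event.value * 3)}`);
    }
  }

  private useTool(event: GestureEvent): void {
    if (event.gesture === "pinch") {
      this.display.render("tool operation applied");
    }
  }
}

// Example: a console-backed display standing in for the real one.
const shell = new MixedRealityShell({ render: s => console.log(s) });
shell.handle({ hand: "left", gesture: "thumb-slide", value: 0.6 });
shell.handle({ hand: "right", gesture: "pinch" });
```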

[Image: A bird’s-eye view of a photo browsing environment]
[Image: Scroll through your photos]

But a planar surface is not always available; to interact with any environment, the user should also be able to perform other types of gestures. Mid-air gestures can help here, such as framing the right photo for the camera app.

[Image: Camera app]
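
As a toy example of such a mid-air gesture, the TypeScript sketch below derives a photo frame from a framing pose: two tracked fingertip positions, assumed to already be projected into the user’s view plane as 2D points, define opposite corners of the rectangle the camera app would capture. The landmark names and the minimum-size threshold are assumptions, not a real hand-tracking API.

```typescript
// Hypothetical "framing" gesture: opposite fingertips define the photo frame.

interface Point2D { x: number; y: number; }

interface FramingHands {
  leftIndexTip: Point2D;   // e.g. top-left corner
  rightThumbTip: Point2D;  // e.g. bottom-right corner
}

interface Frame {
  x: number; y: number; width: number; height: number; aspect: number;
}

function frameFromHands(hands: FramingHands): Frame | null {
  const { leftIndexTip: a, rightThumbTip: b } = hands;
  const width = Math.abs(b.x - a.x);
  const height = Math.abs(b.y - a.y);
  // Ignore degenerate frames (hands too close together to define a picture).
  if (width < 0.05 || height < 0.05) return null;
  return {
    x: Math.min(a.x, b.x),
    y: Math.min(a.y, b.y),
    width,
    height,
    aspect: width / height,
  };
}

// Example: hands roughly 30 cm apart horizontally, 20 cm vertically.
console.log(frameFromHands({
  leftIndexTip: { x: 0.0, y: 0.2 },
  rightThumbTip: { x: 0.3, y: 0.0 },
}));
```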

You can follow Ben Frankforter on Twitter and Facebook as he brainstorms solutions for the future of immersive technology user interfaces.




  • Sponge Bob

    this is bs article

    touch screen type of interaction is simply not applicable to VR

    VR requires fully 3D spatial interaction with user’s hands or small ergonomic hand-attached controllers (not Vive or Touch hand-held monstrosities)

    “Leap Motion” type of device is better but ultimately won’t do either
    – it doesn’t even work outside in direct sunlight – so no poolside VR experience

    • user

different people want different things. i want to use voice and eye controls. and when i use my hands, it’s because i put a virtual tape recorder on my wall.

      • Sponge Bob

        voice simply does not work for movement control

        eye control sucks even more – you will jump out of the window in five minutes if forced to control your game characters in VR using your eyes

        those are DOA

        • user

          is this article about games?

        • SHunter

          I used one solution at CES playing GTA5 and it worked pretty well.

      • OgreTactics

Eye control is not a priority as it mainly relates to rendering optimisation (Intel RealSense eye-tracking was never adopted because it strains your eyes and tires them faster than your hands tire in mid-air).

However, many people underestimate how much voice interaction is going to be part not just of VR but of computing interfaces overall, alongside hand interaction, in the future. Some words are faster as commands than moving a hand through an interface, and, for example, when creating with future AI software that can translate natural language or interaction into code, it’ll sometimes be faster to just summon an “exterior scene with bright sky, and here a ball, but rounder please”.

        • Sponge Bob

          yeah right

          computer, show me some balls, pleeaze

          depending on who programmed it you might see some interesting pictures

but hooli xyz moonshot division will take care of this little problem for you in a most politically correct way

          • OgreTactics

            Exactly, that’s the sole purpose of AI for the 20 years to come, not that bullshit sentient robot thing.

            Then that’s a Tayne I can get into.

        • user

but since google bought eyefluence recently there’s a good chance that we will see eye controls in consumer products.
there’s a need for discreet controls when people are in public. strange hand movements are not the solution and voice isn’t either.

          • Sponge Bob

            yeah almighty googl moogl that buys whatever it can’t steal

I hope donald gets them by the balls so they won’t have too much money to splurge around on xyz moonshots

          • user

            still no meds?

          • OgreTactics

Well, eye-tracking is a good solution if it can infer from focus whether you are looking at your environment (and therefore the HUD and apps should move out of the way), or at where your interface or apps should be (and therefore display them), for example. So yes, you’re right.

    • Константин Тулхов

      You are right.
But there is no shell/environment working with natural gestures yet.
Only early-stage prototypes exist. The TMIN project, for example: https://www.youtube.com/watch?v=dwIK_M2TJPE

      • Sponge Bob

        I didn’t see any gestures at all, much less “natural gestures”

        • Константин Тулхов

Gestures are planned but not yet implemented

    • sntxrrr

      this is bs comment

      “touch screen type of interaction” is exactly what you do when pressing a button in VR, even when it is simulated by you using a Vive wand as stand-in.

      “…fully 3D spatial interaction with user’s hands…” this is exactly what I see in these examples

“VR requires…” I think it’s still a bit early in the game to tell what the consensus will be. Also, this is not about VR but MR and AR, and if you look at the major current players (HoloLens, Meta 2, Magic Leap) they all use hand gestures.

      This is exploratory design of what future interfaces might be like and as such is not necessarily concerned with current technical limitations. It’s a bit like a concept car design that will never see the light of day.

      Exploratory design serves several functions:
– it stretches the design “muscles” and helps develop a creative approach to problem solving, a core part of what design is about.
      – it can inspire designs and designers who do create actual, practical interfaces
      – it can help inform the direction of future R&D. If this design is an ideal way to interact with information in a virtual environment then what technology do we need to develop to achieve this? Like sensors and algorithms that can detect your thumb touching and moving over your other fingers.

      So, no this is not a bs article.

      • Sponge Bob

        “…Like sensors and algorithms that can detect your thumb touching..”

        Remotely ??? Without attaching anything to your fingers or hand ?
        Dude, you don’t have a clue
        You think you can detect fingers actually touching each other at some distance using e.g. optical cameras ?
        And I mean REAL touch you can feel, not some “almost touch” (that’s what leap motion does btw)
        makes BIG difference in user experience (that’s why Leap Motion sucks for me personally)

        “almost touching” your fingers without experiencing actual feel of touching does not count

        • sntxrrr

          I think you underestimate what can be inferred using enough data/sampling rate.

          You simply conclude it can’t be done convincingly because of several years old Leap Motion hardware (which does indeed suck for this kind of stuff, I had one on my DK2) but do not consider their current tech (which to my knowledge hasn’t been released yet so the jury is out on that).

          You also miss the fact that I was referring to future R&D which could be 1 or more generations beyond that or might use other technology like Google’s Project Soli which uses radar.

          I think it’s you who doesn’t have a clue.

          • Sponge Bob

            They taught me math and physics very well at the university so I have a clue :)

            That’s why it’s so funny to read this blog

            You can’t detect finger touch from a distance using optical cameras unless your camera angle is perfect. Period.

            You can’s see fine features with 1 mm precision using radar technology with microwave wavelength of about 1cm (soli project). Period.

            Get a clue :)

          • sntxrrr

            “You can’t detect finger touch from a distance using optical cameras unless your camera angle is perfect.” So you concede it is possible, thank you :)
            Also, limitations to good detection might still offer enough room for use in practical applications. Until we build the tech and do actual research and testing we won’t know for sure.

            But again you totally miss my original point. Even if these current technologies are inadequate as you claim, if these UI designs are a desirable direction to pursue it can stimulate future R&D to overcome these very limitations you speak of. Smart tricks, good algorithms and/or other wavelengths might all offer solutions. Or it might be something completely different.

            These UI designs make no explicit technological claims so if it eases your mind you can even imagine people using gloves for all I care.

    • Hivemind9000

      I don’t agree, especially for AR where we may eventually virtualize all of the things we currently do on PCs and phones.

      This article is simply exploring the haptic/ergonomic/gestural design considerations for that transition (from real screens to virtualized ones).

      Check out this (extreme) vision of what an augmented/mixed reality might look like:

      https://vimeo.com/166807261

      • Sponge Bob

        this “pinch to select” gesture detection does not work if camera angle is not perfect – there will be a lot of mis-detections

        DOA

        • Hivemind9000

          Wait, what?

          As I said, these are explorations in design considerations. The technology for gesture recognition is still (and continuously) evolving. Your argument is only (slightly) valid if technology remains at a standstill.

          For mixed reality (which is where this is aimed at) I believe we’ll need, as a minimum, depth sensing technology (like Project Tango or MS Kinect or Project Soli) to scan the local environment in order to “mix in” the holographic rendering correctly (as seen in the Hololens). With such sensor tech, gesture detection (such as the pinch-to-select) is a lot more accurate, even if parts of the hand are occluded and the camera is not looking directly at the hand (though we generally look in the direction of our hands when manipulating something).

          DOA? Not.

    • OgreTactics

This is one of the best articles on VR interaction so far (alongside many other papers and concepts).

Touch-screen-style interactions are exactly the first type of VR interaction that should be integrated into headsets ASAP: this is what hand-tracking technology already lets us do flawlessly with the environment, and it would increase tenfold the potential applications, experimentation, interactions, and app opportunities of the VR market.

      • Sponge Bob

        dude,
        you realize that “touch screen” tech is NOT “hand tracking from outside” tech? Not even remotely close

It senses the exact (well, more or less, without a stylus) XY location where you touch a hard surface with your finger(s)

        Where is your hard surface in VR ?

        How do you “touch” in VR ?

        • OgreTactics

          Put yourself in the shoes of someone who never saw a touch-screen in 2006, and ask yourself the same questions.

        • SHunter

          it works pretty well in Tilt Brush.

  • LuckySlow

How about radar for gesture recognition? Google’s Project Soli does a fairly decent job even in its early stages.

    • Sponge Bob

      yeah, right, soli from hooli moonshot xyz division

      like we really need more microwave radiation around our heads

btw, at 64 GHz the EM wavelength is about 8 mm – that’s all the resolution you can get
      in other words, no fingertip detection can possibly work

      this is just intentional hype for lemmings like you

      • user

        you seem to be very angry for whatever reason. maybe take a break and find a solution first.

      • Dotcommer

You’re actually entirely wrong about all your assumptions (including microwave radiation), and I’m not sure where you’re getting your info. My source is that I’ve actually used Soli. You can get all 10 digits accurately tracked and depth up to about 4 or 5 inches, in a footprint close to 1cm^2. That’s pretty remarkable, especially as some early testers have done things like use Soli to detect different metals, liquids, containers, etc.

        Your opinions seem severely misguided. I hope you do more research before blasting at others because their thoughts don’t align with your heavily opinionated views.

        • Sponge Bob

“you can get all 10 digits accurately tracked and depth up to about 4 or 5 inches, in a footprint close to 1cm^2. That’s pretty remarkable, especially as some early testers have done things like use Soli to detect different metals, liquids, containers, etc.”

          dude, it’s you not me from different “hooli” aka google universe

          what 10 digits ?

          you mean fingers ?

          that’s not possible because of wavelength. period.

          metals, liquids ???

          wtf ???

  • SHunter

    My arms are already tired going through this.

  • Aaron Mahl

This could easily be a great way for advertisers to leverage new ways to engage consumers. I think VR advertising experiences will be huge next year, based on everything we’ve seen from FB, VirtualSKY and other early pioneers in this space.