Idea Engine allows you to create and share VR and mixed reality experiences. Building such a general-purpose tool requires complex user interfaces. In this Guest Article, developer Brett Jackson shares his approaches to UI interaction.

Guest Article by Brett Jackson

Brett Jackson has been developing VR projects since 2015 and is the director of the new UK-based company X82 Ltd. His previous releases include: Dimensional (PC VR), Breath Tech (PC VR), Jigsaw 360 (PC VR & mobile VR) and 120Hz (SideQuest).

It’s common to present a UI through 2D interactive panels in XR. It’s not an exciting prospect, but it’s familiar and efficient. However, even if we accept this 2D intrusion into our XR worlds, there are still new considerations and opportunities to break free from 2D paradigms.

I quickly grew weary of laser pointers that exaggerated my hand movement on distant panels, along with their inconsistent target vectors and intermittent pinch detection. My preference is to reach out and interact with the world. I want the panel right in front of me so I can position it comfortably and use it like a real-world device.

My latest project, Idea Engine, is developed using StereoKit, an open-source OpenXR library. It has a hands-first philosophy and provides out-of-the-box hand tracking as well as controller support. It allows for the efficient creation of dynamic windows with typical UI controls. It’s an excellent tool for quickly creating XR projects and has many other benefits.

Panels

So my starting point is a UI panel that we can grab at any point (no special handles or edges to find) with a nice aura displayed when we are in grabbing range. Now, let’s add more XR considerations.

In XR, it’s easy for a user to end up behind a UI panel. Rather than show a blank rear or reversed UI, I flip the UI to the side the user is looking at—simple. It sounds trivial, but it’s worth considering XR-specific scenarios. Another approach is to auto-rotate the panel to constantly face the player, but this removes control from the user. If they want the panel at a strange angle, let them; they may have a good reason.
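Idea Engine itself is built in C# on StereoKit, but the flip test boils down to a dot product between the panel’s normal and the vector to the user’s head. A minimal Python sketch of the idea (function names and the yaw-flip representation are my own assumptions, not the actual implementation):

```python
def facing_sign(panel_pos, panel_normal, head_pos):
    """Return +1.0 if the head is on the panel's front side, -1.0 if behind."""
    to_head = tuple(h - p for h, p in zip(head_pos, panel_pos))
    dot = sum(n * t for n, t in zip(panel_normal, to_head))
    return 1.0 if dot >= 0.0 else -1.0

def maybe_flip(yaw_degrees, panel_pos, panel_normal, head_pos):
    """Flip the panel 180 degrees about its up axis when the user is behind it,
    leaving its position (and the user's chosen angle) otherwise untouched."""
    if facing_sign(panel_pos, panel_normal, head_pos) < 0:
        return (yaw_degrees + 180.0) % 360.0
    return yaw_degrees
```

In practice you would also hysteresis-guard the flip so a user standing edge-on doesn’t see the panel oscillate.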

An individual panel should be kept to a small size (page size / monitor size) so the user can easily absorb the contents without having to turn their head, but XR provides us with an abundance of space. I like to look for opportunities to break out of the page boundary. My scrollable areas have a handle to grab and move the content. While grabbed, you see a greatly expanded view of the content area, and you can drag and drop while in this mode, providing a greater placement range.


I show tips to the side of panels, with a line to the UI component they describe. This reduces the amount of text on the panel. Users can cycle through tips and hide ones they are familiar with.

In another project, I prototyped a 3D Gantt chart that scrolled off the page horizontally and faded into the distance. The user’s main focus was still on the normal-sized central panel, but they were able to optionally take in the wider context.

While panels are convenient and familiar, we shouldn’t feel constrained by their bounds and it’s fun to look for ways to break out.

Menus

StereoKit introduced me to the radial hand menu, which I then expanded on. I like this idea because you operate it with one hand, so it’s convenient and accessible. I make the same menu system available on both the right and left hand, and use the same approach for popup menus on panels for consistency.
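The core of a radial menu is mapping the hand’s offset from the menu centre to an angular segment, with a dead zone so small jitters select nothing. A rough Python sketch of that mapping (parameter values are illustrative, not taken from StereoKit or Idea Engine):

```python
import math

def radial_item(items, hand_x, hand_y, dead_zone=0.02):
    """Map a 2D hand offset from the menu centre to a menu item.
    Inside the dead zone (in metres) nothing is selected."""
    r = math.hypot(hand_x, hand_y)
    if r < dead_zone:
        return None
    # Angle in [0, 2*pi), divided into equal segments, one per item.
    angle = math.atan2(hand_y, hand_x) % (2 * math.pi)
    segment = 2 * math.pi / len(items)
    return items[int(angle // segment)]
```

Centring each segment on its label (offsetting the angle by half a segment) usually feels better in practice; the equal-division version above keeps the sketch short.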

My volumetric menu takes things a step further and was driven purely by a desire to make use of that third dimension. I use it to select teleport destinations (with a pointer to each destination) and to select nearby nodes to edit. I also use it for keyboard input when browsing metaverse addresses. This is quite experimental. It has the advantage that all symbols are equidistant from the centre, and you see your input without having to look away (a common issue with virtual keyboards). The drawback is that it’s unfamiliar to users, so I expect some resistance to it. Notice in the video that the letters spiral away from front to back in alphabetical order, so their positions should soon become familiar.
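One way to get both properties described above—every symbol equidistant from the centre, and an alphabetical front-to-back spiral—is to place the symbols along a spiral on the surface of a sphere. A hypothetical Python sketch (radius and turn count are made-up values, not Idea Engine’s):

```python
import math
import string

def spiral_positions(symbols, radius=0.12, turns=3.0):
    """Place symbols on a sphere of fixed radius, spiralling from the
    front (-z) to the back (+z) in order. Every point sits exactly
    `radius` metres from the centre, so selection distance is uniform."""
    n = len(symbols)
    positions = {}
    for i, s in enumerate(symbols):
        t = i / (n - 1)                          # 0 at front, 1 at back
        z = radius * (2 * t - 1)                 # sweep front to back
        ring = math.sqrt(max(radius**2 - z*z, 0.0))  # ring radius at this depth
        a = t * turns * 2 * math.pi              # wind around as we go back
        positions[s] = (ring * math.cos(a), ring * math.sin(a), z)
    return positions
```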

You’ll soon be able to add menus like these to your own Idea Engine projects.

3D Widgets

A colour picker offered an ideal opportunity to experiment, having three values (hue, saturation, and value) that could be mapped to 3 dimensions. In my 3D colour picker, you can change all three values at once or individually set the hue, saturation, or value. I feel it’s more interesting to interact with than sliders on a 2D page.
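A natural mapping for such a picker is a cylinder around the grab point: angle around the up axis gives hue, radial distance gives saturation, and height gives value. The article doesn’t specify the exact mapping, so the following Python sketch is one plausible version with made-up ranges:

```python
import math

def hand_to_hsv(dx, dy, dz, max_radius=0.1, max_height=0.2):
    """Map a hand offset (metres) from the picker's centre to HSV in [0, 1].
    Angle around the up (y) axis -> hue, radial distance -> saturation,
    height -> value. Moving the hand changes all three at once."""
    hue = (math.atan2(dz, dx) % (2 * math.pi)) / (2 * math.pi)
    sat = min(math.hypot(dx, dz) / max_radius, 1.0)
    val = min(max((dy + max_height / 2) / max_height, 0.0), 1.0)
    return hue, sat, val
```

Constraining the hand offset to one axis at a time would recover the “set individually” behaviour the article mentions.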

Similarly with locomotion, I want to move in 3D, so I made a 3D joystick for smooth hand-tracked movement. Simply drag the sphere in the direction you want to travel and roll your wrist for snap or smooth rotation. It operates in walking or flying mode, and the rotation can be disabled if the user finds it too much to think about all in one control. I still support traditional controller-based movement, but this single-handed control duplicates the functionality of multiple joysticks and buttons, and is an interesting example of how 3D hand movement can meet requirements in new ways.
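The logic splits neatly into two parts: the dragged sphere’s offset from its rest point becomes a velocity (with the vertical component discarded in walking mode), and wrist roll past a threshold triggers a snap turn. A hedged Python sketch—speeds, thresholds and the dead zone are illustrative assumptions:

```python
import math

def joystick_velocity(grab_offset, speed=2.0, flying=False, dead_zone=0.01):
    """Convert the sphere's offset (metres) from its rest point into a
    velocity (m/s). Walking mode discards the vertical (y) component."""
    x, y, z = grab_offset
    if not flying:
        y = 0.0
    if math.sqrt(x*x + y*y + z*z) < dead_zone:
        return (0.0, 0.0, 0.0)
    return (x * speed, y * speed, z * speed)

def snap_turn(wrist_roll_deg, threshold=30.0, step=45.0):
    """Rolling the wrist past the threshold triggers one snap-turn step;
    smooth rotation would instead scale turn rate with roll angle."""
    if wrist_roll_deg > threshold:
        return step
    if wrist_roll_deg < -threshold:
        return -step
    return 0.0
```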

Hands

In all of my example videos, you’ll see I hide the user’s hand as soon as they start interacting with the UI. Many developers invest effort into carefully creating grab poses for different purposes, and that looks neat, but for me, a well-posed hand that doesn’t reflect my own hand position is more distracting than no hand at all. A hand can also be a visual obstruction once the interaction has started.


With the hand gone, I’m also free to dampen or exaggerate hand movement without any visual conflict. I dampen hand movement in the colour picker to lower sensitivity, and exaggerate it when scrolling through a lot of content.
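Both effects are the same operation with a different gain: scale the hand’s frame-to-frame delta below 1.0 to dampen, above 1.0 to exaggerate. A tiny sketch of the idea in Python (the content-based gain heuristic is my own assumption):

```python
def scaled_motion(hand_delta, gain):
    """Scale the hand's per-frame movement delta. gain < 1 dampens
    (e.g. colour picker precision); gain > 1 exaggerates (long scrolls)."""
    return tuple(d * gain for d in hand_delta)

def scroll_gain(content_height, viewport_height, base=1.0):
    """One possible heuristic: grow the scroll gain with content size so
    long documents stay reachable within a comfortable arm movement."""
    return base * max(content_height / viewport_height, 1.0)
```

This only works cleanly because the hand is hidden—if a visible hand model moved faster or slower than the real hand, the mismatch would be immediately distracting.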

Text

While Idea Engine supports Sketchfab to download 3D models, AI to generate images, and photo / audio importing, it’s hard to beat the ease and accessibility of text and the spoken word to convey complex narratives. With this in mind, I needed decent support for text so users could merge all available formats to tell their stories.

Text generally doesn’t look great in VR, so I fade it out as you walk away to remove unsightly artefacts, and close down text panels too. Users will be keen to explore the environment rather than read text, so I include the option of having a narrator automatically read out any block of text you encounter.
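The distance fade is typically a simple linear ramp on the text’s alpha between two distances. A sketch with assumed thresholds (the article doesn’t state its actual fade distances):

```python
def text_alpha(distance, fade_start=1.5, fade_end=3.0):
    """Fully opaque within fade_start metres of the viewer, fully
    transparent beyond fade_end, with a linear ramp in between."""
    if distance <= fade_start:
        return 1.0
    if distance >= fade_end:
        return 0.0
    return 1.0 - (distance - fade_start) / (fade_end - fade_start)
```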

Text input was a challenge without a great solution. I created mobile-style text input with cut-and-paste support and auto-pagination using a virtual keyboard. When I finished, I thought: that’s OK, but I wouldn’t want to type a long passage in XR. Then I added voice-to-text support. That helped, but I found I needed to do a lot of editing after dictation, and it was still slower than traditional means. Now I allow users to connect to their headset from a browser on any device they own and import text via a web page. I regularly use all three techniques, with the browser used for long text entry.

My lesson here was that you don’t always need to solve everything in XR. Sometimes it’s preferable to use more suitable devices and then import the results.

Try it Out

From educational mind maps to interactive stories and games, you can leverage CC assets and import your own photos, sounds and text to build your idea. Then bring it to life by adding states, events and high-level scripting, and share it on our X82 metaverse. It’s a feature-packed end-user tool for exploring the possibilities of XR.

The public alpha is now available and free to download on App Lab, so you can come and try out any of the features discussed and give me your feedback.


This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. More information.