Guest Article – Road to VR
https://www.roadtovr.com – Virtual Reality News – Mon, 13 Jan 2020

Delighting Users with Rich Interactions is Key to Making VR Engaging & Effective
https://www.roadtovr.com/delighting-users-rich-vr-interaction-design-enrique-tromp/ – Mon, 05 Aug 2019

VR software has introduced many new challenges to developers. Among these challenges, rich interactions are at the very core of all the new elements VR software designers need to consider when creating games or general applications.

Guest Article by Enrique Tromp 

Enrique is cofounder and CTO of VRMADA, a tech company providing enterprise VR solutions worldwide. With a strong passion for computer graphics and digital art, his career spans nearly 20 years in simulation, video games, and live interactive experiences. These days he loves taking on challenging VR projects and creating realistic interactions. You can follow his latest work on Twitter @entromp.

It’s no surprise that all the top-selling VR games have something in common: their interactions are extremely polished and fun to play with. Games like Job Simulator (2016) paved the way for modern VR interaction, while Beat Saber (2018) showed us that simple but engaging interactions can beat big studio productions in this medium as well.

Even though my background includes game development, I’ve spent most of my career developing training simulators. Currently I’m working in the enterprise VR market for training and, as you can imagine, realistic interactions here are key.

When developing a particular interaction, whether it is for games or for training, I always strive for three things: making it a really enjoyable experience, anticipating all kinds of interaction ‘misuses’, and polishing it until everything feels right, and then some.

Quality VR interactions are key to effective training because they are more engaging and authentic, helping form the new neural pathways that allow trainees to learn and perform new tasks. They work the same way a good professor’s entertaining lessons do: by keeping you engaged, they make new knowledge easier to absorb. Bad interactions cause frustration and decrease training efficacy.

The main challenge when developing VR interactions is the amount of freedom that the medium inherently presents to its users. In traditional videogames we use action buttons to interact with the world. The object and the context determine what will happen. In VR we use natural gestures instead; we pick things up like we would do in real-life, we manipulate complex tools, we can throw things around, all using our own hands.

This is a big paradigm shift in the way we design and implement interactions and presents a big challenge because users are inherently curious and they like freedom—especially in VR. Some will follow the ‘expected’ behaviors of the application, but others will be carried by their curiosity and inevitably test the limits of the world before them. Surprising the user by anticipating the creative ways that they interact with objects can make the experience more enjoyable and increase the sense of being in a cohesive world rather than a scripted experience.

In this article I will showcase some of the interactions I’ve developed and offer some thoughts on the design approach to each.

Lab Elements

I’ll start with a sci-fi lab that is part of a sandbox we developed internally to create and test new interaction mechanics.

The lamp on the left presents three handles that can be grabbed with a single hand or with both hands at the same time, from the inside or the outside. It is attached to the world through a mechanical IK-driven arm hanging from the ceiling, which constrains its range of movement to a sphere and gives context for its ability to be placed in any position.

Of all the changes we made, adding haptics and smoothing out the movement of the lamp had by far the biggest impact on user experience. The smoothing filter conveys a feeling of mechanical resistance (that it’s not possible to move the lamp around that easily), and adding haptics multiplies this feeling tenfold. It is also very gratifying to see how the mechanical arm follows the lamp around when you move it, and how it keeps swinging a little when you release it. These are little things that we add for no other reason than to keep the user happy and engaged.
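For readers curious what such a smoothing filter might look like, here is a minimal sketch (our illustrative values, not the project’s actual code): an exponential low-pass filter that eases the lamp toward the hand each frame, which is what creates the impression of mechanical resistance.

```python
import math

def smooth_position(current, target, dt, time_constant=0.15):
    """Exponentially ease `current` toward `target` (a simple low-pass filter).

    A larger time constant means more perceived mechanical resistance;
    0.15 s is an illustrative value, not one taken from the article.
    """
    alpha = 1.0 - math.exp(-dt / time_constant)
    return tuple(c + alpha * (t - c) for c, t in zip(current, target))

# Each frame the lamp eases toward the hand instead of snapping to it:
lamp = (0.0, 0.0, 0.0)
hand = (1.0, 0.0, 0.0)
for _ in range(60):              # one second at 60 fps
    lamp = smooth_position(lamp, hand, dt=1 / 60)
```

Pairing a filter like this with a short haptic pulse proportional to hand speed is one way to reinforce the sense of resistance described above.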

The laser has a different IK setup. One of the things that we tried to experiment with is the rubber joint that joins the head with the arm and gives it two degrees of rotational freedom. We got the inspiration from the avatar wrist in the game Lone Echo and thought it was a cool way to model a ball joint.

The laser beam works by casting a ray from the tip and creating a polygon strip to simulate the burn once the surface has been exposed long enough. Smoke particles help add a bit more detail as well.
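A rough sketch of how such an exposure-driven burn could work (a hypothetical reconstruction; the threshold and the grid quantization of hit points are assumed, not from the article):

```python
from collections import defaultdict

# Illustrative values; the real threshold and surface sampling are unknown.
BURN_THRESHOLD = 0.5            # seconds of exposure before a mark appears

exposure = defaultdict(float)   # accumulated laser time per surface cell
burn_points = []                # stand-in for vertices of the burn polygon strip

def laser_tick(hit_cell, dt):
    """Called each frame with the (quantized) raycast hit location."""
    exposure[hit_cell] += dt
    if exposure[hit_cell] >= BURN_THRESHOLD and hit_cell not in burn_points:
        burn_points.append(hit_cell)

# Dwell on one cell for ~0.66 s, then briefly pass over a second cell:
for _ in range(40):
    laser_tick((0, 0), dt=1 / 60)   # long exposure -> burn mark
laser_tick((1, 0), dt=1 / 60)       # brief pass -> no mark
```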

Lab Battery

This clip showcases a battery swap in the same sci-fi environment. The main purpose is to study objects with different constraints that need to be manipulated correctly in order to complete the task.

The first step is opening the door, an object that can only be rotated around its hinge axis. Being anchored to the world has a very important consequence for the grab action: instead of the object snapping to the hand, the hand snaps to the handle.

The second step is slightly more complex because it involves using both hands to unlock the mechanism that keeps the battery in place. If you try to pull the battery out without unlocking it first, the hand maintains its grip, or snaps back into position if pulled too far away.

Once the lock is open the battery can be extracted, which is accomplished by constraining its position so it can slide only along the tube until it comes free.
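A minimal sketch of this kind of sliding constraint (our own formulation, with made-up dimensions): the grabbed position is projected onto the tube’s axis until it passes the end, at which point the object is released to the hand.

```python
def constrain_to_tube(hand_pos, tube_origin, tube_axis, tube_length):
    """Clamp a grabbed object so it can only slide along a tube's axis.

    Returns (constrained_pos, is_free). `tube_axis` is assumed to be a
    unit vector. While the hand's projection lies inside the tube, the
    object is pinned to the axis; past the end, it comes free.
    """
    rel = [h - o for h, o in zip(hand_pos, tube_origin)]
    t = sum(r * a for r, a in zip(rel, tube_axis))
    if t < tube_length:
        t = max(0.0, t)
        pos = tuple(o + t * a for o, a in zip(tube_origin, tube_axis))
        return pos, False
    return tuple(hand_pos), True

# Hand wobbles off-axis mid-tube: the battery stays on the axis.
inside, free = constrain_to_tube((0.05, 0.0, 0.1), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.3)
# Hand pulls past the end of a 0.3 m tube: the battery is released.
outside, released = constrain_to_tube((0.05, 0.0, 0.4), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.3)
```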

Drill

The first challenge when recreating a drill in VR is determining the necessary conditions to start the drilling process. After all, unlike in real life, there is no physical resistance preventing someone from moving their hand through a wall in VR.

In this example, the conditions that are required are:

  • The drill bit must be able to penetrate the material it’s pressed against.
  • The drill bit must be oriented at a correct angle against the surface.
  • The drill must be slowly pushed against the surface while the user presses the trigger.
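These conditions could be checked with something like the following sketch (the hardness comparison, angle limit, and push-speed threshold are all illustrative assumptions, not values from our implementation):

```python
import math

MAX_ANGLE_DEG = 20.0     # bit must be near-perpendicular to the surface (assumed)
MAX_PUSH_SPEED = 0.05    # m/s: push slowly, don't stab through the wall (assumed)

def can_start_drilling(material_hardness, bit_hardness,
                       bit_dir, surface_normal,
                       push_speed, trigger_pressed):
    # 1. The bit must be able to penetrate the material.
    if bit_hardness < material_hardness:
        return False
    # 2. The bit must be oriented at a correct angle against the surface.
    dot = -sum(b * n for b, n in zip(bit_dir, surface_normal))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle > MAX_ANGLE_DEG:
        return False
    # 3. The drill must be pushed slowly while the trigger is held.
    return trigger_pressed and 0.0 < push_speed <= MAX_PUSH_SPEED

# Bit pointing straight into the surface, gentle push, trigger held:
ok = can_start_drilling(0.4, 0.9, (0, 0, -1), (0, 0, 1), 0.02, True)
```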

If the user tries to pull the tool in any direction other than straight out during the drilling process, the drill stays in place and the hand snaps back if it moves too far away.

Subtle haptics play a big role in the interaction, making things such as the drill’s rotation or the physical resistance of the material more believable.


The Surprising Brilliance of ‘Vacation Simulator’s’ VR Paintbrush Tech
https://www.roadtovr.com/vacation-simulator-virtual-reality-paintbrush-design/ – Tue, 16 Apr 2019

Having released the ever-popular Job Simulator as a launch title for the HTC Vive back in 2016, Owlchemy Labs is one of the most veteran VR game studios around today. Over the years the studio has built a strong foundation of VR interaction design which is seen throughout their newest title, Vacation Simulator. Interactions which might seem simple and usable to the player are often much more complex than they appear. Case in point: some surprisingly brilliant paintbrush tech that just feels right in VR. Developers from Owlchemy are here to explain how they built it.

Guest Article by Peter Galbraith & Zi Ye

Peter (Implementer of Unityisms) and Zi (Developer, Physics / Math Genius) are both developer/designer dual-wielders at Owlchemy Labs. Their work spans design ideation and prototyping to iteration, programming implementation, and testing.

Both are important contributors to Owlchemy Labs’ legacy of absurd and highly polished VR games, including the award-winning Job Simulator, the Emmy-nominated Rick and Morty: Virtual Rick-ality, and the recently released Vacation Simulator, which is also coming to PSVR and Oculus Quest later this year.

Hey everyone!

Pete and Zi here. We’re both developers at Owlchemy Labs, and we’re excited to talk to you about one of the most highly-iterated features in the entire Vacation Simulator: Painting!

Painting is one of our most colorful activities in the Forest level, a creative space tucked away in a Treehouse where you can unleash your inner artiste. Whether you wield our super-gooey paintbrush to make a masterpiece from scratch or use a photo from the in-game Camera as a starting point, aesthetic greatness is always within reach. However, like all great features, Painting went through several prototypes and iterations before we arrived at our picture-perfect result.

To kick things off, Zi will explain the tech behind the most important part of an artist’s toolkit: the paintbrush!

Simulating the Feel of [PAINTBRUSH]

One of the most challenging aspects of making Painting feel great was the paintbrush tip. The brush is your single, powerful tool for expressing your creative vision in Painting, so we knew we had to address the expectations that come with having a paintbrush in VR, all the way down to the most technical nitty-gritty.

Like all the best features in our games, the squishy brush tip was created using fake, made-up physics! We start with a mathematical model consisting of a straight line that we shoot at the canvas, and then we figure out where the tip would bend along the surface. That bent line is used to manipulate the shape of the brush, like so:
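A simplified 2D sketch of that model (our own reconstruction, not Owlchemy’s code): the brush is a straight segment shot from the handle tip; if it would pierce the canvas plane, the remainder bends flat along the surface, giving the squished-tip shape.

```python
import math

def bend_brush(base, direction, bristle_len):
    """Return (bend_point, tip_point) for a brush against a canvas at y = 0.

    `base` is the handle tip, `direction` a unit vector along the bristles.
    If the straight segment crosses the plane, the part past the crossing
    point lies flat on the surface.
    """
    bx, by = base
    dx, dy = direction
    if dy >= 0 or by <= 0:
        # Not pressing into the canvas: the brush stays straight.
        end = (bx + dx * bristle_len, by + dy * bristle_len)
        return end, end
    t_hit = by / -dy                      # distance along the brush to the plane
    if t_hit >= bristle_len:
        end = (bx + dx * bristle_len, by + dy * bristle_len)
        return end, end
    bend = (bx + dx * t_hit, 0.0)         # where the hair meets the canvas
    remainder = bristle_len - t_hit       # this part lies flat on the plane
    tip = (bend[0] + math.copysign(remainder, dx if dx != 0 else 1.0), 0.0)
    return bend, tip

# Brush pressed in at an angle: the last part of the hair folds along the canvas.
bend, tip = bend_brush((0.0, 0.05), (0.6, -0.8), 0.1)
```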

This helped us solve one of the biggest issues we had with painting in VR: lack of feedback. With current VR hardware, we can’t accurately simulate the feedback of a flexible brush pressing against canvas. Without this feedback, we found it was difficult for players to tell if the brush was making contact, causing them to put the brush too far into the canvas and creating jittery or hiccupy motions as the brush collided with the painting easel. This behavior led to a lot of ‘squiggly lines’ and often caused the brush to break out of players’ hands entirely from excessive collisions—not exactly something that made our players feel like art pros!

By providing visual feedback in the form of the squishy tip, fewer players pushed the brush as far into the canvas, making for fewer breakouts and prettier lines. We also paired this with an auto-respawn if the brush did break out of a player’s hand, making it both more likely the player kept hold of their brush and easier to grab again if they didn’t.


But again: it’s all fake! The brush hair doesn’t actually collide in the physics system except at the very base. The squishing action creates the illusion of resistance and tricks the player into thinking the brush is being pushed back.

We also use the same mathematical model to determine the size of the brush’s contact area and control the size of the paint dab applied to the canvas. This specific feature allowed us to ship with a single brush size, just with variable stroke width: touch lightly for a thin, precise line or swab on the colors with a firmer pass.
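As an illustration of how penetration depth might map to dab size (all values here are hypothetical, purely to show the shape of the mapping):

```python
def dab_radius(penetration_depth, max_depth=0.03, min_r=0.002, max_r=0.02):
    """Map how far the brush is pressed past the canvas to a dab radius (m)."""
    t = max(0.0, min(1.0, penetration_depth / max_depth))
    return min_r + t * (max_r - min_r)

light = dab_radius(0.005)   # light touch -> thin, precise line
firm = dab_radius(0.03)     # firm pass  -> broad swab of color
```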

The brush also factors in ‘hair stiffness’, which determines how the brush hair changes direction when it’s dragged on the canvas or when the brush handle is rotated. The stiffness is also used to add a bit of jiggle to the brush hair when it’s not in contact with the canvas. These little details let us get a super ‘gooey’ feel for the brush, and from our very first playtest it was clear that folks really responded to that perceived sensation.

The squishy brush solved many technical and UX issues, and it also played into that initial desire to empower players to create beautiful art. We had several devs who were convinced they “weren’t artists,” yet when put in front of the new painting easel they made things like this:

The calligraphic size changes of the brush tip combined with our artist’s hand-picked palette let everyone create images that looked painterly and fun. Voila!

Now, as the first developer to put down code on Painting, Pete’s going to share our design process behind the feature (including its many iterations!):

Design Challenges: Back to the Drawing Board!

Every feature in Vacation Simulator started from the same series of questions:

  • What do people expect when they vacation at a given destination?
  • How would Bots hilariously misinterpret this activity?

We knew sketching, painting, and artistry were fun aspects of going out into nature, and when we thought about how Bots would misinterpret painting, manipulating photos instantly came to mind. We loved the idea of editing the photos you took around the island and customizing them to express yourself as a player. That simple idea was the seed for a feature that took over a year to design and bring to life.

Job Simulator (2016)

Part of our early design ideas were influenced by what we’d previously made for Job Simulator (e.g. painting the sign in the Auto Mechanic; the painting software on the Office computer). Both were simple and fun, but not very deep. Painting in Vacation Simulator had to feel different—and bigger—to match the rest of the game. We wanted to give players more than just increased pixel density—you should be able to feel like you’re actually painting. Like real life, only better.


Many works of art require revisions to turn ‘meh’ into a masterpiece, and Painting was no exception. It was one of the earliest features we developed for the Forest and one we continued working on right up until launch! Here are a few examples of Painting experiments and iterations on the feature:

‘Photoshopping’ Only

At the very beginning, Painting only involved photos. Players would place a photo on the easel and then use the paintbrush to ‘paint’ the photo onto the canvas. Players could mix and match, combining Bots or scenery from multiple photos, and even (if they got clever) take closeup photos of specific colors to paint with. The more people played, the more we realized that actual painting—complete with swatches of colored paints—drastically increased players’ ability to be creative.

Filters

Filters are a common element in photo manipulation software, so of course we thought the Bots would include them in their misinterpretation. We experimented with buttons to apply filters: pixelation, invert colors, black and white, and sepia. Pressing any of these filter buttons would stack effects, and the next stroke of the brush would apply the filtered image to the canvas. While interesting, filters often led to frustration as players struggled to anticipate what would actually appear on the canvas as they painted. For example, with inversion, if you dipped your brush in the orange swatch, your brush would paint blue! We wound up removing filters from the easel entirely, instead integrating them into your camera as lenses. All the cool effects with none of the confusion!

Paint ‘Bucket’

The biggest pieces of feedback from early playtests were requests for the ability to start with background colors other than white, and for an option to erase paintings entirely. By adding a paint bucket slider that wiped a solid color across the canvas, along with a white paint option, we resolved both of those needs with a single feature. As a surprise benefit, it also enabled the creation of more… Bot-like paintings.

Blending & Transparency

Hard edges worked for the paint bucket, but those same hard edges on the brush just didn’t feel realistic—or look that great. Jagged pixels definitely aren’t part of most real-life paintings. For a more realistic effect, we created a transparency gradient representing the amount of paint to apply to the canvas. Used in conjunction with the size and position of the contact area, it lets us determine how much the paint color blends with whatever is beneath it. The longer the brush is in contact with the canvas, the more color is applied, allowing us to simulate how real brushes transfer pigment to fabric.
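A toy version of this time-based blending (our own formulation; the per-frame `flow` rate is an assumed value): each frame the brush stays in contact, a fraction of the brush color is mixed into the canvas pixel, so longer contact deposits more pigment.

```python
def blend(canvas_rgb, brush_rgb, flow):
    """Mix a fraction `flow` of the brush color into a canvas pixel."""
    return tuple(c + flow * (b - c) for c, b in zip(canvas_rgb, brush_rgb))

pixel = (1.0, 1.0, 1.0)          # white canvas
orange = (1.0, 0.5, 0.0)
for _ in range(30):              # half a second of contact at 60 fps
    pixel = blend(pixel, orange, flow=0.1)
```

After half a second the pixel has shifted most of the way toward orange; a briefer touch would leave a fainter, more transparent mark.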

Multiple Brushes

We went through several iterations of brushes and brush sizes. Before the paint bucket slider, players wanted a giant brush to make things easier to fill. Then, after we implemented the slider, players wanted a smaller paintbrush for detail work. We had a smaller paintbrush for a long time, but ultimately cut it due to one of the most important gamefeel features of Painting: the squishy tip!

– – — – –

We Hope You [EMOTION] Painting!

Sometimes there’s just no way around the amount of iteration it takes to make something look, feel, and play the way you want. Painting was a momentous exercise in VR design affordances, player expectations, and using constraints to unlock creativity.

Thanks for checking out this peek behind the scenes of Painting! We hope you’re inspired to channel your inner Leonardo BotVinci in Vacation Simulator.


Read more guest articles contributed by experts and insiders in AR and VR.

A Look Inside Consumer Perceptions of Oculus Go
https://www.roadtovr.com/look-inside-consumers-saying-oculus-go/ – Tue, 17 Jul 2018

More than just the latest in a long line of niche gadgets, Oculus Go represents the linchpin of Facebook’s lofty goal of putting one billion people into virtual reality. Built with the new or casual user in mind, the device’s non-intimidating tether-free design and Snapdragon 821 mobile processor (which will be two years old in August) manage to keep costs low and user-friendliness high.

Guest Article by J.C. Kuang

J.C. serves as an Analyst at Greenlight Insights in the Devices Group. He has more than three years of experience in market research and analysis and has delivered custom consultancy and presentations for global companies covering ideation, roadmap validation, market sizing, disruptive strategy, and competitor analysis, among other areas. He is based in Boston, Massachusetts.

An improved Fresnel lens design addresses a major complaint of Rift owners regarding lens flare, while built-in stereo speakers eliminate the need to fiddle with headphones after fitting the headset—an inconvenience of the Go’s precursor, Gear VR. The headset is also offered with two tiers of non-expandable flash storage (32 and 64 GB).


The Go is being lauded by journalists and hardware critics as a major milestone in VR hardware, set to drive adoption to new highs. In contrast to higher-end standalone VR headsets, such as HTC’s Vive Focus and Lenovo’s Mirage Solo, the Go is largely unopposed at its low price point of $200, and has drawn interest from mainstream media outlets as a result. While it lacks an important feature offered by its competitors, 6DoF tracking, the Go represents an otherwise tempting alternative to its pricier competitors, which have not been received as favorably.

Consumer Perceptions

Initial consumer impressions of the Go’s overall user experience are positive, according to consumer reviews at online retailers and first impressions from early adopter forums.

A lack of native media apps (such as YouTube) remains an ongoing concern for owners of multiple headsets, who are most aware of the fragmentation currently plaguing VR content pipelines. Meanwhile, high build quality, an intuitive and hassle-free interface, and support for multimedia apps (from major players such as Netflix and Hulu, to more focused platforms such as Plex and Bigscreen) have been consistently popular among buyers. In fact, usage as a portable multimedia device was the most cited use case amongst online user reviews.

Criticisms have been levelled against the headset’s short battery life and a lack of expandable storage. These are noticeable areas where traditional tethered VR excels over the Go (having access to virtually limitless power and storage via a connected gaming PC). Oculus’s own offering, the Rift, has played a major role in setting these expectations for VR usage patterns in the first place. Presumably, these criticisms are of least concern to the company since their other hardware addresses these issues.


The Go relies on a wireless connection to a smartphone for high-level content management, as well as privacy and login functions to ensure a fast and functional connection to the Oculus platform. This connection naturally has deep integration with Facebook, which has sparked occasional criticism regarding privacy. While such criticism is less common among consumers, the persistent uneasiness surrounding privacy at Oculus’ parent company does little to assuage it.

Greenlight Insights’ annual VR/AR consumer survey revealed some insights surrounding the release of the Go.

  • Approximately 1 month before release, Oculus’ new headset had low aided brand awareness* among all respondents (36%) when compared to the Rift headset during a similar period (42%).
  • Non-owners of VR headsets reached a low of 28% aided brand awareness*. This data point presents a particularly glaring weakness in the company’s marketing for the Go, which is aimed squarely at onboarding new users to VR.

* “Aided brand awareness” refers to consumer knowledge of a specific brand or product after being prompted. It might be measured with a question such as “How familiar are you with Oculus?” as opposed to “Can you list three VR headset brands?”

Now that the Go has been available for two and a half months, Greenlight Insights has gathered data from major US electronics retailers showing how customers have received the Go and other standalone headsets, alongside high-end tethered headsets for comparison.

The Future of Go & Standalones

Up until Oculus’ 2017 developer conference, hardware initiatives from HTC, Oculus, and other leading headset makers prioritized highly detailed and demanding AAA experiences which capitalized on the novelty of VR. The Go, meanwhile, represents an intelligent pivot away from traditional VR design philosophy, which often sacrifices accessibility for immersion. Oculus has set a new goal that focuses on adoption and onboarding as opposed to hardware brinkmanship. This trend is poised to continue as HTC and Lenovo’s standalone offerings populate higher price points on the market.

Sales of all three new standalone headsets through Q3-Q4 ‘18 will be crucial in gauging adoption rates over the next 5 years. We expect that global standalone revenues will grow from over $350 million in 2018 to $3.2 billion in 2022. This growth will be due in part to a previously untapped market that neither smartphone-based nor tethered headsets can serve: new users with no additional computing hardware. This factor will only become more compelling as content, hardware, and usability improve over time. The overall global VR industry will benefit from this growth as well; we anticipate it will grow from just under $9 billion in 2018 to $48 billion in 2022.
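Those forecasts imply steep compound annual growth rates, which can be sanity-checked with a quick calculation (figures taken from the forecast above, expressed in billions of dollars):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Standalone revenue: $0.35B (2018) -> $3.2B (2022), four years of growth.
standalone = cagr(0.35, 3.2, 4)   # roughly 74% per year
# Overall VR industry: ~$9B (2018) -> $48B (2022).
industry = cagr(9.0, 48.0, 4)     # roughly 52% per year
```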

The impact of standalone headsets on the global VR market is becoming more and more apparent with the release of competing hardware from formidable foreign OEMs, such as the Lenovo Mirage Solo and HTC Vive Focus, each bringing with it its own development and distribution platforms. As new displays, sensors, and processors (such as the upcoming Snapdragon XR1, which is specially designed for low-cost standalone headsets) begin to show up in subsequent iterations of standalone headsets, hardware markets will begin to expand to accommodate a much larger sector of this new, accessible form of virtual reality. Further insights into current and future VR markets can be found in the semi-annual Virtual Reality Industry Report, published by Greenlight Insights in collaboration with Road to VR. The report contains forecasts and in-depth analysis on VR hardware and solutions, including standalone headsets such as Oculus Go.

Exclusive: Cloudhead Games Goes In-depth with Knuckles EV2 & Predecessors
https://www.roadtovr.com/cloudhead-games-knuckles-ev2-predecessors-denny-unger/ – Tue, 26 Jun 2018

It’s hard to believe that it’s been almost two years since the SteamVR Knuckles controllers were first revealed using a demo of our game, Call of the Starseed (2016), at Steam Dev Days 2016. Now, with the reveal of Knuckles EV2 this summer, we’ve got a whole bunch more to talk about.

Guest Article by Denny Unger

Denny is the CEO and Creative Director of Cloudhead Games. As a VR pioneer, he has spearheaded two critically acclaimed and award-winning VR experiences with The Gallery – EP1: Call of the Starseed and EP2: Heart of the Emberstone. Working closely with VR hardware leaders in the space, including Valve, HTC, and Oculus, Cloudhead Games continues to innovate, inform, and entertain.

If you’re just getting into VR, or you haven’t heard the term ‘Knuckles’ outside of VRChat (2017), the SteamVR Knuckles is the first modern non-glove VR controller to support five-finger tracking. While Oculus Touch is known for its capacitive sensors (capsense) on the thumb and index fingers, Knuckles controllers track all five fingers using capsense on the triggers, face buttons, and the bases of the controllers. These new inputs create a more natural representation of a user’s hands in VR, and they open the door to new gameplay possibilities.
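As a hypothetical illustration (not Valve’s actual API, and the zone layout is our assumption), per-finger curl for a virtual hand rig could be derived from capsense proximity values plus grip pressure roughly like this:

```python
# Assumed mapping of fingers to capacitive zones on a Knuckles-style controller.
CAP_ZONES = {
    "thumb": "face buttons / touchstrip",
    "index": "trigger",
    "middle": "grip base",
    "ring": "grip base",
    "pinky": "grip base",
}

def finger_curls(cap_values, grip_pressure):
    """Blend capsense proximity (0..1 per finger) with grip pressure into curls."""
    curls = {}
    for finger in CAP_ZONES:
        base = cap_values.get(finger, 0.0)
        if finger in ("middle", "ring", "pinky"):
            base = max(base, grip_pressure)   # squeezing closes the grip fingers
        curls[finger] = max(0.0, min(1.0, base))
    return curls

# Index on the trigger, thumb hovering, a firm squeeze of the grip:
curls = finger_curls({"index": 1.0, "thumb": 0.3}, grip_pressure=0.8)
```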

While finger tracking has many exciting implications on its own, one of the most important innovations with Knuckles is its ‘open-handed hold.’ You can fasten the controller to your hand and then let go of your grip without dropping it—you can ‘hold’ it without actually holding it.

Image courtesy Cloudhead Games

The first Knuckles prototype we received was diminutive in size compared to the EV2 we have today. The strapping mechanism was a crazy velcro wrap that you would slip into like a fingerless glove and then tighten around the palm. The controller itself was about the size of a desktop mouse with a deeply scalloped trackpad. That first prototype was an early kit for evaluation and feedback, and we wouldn’t see changes to Knuckles for nearly nine months.

The second pair of Knuckles we received (called Knuckles 1.3) focused mostly on ergonomics. We saw the velcro strap replaced by a pull-cord brace that tightened against the back of the hand instead of around the palm; you could now slip the controller on and tighten it within a few seconds. Both the base and the sensor bar were longer for variable hand sizes and improved tracking.

But some of the biggest changes since the controller’s initial reveal came last week with Knuckles EV2.

The first thing to notice on EV2 is a completely redesigned controller face. The scalloped trackpad has given way to a touchstrip. The base of the controller now has grip/pressure sensitivity in addition to capsense. The redesign of the face buttons and introduction of thumbsticks align more closely with the control layout on Touch, which will make it easier to create common control schemes across platforms.

I think part of the rationale for moving from the trackpad to the thumbstick / touchstrip combo is that most trackpad interactions in VR were swiping motions rather than utilization of the full area of the pad. The new touchstrip offers the same functionality as the trackpad (lateral x-axis movement is there in a smaller footprint), plus pressure sensitivity for the thumb.

Image courtesy Cloudhead Games

As well as some major changes to input, there are also some important ergonomic changes going on in EV2. The new strap in particular is really smart; it has a push-pivot point along the top circumference so you can move it forward and back to help get ideal hand placement for the capsense. It also features a rotational pivot to seat the strap comfortably on the back of your hand; and a material change from flat-padding to a nice gripped fabric that’s much more comfortable over long periods of use—and it makes perspiration a lot less noticeable during your Beat Saber (2018) sessions.

Image courtesy Cloudhead Games

One of the more subtle changes to the hardware over time is the way the trigger buttons now interface with the handle of the controllers at a very slight curve. With previous iterations that curve was more dramatic, and in VR your brain had a tough time reconciling the gap between your trigger finger and the rest of your fingers. That gap has since been massaged to the point that your perception of finger separation in VR feels normalized.

Knuckles EV2 is a pretty radical shift for users coming from the Vive wands or even Oculus Touch. Vive users will find that teleportation and free locomotion are much more comfortable with thumbsticks than with the old trackpad.

SEE ALSO
Cloudhead Games – Lessons Learned From Five Years of VR Locomotion Experiments

One question we get a lot is whether we prefer Touch or Knuckles, and the simple answer is that it’s not fair to compare them.

Touch comes from an Xbox origin with a goal to emulate the traditional mapping of a gamepad, while also introducing the concept of basic finger tracking. Knuckles is a next-gen solution with the goal of removing the abstractions of holding a gamepad or thinking about hand poses. These controllers are two different approaches built at two different times, and both companies have the right idea—using fingers and hands in a more intuitive way is the future of VR.

Which leads to another question: is Knuckles truly next-gen VR?

My answer is “absolutely.” I can interact in an open-handed manner with my environment; all of my fingers are unobstructed; and I don’t have to think about any hand poses, my hands just do what comes naturally. And when I need something in my hand, a controller is still there. If I grip an object or a gun, or do any other gross interaction in the environment, there is always something to meet my hand with haptics and pressure and tactility.

Obviously Knuckles is not the final step for VR input. Looking further into the future at the next four to five years, a lot of work is being done to provide exoskeleton inputs and per-finger haptics. But to get there with any success, we need to start here with hardware and software that enables developers to create new interactions with the entire hand considered.

Knuckles EV2 is a next-generation step toward whatever that future may be, and we’re so excited to be building our next VR experience toward that future too.

The post Exclusive: Cloudhead Games Goes In-depth with Knuckles EV2 & Predecessors appeared first on Road to VR.

Designing ‘Virtual Virtual Reality’, One of Mobile VR’s Most Immersive Games Yet https://www.roadtovr.com/designing-virtual-virtual-reality-mitch-mastroni-tender-claws/ https://www.roadtovr.com/designing-virtual-virtual-reality-mitch-mastroni-tender-claws/#comments Tue, 19 Jun 2018 07:55:00 +0000 https://www.roadtovr.com/?p=79419

The post Designing ‘Virtual Virtual Reality’, One of Mobile VR’s Most Immersive Games Yet appeared first on Road to VR.


Launched initially on Daydream in early 2017, and now available on Gear VR, Oculus Go, and Oculus Rift, Virtual Virtual Reality’s smart interaction design gives players freedom and control which—combined with a narrative tying it all together—makes Virtual Virtual Reality one of the most immersive mobile VR games to date. This guest article by Mitch Mastroni, Interaction Designer at Tender Claws, the studio behind the game, explores how the game achieved significant immersion even on more restrictive mobile VR headsets.

Guest Article by Mitch Mastroni

Mitch Mastroni is an Interaction Designer at Tender Claws, where he handles all aspects of systems design and programming across both VR and AR experiences. He pulls from his background in performance art—ranging from improv comedy to jazz percussion—to create compelling interactive experiences. He holds a B.S. in Computer Science: Game Design from UC Santa Cruz, where he developed the 2016 IndieCade finalist Séance. You can find him in the corner of a networking event, waxing poetic about theme park design.

Our game Virtual Virtual Reality is a comedic adventure that is both a love letter to VR and a playful commentary on the tech industry. Players are welcomed by their manager Chaz to Activitude, a virtual service where humans are tasked with assisting AI clients. These AI, which appear in various forms ranging from a temperamental artichoke to a demanding stick of butter, have increasingly bizarre requests for the player to perform. The story unfolds as the player travels between virtual realities, diving deeper and deeper into the machinations of Activitude.

If you haven’t had a chance to play Virtual Virtual Reality, check out the trailer below to get a taste of the game, which also recently launched on the Oculus Rift:

Object Interaction: The Leash

When players pick up objects in Virtual Virtual Reality, they see a curved line connecting their VR controller to the object in question. This ‘leash’ is the only tool that players have at their disposal for the full duration of the game. All other object interactions in the game (plugging a plug into a socket, watering flowers with a watering can, etc.) are performed with the leash. Even simple interactions—like tossing a ball in the air or dragging your manager by his robotic legs—are very satisfying to perform with the leash.

The leash helps the player understand the relationship between the controller’s movement and the object’s movement. It also enhances game feel by giving virtual objects weight. Instead of instantly moving the object to the position where the player’s controller is pointing, the leash applies a constant force to the object in the direction of that position. Heavier objects will take longer to arrive at their destination and will sag the leash downwards. By swiping the trackpad forward and backward, players can also push and pull objects towards and away from themselves, enabling 6DOF object control from a 3DOF controller.
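The weight-and-lag behaviour described above can be sketched as a constant pull toward the controller's aim point. This is our own minimal Python illustration, not Tender Claws' code; the force strength, damping factor, and timestep are made-up values:

```python
import math

DAMPING = 0.9  # per-step velocity damping, an illustrative value

def leash_accel(obj_pos, obj_mass, target_pos, strength=8.0):
    """Constant-magnitude pull toward the controller's aim point.
    Heavier objects accelerate less (a = F/m), so they lag behind
    the controller and the leash visibly sags.
    """
    offset = [t - o for t, o in zip(target_pos, obj_pos)]
    dist = math.sqrt(sum(d * d for d in offset))
    if dist == 0.0:
        return [0.0, 0.0, 0.0]
    return [strength * d / (dist * obj_mass) for d in offset]

def step(obj_pos, velocity, obj_mass, target_pos, dt=1.0 / 60.0):
    """One semi-implicit Euler step of a leashed object."""
    accel = leash_accel(obj_pos, obj_mass, target_pos)
    velocity = [(v + a * dt) * DAMPING for v, a in zip(velocity, accel)]
    obj_pos = [p + v * dt for p, v in zip(obj_pos, velocity)]
    return obj_pos, velocity
```

In this sketch, the trackpad swipe would simply move `target_pos` nearer or further along the controller's aim ray, which is how a 3DOF controller gains the sixth degree of control.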

Virtual Virtual Reality was originally developed for Daydream VR and its 3DOF controller, leading us to consider control schemes found on other devices with 3DOF controllers (see this article for an introduction to 3DOF vs 6DOF). We were inspired by the ‘Capture Gun’ in Elebits, Konami’s 2006 Wii-exclusive title. Elebits achieved a surprisingly intuitive use of the 3DOF Wiimote that we had yet to see implemented in any game, VR or otherwise. We were pleasantly surprised to find that the leash is also comfortable while using multiple controllers and 6DOF controllers. We designed unique visual and haptic feedback for the leash to fit each of Virtual Virtual Reality’s platforms and to leverage their respective control schemes.

SEE ALSO
Exclusive: Designing 'Lone Echo' & 'Echo Arena’s' Virtual Touchscreen Interfaces

The choice of the leash was also informed by the distance between players and the objects that they interact with. Early VR experiments at Tender Claws resulted in us constraining object interactions to the “mid-range.” Most objects that the player grabs are at least one meter in front of them and no further than six meters away. This tends to be the most comfortable range for modern VR headsets. Some players have trouble focusing on objects closer than one meter. Further than six meters away, there is no clear sense of depth and small objects are clearly pixelated. The leash closes the mental gap between the player and their object of focus, allowing that object to become an extension of the player.
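The mid-range band can be expressed as a simple clamp on interaction distance. The constants mirror the one-to-six-meter range described above; the function name is our own:

```python
COMFORT_NEAR = 1.0  # metres; closer than this is hard to focus on
COMFORT_FAR = 6.0   # metres; beyond this, depth cues and detail break down

def clamp_to_midrange(distance):
    """Keep a grabbed object's distance inside the comfortable band."""
    return max(COMFORT_NEAR, min(COMFORT_FAR, distance))
```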

World Interaction: Headsets

The most recognizable gameplay mechanic of Virtual Virtual Reality is the ability to put on and take off any VR headsets in the game at any time. Virtual reality inside of virtual reality. Yes, in fact, it is kind of like Inception.

Early into our development of the headset transition mechanic at a 2015 hackathon, we realized that the experience of taking off and putting on headsets had potential beyond a narrative framing device. We wanted players to interact with headsets as often as possible.

One key characteristic of headset transitions is that they are completely seamless without any perceivable loading time. To achieve this, every accessible virtual reality, or level, is loaded into memory before its associated headset appears. Although this required significant performance optimizations to reduce the memory footprint of each level, it also led us to an artistic direction that reduced the workload of our artists.

We experimented with various visual transitions to reduce the jarring effect of leaving one level and entering another. Ultimately we chose a fisheye lens effect that warps the edges of the screen, paired with a single frame cut between the two levels at the peak of the warping. The fisheye effect is accomplished through the use of a vertex shader: the geometry of the world is actually stretched away from the player to emulate the familiar look.
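A CPU-side sketch of the kind of displacement such a vertex shader computes: each vertex is pushed away from the viewer by an amount that grows with a warp parameter, with the hard cut happening at peak warp. The falloff function and names are our own stand-ins, not the shader Tender Claws shipped:

```python
def fisheye_warp(vertex, eye, warp):
    """Stretch a vertex away from the viewer.

    warp runs from 0 (no distortion) to 1 (peak distortion, where the
    single-frame cut to the next level occurs). Geometry further from
    the eye moves further, warping the edges of the view most.
    """
    offset = [v - e for v, e in zip(vertex, eye)]
    dist = sum(o * o for o in offset) ** 0.5
    if dist == 0.0:
        return list(vertex)
    # Illustrative stretch factor, growing with warp and distance.
    stretch = 1.0 + warp * dist * 0.5
    return [e + o * stretch for e, o in zip(eye, offset)]
```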

The interaction language and logic are consistent for the VR headsets in the game. They can be picked up like any other object in Virtual Virtual Reality. To take off their current headset, the player points their controller at their head and grabs that headset. Drawing attention to the presence of the player’s real headset does not compromise immersion; in fact, it reinforces their connection to the experience.

We decided that the action of moving between virtual realities should be a valid choice at any point. Any headset in the game can be picked up and put on, and at any point you can take off your current headset to ‘go up a level’. These choices are also recognized and validated by other systems in the game. For example, characters will comment on you leaving and returning to their virtual realities, which helps reinforce the relationship between the headset system and the narrative.

Localization and Subtitles

We began the process of localizing Virtual Virtual Reality into eight languages after the game launched on Daydream. The spoken and written words of Virtual Virtual Reality are central to the experience and we wanted to give more players an opportunity to comfortably enjoy the game.

The decision to use subtitles instead of recording dialogue in new languages was a matter of resources and quality control. We worked with an extremely talented cast of voice actors who recorded over 3,000 lines of dialogue to bring the characters of Virtual Virtual Reality to life. The task of re-recording and implementing that dialogue in eight additional languages was simply beyond the scope of our team. Instead, we focused our efforts on creating the best subtitle system ever conceived by god or man. Or at least by a mobile VR game in 2017.

The Virtual Virtual Reality subtitle system was designed with two guiding principles. First, subtitles should be comfortably visible at all times. Second, it should always be clear who is speaking. Neither of these are novel concepts (see the game accessibility guidelines and this excellent article by Ian Hamilton), but at the time of development there were virtually no examples of these principles being applied in VR.

The key to our approach is dynamic positioning. The subtitles are repositioned to best fit the direction that the player is looking. When the player is looking at a speaking character, the subtitles appear directly below that character. When the player is looking elsewhere, the subtitles appear at the bottom of the player’s view with an arrow pointing in the direction of the character. The arrow is particularly helpful for players who are hard of hearing. Subtitles smoothly transition between the two states so that reading is never interrupted. Scenes with multiple speaking characters utilize different colored text for additional clarity.
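The two-state logic can be sketched with a simple gaze test. This is our own illustration; the function name, anchor labels, and the cosine threshold are assumptions, not Tender Claws' values:

```python
def subtitle_anchor(head_forward, to_speaker, look_threshold=0.8):
    """Pick a subtitle anchor from the gaze direction.

    Both arguments are unit vectors. If the player is looking at the
    speaking character (dot product above the threshold), anchor the
    text below the character; otherwise pin it to the bottom of the
    view and return the direction for the pointer arrow.
    """
    dot = sum(a * b for a, b in zip(head_forward, to_speaker))
    if dot >= look_threshold:
        return ("below_character", None)
    return ("bottom_of_view", to_speaker)
```

A real implementation would also smooth the transition between the two states so reading is never interrupted, as described above.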

Next Steps

Designing Virtual Virtual Reality was an incredible learning experience for our whole team. We all have backgrounds in gaming but none of us had ever worked on anything quite like this—a dense three-hour narrative adventure in VR. We are currently working on several new projects that leverage our lessons learned from Virtual Virtual Reality and further our integration of systems and narrative. The state of interaction design in VR has come so far in the past few years, and we’re excited to continue exploring and innovating as we create new experiences.

Exclusive: Validating an Experimental Shortcut Interface with Flaming Arrows & Paper Planes https://www.roadtovr.com/validating-experimental-vr-ar-shortcut-interface-leap-motion/ https://www.roadtovr.com/validating-experimental-vr-ar-shortcut-interface-leap-motion/#comments Fri, 08 Jun 2018 21:58:43 +0000 https://www.roadtovr.com/?p=79092

The post Exclusive: Validating an Experimental Shortcut Interface with Flaming Arrows & Paper Planes appeared first on Road to VR.


Last time, we detailed our initial explorations of single-handed shortcuts systems. After some experimentation, we converged on a palm-up pinch to open a four-way rail system. Today we’re excited to share the second half of our design exploration along with a downloadable demo on the Leap Motion Gallery.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tools and workflow building with a user driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

We found the shortcuts system comfortable, reliable, and fast to use. It also felt embodied and spatial since the system didn’t require users to look at it to use it. Next it was time to put it to the test in a real-world setting. How would it hold up when we were actually trying to do something else with our hands?

We discussed a few types of potential use cases:

#1. Direct abstract commands. In this scenario, the system could be used to directly trigger abstract commands. For example, in a drawing application either hand could summon the shortcut system – left to undo, right to redo, forward to zoom in, or backwards to zoom out.

#2. Direct contextual commands. What if one hand could choose an action to take upon an object being held by the other hand? For example, picking up an object with your left hand and using your right hand to summon the shortcut system – forward to duplicate the object in place, backward to delete it, or left/right to change its material.

#3. Tool adjustments. The system could also be used to adjust various parameters of a currently active tool or ability. For example, in the same drawing application your dominant hand might have the ability to pinch to draw in space. The same hand could summon the shortcut system and translate left/right to decrease/increase brush size.

#4. Mode switching. Finally, the system could be used to switch between different modes or tools. Again in a drawing application, each hand could use the shortcut system to switch between free hand direct manipulation, a brush tool, an eraser tool, etc. Moreover, by independently tool-switching with each hand, we could quickly equip interesting combinations of tools.

Of these options, we felt that mode switching would test our system the most thoroughly. By designing a set of modes or abilities that required diverse hand movements, we could validate that the shortcuts system wouldn’t get in the way while still being quickly and easily accessible.

Mode Switching and Pinch Interactions

In thinking about possible abilities we’d like to be able to switch between, we kept returning to pinch-based interactions. Pinching, as we discussed in our last blog post, is a very powerful bare handed interaction for a few reasons:

  • It’s a gesture that most people are familiar with and can do with minimal ambiguity, making it simple to successfully execute for new users.
  • It’s a low-effort action, requiring only movement of your thumb and index fingers. As a result, it’s suitable for high-frequency interactions.
  • Its success is very well-defined for the user who gets self-haptic feedback when their finger and thumb make contact.

However, having an ability triggered by pinching does have drawbacks, as false triggers are common. For this reason, having a quick and easy system to enable, disable, and switch between pinch abilities turned out to be very valuable. This led us to design a set of pinch powers to test our shortcut system.

Pinch Powers!

We designed three pinch powers, leaving one shortcut direction free as an option to disable all pinch abilities and use free hands for regular direct manipulation. Each pinch power would encourage a different type of hand movement to test whether the shortcut system would still function as intended. We wanted to create powers that were interesting to use individually but could also be combined to create interesting pairs, taking advantage of each hand’s ability to switch modes independently.

The Plane Hand

For our first power, we used pinching to drive a very common action: throwing. Looking to the physical world for inspiration, we found that paper plane throwing was a very expressive action with an almost identical base motion. By pinching and holding to spawn a new paper plane, then moving your hand and releasing, we could calculate the average velocity of your pinched fingers over a certain number of frames prior to release and feed that into the plane as a launch velocity.
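The release-velocity calculation described above can be sketched as an average over the last few frames of pinch positions. The window size and frame time are illustrative, not Leap Motion's actual values:

```python
def launch_velocity(pinch_positions, dt=1.0 / 60.0, window=5):
    """Average velocity of the pinched fingers over the last `window`
    frames before release, fed into the plane as a launch velocity.

    pinch_positions is a per-frame list of [x, y, z] points.
    """
    pts = pinch_positions[-(window + 1):]
    frames = len(pts) - 1
    if frames <= 0:
        return [0.0, 0.0, 0.0]
    # Average velocity = total displacement / elapsed time.
    return [(b - a) / (frames * dt) for a, b in zip(pts[0], pts[-1])]
```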

Using this first ability together with the shortcuts system revealed a few conflicts. A common way to hold your hand while pinching a paper plane is with your palm facing up and slightly inwards with your pinky furthest away from you. This fell into the gray area between the palm direction angles defined as ‘facing away from the user’ and ‘facing toward the user’. To avoid false positives, we adjusted the thresholds slightly until the system was not triggered accidentally.

To recreate the aerodynamics of a paper plane, we used two different forces. The first added force is upwards, relative to the plane, determined by the magnitude of the plane’s current velocity. This means a faster throw produces a stronger lifting force.

The other force is a little less realistic but helps make for more seamless throws. It takes the current velocity of a plane and adds torque to bring its forward direction, or nose, inline with that velocity. This means a plane thrown sideways will correct its forward heading to match its movement direction.
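The two aerodynamic forces can be sketched together: lift along the plane's up vector scaled by speed, and a corrective torque from the cross product of the nose direction with the velocity direction. The coefficients are our own illustrative values:

```python
def plane_forces(velocity, up, forward, lift_k=0.4, align_k=2.0):
    """Compute the paper plane's lift force and nose-aligning torque.

    A faster throw produces a stronger lift; the torque is zero when
    the nose already points along the velocity.
    """
    speed = sum(v * v for v in velocity) ** 0.5
    lift = [u * lift_k * speed for u in up]
    if speed > 0.0:
        vdir = [v / speed for v in velocity]
        # Cross product forward x vdir gives the axis to rotate about.
        torque = [align_k * (forward[1] * vdir[2] - forward[2] * vdir[1]),
                  align_k * (forward[2] * vdir[0] - forward[0] * vdir[2]),
                  align_k * (forward[0] * vdir[1] - forward[1] * vdir[0])]
    else:
        torque = [0.0, 0.0, 0.0]
    return lift, torque
```

A plane thrown sideways (velocity perpendicular to its nose) gets a non-zero torque that swings its heading around to match the throw, which is what makes sloppy throws feel seamless.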

With these aerodynamic forces in play, even small variations in throwing angle and direction resulted in a wide variety of plane trajectories. Planes would curve and arc in surprising ways, encouraging users to try overhanded, underhanded, and side-angled throws.

In testing, we found that during these expressive throws, users often rotated their palms into poses which would unintentionally trigger the shortcut system. To solve this we simply disabled the ability to open the shortcut system while pinching.

Besides these fixes for palm direction conflicts, we also wanted to test a few solutions to minimize accidental pinches. We experimented with putting an object in a user’s pinch point whenever they had a pinch power enabled. The intention was to signal to the user that the pinch power was ‘always on.’ When combined with glowing fingertips and audio feedback driven by pinch strength, this seemed successful in reducing the likelihood of accidental pinches.

We also added a short scaling animation to planes as they spawned. If a user released their pinch before the plane was fully scaled up the plane would scale back down and disappear. This meant that short unintentional pinches wouldn’t spawn unwanted planes, further reducing the accidental pinch issue.

The Bow Hand

For our second ability we looked at the movement of pinching, pulling back, and releasing. This movement was used most famously on touchscreens as the central mechanic of Angry Birds and more recently adapted to three dimensions in Valve’s The Lab: Slingshot.

Virtual slingshots have a great sense of physicality. Pulling back on a sling and seeing it lengthen while hearing the elastic creak gives a visceral sense of the potential energy of the projectile, satisfyingly realized when launched. For our purposes, since we could pinch anywhere in space and pull back, we decided to use something a little more lightweight than a slingshot: a tiny retractable bow.

Pinching expands the bow and attaches the bowstring to your pinched fingers. Pulling away from the original pinch position in any direction stretches the bowstring and notches an arrow. The longer the stretch, the greater the launch velocity on release. Again we found that users rotated their hands while using the bow into poses where their palm direction would accidentally trigger the shortcut system. Once again, we simply disabled the ability to open the shortcut system, this time while the bow was expanded.

To minimize accidental arrows spawning from unintentional pinches, we again employed a slight delay after pinching before notching a new arrow. However, rather than being time-based like the plane spawning animation, this time we defined a minimum distance from the original pinch. Once reached, this spawns and notches a new arrow.
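The distance-gated notch and stretch-scaled launch can be sketched in a few lines. The notch distance and power constant are illustrative guesses:

```python
def bow_state(pinch_start, pinch_now, notch_dist=0.03, power=12.0):
    """Bowstring pull for the tiny retractable bow.

    An arrow is only notched once the pinch has travelled notch_dist
    metres from where it began, filtering out short accidental
    pinches. The launch velocity points from the current pinch back
    through the start, scaled by the stretch length.
    """
    pull = [s - n for s, n in zip(pinch_start, pinch_now)]
    stretch = sum(p * p for p in pull) ** 0.5
    if stretch < notch_dist:
        return False, [0.0, 0.0, 0.0]  # not yet notched
    direction = [p / stretch for p in pull]
    return True, [d * stretch * power for d in direction]
```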

The Time Hand

For our last ability, we initially looked at the movement of pinching and rotating as a means of controlling time. The idea was to pinch to spawn a clock and then rotate the pinch to turn a clock hand, dialing the time scale down or back up. In testing, however, we found that this kind of pinch rotation actually only had a small range of motion before becoming uncomfortable.

Since there wasn’t much value in having a very small range of time-scale adjustment, we decided to simply make it a toggle instead. For this ability, we replaced the pinch egg with a clock that sits in the user’s pinch point. At normal speed the clock ticks along quite quickly, with the longer hand completing a full rotation each second. Upon pinching, the clock time is slowed to one-third normal speed, the clock changes color, and the longer hand slows to complete a full rotation in one minute. Pinching the clock again restores time to normal speed.
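The toggle reduces to flipping a time scale between two values on each pinch. A minimal sketch, with the one-third factor taken from the description above and everything else our own framing:

```python
NORMAL_SPEED = 1.0
SLOWED_SPEED = 1.0 / 3.0  # one-third normal speed while slowed

class TimeHand:
    """Toggle the world's time scale with each pinch of the clock."""

    def __init__(self):
        self.time_scale = NORMAL_SPEED

    def on_pinch(self):
        """Flip between normal and slowed time; return the new scale."""
        if self.time_scale == NORMAL_SPEED:
            self.time_scale = SLOWED_SPEED
        else:
            self.time_scale = NORMAL_SPEED
        return self.time_scale
```

In an engine this scale would feed the global time step (and the clock-hand animation speed) each frame.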

Continued on Page 2: Mixing & Matching

Exclusive: Designing Single-handed Shortcuts for VR & AR https://www.roadtovr.com/leap-motion-designing-single-handed-shortcuts-for-vr-ar/ https://www.roadtovr.com/leap-motion-designing-single-handed-shortcuts-for-vr-ar/#comments Thu, 10 May 2018 19:22:10 +0000 https://www.roadtovr.com/?p=78156

The post Exclusive: Designing Single-handed Shortcuts for VR & AR appeared first on Road to VR.


For new computing technologies to realize their full potential they need new user interfaces. The most essential interactions in virtual spaces are grounded in direct physical manipulations like pinching and grabbing, as these are universally accessible. However, the team at Leap Motion has also investigated more exotic and exciting interface paradigms from arm HUDs and digital wearables, to deployable widgets containing buttons, sliders, and even 3D trackballs and color pickers.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tools and workflow building with a user driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

As we move from casual VR applications to deeper and longer sessions, design priorities naturally shift toward productivity and ergonomics. One of the most critical areas of interaction design that comes up is mode switching and shortcuts.

Today we use keyboard shortcuts so often that it’s difficult to imagine using a computer without them. Ctrl+Z, Ctrl+C, and Ctrl+V are foundational to the efficiency of keyboard and mouse input. Most of you reading this have committed these to muscle memory.

In VR we’ve seen controller inputs adopt this shortcut paradigm relatively easily by remapping commands to buttons, triggers, trackpads, and analog sticks. To increase or decrease the brush size in Tilt Brush you swipe right or left on the trackpad of your brush hand.

But what happens when we think about one-handed rapid selections for bare-handed input? This requires a different kind of thinking, as we don’t have buttons or other mechanical inputs to lean on. In our previous work, we’ve mapped these kinds of commands to either world-space user interfaces (e.g. control panels) or wearable interfaces that use the palette paradigm, where one hand acts as a collection of options while the other acts as a picker.

But if we could mode switch or modify a currently active tool with just one hand instead of two we would see gains in speed, focus, and comfort that would add up over time. We could even design an embodied and spatial shortcut system without the need to look at our hands, freeing our gaze and increasing productivity further.

Direct Manipulation vs. Abstract Gestures

One way to activate a shortcut with a single hand would be to define an abstract gesture as a trigger. Essentially this would be a hand pose or a movement of a hand over time. This is an exception to a general rule at Leap Motion, where we typically favor direct physical manipulation of virtual objects as an interaction paradigm over using abstract gestures. There are a few reasons for this:

  • Abstract gestures are often ambiguous. How do we define an abstract gesture like ‘swipe up’ in three-dimensional space? When and where does a swipe begin or end? How quickly must it be completed? How many fingers must be involved?
  • Less abstract interactions reduce the learning curve for users. Everyone can tap into a lifetime of experience with directly manipulating physical objects in the real world. Trying to teach a user specific movements so they can perform commands reliably is a significant challenge.
  • Shortcuts need to be quickly and easily accessible but hard to trigger accidentally. These design goals seem at odds! Ease of accessibility means expanding the range of valid poses/movements, but this makes us more likely to trigger the shortcut unintentionally.

To move beyond this issue, we decided that instead of using a single gesture to trigger a shortcut, we would gate the action into two sequential stages.

The First Gateway: Palm Up

Our interaction design philosophy always looks to build on existing conventions and metaphors. One major precedent that we’ve set over time in our digital wearables explorations is that hand-mounted menus are triggered by rotating the palm to face the user.

This works well in segmenting interactions based on which direction your hands are facing. Palms turned away from yourself and toward the rest of the scene imply interaction with the external world. Palms turned toward yourself imply interactions in the near field with internal user interfaces. Palm direction seemed like a suitable first condition, acting as a gate between normal hand movement and a user’s intention to activate a shortcut.

The Second Gateway: Pinch

Now that your palm is facing yourself, we looked for a second action which would be easily triggered, well defined, and deliberate. A pinch checks all these boxes:

  • It’s low-effort. Just move your index finger and thumb!
  • It’s well defined. You get self-haptic feedback when your fingers make contact, and the action can be defined and represented by the tracking system as reaching a minimum distance between tracked index and thumb tips.
  • It’s deliberate. You’re not likely to absent-mindedly pinch your fingers with your palm up.

Performing both of these actions, one after another, is both quick and easy, yet difficult to do unintentionally. This sequence seemed like a solid foundation for our single-handed shortcuts exploration. The next challenge was how we would afford the movement, or in other words, how someone would know that this is what they needed to do.
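The two gates can be sketched as a pair of checks: a palm-direction dot product against a threshold, and a minimum distance between the tracked index and thumb tips (the pinch definition given above). The threshold and the ~2 cm pinch distance are our own illustrative values:

```python
def is_pinching(index_tip, thumb_tip, max_dist=0.02):
    """Pinch = tracked index and thumb tips within max_dist metres."""
    d2 = sum((a - b) ** 2 for a, b in zip(index_tip, thumb_tip))
    return d2 <= max_dist ** 2

def shortcut_triggered(palm_dot_toward_head, index_tip, thumb_tip,
                       palm_threshold=0.6):
    """Both gates must pass: the palm faces the user (dot product of
    the palm normal with the direction to the head above a threshold)
    AND the fingers are pinched.
    """
    return (palm_dot_toward_head > palm_threshold
            and is_pinching(index_tip, thumb_tip))
```

Because both conditions must hold at once, a casual palm flip or an idle pinch alone never fires the shortcut.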

Thinking back on the benefits of direct manipulation versus abstract gestures we wondered if we could blend the two paradigms. By using a virtual object to guide a user through the interaction, could we make them feel like they were directly manipulating something while in fact performing an action closer to an abstract gesture?

The Powerball

Our solution was to create an object attached to the back of your hand which acts as a visual indicator of your progress through the interaction as well as a target for pinching. If your palm faces away, the object stays locked to the back of your hand. As your palm rotates toward yourself the object animates up off your hand towards a transform offset that is above but still relative to your hand.

Once your palm fully faces toward yourself and the object has animated to its end position, pinching the object – a direct manipulation – will trigger the shortcut. We dubbed this object the Powerball. After some experimentation, we had it animate into the pinch point (a constantly updating position defined as the midpoint between the index finger and thumb tips).
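The Powerball's travel can be sketched as a blend between the back of the hand and its end position, driven by how far the palm has rotated. A linear blend is our own simplification of the animation, and the pinch-point midpoint follows the definition above:

```python
def pinch_point(index_tip, thumb_tip):
    """Constantly updating midpoint between the index and thumb tips."""
    return [(a + b) / 2.0 for a, b in zip(index_tip, thumb_tip)]

def powerball_position(hand_back, target, palm_progress):
    """Blend the Powerball from the back of the hand to its offset
    target as the palm rotates toward the user.

    palm_progress: 0 when the palm faces away (ball locked to the
    hand), 1 when the palm fully faces the user (ball at its end
    position, ready to be pinched).
    """
    t = max(0.0, min(1.0, palm_progress))
    return [h + (p - h) * t for h, p in zip(hand_back, target)]
```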

This blend of graphic affordance, pseudo-direct manipulation, gestural movement, and embodied action proved easy to learn and ripe with potential for extension. Now it was time to look at what kinds of shortcut interface systems would be ergonomic and reliably tracked from this palm-up-pinched-fingers position.

Continued on Page 2: Spatial Interface Selection »

Cloudhead Games – Lessons Learned From Five Years of VR Locomotion Experiments https://www.roadtovr.com/cloudhead-games-lessons-learned-five-years-vr-locomotion-experiments/ https://www.roadtovr.com/cloudhead-games-lessons-learned-five-years-vr-locomotion-experiments/#comments Tue, 17 Apr 2018 12:30:59 +0000 https://www.roadtovr.com/?p=76978

The post Cloudhead Games – Lessons Learned From Five Years of VR Locomotion Experiments appeared first on Road to VR.

]]>

Both the Rift and the Vive first launched to consumers around this time two years ago, but their debut, and the games that launched alongside them, were the culmination of years of prior game design experimentation in a new medium that brought both new opportunities and challenges. Cloudhead Games, developers of Vive launch title The Gallery: Call of the Starseed, were among those leading the charge. On this occasion, the two year anniversary of modern VR headsets becoming available to consumers, the studio’s Lead Programmer, Paul White, and Narrative Designer, Antony Stevens, look back at the studio’s journey in VR development and where it has led them today.

Guest Article by Paul White and Antony Stevens

Paul is the Lead Programmer at Cloudhead Games. Bitten by the VR bug in the early 90s, Paul has been programming since fifth grade. With Cloudhead Games, Paul has more than five years experience in modern VR research and development, producing award-winning tech for The Gallery VR series.

Antony is the Narrative Designer and Community Lead at Cloudhead Games. With Cloudhead since the launch of consumer VR in 2016, Antony has helped shape and share the stories of its developers across multiple mediums, including in The Gallery: Heart of the Emberstone.

The First Climb

Fall 2013, Oculus DK1 + Razer Hydra

My journey into VR locomotion began with the sunsetting Razer Hydra in late 2013. An early motion controller system tracked by a low-power magnetic field, the Hydra was originally designed as a peripheral for flat PC gaming. But for some of us, it was also an unlikely hero—the Hydra was the first big key to unlocking presence in virtual reality, thanks to its positional tracking.

It was the era of the DK1, the first of the Oculus Rift prototypes available to Kickstarter backers, offering only rotational head tracking during its initial foray into the rebirth of VR. Without positional tracking of the head or hands, player movement in VR projects was either bound to the analogue sticks or omitted entirely. These were the standards and limitations of the time; VR as we know it today was yet to exist.

Image courtesy Cloudhead Games

I was working on Exploration School, an early tech demo for our built-for-VR adventure game The Gallery (2016). My challenge was to use the Hydra to mimic the motions of climbing a wall without using control sticks—just reach out and grab it. It sounds straightforward now, but during those early days of VR we thought it could never be done with the available tech.

Holding the wired Hydra, you would reach out with your hand and press a button to capture the position of that arm on a surface. Any motion you made next would be countered and represented in game with our body persistence. If you let your arm down, your position would counter that movement, causing your camera and in-game body to move upward. If you raised your arm up, your position would counter, and you would climb down. It felt intuitive, all tech considered.
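In pseudocode terms (a minimal sketch, not Cloudhead's actual implementation), the counter-movement amounts to offsetting the camera by the inverse of the hand's displacement from its grab anchor:

```python
def climb_camera_delta(grab_anchor, current_hand_pos):
    """While the grab button is held, move the camera opposite to the hand:
    pulling the controller down (negative y) raises the player, and vice versa."""
    return tuple(a - h for a, h in zip(grab_anchor, current_hand_pos))
```

Pulling the hand 0.5 m below its anchor yields a +0.5 m camera delta—the player climbs up.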

VR devs all around were experimenting with anything and everything, from climbing to flying to roller coasters, but there was no substantial test audience. Motion sickness was a concern internally, but there weren’t enough headsets in the wild to know how widespread its effect was. We knew what artificial movement felt like to us and other developers, but there was no way to know what was working and what wasn’t for various sensitivities.

When we brought Exploration School to public events, we gave players the best advice we had for avoiding motion sickness: “Don’t look down.”

The Bigger Picture

Spring 2014, Oculus DKHD + Razer Hydra

Those first two years saw many VR developers building single-room projects—playboxes with no need for travel or locomotion. The Oculus Rift, for all intents and purposes, was a seated experience. Our project, The Gallery, was a larger world that needed exploration, with terrain that was organic and rugged. We wanted realism where you could walk around, look at things, and feel alive in a world. VR was predominantly blocky at the time (both graphically and otherwise), and walking with the analogue stick felt like your body was a cart behind you, changing direction to chase after you each time you turned your head. It all felt unnatural.

Image courtesy Cloudhead Games

‘Tank Move’ was one alternative. This method allowed your head to deviate from the direction you were moving, so you could pan your view around an environment completely decoupled from your body direction. Think of your head as a swiveling neck turret, while your body is driven on tracks and controlled by a joystick. It was a fitting abstraction.

Tank Move was better because it meant you could look around while you moved. It was also worse because of vestibular disconnect—motion sickness caused by your brain perceiving directional movement through your eyes (the headset), without physical motion detected by your inner ear (the real one). Decoupling head movement from the body could ultimately decouple stomach contents from the body as well.
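The "neck turret on tracks" abstraction reduces to driving translation from the body's yaw alone while ignoring head orientation entirely. A minimal sketch (illustrative, not the studio's code):

```python
import math

def tank_move_step(body_yaw_deg, stick_forward, stick_strafe, speed, dt):
    """Translate the player using only body yaw; head rotation is decoupled,
    so looking around never changes the direction of travel."""
    yaw = math.radians(body_yaw_deg)
    # Forward is +z at yaw 0, strafe is +x; head yaw never enters the math.
    dx = (math.sin(yaw) * stick_forward + math.cos(yaw) * stick_strafe) * speed * dt
    dz = (math.cos(yaw) * stick_forward - math.sin(yaw) * stick_strafe) * speed * dt
    return dx, dz
```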

Image courtesy Cloudhead Games

More important than the freedom to look around was the freedom to move around, and we knew that the positional tracking features of the upcoming DK2 (and experimental hardware from Valve) would help dictate movement. In the meantime, we wanted to get ahead of the curve and start building for the future that VR was heading toward. Using heuristic spine modeling and a simulated height, I was able to turn the single, rotational tracking point of the DK1 into two positional tracking points: head and root.

With that inferred root, we then had the approximate location of the player’s torso in relation to their head, and could then adjust their body avatar with movements accordingly. We could tell the difference between natural displacements, from the player crouching into a tent, to peering over a balcony at the distant world around them.
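One way to picture that heuristic (the segment lengths here are illustrative, not Cloudhead's actual spine model): offset down a neck segment that tilts with head pitch, then down a vertical spine segment to the root.

```python
import math

def inferred_root(head_pos, head_pitch_deg, neck_len=0.25, spine_len=0.55):
    """Estimate a torso 'root' point from a single rotational tracking point:
    the neck segment tilts with head pitch, the spine is assumed vertical."""
    pitch = math.radians(head_pitch_deg)
    x = head_pos[0] + math.sin(pitch) * neck_len   # lean forward/back
    y = head_pos[1] - math.cos(pitch) * neck_len - spine_len
    return (x, y, head_pos[2])
```

With the head level at 1.7 m, the inferred root sits around 0.9 m—roughly torso height—and pitching the head forward slides it forward, helping distinguish a crouch-and-peer from a simple look-down.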

In the end, the feature never made it in. Everything was about to change anyway.

Continued on Page 2 »

The post Cloudhead Games – Lessons Learned From Five Years of VR Locomotion Experiments appeared first on Road to VR.

Vive China President Shares 16 Lessons for a VR-First Future From ‘Ready Player One’
https://www.roadtovr.com/16-lessons-for-vr-first-future-ready-player-one-vive-china-president-alvin-wang-graylin/ — Thu, 29 Mar 2018
Virtual reality (VR) has been a hot topic in the media and the tech sector for the last couple of years, but it is about to hit a new level of buzz, thanks to the internationally best-selling book Ready Player One (2011) by author Ernest Cline, and movie adaptation from award winning director, Steven Spielberg, premiering in theaters today. The Ready Player One (RPO) novel is often cited by industry experts as one of the top recommendations for VR-related book lists. The story’s primary backdrop is a world where VR is intertwined into every aspect of our daily lives, what I call a ‘VR-First future’. RPO follows the adventures of a teenager and his friends as they overcome untold challenges on their quest to win a global online scavenger competition and its gigantic prize.

Guest Article by Alvin Wang Graylin

Alvin Wang Graylin is the China President of Vive at HTC, leading all aspects of the Vive/VR business in the region. He is also currently Vice-Chairman of the 300-member Industry of Virtual Reality Alliance, President of the $15 Billion Virtual Reality Venture Capital Alliance, and oversees the Vive X VR accelerator in Asia. He has over 22 years of business management experience in the tech industry, including 15 years operating in Greater China. Prior to HTC, Graylin was a serial entrepreneur, having founded four venture-backed startups in the mobile and internet spaces, covering mobile social, adtech, search, big data and digital media. Additionally, he has held P&L roles at several public companies.

Note: Vive is the official VR partner for the upcoming Ready Player One film.

Update (3/29/18): With today’s premiere of the Ready Player One movie, Graylin has added some additional thoughts to this article, originally published in August, 2017, after screening the film.

Image courtesy Warner Bros Pictures

I was fortunate enough to see an early preview of the film a couple of days ago and am happy to report that it’s a delightful film that keeps the audience on the edge of their seats for over two hours. I was originally a little concerned that the extensive references to 80’s American pop culture and complex technology-related topics would make the film hard to appreciate for global audiences, but Spielberg has again worked his magic and found ways to give it appeal to audiences of all ages and cultures. Even on the film’s preview night in Beijing, the audience seemed to cheer and laugh at all the right moments, and every face coming out of the cinema bore a big smile.

The impact of this movie will go far beyond an entertaining evening for the audience. It’ll go a long way toward making VR better understood by the general public around the world and, just as important, help normalize VR headsets. I’m sure the actual VR devices of 2045 will be far less bulky than what’s depicted in the movie, but I’m actually glad Spielberg decided to use more current-day form factors in the film, as this will help general consumers better relate to and accept current devices. I’m excited to see how RPO will change the growth and adoption trajectory of the VR industry. For those who have only watched the movie but haven’t read the book, I’d still recommend going back and reading it to get a deeper view. Both are enjoyable in different ways.

SEE ALSO
If 'Ready Player One' Doesn't Suck, It Stands to Positively Impact the VR Industry
Image courtesy Crown/Random House

Although some would describe the world of 2045 depicted in RPO as a bit bleak or dystopian, I would argue that there are numerous positive lessons we can glean from this envisioned future that should give us hope and guidance. In fact, I have asked everyone on the China Vive team to read the RPO book, and asked our HR department to give all new staff a copy, as there are so many relevant concepts and use cases I believe all can benefit from. Given that it is a fictional story written more to entertain than educate, there are a number of technical details I may not fully agree with, so let us not take all the content too literally, as technical readers can tend to do. On the whole, RPO is surprisingly insightful for a novel about a technology with the potential to create a truly transformative impact on our future lives.

I was first exposed to VR over 25 years ago and, like most involved in the HIT Lab, saw its potential from my very first experience. Over the last year and a half in my current roles with Vive, Vive X, IVRA and VRVCA, I’ve had the opportunity to evaluate hundreds of immersive computing products and content, review thousands of VR startup companies, and gain exposure to the industry’s long-term roadmap. I have tried to take those learnings into account when providing my takeaways from the RPO story. This book and its future adaptations into other media will make the concept of VR accessible to a mass audience, in turn helping bring broad understanding of VR to the general public.

RPO will do for VR what Avatar (2009) did for 3D in terms of general awareness. A form of the ‘VR-First’ future from the book will become reality in the not-too-distant future, and RPO will play an instrumental role in accelerating its realization. What form of the future is finally realized is ultimately up to us as a people, and how we decide to leverage the potential of this disruptive technology.

To help both our internal team and the broader public better understand the takeaways from the story, here is my summary of the key points from the book that we can use to guide our actions in the near to mid-term to create a more positive future. It can also serve as a guide to potential business opportunities for budding VR entrepreneurs. This list is not intended to be exhaustive; it includes only the points I wanted to highlight at this time. I welcome all readers to append other lessons they think worthwhile in the comments section.

Spoiler alert: I have tried to make book references vague enough not to reveal the plot, but they may provide hints. If you want a pure reading experience, it’s best to read the book before the rest of this article.

16 Key Takeaways from the Ready Player One Novel

1. We will be more dependent on VR devices than we are our phones today

It is clear from the story how VR can be applied to all aspects of our lives from work to school to play, and is our key access point to all information we would ever need. Given that essentially no other devices are mentioned in the book, it shows the potential for a future world to have replaced all other screens/interfaces with VR devices. For those not yet familiar with VR, it’s probably worth you and your family’s time to go find a VR arcade to try some high-end VR or at least pick up a mobile VR shell to get acquainted with low-end VR.

2. VR may play a bigger role in our future lives than AR

Many analysts have forecasted that AR applications could outpace VR in the future. I believe the two technologies will naturally meld together over time into an integrated experience materialized on a single device, so there’s no real need to delineate them so clearly. Some early high-end VR devices already have such capabilities built-in. However, it is notable that very few AR-type use cases were actually cited in the book. That makes sense: when most people in a VR-first future can live a large portion of their lives without leaving their homes, their dependency on AR applications will be less frequent than their VR requirements. Developers should take some time to start thinking about how AR concepts and technology can be applied to enhance VR experiences and vice versa.

3. Network speeds and cloud computing capacity will be the key utility of the future

In a world where most of our interactions with the world are via a VR device, connectivity and computing speeds will impact our lives more than the other utilities we care about today, i.e. water, gas, electricity. 5G, fiber-to-the-home and server farms will be increasingly important to our daily lives. In fact, without VR proliferation, the impetus for these capabilities becomes much weaker. In this future world, we would rather be without water or gas for an hour than without constant high-speed connectivity. Startups building cloud computing capabilities that fully leverage the coming fat/fast data pipes to deliver innovative services could reap huge rewards in the future.

4. Everyone will become ‘Gamers’ & watching game streams will be a major pastime

According to Mary Meeker’s recent Internet report, in 1995 there were 100 million gamers; today, there are 2.6 billion. In the VR-first future, much of our lives will have become ‘gamified’ in a virtual world and we will essentially all be gamers. Gaming today is often seen as an activity that wastes time and produces little economic value. In the VR-First future, that is no longer the case, as gamified work and education can make our careers more enjoyable and help us learn in new ways.

E-sports has become a phenomenon over the last few years, and it’s gaining momentum as more and more people spend their time playing online games and watching others compete for large cash prizes. In the book, the entire story unfolds around the playing of the largest e-sport game imaginable, where essentially the entire globe is playing a single game in an effort to win the ultimate prize and hundreds of millions of users are watching real-time updates via live game streams. E-sport athlete avatars will become more famous than real-world celebrities. In fact, in the VR-First world, the most famous movie stars of the future may not even be real people.

5. Virtual Schools will democratize high quality education to the world 

I deeply believe that our education model can be revolutionized by VR technology and quality education made universally accessible. Quality and quantity of education has been directly linked to one’s career success and income. In the VR-First future, every child (and adult) will have access to the best school/teachers. The planet Ludus where all children can get access to quality education via VR is a potential model that does make sense. Governments giving free VR equipment to all students worldwide to learn in virtual schools can actually be far more economical than operating physical schools around the world. Studies have already shown VR can help kids learn more and retain information longer vs. traditional teaching. Vive is already working with hundreds of schools and universities around the world to pilot VR educational methods as well as building education-focused tools and content. School administrators and governments need to start looking into how to better leverage VR to teach our future generations now.

6. Remote work via VR will become the norm

Most of the work and meeting scenarios described in the book took place via VR equipment. Although physical offices were mentioned for certain companies, given that the interpersonal interactions actually took place in the virtual world, we can derive that the physical need to be in the office really doesn’t exist in most cases. Think of how much time we could regain in our days by eliminating the need for commuting and business travel. Even today, many industries such as design, engineering and healthcare have already shown significant increases in productivity by conducting a significant portion of their work in VR.

7. VR can erase race and gender inequality gaps

In the book, one of the characters disguises their race and sex by choosing an avatar whose appearance doesn’t match their own, in an effort to avoid the innate negative biases that exist in our society. When most of our interactions with others are conducted via our avatars, people can truly be judged solely on their creativity and intellect rather than their physical traits or social status. In our world today, women and minorities generally earn ~20% less for the same role. In the VR-First future, that doesn’t have to be.

8. Gathering experiences and access will be more important than gathering wealth

When we can have any life we want in the virtual world, gathering physical possessions becomes less important and so does gathering monetary wealth. What affects our personal or social status in the VR-First future will be our experience level granting us greater influence and access. Even in the current world, why not take guidance from the future and spend more of our time/money on life experiences vs. material goods.

9. Virtual currency will become more relevant to our lives than traditional currency

Cash today is already becoming obsolete in places like China where you can effectively live fully via only mobile payment. In the VR-first future, traditional national currency itself may also go away entirely in our daily lives, replaced by completely digital currencies such as bitcoin. This future will come much faster than we think and create massive opportunities for entrepreneurs in the fintech world.

10. A huge economy is coming for virtual goods and services

As our time moves increasingly from the physical world to the virtual world, virtual items and services will become a much larger part of our lives. The money we spend today on travel, entertainment, education, transportation, apparel, etc. will largely move to the virtual economy. Examples of such cases were frequently cited in the book, where users had to pay for virtual travel or buy virtual powers. We already see that trend today, where hard-core gamers and live stream audiences put a large part of their discretionary spending toward virtual goods and services.

11. Home food delivery may become the most common way to eat

Even when we spend most of our time in the virtual world, there is one physical thing we will still need to do, eat! As our homes get smaller and the need to go outside reduces, the need for food delivery will increase dramatically. In Mary Meeker’s recent report, she cited US home food delivery growing at 45% YoY, and in China, that number is even higher where in most cities, customers can get essentially free delivery of food from any restaurant in about 30 minutes. I have my dinner delivered more than 50% of the time when I’m actually in Beijing. Low-cost high-speed food delivery can be an opportunity globally.

12. VR platforms should put in safe guards for managing physical health into future systems

As users spend more of their days inside the virtual world, many will worry about negative health consequences. In the book, Wade turns on a system function that requires him to perform sufficient physical exercise before he can log in to his VR rig. This kind of feature is less needed with room-scale VR systems like the Vive, which already provide plenty of activity, but there will certainly be a market need for such functions over time. Given all the sensors and wearables that will be integrated into VR devices in the coming years, it’ll be quite easy for systems to intelligently ensure that users have both an enjoyable and healthy experience.

13. VR can make physical distance irrelevant in our daily lives; VR natives may never meet their best friends in person

During most of the book, the main characters were physically located apart from each other, and some were even constantly mobile in the physical world. However, in the virtual world, their interactions were seamless and their relationships unaffected. Being able to effectively live, work or study from any location gives us a newfound freedom never afforded to us in the past. This is going to have a dramatic impact on the real-estate market, as location will no longer be the key factor in choosing a home or office.

In the book, best friends and even siblings had never actually met each other in person, but they had the emotional connections we would expect from people who grew up together. In the VR-First future, interpersonal relationships will be redefined as we build deep friendships based on the substance of others’ souls and digital records of their lives vs. physical appearance/social status. We can already see this trend happening now for online friendships built upon connections in social networks around the world today.

14. Privacy and data security will be critical to enable an acceptable VR-First future 

RPO describes a world where essentially all user data is kept centrally online and nearly all interactions with other people happen online and can be tracked. In such a world, information security and access control become more critical than ever. Our avatar identity is our key to the world, and how that ID integrates with all facets of our lives will not be an easy problem to solve. We will also need the ability to create anonymous identities, so users can confidently use these centralized systems without fear of being unknowingly tracked or having their personal data abused. These are all opportunities for security-minded firms today.

15. VR can reduce our ecological footprint to enable a more sustainable environment

As the earth’s population continues to grow, our natural resources continue to be depleted, and environmental damage increasingly impacts climate and health, it’s clear something needs to be done. Mass adoption of VR may offer a long-term solution for us to naturally reduce our drain on limited resources. We would travel less, commute less, reduce the need for most office buildings, live effectively in smaller homes, and be geographically distributed in a way that lessens environmental impact. If we accomplish these things on a global scale, there’s a good chance we can reverse the damage we’ve done in the last hundred years and create a truly sustainable planet. Although the global environment in the book wasn’t in great shape, if we take the right actions over the next 20–30 years, our real-world future can be much different.

16. Even in a virtual world of abundance, humans still have a need for greater purpose

When we do arrive at the VR-First world, it should seemingly satisfy the natural human drive for happiness by giving us access to any experience or object we can dream of, yet the population of the RPO future clearly still was not satisfied. They needed the quest in the story to drive themselves forward. Humans differ from other animals in that we strive for more than just survival and procreation. We need a sense of purpose, and it’s this purpose that drives us to excel and achieve more. I challenge all who read this article to take on the purpose of helping realize a positive VR-First world and giving our future generations a planet we can all be proud of. Whether the future is utopian or dystopian is really in our hands.

The post Vive China President Shares 16 Lessons for a VR-First Future From ‘Ready Player One’ appeared first on Road to VR.

Exclusive: Scaffolding in VR – Interaction Design for Easy & Intuitive Building
https://www.roadtovr.com/scaffolding-in-vr-interaction-design-for-easy-intuitive-stacking-and-assembly/ — Sun, 18 Mar 2018
There’s something magical about building in VR. Imagine being able to assemble weightless car engines, arrange dynamic virtual workspaces, or create imaginary castles with infinite bricks. Arranging or assembling virtual objects is a common scenario across a range of experiences, particularly in education, enterprise, and industrial training—not to mention tabletop and real-time strategy gaming.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tools and workflow building with a user driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Update (3/18/18): Leap Motion has released the Scaffolding demo for anyone with a Leap Motion peripheral to download and try for themselves. They’ve also published a video showing what the finished prototype looks like (see above).

For our latest interaction sprint, we explored how building and stacking interactions could feel seamless, responsive, and stable. How could we place, stack, and assemble virtual objects quickly and accurately while preserving the nuance and richness of a proper physics simulation?

The Challenge

Manipulating physically simulated virtual objects with your bare hands is an incredibly complex task. This is one of the reasons we developed the Leap Motion Interaction Engine, whose purpose is to make the foundational elements of grabbing and releasing virtual objects feel natural.

Nonetheless, the precise rotation, placement, and stacking of physics-enabled objects—while very much possible—takes a deft touch. Stacking in particular is a good example.

Stacking in VR shouldn’t feel like bomb defusal.

When we stack objects in the physical world, we keep track of many aspects of the tower’s stability through our sense of touch. Placing a block onto a tower of objects, we feel when and where the held block makes contact with the structure. In that instant we feel actual physical resistance.

In VR, that tactile feedback is missing—held blocks pass into the structure unnoticed, and a stray hand or misjudged placement can topple the whole tower. The easiest way to counteract these issues in VR is to disable physics and simply move the objects around. This successfully eliminates unintended collisions and accidental nudges.

With gravity and inertia disabled, we can assemble the blocks however we want, but it lacks the realistic physics-based behavior which is an important part of how we would do the same task in the real world.

However, this solution is far from ideal, as precise rotation, placement, and alignment are still challenging. Moreover, disabling physics on virtual objects makes interacting with them far less compelling. There’s an innate richness to physically simulated virtual interactions in VR/AR that’s only amplified when you can use your bare hands.

A Deployable Scaffold

The best VR/AR interaction design often combines cues from the real world with the unique possibilities of the medium. Investigating how we make assembling things in the physical world easier, we looked at things like rulers and measuring tapes for alignment and the concept of scaffolding, a temporary structure used to support materials in aid of construction.

Snappable grids are a common feature of flat-screen 3D applications. Even in VR we see early examples like the very nice implementation in Google Blocks.

However, rather than covering the whole world in a grid, we proposed the idea of using them as discrete volumetric tools. This would be a temporary, resizable three-dimensional grid which would help create assemblies of virtual objects—a deployable scaffold! As objects are placed into the grid, they would snap into position and be held by a physics spring, maintaining physical simulation throughout the interaction. Once a user was done assembling, they could deactivate the grid. This releases the springs and returns the objects to unconstrained physics simulation.
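A sketch of the two core operations, assuming a uniform cell size (hypothetical names, not the released Interaction Engine API):

```python
def snap_to_scaffold(pos, scaffold_origin, cell_size):
    """Nearest grid point of a Scaffold anchored at scaffold_origin."""
    return tuple(o + round((p - o) / cell_size) * cell_size
                 for p, o in zip(pos, scaffold_origin))

def spring_force(current, target, stiffness=50.0):
    """Simple spring pulling a placed object toward its snapped target,
    so it stays physics-simulated instead of being teleported into place."""
    return tuple(stiffness * (t - c) for c, t in zip(current, target))
```

When the grid is deactivated, the spring is simply removed and the object returns to unconstrained simulation.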

To create this scaffolding system we needed to build two components: (1) a deployable, resizable, and snappable 3D grid, and (2) an example set of objects to assemble.

Generating A 3D Grid

Building the visual grid around which Scaffold interactions are centered is straightforward. But since we want to be able to change the dimensions of a Scaffold dynamically, a single Scaffold may contain many grid points (and a scene may contain multiple Scaffolds). To optimize, we created a custom GPU-instanced shader to render the points in our Scaffold grid. This kind of repetitive rendering of identical objects is a great fit for the GPU because it saves CPU cycles and keeps the framerate high.
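The key idea is that every point shares one mesh and one material, so only a flat buffer of per-instance positions needs to be rebuilt when the grid resizes. A rough sketch of generating that buffer (function name and dimensions are ours, for illustration; in Unity this buffer would feed a single instanced draw call rather than one object per point):

```python
def grid_point_positions(dims, cell_size, origin=(0.0, 0.0, 0.0)):
    """Flattened per-instance positions for a dims=(nx, ny, nz) scaffold grid.

    The buffer is regenerated when the grid is resized and drawn with a
    single GPU-instanced call instead of one draw per point."""
    nx, ny, nz = dims
    ox, oy, oz = origin
    return [
        (ox + x * cell_size, oy + y * cell_size, oz + z * cell_size)
        for x in range(nx) for y in range(ny) for z in range(nz)
    ]

points = grid_point_positions((4, 3, 2), 0.25)
# 4 * 3 * 2 = 24 instances from one buffer
```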

In the early stages of development it was helpful to color-code the dots. Since the grid is dynamically resized, colors help identify which dots we’re destroying and recreating, and whether our dot ordering is correct (also it was pretty and we like rainbow things).

Shader-Based Grid Hover Affordance

In our work we strive to make things reactive to our actions—heightening the sense of presence and magic that makes VR such a wonderful medium. VR lacks many of the depth cues that we rely on in the physical world, so reactivity is also important in boosting proprioception (our sense of the relative positions of different parts of our body).

With that in mind, we didn’t stop at simply making a grid of cubes. Since we render our grid points with a custom shader, we could add features to help users better understand the position and depth of their hands: the grid points grow and glow as your hand gets near, making the grid more responsive and easier to use.
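The grow-and-glow response is a simple distance falloff evaluated per point (on the GPU in the real shader). A sketch of the falloff logic in Python, with illustrative values of our own choosing for the radius and scale range:

```python
import math

def hover_response(point, hand, radius):
    """Proximity-driven grow & glow for one grid point.

    Returns (scale, glow): scale grows toward 1.5x and glow toward 1.0
    as the hand approaches within `radius` of the point."""
    d = math.dist(point, hand)
    t = max(0.0, min(1.0, 1.0 - d / radius))
    t = t * t * (3.0 - 2.0 * t)  # smoothstep, for a soft rather than linear falloff
    return 1.0 + 0.5 * t, t
```

The smoothstep keeps the response from popping on and off at the edge of the radius, which matters a lot for the sense that the grid is reacting to you.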

Making Scaffold-Reactive Blocks & Their Ghosts

Creating objects that can be placed within (and aligned to) our new grid starts with adding an InteractionBehaviour component to one of our block models. Combined with the Interaction Engine, this takes care of the important task of making the object graspable. To let the block interact with the grid, we created and added another MonoBehaviour component that we called ScaffoldBehaviour. This behaviour handles as much of the block-specific logic as possible so the grid classes stay less complicated and remain wieldy (yes, it’s a word).

As with the grid itself, we’ve learned to think about the affordances for our interactions right along with the interactions themselves. We designed interaction logic to create and manage a ghost of the block when it’s within the grid, so you can easily tell where the block will go when you release it:

Resizing The Grid with Interaction Engine Handles

By building handles to grasp and drag, a user can resize the Scaffold to fit within a specific area. We created spherical handles with Interaction Engine behaviors, which we constrained to move only along the axis they control. This way, if the user places blocks in the Scaffold and drags the handles to make the grid smaller, the blocks are released and drop. Conversely, if the handles are dragged to make the grid larger, and blocks had been placed at those grid points, then the blocks snap back into place!
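The resize bookkeeping reduces to comparing each placed block's cell index against the new grid bounds. A hedged sketch of that decision in Python (the data layout and names are invented; the real system tracks this through the Scaffold and ScaffoldBehaviour components):

```python
def on_grid_resized(placed_blocks, new_dims):
    """Decide which placed blocks survive a grid resize.

    placed_blocks maps block id -> (ix, iy, iz) cell index. Blocks whose
    cell falls outside the new bounds are released to free physics and drop;
    blocks still inside snap back to their springs."""
    released, resnapped = [], []
    for block, (ix, iy, iz) in placed_blocks.items():
        if ix < new_dims[0] and iy < new_dims[1] and iz < new_dims[2]:
            resnapped.append(block)
        else:
            released.append(block)
    return released, resnapped
```

For example, shrinking a grid to (2, 2, 2) releases a block at cell (3, 1, 0) while a block at (0, 0, 0) snaps back into place.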

Continued on Page 2: Widget Stages, States, and Shapes »

The post Exclusive: Scaffolding in VR – Interaction Design for Easy & Intuitive Building appeared first on Road to VR.

Exclusive: Designing ‘Lone Echo’ & ‘Echo Arena’s’ Virtual Touchscreen Interfaces
https://www.roadtovr.com/designing-lone-echo-echo-arena-virtual-touchscreen-interfaces-robert-duncan/
Fri, 16 Mar 2018 19:51:45 +0000

Lone Echo nabbed our 2017 Oculus Rift Game of the Year Award for many reasons—amazing visuals, intuitive locomotion, and a strong story, to name a few—but one of the game’s unsung innovations is its virtual touchscreen interfaces. While many VR games are still using less than ideal laser pointer interfaces, developer Ready at Dawn created a framework for surprisingly functional virtual interfaces which are both intuitive and immersive. The studio’s lead systems designer, Robert Duncan, joins us to explain the design approach behind the end result.

Guest Article by Robert Duncan

Duncan is the Lead Systems Designer at Ready At Dawn Studios, where he enjoys collaborating with the entire team in the pursuit of awesome. He loves to create compelling, emotionally engaging experiences and stories for others to enjoy, all while ravenously consuming those same types of experiences across a variety of mediums: games, movies, TV, anime… to name just a few. He also loves cooking, physical crafts, and tabletop games (especially with miniatures), and even more so when sharing those experiences with others. To that end, he’s excited about the storytelling power of VR and the incredible social opportunities it provides.

SEE ALSO
‘Lone Echo’ Behind-the-Scenes – Insights & Artwork from Ready At Dawn

Objective System Goals

Designing the various user interfaces in Lone Echo and Echo Arena—from simple mechanical interfaces like pull-levers, all the way up to complex virtual interfaces like Jack’s augmented-reality touchscreens—was a process that involved an exciting amount of exploration and iteration. Given the unique constraints that come out of developing for VR, we found ourselves in a situation where we were all but forced to innovate just to achieve our most basic goals.

Fortunately, that innovation rarely had to be done from-scratch; in many cases we were able to pull inspiration from related disciplines (like physical product design or mobile interface design) while making clever adaptations for VR. To illuminate what that process was like, this article will take a deep-dive look into the design of the player character’s[1] integrated ‘objective system’ from Lone Echo, followed by a brief look at the various screens used in the multiplayer lobby of Echo Arena. However, explaining the what without the why wouldn’t be nearly as helpful, so to begin we’ll examine the overall goals that guided the objective system’s development:

Comfort: As with anything we create in VR, comfort is a chief concern. Eye-strain can be quite problematic, especially when considering the use of text.

Usability: While this arguably goes without saying, it is highly important that this system be easy and intuitive for players to use. Aside from the fact that these features are almost always beneficial to a system, they became all the more important when we realized we weren’t going to be developing a tutorial for the objective system.

Effectiveness: Again, while this goal seems obvious, it’s important to enumerate what would make the objective system sufficiently ‘effective’. First and foremost, it needs to help players figure out what they are supposed to do in order to make progress through the experience. As such, when critical progression information needs to be delivered, the objective system needs not just to convey this information, but to even encourage the player to see it. Beyond such critical information, this system is also expected to offer players additional details about their objectives in case they need more thorough support. Lastly, it’s key that this system does not artificially imply linear action. That is to say, when players are allowed to do things in the order of their choosing, it’s important that this system doesn’t make them think they have to do them in a specific order, as we want players to embrace that freedom of choice.

Immersiveness: Given the incredible power of presence that VR offers, one of the chief goals for Lone Echo is to leverage that power as much as possible. Naturally, this extends to all systems of the game, but it’s particularly important for the objective system because it’s most likely to be used when players are close to having their immersion broken: if a player is getting ‘stuck’ and not sure what to do, they’re probably teetering on the edge of taking their headset off… the last thing they need is for their life-line (the objective system) to push them over the edge! This is tricky, because objective systems are notoriously game-y, abstract, non-immersive systems.

Feasibility: Though often overlooked due to its obvious implication, I find it’s important to weigh this practical goal alongside all the rest for one simple reason: if we can’t build it with the time and resources we have, the rest doesn’t matter! It won’t exist!

With the goals laid out, the next step is to determine a high-level design that will meet these goals.

High-Level Design

It’s not uncommon for even the high-level design of a system to change a bit over the course of development, especially as new information is discovered about how the game plays, what players need, etc. In the case of the objective system, we waited until fairly late in Lone Echo’s development before designing it, which fortunately[2] allowed us to dodge any need for significant high-level change. Here is the high-level breakdown of features that we landed on after carefully examining what Lone Echo needed:

Text-based Display: For the sake of feasibility, the tried-and-true means of conveying objective information via text seemed like the obvious choice: it would be easy to author and expand upon, and it could leverage known lessons from traditional gaming objective systems. As it turns out, figuring out comfortable typography in VR is a process all its own, but this ultimately was still the most viable solution for us to take. It is worth noting, however, that at one point we considered allowing players to replay the dialogue audio associated with an objective, but that feature was dropped due to time constraints[3].

Augmented Reality Theming: Fortunately, given the world of Lone Echo (and even more so Jack’s character as an AI), augmented reality is a highly appropriate way to contextualize the objective system. Furthermore, given that a common modern-day use-case for AR is conveying abstract concepts (e.g. travel directions), it’s that much more appropriate for an abstract system like objectives.

Conceptualized as a ‘To-Do’ List from Jack/Liv/HERA: This idea was important for determining how objectives are written and displayed… while many games simply allow this sort of system to exist as an abstract layer ‘on top of’ the game (accessible via a non-immersive pause-menu or the like), given our immersiveness goal, that level of abstraction simply wasn’t an option. Instead, we chose to gear the list of objectives toward something that Jack and Liv might actually use: a dynamically updated list of things to do, written through their perspectives. We found that this also helps reduce any artificial implications of linearity.

Predominantly Opt-in: In order to maintain the presence and exploratory feeling we were targeting for Lone Echo, we found that we had to strike a careful balance with how ‘in your face’ the system was while ensuring it still met its effectiveness goals. What we ultimately landed on (after lots of testing and sifting through a wide variety of player preferences) was a system that is predominantly opt-in and only grabs the player’s attention when absolutely necessary[4].

Now that the big picture is squared away, let’s talk details! The objective system (from a user-interface perspective) consists of two key elements: the wrist display, and the tablet. We’ll go over both in the following pages.

Continued on Page 2: Wrist Display & Tablet »


— Footnotes —

[1] For those unfamiliar, in Lone Echo the player takes on the role of Jack, an advanced AI (with an android body) working on a mining station within the rings of Saturn.

[2] One downside to this method was that we didn’t have a usable objective system for many of our focus tests. To get around this, we had our test proctors act as de-facto objective systems, giving players very constrained hints at the right times or when requested.

[3] A simplified version of this feature was leveraged to allow players to listen to the audio logs they obtained from Cube-Sats.

[4] Given the additional cognitive load players seem to experience while in VR, determining when (and how) to explicitly tell players what to do was a new challenge in and of itself!

The post Exclusive: Designing ‘Lone Echo’ & ‘Echo Arena’s’ Virtual Touchscreen Interfaces appeared first on Road to VR.

Lead FX Artist on ‘ARKTIKA.1’ Shares Strategies for Great Effects in VR
https://www.roadtovr.com/lead-fx-artist-arktika-1-nikita-shilkin-strategies-for-great-effects-vr/
Thu, 22 Feb 2018 19:27:58 +0000

ARKTIKA.1, an Oculus exclusive due to launch later this year, is shaping up to be one of VR’s best looking games to date. You’ll take it as no surprise that the title is being developed by 4A Games, the developer behind the Metro series (and its stunning next installment, Metro Exodus). One important part of making a game look great is skilled use of effects—dynamic elements like particles, smoke, muzzle flashes, explosions, and lighting. But the methods for making great looking effects for traditional games take on new challenges when it comes to VR, especially when teetering on the edge of visual fidelity and the high performance required for smooth VR rendering. In this guest article, 4A Games explores their approach to making effects in Arktika.1.

Guest Article by Nikita Shilkin

Nikita Shilkin is a Senior VFX Artist at 4A Games. Before that, he worked on films and ads as a Generalist Artist, and then as a VFX/Onset Supervisor on sci-fi and other types of films.

Update (2/22/18): Following the launch of Arktika.1, Shilkin has published a new video further detailing the effects he created for the game.

Since this article was initially published, we’ve also published our Arktika.1 Review and a behind-the-scenes article exploring the artwork and insights behind the game’s development.

Original Article (8/13/17): To get an idea of my prior work, here’s some of the scenes I’ve worked on:

At the moment, I am working on effects for the ARKTIKA.1 project. This is a sci-fi VR shooter with the company’s traditional focus on immersing the audience through story and high-quality visuals of a caliber that makes it possible to talk about it as an AAA product.

To begin with, I would like to note that making effects for VR is essentially no different from producing them for ordinary games, with the exception of a few nuances that I have noticed during production.

  • The first and most important: the player’s freedom and, as a consequence, the unpredictability of almost all of their actions.
  • Focus on performance. The requirement of a constant 90 frames per second limits your technical and creative freedom, forcing you to constantly balance game quality against player comfort.
  • The final checkpoint is the headset. Due to differences in resolution, gamma, and the nature of virtual reality, what looked wonderful and beautiful in the editor might not look as good in the headset.

Based on these three rules, we can start analyzing the production. So, let’s begin with some core things.

Weapons

Since we are talking about VR, we don’t have a fixed camera, fixed animations, timings, or other constant values, which means we can never know how the player will shoot or from which side they will see the weapon. The only way out is to make the effect work beautifully from all sides.

The first standard mistake is trying to make one mind-blowing sequence which, unfortunately, only works with a classic fixed camera and becomes ridiculous the moment the weapon is turned.

The solution is quite simple: no matter how complex the effect is, break it into simple fixed parts along all three axes. This way you get not only volume, but also a visual randomness that makes each shot unique.

Above: (left) a muzzle flash made with volume in all directions, (right) a typical ‘first person’ muzzle flash looks great from a static camera angle behind the weapon, but breaks down if seen from other directions.

Since VR features neither a classic crosshair nor a meaningful screen center, and aiming down iron sights or a scope is uncommon, the weapon’s projectiles should be clearly visible. Most players will rely on them, correcting their aim based on the bullets and their impacts.

In this regard, there are several tips:

  • The muzzle flash must not block the sight of the bullet.
  • The bullet should be clearly visible (size, brightness, length). The lower the rate of fire, the more clearly each bullet and its trail should read; the higher the rate of fire, the brighter the bullets should be.
  • Don’t be lazy: create different bullets with distinct impacts for each weapon, as this will also help the player understand the shooting direction.

And finally, a little piece of advice: if you have firearms (or any other weapons that emit smoke particles), put the smoke into a separate particle system, detached from the muzzle flash and left free in the world; it looks interesting.

Continued on Page 2: Distortion »

The post Lead FX Artist on ‘ARKTIKA.1’ Shares Strategies for Great Effects in VR appeared first on Road to VR.

Exclusive: Summoning & Superpowers – Designing VR Interactions at a Distance
https://www.roadtovr.com/exclusive-summoning-superpowers-designing-vr-interactions-at-a-distance/
Mon, 22 Jan 2018 06:28:41 +0000

Manipulating objects with bare hands lets us leverage a lifetime of physical experience, minimizing the learning curve for users. But there are times when virtual objects will be farther away than arm’s reach, beyond the user’s range of direct manipulation. As part of its interactive design sprints, Leap Motion, creators of the hand-tracking peripheral of the same name, prototyped three ways of effectively interacting with distant objects in VR.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tools and workflow building with a user driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Experiment #1: Animated Summoning

The first experiment looked at creating an efficient way to select a single static distant object then summon it directly into the user’s hand. After inspecting or interacting with it, the object can be dismissed, sending it back to its original position. The use case here would be something like selecting and summoning an object from a shelf then having it return automatically—useful for gaming, data visualization, and educational simulations.

This approach involves four distinct stages of interaction: selection, summoning, holding/interacting, and returning.

1. Selection

One of the pitfalls that many VR developers fall into is thinking of hands as analogous to controllers, and designing interactions that way. Selecting an object at a distance is a pointing task and well suited to raycasting. However, holding a finger or even a whole hand steady in midair to point accurately at distant objects is quite difficult, especially if a trigger action needs to be introduced.

To increase accuracy, we used a head/headset position as a reference transform, added an offset to approximate a shoulder position, and then projected a ray from the shoulder through the palm position and out toward a target (veteran developers will recognize this as the experimental approach first tried with the UI Input Module). This allows for a much more stable projective raycast.
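A rough sketch of that stabilization in Python (the shoulder offset values here are invented for illustration; the real system derives the approximation from the headset transform). Anchoring the ray at the far less jittery shoulder means small palm tremors rotate the ray about a distant pivot instead of translating it wildly at the target:

```python
import math

def selection_ray(head_pos, palm_pos, shoulder_offset=(0.15, -0.13, 0.0)):
    """Ray from an approximated shoulder position through the palm.

    Returns (origin, normalized_direction) for the projective raycast."""
    shoulder = tuple(h + o for h, o in zip(head_pos, shoulder_offset))
    direction = tuple(p - s for p, s in zip(palm_pos, shoulder))
    length = math.sqrt(sum(c * c for c in direction))
    return shoulder, tuple(c / length for c in direction)
```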

In addition to the stabilization, larger proxy colliders were added to the distant objects, making them easier targets to hit. We added logic so that if the targeting raycast hits a distant object’s proxy collider, the line renderer bends to end at that object’s center point. The result is a kind of snapping of the line renderer between zones around each target object, which again makes accurate selection much easier.
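The proxy-collider test amounts to a ray-versus-enlarged-sphere check per target, with the line renderer bent to the winner's center. A hedged, simplified sketch in Python (names and the single shared radius are ours; the real system uses per-object colliders in Unity):

```python
def pick_target(ray_origin, ray_dir, targets, proxy_radius):
    """Nearest target whose enlarged proxy sphere the ray passes through.

    targets: list of (name, center); ray_dir must be normalized.
    Returns (name, center) so the line renderer can bend to the center,
    or None if the ray misses every proxy zone."""
    best = None
    for name, center in targets:
        to_c = tuple(c - o for c, o in zip(center, ray_origin))
        along = sum(t * d for t, d in zip(to_c, ray_dir))   # distance along the ray
        if along < 0:
            continue  # target is behind the user
        perp_sq = sum(t * t for t in to_c) - along * along  # squared miss distance
        if perp_sq <= proxy_radius ** 2 and (best is None or along < best[0]):
            best = (along, name, center)
    return None if best is None else (best[1], best[2])
```

Because the proxy radius is much larger than the object itself, the returned center "snaps" between zones as the ray sweeps across the scene.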

After deciding how selection would work, the next step was to determine when ‘selection mode’ should be active, since once an object is brought within reach, users will want to switch out of selection mode and go back to regular direct manipulation.

Since shooting a ray out of one’s hand to target something out of reach is quite an abstract interaction, the team thought about related physical metaphors or biases that could anchor this gesture. When a child wants something out of their immediate vicinity, their natural instinct is to reach out for it, extending their open hands with outstretched fingers.

Image courtesy Picture By Mom

This action was used as a basis for activating the selection mode: When the hand is outstretched beyond a certain distance from the head, and the fingers are extended, we begin raycasting for potential selection targets.

To complete the selection interaction, a confirmation action was needed—something to mark that the hovered object is the one we want to select. Therefore, curling the fingers into a grab pose while hovering an object will select it. As the fingers curl, the hovered object and the highlight circle around it scale down slightly, mimicking a squeeze. Once fully curled, the object pops back to its original scale and the highlight circle changes color to signal a confirmed selection.

2. Summoning

To summon the selected object into direct manipulation range, we referred to real world gestures. A common action to bring something closer begins with a flat palm facing upwards followed by curling the fingers quickly.

At the end of the selection action, the arm is extended, palm facing away toward the distant object, with fingers curled into a grasp pose. We defined heuristics for the summon action as first checking that the palm is (within a range) facing upward. Once that’s happened, we check the curl of the fingers, using how far they’re curled to drive the animation of the object along a path toward the hand. When the fingers are fully curled the object will have animated all the way into the hand and becomes grasped.
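Those heuristics boil down to a palm-up gate followed by finger curl driving a 0..1 parameter along the object's path. A sketch of that mapping in Python (the facing threshold is an invented illustrative value, not Leap Motion's):

```python
def summon_progress(palm_normal, finger_curl, up=(0.0, 1.0, 0.0), facing_threshold=0.6):
    """Map the summon gesture onto a 0..1 parameter along the object's path.

    palm_normal: normalized direction the palm faces; finger_curl: 0 (open)
    to 1 (fully curled). Only while the palm is roughly facing up does curl
    drive the object's animation toward the hand; at 1.0 it becomes grasped."""
    palm_up = sum(a * b for a, b in zip(palm_normal, up)) >= facing_threshold
    if not palm_up:
        return 0.0
    return max(0.0, min(1.0, finger_curl))
```

Note how this also explains the wrist-flick shortcut described below: a quick palm-up flick with already-curled fingers jumps the parameter straight to 1.0.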

During the testing phase we found that after selecting an object—with arm extended, palm facing toward the distant object, and fingers curled into a grasp pose—many users simply flicked their wrists and turned their closed hand towards themselves, as if yanking the object towards themselves. Given our heuristics for summoning (palm facing up, then degree of finger curl driving animation), this action actually summoned the object all the way into the user’s hand immediately.

This single-motion action to select and summon was more efficient than the two discrete motions, though the latter offered more control. Since our heuristics were flexible enough to allow both approaches, we left them unchanged and let users choose how they wanted to interact.

3. Holding and Interacting

Once the object arrives in hand, all of the extra summoning specific logic deactivates. It can be passed from hand to hand, placed in the world, and interacted with. As long as the object remains within arm’s reach of the user, it’s not selectable for summoning.

4. Returning

You’re done with this thing—now what? If the object is grabbed and held out at arm’s length (beyond a set radius from head position) a line renderer appears showing the path the object will take to return to its start position. If the object is released while this path is visible, the object automatically animates back to its anchor position.

Overall, this execution felt accurate and low effort. It easily enables the simplest version of summoning: selecting, summoning, and returning a single static object from an anchor position. However, it doesn’t feel very physical, relying heavily on gestures and with objects animating along predetermined paths between two defined positions.

For this reason it might be best used for summoning non-physical objects like UI, or in an application where the user is seated with limited physical mobility where accurate point-to-point summoning would be preferred.

Continued on Page 2: Telekinetic Powers »

The post Exclusive: Summoning & Superpowers – Designing VR Interactions at a Distance appeared first on Road to VR.

Exclusive: Leap Motion Explores Ways to Make Controller-free Input More Intuitive and Immersive
https://www.roadtovr.com/leap-motion-explores-ways-to-make-controller-free-input-more-intuitive-and-immersive-vr-design/
Fri, 08 Dec 2017 21:35:04 +0000

There’s an intuitive appeal to using controller-free hand-tracking input like Leap Motion’s; there’s nothing quite like seeing your virtual hands and fingers move just like your own hands and fingers without the need to pick up and learn how to use a controller. But reaching out to touch and interact in this way can be jarring because there’s no physical feedback from the virtual world. When your expectation of feedback isn’t met, it can be unclear how to best interact with this new non-physical world. In a series of experiments, Leap Motion has been exploring how they can apply visual design to make controller-free hand input more intuitive and immersive. Leap Motion’s Barrett Fox and Martin Schubert explain:

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tools and workflow building with a user driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Exploring the Hand-Object Boundary in VR

When you reach out and grab a virtual object or surface, there’s nothing stopping your physical hand in the real world. To make physical interactions in VR feel compelling and natural, we have to play with some fundamental assumptions about how digital objects should behave. This is usually handled by having the virtual hand penetrate the geometry of that object/surface, resulting in visual clipping. But how can we take these interactions to the next level?

With interaction sprints at Leap Motion, our team sets out to identify areas of interaction that developers and users often encounter, and set specific design challenges. After prototyping possible solutions, we share our results to help developers tackle similar challenges in their own projects.

For our latest sprint, we asked ourselves: could the penetration of virtual surfaces feel more coherent and create a greater sense of presence? To answer this question, we experimented with three approaches to the hand-object boundary.

Experiment #1: Intersection and Depth Highlights for Any Mesh Penetration

Image courtesy Leap Motion

For our first experiment, we proposed that when a hand intersects some other mesh, the intersection should be visually acknowledged. A shallow portion of the occluded hand should still be visible, but with a change of color and a fade to transparency.

This execution felt really good across the board. When the glow strength and depth were turned down to a minimum level, it seemed like an effect which could be universally applied across an application without being overpowering.

Experiment #2: Fingertip Gradients for Proximity to Interactive Objects and UI Elements

For our second experiment, we decided to make the fingertips change color to match an interactive object’s surface as they get closer to touching it. This might make it easier to judge the distance between fingertip and surface, making us less likely to overshoot and penetrate the surface. Further, if we do penetrate the mesh, the intersection clipping will appear less abrupt, since the fingertip and surface will be the same color.
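The gradient is a straightforward distance-driven blend between the finger's own color and the surface color. A hedged sketch in Python (the 10cm falloff distance is an illustrative value of our own, not Leap Motion's):

```python
def fingertip_color(finger_color, surface_color, distance, max_dist=0.1):
    """Blend a fingertip toward the surface color as it approaches touch.

    At max_dist or beyond the finger keeps its own color; at contact
    (distance 0) it fully matches the surface, hiding clipping if the
    finger then penetrates the mesh."""
    t = 1.0 - max(0.0, min(1.0, distance / max_dist))
    return tuple(f + (s - f) * t for f, s in zip(finger_color, surface_color))
```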

This experiment definitely helped us judge the distance between our fingertips and interactive surfaces more accurately. In addition, it made it easier to know which object we were closest to touching. Combining this with the effects from Experiment #1 made the interactive stages (approach, contact, and grasp vs. intersect) even clearer.

Experiment #3: Reactive Affordances for Unpredictable Grabs

Image courtesy Leap Motion

How do you grab a virtual object? You might create a fist, or pinch it, or clasp the object. Previously we’ve experimented with affordances – like handles or hand grips – hoping these would help guide users in how to grasp objects.

In Weightless: Training Room the projectiles have indentations which afford more visually coherent grasping. This also makes it easier for users to reliably release the objects in a throw.

While this helped many people rediscover how to use their hands in VR, some users still ignored these affordances and clipped their fingers through the mesh. So we thought: what if, instead of modeling static affordances, we created reactive affordances that appear dynamically wherever and however the user chooses to grip an object?

Three raycasts per finger (and two for the thumb) that check for hits on the sphere.

Bloop! The dimple follows the finger wherever it intersects the sphere.
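The per-finger raycasts in the sphere demo above come down to a standard ray-sphere intersection test. Here is a self-contained sketch (our own implementation, not Leap Motion’s code):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the first sphere hit, or None on a
    miss. `direction` must be a unit vector."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4.0 * c
    if discriminant < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(discriminant)) / 2.0
    return t if t >= 0.0 else None
```

Each fingertip fires a few short rays like this; a hit close enough to the surface spawns and positions the dimple.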

In a variation on this concept, we tried adding a fingertip color gradient. This time, instead of being driven by proximity to an object, the gradient was driven by the finger depth inside the object.

Pushing the concept of reactive affordances even further, we thought: what if, instead of making the object deform in response to hand/finger penetration, the object could anticipate your hand and carve out finger holds before you even touched the surface?

Basically, we wanted to create virtual ACME holes.

To do this we increased the length of the fingertip raycast so that a hit would be registered well before your finger made contact with the surface. Using two different meshes and a rendering rule, we created the illusion of a moveable ACME-style hole.
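The rendering rule can be reduced to a blend factor that eases the hole in as the lengthened fingertip ray closes on the surface. The reach values below are illustrative guesses, not figures from the experiment:

```python
def acme_hole_strength(hit_distance_m, contact_reach_m=0.01,
                       anticipate_reach_m=0.08):
    """How fully to carve the hole: 0.0 = no hole, 1.0 = fully carved.

    hit_distance_m: distance reported by the lengthened fingertip
    raycast, or None if the ray missed the object.
    """
    if hit_distance_m is None or hit_distance_m > anticipate_reach_m:
        return 0.0
    if hit_distance_m <= contact_reach_m:
        return 1.0
    # Ease the hole in across the anticipation range.
    return ((anticipate_reach_m - hit_distance_m)
            / (anticipate_reach_m - contact_reach_m))
```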

These effects made grabbing an object feel much more coherent, as though our fingers were being invited to intersect the mesh. Clearly this approach would need a more complex system to handle objects other than a sphere – for parts of the hands which are not fingers and for combining ACME holes when fingers get very close to each other. Nonetheless, the concept of reactive affordances holds promise for resolving unpredictable grabs.

Hand-centric design for VR is a vast possibility space—from truly 3D user interfaces to virtual object manipulation to locomotion and beyond. As creators, we all have the opportunity to combine the best parts of familiar physical metaphors with the unbounded potential offered by the digital world. Next time, we’ll really bend the laws of physics with the power to magically summon objects at a distance!

The post Exclusive: Leap Motion Explores Ways to Make Controller-free Input More Intuitive and Immersive appeared first on Road to VR.

Exclusive: How NVIDIA Research is Reinventing the Display Pipeline for the Future of VR, Part 2 https://www.roadtovr.com/exclusive-nvidia-research-reinventing-display-pipeline-future-vr-part-2/ https://www.roadtovr.com/exclusive-nvidia-research-reinventing-display-pipeline-future-vr-part-2/#comments Thu, 30 Nov 2017 19:01:06 +0000 https://www.roadtovr.com/?p=71885
In Part 1 of this article we explored the current state of CGI, game, and contemporary VR systems. Here in Part 2 we look at the limits of human visual perception and show several of the methods we’re exploring to drive performance closer to them in VR systems of the future.

Guest Article by Dr. Morgan McGuire

Dr. Morgan McGuire is a scientist on the new experiences in AR and VR research team at NVIDIA. He’s contributed to the Skylanders, Call of Duty, Marvel Ultimate Alliance, and Titan Quest game series published by Activision and THQ. Morgan is the coauthor of The Graphics Codex and Computer Graphics: Principles & Practice. He holds faculty positions at the University of Waterloo and Williams College.

Note: Part 1 of this article provides important context for this discussion, consider reading it before proceeding.

Reinventing the Pipeline for the Future of VR

We derive our future VR specifications from the limits of human perception. There are different ways to measure these, but to make the perfect display you’d need roughly the equivalent of 200 HDTVs updating at 240 Hz. This equates to about 100,000 megapixels per second of graphics throughput.

Recall that modern VR is around 450 Mpix/sec today. This means we need a 200x increase in performance for future VR. But with factors like high dynamic range, variable focus, and current film standards for visual quality and lighting in play, the more realistic need is a 10,000x improvement… and we want this with only 1ms of latency.
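The arithmetic behind those figures is worth spelling out. Treating an ‘HDTV’ as a 1080p panel:

```python
# Throughput target implied by "200 HDTVs updating at 240 Hz".
hdtv_pixels = 1920 * 1080                        # one 1080p panel
target_mpix_s = 200 * hdtv_pixels * 240 / 1e6    # megapixels per second
current_mpix_s = 450                             # approximate modern VR

print(round(target_mpix_s))                   # ~99,533, i.e. ~100,000 Mpix/sec
print(round(target_mpix_s / current_mpix_s))  # ~221, i.e. the ~200x gap
```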

We could theoretically accomplish this by committing increasingly greater computing power, but brute force simply isn’t efficient or economical. Brute force won’t get us to pervasive use of VR. So, what techniques can we use to get there?

Rendering Algorithms

Foveated Rendering
Our first approach to performance is foveated rendering, a technique which reduces the quality of images in a user’s peripheral vision. It takes advantage of an aspect of human perception to gain performance without a perceptible loss in quality.

Because the eye itself only has high resolution right where you’re looking, in the fovea centralis region, a VR system can undetectably drop the resolution of peripheral pixels for a performance boost. It can’t just render at low resolution, though. The above images are wide field of view pictures shrunk down for display here in 2D. If you looked at the clock in VR, then the bulletin board on the left would be in the periphery. Just dropping resolution as in the top image produces blocky graphics and a change in visual contrast. This is detectable as motion or blurring in the corner of your eye. Our goal is to compute the exact enhancement needed to produce a low-resolution image whose blurring matches human perception and appears perfect in peripheral vision (Patney et al. and Sun et al.).
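One simple way to picture the savings: shading rate can fall off with angular distance from the gaze point. A toy falloff function follows; the fovea size and slope are illustrative assumptions, not values from the cited papers:

```python
def shading_rate(eccentricity_deg, fovea_deg=5.0, slope=0.05):
    """Fraction of full resolution to spend at a given angular
    distance from where the user is looking: full rate inside the
    fovea, then a linear falloff with a floor in the far periphery."""
    if eccentricity_deg <= fovea_deg:
        return 1.0
    rate = 1.0 - slope * (eccentricity_deg - fovea_deg)
    return max(rate, 0.1)
```

The real technique is subtler: as noted above, naive downsampling changes contrast, so the low-resolution periphery also needs contrast-preserving enhancement.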

Light Fields
To speed up realistic graphics for VR, we’re looking at rendering primitives beyond today’s triangle meshes. In this collaboration with McGill and Stanford, we’re using light fields to accelerate the lighting computations. Unlike today’s 2D light maps, which paint lighting onto surfaces, light fields are a 4D data structure that stores the lighting at points in space from all possible directions.

They produce realistic reflections and shading on all surfaces in the scene and even dynamic characters. This is the next step of unifying the quality of ray tracing with the performance of environment probes and light maps.
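To make the ‘4D’ concrete: a light field can be indexed by a 2D position and a 2D direction, returning the radiance arriving at that position from that direction. A toy nearest-neighbor lookup (our own sketch over a dense grid, not NVIDIA’s data structure):

```python
def sample_light_field(field, u, v, s, t):
    """Nearest-neighbor lookup into a 4D light field grid.

    field[u][v][s][t]: (u, v) picks a position on a plane, (s, t)
    picks a direction; the stored value is the incoming radiance.
    All coordinates are given in [0, 1] and snapped to the grid.
    """
    def snap(x, n):
        return min(int(x * n), n - 1)
    nu, nv = len(field), len(field[0])
    ns, nt = len(field[0][0]), len(field[0][0][0])
    return field[snap(u, nu)][snap(v, nv)][snap(s, ns)][snap(t, nt)]
```

A production system would interpolate between neighboring samples and compress the grid heavily; the point is that, unlike a 2D light map, the stored lighting varies with direction as well as position.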

Real-time Ray Tracing
What about true run-time ray tracing? The NVIDIA Volta GPU is the fastest ray tracing processor in the world, and its NVIDIA Pascal GPU siblings are the fastest consumer ones. At about 1 billion rays/second, Pascal is just about fast enough to replace the primary rasterizer or shadow maps for modern VR. If we unlock the pipeline with the kinds of changes I’ve just described, what can ray tracing do for future VR?

The answer is: ray tracing can do a lot for VR. When you’re tracing rays, you don’t need shadow maps at all, thereby eliminating a latency barrier. Ray tracing can also natively render red, green, and blue separately, and directly render barrel-distorted images for the lens. So it avoids the need for the lens warp processing and the subsequent latency.

In fact, when ray tracing, you can completely eliminate the latency of rendering discrete frames of pixels so that there is no ‘frame rate’ in the classic sense. We can send each pixel directly to the display as soon as it is produced on the GPU. This is called ‘beam racing’ and eliminates the display synchronization. At that point, there are zero high-latency barriers within the graphics system.

Because there’s no flat projection plane as in rasterization, ray tracing also solves the field of view problem. Rasterization depends on preserving straight lines (such as the edges of triangles) from 3D to 2D. But the wide field of view needed for VR requires a fisheye projection from 3D to 2D that curves triangles around the display. Rasterizers break the image up into multiple planes to approximate this. With ray tracing, you can directly render even a full 360 degree field of view to a spherical screen if you want. Ray tracing also natively supports mixed primitives: triangles, light fields, points, voxels, and even text, allowing for greater flexibility when it comes to content optimization. We’re investigating ways to make all of those faster than traditional rendering for VR.
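For illustration, here is what per-pixel ray generation might look like under an equidistant fisheye mapping, where the angle from the view axis grows linearly with distance from the image center: exactly the kind of projection a rasterizer cannot produce directly with straight-line triangle edges. The function is our own sketch, not NVIDIA code:

```python
import math

def fisheye_ray(px, py, width, height, fov_deg=180.0):
    """View-space ray direction for pixel (px, py) under an
    equidistant fisheye projection. +z points down the view axis."""
    # Pixel offset from the image center, normalized to [-1, 1].
    x = (2.0 * px / width) - 1.0
    y = (2.0 * py / height) - 1.0
    r = math.hypot(x, y)                     # radial distance in the image
    theta = r * math.radians(fov_deg) / 2.0  # angle off the view axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```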

In addition to all of the ways that ray tracing can accelerate VR rendering latency and throughput, a huge feature of ray tracing is what it can do for image quality. Recall from the beginning of this article that the image quality of film rendering is due to an algorithm called path tracing, which is an extension of ray tracing. If we switch to a ray-based renderer, we unlock a new level of image quality for VR.

Real-time Path Tracing
Although we can now ray trace in real time, there’s a big challenge for real-time path tracing: path tracing is about 10,000x more computationally intensive than ray tracing. That’s why movies take minutes per frame to generate instead of milliseconds.

Under path tracing, the system first traces a ray from the camera to find the visible surface. It then casts another ray to the sun to see if that surface is in shadow. But there’s more illumination in a scene than what comes directly from the sun; some light is indirect, having bounced off the ground or another surface. So the path tracer recursively casts another ray at random to sample the indirect lighting. That point also requires a shadow ray cast, and its own random indirect ray… the process continues until it has traced about 10 rays for a single path.

But if there’s only one or two paths at a pixel, the image is very noisy because of the random sampling process. It looks like this:

Film graphics solves this problem by tracing thousands of paths at each pixel. All of those paths at ten rays each are why path tracing is a net 10,000x more expensive than ray tracing alone.
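The 10,000x figure falls out of simple counting. Each surface point along a path costs one incoming ray (the camera ray, or an indirect bounce) plus one shadow ray toward the light:

```python
def rays_per_path(bounces):
    """Ray count for one path: every surface point along it needs one
    incoming ray plus one shadow ray toward the sun."""
    surface_points = bounces + 1   # the first hit plus each bounce
    return 2 * surface_points

print(rays_per_path(4))           # 10 rays for a path with 4 bounces
print(rays_per_path(4) * 1000)    # 10,000 rays at film-style sampling
```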

To unlock path tracing image quality for VR, we need a way to sample only a few paths per pixel and still avoid the noise from random sampling. We think we can get there soon thanks to innovations like foveated rendering, which makes it possible to only pay for expensive paths in the center of the image, and denoising, which turns the grainy images directly into clear ones without tracing more rays.

We released three research papers this year towards solving the denoising problem. These are the result of collaborations with McGill University, the University of Montreal, Dartmouth College, Williams College, Stanford University, and the Karlsruhe Institute of Technology. These methods can turn a noisy, real-time path traced image like this:

Into a clean image like this:

Using only milliseconds of computation and no additional rays. Two of the methods use the image processing power of the GPU to achieve this. One uses the new AI processing power of NVIDIA GPUs. We trained a neural network for days on denoising, and it can now denoise images on its own in tens of milliseconds. We’re increasing the sophistication of that technique and training it more to bring the cost down. This is an exciting approach because it is one of several new methods we’ve discovered recently for using artificial intelligence in unexpected ways to enhance both the quality of computer graphics and the authoring process for creating new, animated 3D content to populate virtual worlds.

Computational Displays

The displays in today’s VR headsets are relatively simple output devices. The display itself does hardly any processing; it simply shows the data that is handed to it. And while that’s fine for things like TVs, monitors, and smartphones, there’s huge potential for improving the VR experience by making displays ‘smarter’ about not only what is being displayed but also the state of the observer. We’re exploring several methods of on-headset and even in-display processing to push the limits of VR.

Solving Vergence-Accommodation Disconnect
The first challenge for a VR display is the focus problem, which is technically called the ‘vergence-accommodation disconnect’. All of today’s VR and AR devices force you to focus about 1.5m away. That has two drawbacks:

  1. When you’re looking at a very distant or close up object in stereo VR, the point where your two eyes converge doesn’t match the point where they are focused (‘accommodated’). That disconnect creates discomfort and is one of the common complaints with modern VR.
  2. If you’re using augmented reality, then you are looking at points in the real world at real depths. The virtual imagery needs to match where you’re focusing or it will be too blurry to use. For example, you can’t read augmented map directions at 1.5m while you’re looking 20m into the distance while driving.
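The mismatch is usually quantified in diopters (inverse meters). A quick calculation shows why a fixed 1.5m focus works for mid-range scenes but breaks down both up close and at driving distances (the severity labels in the comments reflect the commonly cited rule of thumb that conflicts beyond roughly half a diopter become uncomfortable, a figure not from this article):

```python
def va_conflict_diopters(vergence_m, accommodation_m=1.5):
    """Mismatch between where the eyes converge (the virtual object's
    distance) and where the headset optics force them to focus."""
    return abs(1.0 / vergence_m - 1.0 / accommodation_m)

print(round(va_conflict_diopters(1.5), 2))   # 0.0  -> comfortable
print(round(va_conflict_diopters(0.3), 2))   # 2.67 -> severe conflict up close
print(round(va_conflict_diopters(20.0), 2))  # 0.62 -> noticeable at distance
```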

We created a prototype computational light field display that allows you to focus at any depth by presenting light from multiple angles. This display represents an important break with the past because computation is occurring directly in the display. We’re not sending mere images: we’re sending complex data that the display converts into the right form for your eye. Those tiny grids of images that look a bit like a bug’s view of the world have to be specially rendered for the display, which incorporates custom optics—a microlens array—to present them in the right way so that they look like the natural world.

That first light field display was from 2013. Next week, at the ACM SIGGRAPH Asia conference, we’re presenting a new holographic display that uses lasers and intensive computation to create light fields out of interfering wavefronts of light. The workings are harder to visualize here, but the display relies on the same underlying principles and can produce even better imagery.

We strongly believe that this kind of in-display computation is a key technology for the future. But light fields aren’t the only approach that we’ve taken for using computation to solve the focus problem. We’ve also created two forms of variable-focus, or ‘varifocal’ optics.

This display prototype projects the image using a laser onto a diffusing hologram. You look straight through the hologram and see its image as if it was in the distance when it reflects off a curved piece of glass:

We control the distance at which the image appears by moving either the hologram or the sunglass reflectors with tiny motors. We match the virtual object distance to the distance that you’re looking in the real world, so you can always focus perfectly naturally.

This approach requires two pieces of computation in the display: one tracks the user’s eye and the other computes the correct optics in order to render a dynamically pre-distorted image. As with most of our prototypes, the research version is much larger than what would become an eventual product. We use large components to facilitate research construction. These displays would look more like sunglasses when actually refined for real use.

Here’s another varifocal prototype, this one created in collaboration with researchers at the University of North Carolina, the Max Planck Institute, and Saarland University. This is a flexible lens membrane. We use computer-controlled pneumatics to bend the lens as you change your focus so that it is always correct.

Hybrid Cloud Rendering
We have a variety of new approaches for solving the VR latency challenge. One of them, in collaboration with Williams College, leverages the full spread of GPU technology. To reduce the delay in rendering, we want to move the GPU as close as possible to the display. Using a Tegra mobile GPU, we can even put the GPU right on your body. But a mobile GPU has less processing power than a desktop GPU, and we want better graphics for VR than today’s games… so we team the Tegra with a discrete GeForce GPU across a wireless connection, or even better, to a Tesla GPU in the cloud.

This allows a powerful GPU to compute the lighting information, which it then sends to the Tegra on your body to render final images. You get the benefit of reduced latency and power requirements while actually increasing image quality.

Reducing the Latency Baseline
Of course, you can’t push latency below the display’s frame interval. If the display updates at 90 FPS, then it is impossible to have latency less than 11 ms in the worst case, because that’s how long the display waits between frames. So, how fast can we make the display?
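That floor is easy to compute: in the worst case an update arrives just after a refresh and waits one full frame interval before being shown:

```python
def worst_case_display_latency_ms(fps):
    """Worst-case wait before a new image reaches the screen: one
    full refresh interval."""
    return 1000.0 / fps

print(round(worst_case_display_latency_ms(90), 1))  # 11.1 ms at 90 FPS
print(worst_case_display_latency_ms(16000))         # 0.0625 ms at 16,000 fps
```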

We collaborated with scientists at the University of North Carolina to build a display that runs at sixteen thousand binary frames per second. Here’s a graph from a digital oscilloscope showing how well this works for the crucial case of a head turning. When you turn your head, latency in the screen update causes motion sickness.

In the graph, time is on the horizontal axis. When the top green line jumps, that is the time at which the person wearing the display turned their head. The yellow line is when the display updated. It jumps up to show the new image only 0.08ms later…that’s about 500 times better than the 20ms you experience in the worst case on a commercial VR system today.

The renderer can’t run at 16,000 fps, so this kind of display works by Time Warping the most recent image to match the current head position. We speed that Time Warp process up by running it directly on the head-mounted display. Here’s an image of our custom on-head processor prototype for this:

Unlike regular Time Warp which distorts the 2D image or the more advanced Space Warp that uses 2D images with depth, our method works on a full 3D data set as well. The picture on the far right shows a case where we’ve warped a full 3D scene in real-time. In this system, the display itself can keep updating while you walk around the scene, even when temporarily disconnected from the renderer. This allows us to run the renderer at a low rate to save power or increase image quality, and to produce low-latency graphics even when wirelessly tethered across a slow network.

The Complete System

As a reminder, in Part 1 of this article we identified the rendering pipeline employed by today’s VR headsets:

Putting together all of the techniques just described, we can sketch out not just individual innovations but a completely new vision for building a VR system. This vision removes almost all of the synchronization barriers. It spreads computation out into the cloud and right onto the head-mounted display. Latency is reduced by 50-100x and images have cinematic quality. There’s a 100x perceived increase in resolution, but you only pay for pixels where you’re looking. You can focus naturally, at multiple depths.

We’re blasting binary images out of the display so fast that they are indistinguishable from reality. The system has proper focus accommodation, a wide field of view, low weight, and low latency…making it comfortable and fashionable enough to use all day.

By breaking ground in the areas of computational displays, varifocal optics, foveated rendering, denoising, light fields, binary frames and others, NVIDIA Research is innovating for a new system for virtual experiences. As systems become more comfortable, affordable and powerful, this will become the new interface to computing for everyone.

All of the methods that I’ve described can be found in deep technical detail on our website.

I encourage everyone to experience the great, early-adopter modern VR systems available today. I also encourage you to join us in looking to the bold future of pervasive AR/VR/MR for everyone, and recognize that revolutionary change is coming through this technology.

The post Exclusive: How NVIDIA Research is Reinventing the Display Pipeline for the Future of VR, Part 2 appeared first on Road to VR.
