Presence in Immersive Augmented Reality

A paper presented at the International Symposium on Mixed and Augmented Reality (ISMAR) demonstrates that stylized rendering can help blur the boundary between real and virtual in immersive augmented reality applications.

One of the biggest draws of virtual reality is the idea that a person can finally feel transported into another world entirely. In the past, hardware limitations and poorly designed user experiences kept people from reaching that state. Recent advances in VR headsets, however, have opened up the possibility of creating content that gives users a genuine sense of presence.

To genuinely feel present in a virtual space, sensory input must stimulate the brain in specific ways, producing a psychological response. Achieving this requires low latency across the entire pipeline and a wide field of view, which together give the wearer the perception of actually being somewhere else.

William Steptoe is a senior researcher in the Virtual Environments and Computer Graphics group at University College London; he recently posed the question, “What is presence in immersive augmented reality?” in a post on his research blog. Steptoe previously detailed the process of designing and building a stereo camera rig for the Oculus Rift known as the “AR-Rift”. With this type of rig, which mounts cameras on the front of the headset to let users “see through” it, developers can begin to explore mixing real-life objects with virtual ones in an immersive way.
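
For readers curious how a video see-through pipeline fits together, here is a minimal single-camera sketch in Python with OpenCV. To be clear, this is not Steptoe’s AR-Rift code (which drives stereo webcams with Rift-specific distortion correction and head tracking); it only illustrates the core idea: grab a live camera frame and composite virtual imagery over it. The camera index and blending weights are illustrative assumptions.

```python
# Minimal single-camera video see-through sketch (not the AR-Rift pipeline,
# which uses two webcams plus the Rift's stereo warp and lens undistortion).
import cv2

cap = cv2.VideoCapture(0)  # assumption: a front-facing webcam at index 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Stand-in "virtual object": a flat rectangle drawn into an overlay layer.
    overlay = frame.copy()
    h, w = frame.shape[:2]
    cv2.rectangle(overlay, (w // 3, h // 3), (2 * w // 3, 2 * h // 3),
                  (0, 200, 255), thickness=-1)

    # Alpha-composite the virtual content over the live camera image.
    composite = cv2.addWeighted(overlay, 0.6, frame, 0.4, 0)

    cv2.imshow("video see-through (sketch)", composite)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Once each eye’s feed is warped to match the headset’s optics and virtual geometry is rendered from the tracked head pose, the same compositing idea extends to a full stereo rig like the AR-Rift.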

Several experiments were conducted using the AR-Rift to test users’ ability to distinguish between physical items and virtual models. Conventional rendering, where the real and virtual objects are displayed as-is, produced an error rate of 27%; for the most part, people could pick out the difference from subtle graphical inconsistencies. The error rate rose to 44% under a stylized rendering condition that blended non-photorealistic effects across the whole scene, making it more difficult to point out the contrasting elements. It jumped even further, to 72%, under a virtualized condition in which sketch-like outlines and cartoonish filters were applied.

Participants wearing the pass-through headset in the experiments would try to figure out whether objects were real by shaking their heads, which gave them perceptual tracking cues that helped reveal visual flaws in the virtual objects. Still, even when users knew that objects were fake, their behavior remained altered: in one experiment, users walking around avoided simulated boxes despite knowing they weren’t real.

Steptoe argues that the outcome of these experiments shows that incorporating AR into VR could provide people with a greater sense of presence. He published his findings in a paper titled “Presence and Discernability in Conventional and Non-Photorealistic Immersive Augmented Reality,” which was published by the International Symposium on Mixed and Augmented Reality last month. The abstract follows:

Non-photorealistic rendering (NPR) has been shown as a powerful way to enhance both visual coherence and immersion in augmented reality (AR). However, it has only been evaluated in idealized prerendered scenarios with handheld AR devices. In this paper we investigate the use of NPR in an immersive, stereoscopic, wide field-of-view head-mounted video see-through AR display. This is a demanding scenario, which introduces many real-world effects including latency, tracking failures, optical artifacts and mismatches in lighting. We present the AR-Rift, a low-cost video see-through AR system using an Oculus Rift and consumer webcams. We investigate the themes of consistency and immersion as measures of psychophysical non-mediation. An experiment measures discernability and presence in three visual modes: conventional (unprocessed video and graphics), stylized (edge-enhancement) and virtualized (edge-enhancement and color extraction). The stylized mode results in chance-level discernability judgments, indicating successful integration of virtual content to form a visually coherent scene. Conventional and virtualized rendering bias judgments towards correct or incorrect respectively. Presence, as it may apply to immersive AR, measured both behaviorally and subjectively, is seen to be similarly high over all three conditions.
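
The paper does not include its shader source, but the two NPR modes it describes are straightforward to approximate offline. The Python/OpenCV sketch below is a rough, illustrative stand-in, not the authors’ implementation (which applies comparable effects to live stereo video in real time): “stylized” is approximated with Canny edge-enhancement, and “virtualized” adds a simple color quantization in place of the paper’s color extraction. The thresholds, quantization levels, and file names are assumptions.

```python
# Rough offline approximations of the paper's "stylized" and "virtualized"
# NPR modes. The real system processes live stereo video; all parameters
# here are illustrative guesses.
import cv2
import numpy as np

def stylize(frame):
    """'Stylized' mode: edge-enhancement layered over the camera image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)   # thresholds chosen for illustration
    out = frame.copy()
    out[edges > 0] = (0, 0, 0)         # darken detected contours
    return out

def virtualize(frame, levels=6):
    """'Virtualized' mode: edge-enhancement plus color extraction,
    approximated here by uniform color quantization (posterization)."""
    step = 256 // levels
    q = (frame.astype(np.int32) // step) * step + step // 2
    quantized = np.clip(q, 0, 255).astype(np.uint8)
    return stylize(quantized)

if __name__ == "__main__":
    frame = cv2.imread("camera_frame.png")  # hypothetical captured frame
    if frame is None:
        raise SystemExit("provide a test image named camera_frame.png")
    cv2.imwrite("stylized.png", stylize(frame))
    cv2.imwrite("virtualized.png", virtualize(frame))
```

Because the same stylization is applied uniformly to the pass-through video and to the rendered virtual objects, the two blend into a single visually coherent scene, which is what drives discernability toward chance in the stylized condition.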

Additional experiments like this one will certainly emerge in the near future as more and more developers start integrating AR elements with VR ones. The knowledge gleaned from Steptoe’s experiments could be particularly useful for incorporating users’ real arms and hands into the virtual world using something like the Leap Motion controller.

A natural next step would be to add haptic feedback, allowing users to touch virtual objects; they could pick up physical items and computer-generated ones at the same time while still believing both are real. Adding the ability to walk around would expand one’s sense of presence as well, letting people explore computer-generated environments and immersing them further in the experience.

The “AR-Rift” is a stepping stone towards achieving unimaginable levels of presence. The technology only gets better from here.

  • cly3d

Matthew, thank you for giving this very important research an audience through R2VR.
    I’ve been having to “validate” my knowledge on Twitter against self-proclaimed “VR experts” (granted, they have impressive credentials)… who preach what can and cannot be done in VR.

    Especially in VR filmmaking.
>>”…Conventional rendering, where the real and virtual objects are displayed as-is, produced an error rate of 27%; for the most part, people could pick out the difference from subtle graphical inconsistencies. The error rate rose to 44% under a stylized rendering condition that blended non-photorealistic effects across the whole scene, making it more difficult to point out the contrasting elements. It jumped even further, to 72%, under a virtualized condition in which sketch-like outlines and cartoonish filters were applied…”

This is the exact reason I chose to use a stylized form of rendering for the motion comic I’m working on for the Rift. (DK1 demo scene downloadable here: http://realvision.ae/blog/2014/08/maya-a-360-motion-comic-for-the-oculus-rift-and-vr-devices/ )

Stereoscopic real-world panoramic images will be used for world-building in the Unity game engine, and characters will be CG.
    There’s so much to learn about the language of Virtual Reality storytelling, and such research is gold!
I only hope self-proclaimed VR gurus evolve and don’t play the “20 to 40 years experience” card willy-nilly.

    Worth mentioning … VR, like stereoscopic 3D, is part art and part science.

    Kind Regards.

    • Paul James

      Sounds like an interesting project @cly3d – keep us in the loop on any developments – we’d love to feature more of this.

      • cly3d

        Thanks Paul. I’ll be keeping regular blog updates on the tech, and progress.
        Kind Regards.