ARCore, Google’s developer platform for building augmented reality experiences on mobile devices, is unveiling today a new tool called the Depth API. It lets mobile devices create depth maps using a single RGB camera, and aims to make the AR experience feel more natural by placing virtual imagery in the world more realistically.

Shahram Izadi, Director of Research and Engineering at Google, says in a blog post that the new Depth API enables occlusion for mobile AR applications and also opens the door to more realistic physics and surface interactions.

To demonstrate, Google created a number of demos that show off the full set of capabilities the new Depth API brings to ARCore. Keep an eye on the virtual objects as they’re accurately occluded by physical barriers.

“The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera,” Izadi says. “The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.”
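The core idea behind depth-from-motion can be illustrated with a simplified two-view triangulation, in which a pixel's apparent shift (disparity) between frames maps to a distance estimate. This is only a hedged sketch of the principle, not Google's actual multi-frame algorithm; the focal length and baseline values below are illustrative assumptions.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth (metres) from a pixel's disparity between two views.

    Uses the classic pinhole relation depth = focal_length * baseline / disparity,
    the same geometric principle that underlies depth-from-motion estimation.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts 20 px between two frames captured 5 cm apart,
# seen through a lens with a 500 px focal length, is ~1.25 m away.
print(depth_from_disparity(20, 500, 0.05))  # 1.25
```

In practice ARCore compares many frames as the phone moves, rather than a single fixed stereo pair, which is what lets a single RGB camera stand in for dedicated stereo hardware.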

Full-fledged AR headsets typically use multiple depth sensors to create depth maps like this one, which Google says was created on-device with a single sensor. Here, red indicates areas that are closer, while blue indicates areas that are farther away:
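The red-to-blue visualization can be sketched as a simple linear colormap over depth. This is an assumed mapping for illustration, not Google's exact palette, and the near/far clipping distances are hypothetical:

```python
def depth_to_rgb(depth_m, near_m=0.5, far_m=5.0):
    """Map a depth value to an (r, g, b) tuple: red = near, blue = far."""
    t = (depth_m - near_m) / (far_m - near_m)
    t = max(0.0, min(1.0, t))  # clamp to the [near, far] range
    return (int(255 * (1 - t)), 0, int(255 * t))

print(depth_to_rgb(0.5))  # nearest -> pure red (255, 0, 0)
print(depth_to_rgb(5.0))  # farthest -> pure blue (0, 0, 255)
```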


“One important application for depth is occlusion: the ability for digital objects to accurately appear in front of or behind real world objects,” Izadi explains. “Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today.”
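Occlusion as Izadi describes it boils down to a per-pixel depth test: a virtual fragment is drawn only where it is closer to the camera than the real-world surface at that pixel. The sketch below is an illustrative reduction of that idea, not the Scene Viewer implementation:

```python
def composite_pixel(virtual_color, virtual_depth_m, camera_color, real_depth_m):
    """Show the virtual object's pixel only where it sits in front of real geometry."""
    return virtual_color if virtual_depth_m < real_depth_m else camera_color

# A virtual cube 1.5 m away disappears behind a real wall at 1.0 m,
# but shows through when it is moved in front of the wall.
print(composite_pixel("cube", 1.5, "wall", 1.0))  # wall occludes cube
print(composite_pixel("cube", 0.5, "wall", 1.0))  # cube appears in front
```

Running this test per pixel against the depth map is what makes a digital object appear to pass behind a couch or a doorway rather than floating on top of the camera feed.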


Additionally, Izadi says the Depth API doesn’t require specialized cameras or sensors, and that with the addition of time-of-flight (ToF) sensors to future mobile devices, ARCore’s depth-mapping capabilities could eventually allow virtual objects to be occluded by moving physical objects.

The new Depth API follows Google’s release of its ‘Environmental HDR’ tool at Google I/O in May, which brought more realistic lighting to AR objects and scenes, enhancing immersion with more realistic reflections, shadows, and lighting.

Update (12:10): A previous version of this article claimed that Google was releasing the Depth API today; in fact, the company is only now putting out a form for developers interested in using the tool. You can sign up here.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.

  • Xron

    Seems awesome, now we just need better depth sensors on more phones.

  • Adil H

I hope it’s a first step toward the possibility of saving virtual objects in the real world.

  • Oh wow, amazing!

  • Jack H

I haven’t come across such a paper yet, but I think using the phase-based focus on some phones should be able to help get the real-world scale and distance to objects, i.e. in metres instead of arbitrary units approximating metres.