Augmented reality played a huge role at the recent developer conferences from Microsoft, Apple, Facebook, and Google, which is a great sign that the industry is moving towards spatially-aware computing. Microsoft is the only company starting with head-mounted AR via the HoloLens, while the other three are starting with phone-based AR. They're using machine learning with the phone's camera to do six-degree-of-freedom tracking, but Google's Tango is the only phone-based solution that starts with a depth-sensing camera.
LISTEN TO THE VOICES OF VR PODCAST
This allows Tango to do more sophisticated depth-sensor compositing and area learning, where virtual objects can be placed within a spatial-memory context that persists across sessions. Google also has a Visual Positioning Service (VPS) that will help customers locate products within a store, which is going through early testing with Lowe's.
I had a chance to talk with Tango Engineering Director Johnny Lee at Google I/O about the unique capabilities of the Tango phones, including tracking, depth sensing, and area learning. We cover the underlying technology in the phone, world-locking & latency comparisons to HoloLens, the Visual Positioning Service, privacy, future features like occlusion, object segmentation, & mapping drift tolerance, and the future of spatially-aware computing. I also compare and contrast the recent AR announcements from Apple, Google, Microsoft, and Facebook in my wrap-up.
The Asus ZenFone AR, coming out in July, will also be one of the first phones to support both Tango & Daydream.
- A video of one of the Tango Demos at Google I/O
- Demo video of Tango’s Visual Positioning Service
- Video of “Into the Wild,” a 10,000-square-foot Tango AR installation at the ArtScience Museum at Marina Bay Sands
- Here’s a Twitter thread discussing the different AR SDKs from the major tech companies
- Here’s the What’s New on Tango presentation from Google I/O 2017