Project Tango is a project from Google's labs that represents the cutting edge in realtime environment capture and modelling. The system, currently running only on dedicated hardware prototypes (codenamed ‘Peanut’), uses advanced depth sensors plus high resolution optical cameras to grab spatial and visual data from your environment. This data is fused with orientation and positional information to produce a spatially accurate representation of the captured environment.
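To make the idea concrete, here's a minimal sketch (Python with numpy, not Tango's actual API) of that kind of fusion: each depth pixel is back-projected through the camera intrinsics into 3D, then transformed into world space using the device's estimated pose. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, R, t):
    """depth: HxW metric depth image; (fx, fy, cx, cy): camera intrinsics;
    R (3x3), t (3,): device pose (camera-to-world) from sensor fusion."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel through the pinhole model into camera space.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Rigid transform into the world frame using the fused pose estimate.
    return pts_cam @ R.T + t
```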

A new video demonstrates just how cool this technology is in action. The user wanders the target environment, aiming the phone at areas to capture whilst monitoring the data collated in realtime. ‘Meshing’ can be paused at any point, with the captured data then available for inspection and review. Once the first pass is done, the user walks the environment again (the view adjusted using positional and orientation information) to fill any gaps in the mesh.
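That pause-and-resume workflow might look something like the toy sketch below (an assumed structure, not Tango's real pipeline): world-space points from each frame are merged into one growing cloud, capture can be frozen for review, and a second pass simply keeps appending to fill the gaps.

```python
import numpy as np

class MeshCapture:
    """Toy accumulator for the capture loop described above."""

    def __init__(self):
        self.points = np.empty((0, 3))  # the growing world-space cloud
        self.paused = False

    def add_frame(self, world_points):
        # Ignore incoming frames while the user is reviewing the mesh.
        if not self.paused:
            self.points = np.vstack([self.points, world_points])

    def pause(self):
        self.paused = True   # freeze the mesh for inspection

    def resume(self):
        self.paused = False  # second pass appends to the same cloud
```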

The video comes from Ivan Dryanovski, a Research Assistant at the CCNY Robotics Lab (New York), who is lucky enough to be working with a prototype Project Tango device. The Ph.D. student has published interesting work on 3D mapping techniques using micro-UAVs – which points to an interesting potential use for this technology: mapping remote environments using robotics.

The Project Tango launch video below provides a good introduction to the technology, should you not have seen it.

Based in the UK, Paul has been immersed in interactive entertainment for the best part of 27 years and has followed advances in gaming with a passionate fervour. His obsession with graphical fidelity over the years has had him branded a ‘graphics whore’ (which he views as the highest compliment) more than once and he holds a particular candle for the dream of the ultimate immersive gaming experience. Having followed and been disappointed by the original VR explosion of the 90s, he then founded RiftVR.com to follow the new and exciting prospect of the rebirth of VR in products like the Oculus Rift. Paul joined forces with Ben to help build the new Road to VR in preparation for what he sees as VR’s coming of age over the next few years.
  • mptp

    Put this on the front of an HMD, then view the 3D model from where the user's eyes are, not from where the cameras are: perfect HMD look-through.
    The problem I have with look-through on HMDs, where you have, say, two webcams mounted to an HMD, is that the view you get is a good 5–10 cm in front of where your eyes are, so everything looks a little off, which lowers confidence in interacting with the world through those cameras.
    Obviously this kind of thing is for static environments and not for a real-time view of the world in front of the cameras and depth sensors, but with some modifications, I think this kind of technology could totally eliminate that lack of confidence and also be a huge asset to the creation of virtual augmented reality. (A rough sketch of this reprojection idea follows these comments.)

  • Farfar

    I would like to know what applications this has for VR when it comes to, for example, motion tracking. Does this mean we could have an HMD that uses this technology for head tracking?
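mptp's suggestion above can be sketched in a few lines: rather than showing the offset camera feed, project the captured world-space model from the eye's own pose. This is purely illustrative Python; the eye pose (R_eye, t_eye) and the intrinsics are assumed inputs, not anything from Tango or a real HMD SDK.

```python
import numpy as np

def render_from_eye(world_points, R_eye, t_eye, fx, fy, cx, cy):
    """Project captured world points into an image seen from the eye pose,
    hiding the few-centimetre offset of head-mounted cameras.
    R_eye: eye-to-world rotation (3x3); t_eye: eye position in world (3,)."""
    # Inverse rigid transform: world frame -> eye frame.
    pts = (world_points - t_eye) @ R_eye
    z = pts[:, 2]
    keep = z > 0.01                       # drop points behind the eye
    u = fx * pts[keep, 0] / z[keep] + cx  # pinhole projection
    v = fy * pts[keep, 1] / z[keep] + cy
    return np.stack([u, v], axis=-1)      # per-point pixel coordinates
```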