Matterport, a company specializing in the capture and digital reconstruction of real-world environments, has announced a $30 million Series C investment round led by Qualcomm Ventures.

The company says that the new investment brings its total funding to $56 million. With the new funds, the company intends to:

• Accelerate the development of mobile capture for VR and AR content (Matterport has been an early developer on Google’s Project Tango)

• Expand its developer platform for web and mobile apps, including the Matterport Developer Program, also launching today

• Continue to scale its operations to support strong demand for its growing professional business

Matterport’s current business uses a proprietary $4,500 camera to quickly scan a real-world space. By capturing the space from multiple points in the same area, the company’s software constructs an accurate 3D model and layers detailed imagery on top of it, allowing users to move through the space and look around in all directions. It’s a bit like Google Maps Street View on steroids for interior spaces.
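Matterport hasn’t detailed its reconstruction pipeline publicly, but the basic idea of combining multiple capture points can be sketched in a few lines. The Python snippet below is a minimal illustration, not Matterport’s actual code; every function name, pose, and intrinsic parameter is hypothetical. It back-projects depth images captured from several known camera positions into 3D and merges them into one point cloud in a shared world frame.

# Illustrative sketch only -- not Matterport's pipeline. Shows the general
# idea of merging depth captures from multiple known camera poses into a
# single point cloud.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

def merge_scans(depth_images, poses, intrinsics):
    """Transform each scan into a shared world frame and concatenate."""
    fx, fy, cx, cy = intrinsics
    clouds = []
    for depth, (R, t) in zip(depth_images, poses):
        local = backproject(depth, fx, fy, cx, cy)
        clouds.append(local @ R.T + t)  # rigid transform into world frame
    return np.concatenate(clouds)

# Toy usage: two synthetic 4x4 depth images from two hypothetical poses.
depths = [np.full((4, 4), 2.0), np.full((4, 4), 2.5)]
poses = [(np.eye(3), np.zeros(3)),
         (np.eye(3), np.array([0.5, 0.0, 0.0]))]
cloud = merge_scans(depths, poses, intrinsics=(2.0, 2.0, 2.0, 2.0))
print(cloud.shape)  # (32, 3) -- one merged point cloud

In a real system the camera poses would themselves have to be estimated (for example with point-cloud registration methods such as ICP), and the detailed imagery fused onto the reconstructed geometry as textures; presumably that alignment and fusion is where most of the engineering difficulty lies.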

Given that the result is a 3D model with photorealistic visuals, the space can be easily imported into a VR environment to immerse users in the space and allow them to explore almost as if they were there. And the company has done just that: a Matterport demo app on Gear VR offers up a selection of scenes captured with the company’s tech. At its best, it looks a lot like the high-quality panoramic photos you’ll find in Oculus 360 Photos, but with the added benefit of stereoscopy and the ability to navigate throughout the scene from one capture node to the next.


Give it a try (above) and it won’t take you more than a minute to realize its potential for real estate, hotels, museums, and much more (if someone doesn’t use this tech to make a real-world point-and-click adventure game, I’ll be quite upset).

With the new funding, Matterport hopes to move away from its proprietary camera and eventually deploy consumer-facing mobile capture options, though it isn’t clear whether modern smartphones are up to the task or if we’ll need to wait for phones with next-gen depth sensors.



  • kalqlate

    I’m sure that eventually Google will use deep learning to decompose scenes into facets and objects and impart default/reassignable behaviors upon them that allow natural/expected interaction. For example, pick up a ball and drop it; it will bounce. Knock a vase off a table, and it will crash to the floor in many pieces. Press a key on a piano, and it plays a note. When the tech advances here, we’ll have capture of fully interactive environments. ETA: two years for basics, with continual enhancement and growth of behavior repertoires thereafter.

    • Jarom Madsen

      Yes please