Google released a preview version of ARCore for Android yesterday, the company’s answer to Apple’s ARKit. Since ARKit was released a few months ago, we’ve seen a bevy of really cool experiments and potential apps from developers all over the world, but now it’s ARCore’s turn to shine.

Developers looking to use ARCore to create augmented reality apps can start building right now on both the Pixel line and the Samsung Galaxy S8 line. By the end of the preview period, however, the platform will be targeting 100 million devices from various hardware manufacturers, potentially making ARCore the de facto largest AR platform when it launches on other devices later this year.

While the critical mass of inspiring (and hopefully useful) ARCore apps has probably yet to come, here are some cool early experiments that get us excited about the potential of AR to inject something magical into everyday life.

Morph Face

Morph Face is an experiment that lets you morph any surface around you into a new shape. It uses shaders to achieve the morphing effect.

Built by George Michael Brower with friends at Google Creative Lab using Unity and ARCore.

Portal Painter

Portal Painter gives you a fun way to create portals into other dimensions. Just point your device at a nearby surface, then use your finger to paint a portal into another world.

Built by Jane Friedhoff with friends at Google Creative Lab using Unity and ARCore.

Hidden World

Hidden World is a simple experiment that combines hand-drawn animation with augmented reality. Point your device at the ground, then tap anywhere to reveal an animated world at your feet.

Built by Rachel Park Goto and Jane Friedhoff with friends at Google Creative Lab using Unity and ARCore.

Draw and Dance

Draw and Dance lets you create your very own dancing AR stick figure that reacts to the music and sound around it – your voice, your dog’s bark, and best of all, your playlist. This character can also augment your Google Home by taking its place on top of the speaker and moving in response to whatever sound comes out.

Built by Judith Amores Fernandez and Anna Fusté Lleixà with friends at Google Creative Lab using Unity, ARCore, Vuforia and API.AI.

ARCore Drawing

This is a simple demo that lets you draw lines in 3D space. It was made as a quick example of how to combine openFrameworks and ARCore. You can get the source code here.
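For a sense of how such a demo hangs together, here is a rough, framework-free sketch of the core idea (the names and the 5 mm threshold are illustrative assumptions, not taken from the actual demo): each stroke accumulates 3D points, sampled from the tracked device pose while the finger stays on the screen, and skips points too close to the last one to keep the line clean.

```cpp
#include <vector>
#include <cmath>

// A 3D point in world space, as reported by the tracking system.
struct Vec3 { float x, y, z; };

// A stroke is a polyline of world-space points; in the real demo this
// role is played by an openFrameworks mesh/polyline fed from ARCore poses.
struct Stroke {
    std::vector<Vec3> points;

    // Append a point only if it is far enough from the previous one
    // (hypothetical 5 mm default), keeping the vertex count bounded
    // while the device pose streams in every frame.
    void addPoint(const Vec3& p, float minDist = 0.005f) {
        if (!points.empty()) {
            const Vec3& q = points.back();
            float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
            if (std::sqrt(dx * dx + dy * dy + dz * dz) < minDist) return;
        }
        points.push_back(p);
    }
};
```

Each frame, the app would call `addPoint` with the current pose while drawing is active, then render the accumulated polyline from the AR camera's point of view.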

Built by Jonas Jongejan using openFrameworks and ARCore.

Keep an eye on Google’s AR Experiments page for more in the coming weeks.



Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 3,500 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • Firestorm185

    How do you play these? I’ve got an Android but there’s no launch experiment button. Does it need a Pixel phone?

    • Pixel, Pixel XL or Samsung S8.

      • Firestorm185

        Man, thought that might be the case. I’ve got a Zuk Z2, bleb. XD

        • IHaveAPixel

I have a Google Pixel and the only option is to get the source code soo… although Bobbo-Net ARCORE on the Play Store is fun

  • Olivers Pavicevics
    • Lucidfeuer

Ahaha what a great idea. We need an app for all major covers in their respective contexts.

  • Lucidfeuer

    What’s crazy is how Google started the ground engineering work before vaporwaring Tango after 4 years of development, and once again it had to be Apple that did the actual conception and implementation of mobile AR.

    • MGK

ARCore IS Project Tango’s API with the depth sensor removed…

      • Lucidfeuer

I know, which makes them a huge bunch of losers who weren’t capable of actually conceiving a usable technology and then implementing it, and instead vaporwared it, until Apple did their job in the smartest way, and Google lost a huge competitive advantage and business by being mediocre.

        • MGK

But, but they did. They just didn’t release it, feeling the market wasn’t ready or whatever. Now Apple pushed, so they followed, but their technology is at least 2 years ahead in development. I haven’t read all of it yet but here is a very interesting article about it:

          • Lucidfeuer

            Interesting, will read it up, thanks.

          • Michael Balzer

Also take some time to read about Occipital and their work since 2014 on unbounded tracking using their 3D sensor (tech from PrimeSense, also bought by Apple), which allowed us to create a mobile experience like the HoloLens on iOS. Also take some time to read about Metaio, the company Apple bought, that ARKit is built upon. No, Apple doesn’t do anything other than buy another company’s IP, promote and market it, and call it the greatest thing since sliced bread. Also, from my own development demos (SolARKit & SolARCore), ARCore is the stabler of the two platforms; but regardless, the tools for creating mixed reality on iOS and Android are pretty similar using either Unity or Unreal, which will lead to better penetration into the mobile ecosystems.

          • Lucidfeuer

I knew about these companies and had read they were bought out, but the huge difference between Apple and Google as of now is that Apple actually used these assets and implemented them into real-world packages.

Not only was Tango there way before, which means it had a competitive advantage that Google ruined by “vaporwaring” it, but the combination and integration with 3D/IR sensors should have made it a better, faster and more precise tech for the future. VSLAM can only take us so far; having real-world points and distances with IR is what’s needed. In fact, through-smartphone AR is going to need that.

Occipital has not been bought out (not that I am aware of, since I consult with them), but yes, depth sensing with TOF or structured light is extremely important if you want to provide real-time shadows and lighting, as well as occlusion. But as others have pointed out, TOF or structured-light emission consumes precious battery resources, though TOF may be the better candidate here. Parallax-based depth perception is also an option: with the distance between the two cameras on the MS Mixed Reality headset, limited depth can be obtained on its own, or used to enhance current time/distance samples derived from snapshots of past and current camera frames, plus IMU data, to determine distance and angular displacement.

However, as I have written elsewhere, I am more intrigued by my comparison of the bare Occipital sensors and the assumed front-facing 3D facial detection sensors, which appear to be an evolved design, since both the Structure Sensor and Apple’s sensor were more than likely designed by PrimeSense. This is exciting to me since it would provide Occipital-like 3D scanning and positional tracking within the iPhone 8. Of course this will only happen if the scanner API is open to other parts of iOS like ARKit, or maybe a new one called RGBDKit. If not, this might be a very expensive way to do security just for the sake of doing something different.

          • Lucidfeuer

Parallax adjustment with a dual-camera system (and IMU data interpolation) is the best way to offload TOF operations with a minimal array of emitters/sensors.

That, as well as light-based depth mapping, which is a quick way of determining what’s where and how it’s lit. This should have been done years ago with Tango.

  • Nice experiments, but… they’re just experiments