Google today released a new spatial audio software development kit called ‘Resonance Audio’, a cross-platform tool based on technology from their existing VR Audio SDK. Resonance Audio aims to make VR and AR development easier across mobile and desktop platforms.

Google’s spatial audio support for VR is well established: the company introduced the technology to the Cardboard SDK in January 2016 and brought its audio rendering engine to the main Google VR SDK in May 2016, with several improvements following in the Daydream 2.0 update earlier this year. Google’s existing VR SDK audio engine already supported multiple platforms, but required platform-specific documentation to implement its features. In February, a post on Google’s official blog recognised the “confusing and time-consuming” battle of working with various audio tools, and described the development of streamlined FMOD and Wwise plugins for multiple platforms on both Unity and Unreal Engine.

Image courtesy Google

The new Resonance Audio SDK consolidates these efforts, working ‘at scale’ across mobile and desktop platforms, which should simplify development workflows for spatial audio in any VR/AR game or experience. According to the press release provided to Road to VR, the new SDK supports “the most popular game engines, audio engines, and digital audio workstations” running on Android, iOS, Windows, macOS, and Linux. Google are providing integrations for “Unity, Unreal Engine, FMOD, Wwise, and DAWs,” along with “native APIs for C/C++, Java, Objective-C, and the web.”

This broader cross-platform support means that developers can implement one sound design for their experience that should perform consistently on both mobile and desktop platforms. To achieve this on mobile, where CPU resources are often very limited for audio, Resonance Audio features scalable performance using “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality.” A new feature in Unity for precomputing reverb effects for a given environment also ‘significantly reduces’ CPU usage during playback.
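The article doesn’t detail Resonance Audio’s internal DSP, but the first-order Ambisonic encoding at the root of the higher-order approach it cites can be sketched in a few lines. The snippet below is a minimal, illustrative Python sketch of textbook B-format encoding; the function name and channel conventions are our own and are not part of the Resonance Audio API:

```python
import math

def encode_first_order(sample: float, azimuth: float, elevation: float):
    """Encode a mono sample into first-order Ambisonic (B-format) channels.

    azimuth/elevation are in radians; returns (W, X, Y, Z) values for the
    sample, using the traditional 1/sqrt(2) gain on the omnidirectional
    W channel.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return (w, x, y, z)

# A source directly ahead (azimuth 0, elevation 0) lands entirely on
# W and X: W ~ 0.707, X = 1.0, Y = Z = 0.0
print(encode_first_order(1.0, 0.0, 0.0))
```

Higher-order Ambisonics, as used by the SDK, extends this idea with additional spherical-harmonic channels, which is what allows many simultaneous sources to be mixed into one soundfield and decoded once, rather than spatializing each source separately.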


Much like the existing VR Audio SDK, Resonance Audio is able to model complex sound environments, allowing control over the direction of acoustic wave propagation from individual sound sources. The width of each source can be specified, from a single point to a wall of sound. The SDK will also automatically render near-field effects for sound sources within arm’s reach of the user. Near-field audio rendering takes acoustic diffraction into account as sound waves travel across the head, and precise HRTFs increase the positioning accuracy of close sound sources. The team have also released an ‘Ambisonic recording tool’ to spatially capture sound design directly within Unity, which can be saved to a file for use elsewhere, such as game engines or YouTube videos.
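The article doesn’t expose how Resonance Audio computes its HRTFs, but a rough sense of why head geometry matters for localization comes from the classic Woodworth approximation of interaural time difference (ITD), one of the cues an HRTF-based spatializer must reproduce. The constants and formula below are standard textbook values, not figures taken from the SDK:

```python
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius, a common textbook value
SPEED_OF_SOUND = 343.0   # metres per second in air at roughly 20 degrees C

def woodworth_itd(azimuth_rad: float) -> float:
    """Approximate interaural time difference for a distant source.

    Woodworth's formula: ITD = (a / c) * (theta + sin(theta)), for a
    spherical head of radius a and azimuth theta in [0, pi/2] measured
    from straight ahead.
    """
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

# A source directly ahead produces no ITD; one at 90 degrees yields the
# familiar maximum of roughly 0.66 milliseconds.
print(woodworth_itd(0.0))
print(woodworth_itd(math.pi / 2))
```

Sub-millisecond timing differences like these, together with level and spectral cues, are what let listeners place sounds so precisely even through simple headphones.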

Resonance Audio documentation is now available on the new developer site.

For PC VR users, Google has also released Audio Factory on Steam, letting Rift and Vive owners get a taste of an experience built with the new Resonance Audio SDK. The app is available for Daydream users too.


  • Cool, will have to try this out. I thought binaural audio using HRTF had to be simulated via headphones that support it though?

    • If anybody wants to hear what (I think) this SDK attempts to achieve, then PUT YOUR HEADPHONES ON and click the below link. Wait till the guy says “Are you weirded out yet” and tell me that’s not cool :) Now imagine a VR horror game with this.

      • Luke

        isn’t the majority of the VR games already in binaural audio?

        • dogtato

          Yeah, Steam Audio does this, so the question is whether this is better. It seems like it has broader platform support and _maybe_ better performance. It will be hard to compare the quality though, unless someone makes a demo that lets you toggle between them.

        • That depends on which engine version a game uses, what that engine supports for 3D audio, and how it compares to the Google SDK.

          It needs a developer review really, with a comparison to inbuilt engine audio.

  • Adrian Meredith

    Just tried the app and came away really impressed. How this works through such simple headphones as on the Oculus is beyond me. You really can place where the sound is coming from. One of the things I disliked about Arizona Sunshine is that you could never tell where the zombies were from the sound.

  • GigaSora

    SDKs and research are where the real action is at. This is a big win.

  • Peter Hansen

    Does this SDK take into account material properties for sound reflection?

  • Ragbone

    He’s got 2 halves of coconuts and he’s banging them together!

  • yag

    Is it a better solution than Oculus’ Audio SDK plugin?


  • Google always brings something new and innovative.


  • BatsHub

    At BatsHub we provide enterprise solutions using a lot of immersive media, and this is definitely going to be a great addition for developing a holistic solution. Great share.