Valve today announced the Steam Audio SDK, a spatial audio plugin that the company says is designed to “enhance all interactive products, specifically VR applications.” A beta of the SDK supporting Unity is available today, and support for Unreal Engine is on the way.

Realistic sound is an important but often underappreciated part of the immersion equation in VR. Basic stereo output works well enough for gaming on monitors, where the world is abstracted away rather than seen from an immersive first-person view. But when your eyes are convinced they’re inside the game thanks to VR, your brain has certain expectations for how the world around you should sound. Our brains use audio cues to understand spatial information, especially about what’s not currently in our line of sight.
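
One of those cues can be made concrete: the interaural time difference (ITD), the tiny delay between a sound reaching one ear and then the other. Here is a toy sketch using Woodworth’s classic spherical-head approximation — a textbook formula unrelated to any particular SDK, with an assumed average head radius:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
HEAD_RADIUS = 0.0875    # m; assumed average adult head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of ITD:
    ITD = r * (theta + sin(theta)) / c, theta = source azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS * (theta + math.sin(theta)) / SPEED_OF_SOUND

# A source 90 degrees to the side gives the largest delay (~0.66 ms);
# a source straight ahead gives zero delay, which is one reason
# front/back confusion happens with timing cues alone.
for az in (0, 30, 90):
    print(f"{az:3d} deg -> {interaural_time_difference(az) * 1e6:.0f} us")
```

The zero delay for sources straight ahead is exactly where ear-shape (HRTF) cues have to take over, which is what the spatialization discussed below is about.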

VR audio isn’t a ‘solved’ problem, but some of the solutions available today make a huge difference. And now, developers wanting to implement spatial audio in their VR apps have a free option, thanks to Valve’s newly announced Steam Audio SDK. Available now with support for Unity, the Steam Audio SDK will also soon support Unreal Engine. The company says that use of the spatial audio tool is completely open: it supports Windows, Linux, macOS, and Android, and is not restricted to any particular VR device or to Steam, which means developers building VR apps for the Oculus Rift or Gear VR, for instance, are welcome to use it. The SDK also includes a C API for integration into other game engines and middleware.

The technology powering the Steam Audio SDK is a continuation of the work of Impulsonic, developer of the Phonon audio tools, which has been acquired by Valve.

“Steam Audio is an advanced spatial audio solution that uses physics-based sound propagation in addition to HRTF-based binaural audio for increased immersion,” Valve wrote in a statement issued to Road to VR. “Spatial audio significantly improves immersion in VR; adding physics-based sound propagation further improves the experience by consistently recreating how sound interacts with the virtual environment.”
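
Conceptually, HRTF-based binaural rendering filters a mono source through a measured head-related impulse response (HRIR) for each ear. The sketch below uses tiny made-up two-tap HRIRs purely for illustration — real HRIRs are hundreds of samples long, and this is not Steam Audio’s API:

```python
def convolve(signal, impulse_response):
    """Plain FIR convolution: each output sample is a weighted
    sum of recent input samples."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Produce a stereo pair by filtering the mono source with the
    left-ear and right-ear impulse responses."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Made-up HRIRs for a source on the listener's right: the left ear
# hears the sound one sample later (head-shadow delay) and quieter.
mono = [1.0, 0.0, 0.0, 0.0]
left, right = render_binaural(mono, hrir_left=[0.0, 0.4], hrir_right=[0.8, 0.1])
```

Because the two ears get differently delayed and filtered copies of the same signal, the brain fuses them into a single localized source — the effect the HRTF debate in the comments below revolves around.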

Historically, realistic physics-based sound calculations have been computationally prohibitive, especially for real-time applications, and efforts to simplify some of the underlying physics of how sound waves interact with 3D geometry have resulted in decent, but not perfect, spatial audio. With the rise of VR, the area has attracted renewed attention, and companies like NVIDIA have taken a stab at the problem.

According to Valve, one of the biggest benefits of the Steam Audio SDK is automatic real-time sound propagation:

In reality, sound is emitted from a source, after which it bounces around through the environment, interacting with and reflecting off of various objects before it reaches the listener. Developers have wanted to model this effect, and tend to manually (and painstakingly!) approximate sound propagation using hand-tuned filters and scripts. Steam Audio automatically models these sound propagation effects.
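
As a rough illustration of what “bounces around through the environment” means computationally, here is a toy image-source sketch — a simplified stand-in for illustration only, not Steam Audio’s actual algorithm. Mirroring the source across a wall turns one reflection into a longer straight-line path with its own delay and distance attenuation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def path_delay_and_gain(distance):
    """Delay (s) and simple 1/d distance attenuation for one path."""
    return distance / SPEED_OF_SOUND, 1.0 / max(distance, 1.0)

def first_order_paths(source, listener, wall_x):
    """Direct path plus one reflection off a wall at x = wall_x,
    using the image-source method: mirror the source across the wall
    and treat the bounce as a straight line from the mirrored source."""
    sx, sy = source
    direct = math.dist(source, listener)
    mirrored = (2 * wall_x - sx, sy)           # image source behind the wall
    reflected = math.dist(mirrored, listener)  # bounce path length
    return [path_delay_and_gain(direct), path_delay_and_gain(reflected)]

paths = first_order_paths(source=(0.0, 0.0), listener=(10.0, 0.0), wall_x=15.0)
for delay, gain in paths:
    print(f"delay {delay * 1000:.1f} ms, gain {gain:.3f}")
```

Real propagation engines trace many such paths (plus occlusion and material absorption) every frame, which is the hand-tuning work Valve says the SDK automates.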

You can find more details on the features supported by the Steam Audio SDK here.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.


  • So do you need special headphones for this?

    Assuming no :)

  • psuedonymous

    Looks similar in capability to Oculus Audio SDK with similar licensing terms.

    • Mike

      Makes sense that they’re focusing on this just as they’re releasing the new strap with the built-in headphones. People are more likely to wear headphones more often when putting them on isn’t an extra step.

    • kontis

      It already supports occlusion while Oculus Audio does not:

      http://i.imgur.com/uRpNkFK.png

  • Timotheus

    Well, unfortunately, when I read “HRTF-based binaural audio,” 99% of the time the HRTF is a standardized one.
    A standardized HRTF is useless for 80% of people.
    To get realistic sound, where you can differentiate not only left and right but also front and back, up and down, and everything in between, you need an HRTF that’s calibrated to your head/ears.
    Or at least you have to be able to choose “your” HRTF out of a dozen different samples, picking the one where you can locate the sound source best.

    But with those SDKs and some other well-known approaches, a standardized HRTF is used most of the time, which doesn’t work very well. In some cases, when the standardized HRTF is way off from your personal HRTF, the sound even gets worse than normal.

    The ear shapes and head sizes differ widely between people. That changes how sound propagates to the inner ears and also leads to different needed HRTFs.

    Therefore, for REAL 3D binaural audio, something like the OSSIC X has to be used:
    headphones that measure your personal head/ear data and build your personal HRTF from it.

    I ordered the OSSIC X on Kickstarter and I hope it can meet my expectations of phenomenally good 3D audio.

    I hope that such SDKs, and hardware audio calculation approaches from NVIDIA and AMD, take into account that headphones like the OSSIC X (and others that work similarly) exist, so that they can function together and deliver good 3D sound.

    • As far as I can tell, OSSIC X is more about licensed software algorithms than hardware HRTF calibration. OK, it measures the distance between your ears using sensors, but that’s it as far as I can tell?

      So, based on that, we should all be able to measure the distance from one ear to the other manually and hope that software SDK calibration can do the rest. I do like that the OSSIC X has multiple drivers, though, and it looks like a really nice headset that does this for you.

      • Timotheus

        Yeah, well, having your personal HRTF isn’t enough. Games or other software still need to use that HRTF for 3D sound to work.
        That’s where the OSSIC X software comes into play.

        However, the software is no use if the HRTF isn’t calculated correctly first, and that’s done by the OSSIC X and no other headphones I know of.
        I own a couple of stereo headphones, and theoretically they could play perfect 3D sound if I were able to pick my HRTF out of many samples and use that HRTF in software. The problem is that no game and almost no other software would implement my HRTF then.
        99.99% of games and software implement their own “headphone settings,” if at all, which downmix 5.1 to stereo or try to do binaural audio, just with a standardized and thus useless HRTF.

        If there were a standard that all games implemented, in which you could pick an HRTF out of a huge list and use it throughout all games, then every bland stereo headphone could do “perfect” 3D audio.
        However, that’s not the case. Therefore I bought the OSSIC X not only to get some good 3D audio right now, but also to “invest” in 3D audio and make it “visible” to the industry, so they see that you need a personalized HRTF for real 3D sound.
        Also, the OSSIC X is the most money I’ve ever spent on headphones, so maybe as a bonus I’ll get the best sound drivers I’ve ever had and be able to hear the difference between them and my ordinary stereo headphones. ^^

    • NotAGuest

      You fail to realize one thing. With VR we don’t need a specialized headphone to measure the distance between our ears. We can simply use the tracked controllers. Sorry buddy, you just wasted $200+.

      • Timotheus

        Ok, I’m dumb. How exactly do my controllers measure ear shape and ear-to-ear distance?
        And how do my controllers change a standardized HRTF to a personalized one?

        • James Butlin

          Well, all I can tell you is that with this SDK you can definitely hear sound in full 3D, including behind/in front and above/below. It’s rather impressive!

          • Timotheus

            I don’t doubt that you can hear sound from different directions, even front/behind and above/below.
            Sometimes the standardized HRTF works really well for some people; that’s why a standardized HRTF is chosen, because it tries to work for as many people as possible.
            However, many times, especially for front/back, it’s hard to discern direction if the HRTF doesn’t fit.

            The virtual barbershop, for example, is also only recorded one way, but it too works extremely well for many people.

            And when you have such an experience, it’s really impressive if all you’ve had before is “normal” sound.

            However, a real experience calibrated to your HRTF should always beat any other experience you can get.
            You should be able not only to tell the general direction (above/below, behind/front, left/right), but to pinpoint the exact direction and even the distance of the sound source.

            There are some papers on this topic. I looked for some diagrams I remembered, which showed the error of non-ideal HRTFs, but unfortunately I can’t find them.

  • NooYawker

    Sound is handled very differently by devs. One of my favorite games, Arizona Sunshine, has pretty bad 3D sound: it always sounds like there are creatures right next to me no matter how far away they are. Vanishing Realms and especially Call of the Starseed have much more dynamic and accurate 3D sound.

  • Nadim Alam

    Wow, this is great, thanks a lot Valve! This will improve games a lot and also help with audio development, especially by making 3D sound faster to implement.

  • Really amazing! I’m curious how much this is better than Oculus Audio SDK…

  • Raphael

    Well… does the sound ever appear in front of you in this sound demo using headphones? I think we can call it spatial sound, but it’s not 3D sound.

    • James Butlin

      It’s 3D sound. You can tell the difference between in front/behind and above/below. :)

      • Raphael

        Nope. There is no frontal projection.

  • hyperskyper

    I have the Sennheiser PC363D surround sound gaming headset which tries to simulate some of the same stuff. It works really well. I’m wondering if it would sound best if I turned off the surround sound when games support 3D audio and/or binaural audio by default. Does anyone know?

    • Timotheus

      A surround sound headset usually has multiple drivers, which try to emulate the surround experience of a 5.1 or 7.1 system. It accepts a 5.1 or 7.1 signal and lets you hear surround sound. I once had a surround headset from Logitech and the experience was very nice. One driver, for example, is placed “behind” your ear to reproduce sound behind you. And so on.
      Although it’s a nice experience, with at most 7.1 channels you can’t locate sound as well as you can with a binaural or 3D audio system. In a surround headset you have limited sound directions (7.1 at most), and in addition the sound is only emulated from a 7.1 signal and doesn’t take your head size/ear shape (your HRTF) into account.

      A 3D audio or binaural audio signal is a “bland” stereo signal.
      Your ears locate sound by the delay from ear to ear (a shift in the sound) and by the way sound bounces in and off your outer ear.
      So in a real binaural/3D audio solution, an HRTF is calculated from your personal data, or is chosen by you out of many samples, and is then used to compute the correct stereo signal.

      So, yes, if you have a real binaural system, you should deactivate any surround/7.1 settings on your headphones.

      • A good way to describe 7.1/5.1 surround sound vs. 3D sound: with surround sound, all sounds (or speakers) are attached to an invisible sphere that you are inside of. Soft sounds and loud sounds alike are stuck to that invisible sphere; near sounds are simply louder. With 3D sound the sphere is not there and sound can appear anywhere. Somebody could whisper in your ear and you would absolutely freak out, because it really would sound like somebody just whispered in your ear; no traditional hi-fi sound system can do that. No longer are you inside a nice safe surround-sound bubble. That is my take on 3D sound anyway, and it will make things 10× more immersive.

  • JMM21

    “it bounces around through the environment” does that mean in our headset? which would be bouncing around in our skull? Sounds like a headache waiting to happen…rofl