Today during Apple’s WWDC 2022 keynote, the company announced that iOS 16 will allow users of modern iPhones to scan the shape of their ears to create more accurate spatial audio. The feature is likely implemented as an HRTF; creating custom HRTFs for consumers was once impractical due to the sophisticated equipment required, but advances in computer vision are making the technology much more accessible.

When it comes to digital spatial audio, there’s a limit to how accurate the sense of ‘position’ or ‘3D’ can be without taking into account the unique shape of the user’s head and ears.

Because everybody has a uniquely shaped head, and especially ears, elements of incoming sound from the real world bounce off your head and into your ears in different and very subtle ways. For instance, when a sound is behind you, the precise geometry of the folds in your ear reflects sound from that angle in a unique way. And when you hear sound coming to your ear in that particular way, you’re attuned to understand that the source of the sound is behind you.


To create a highly accurate sense of digital spatial audio, you need a model which accounts for these factors, such that the audio is mixed with the correct cues that are created by the unique shape of your head and ears.

Audiologists have described this phenomenon mathematically in a model known as a Head-Related Transfer Function (HRTF). Using an HRTF, digital audio can be modified to replicate the spatial audio cues that are unique to an individual’s ear.
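Conceptually, applying an HRTF amounts to convolving the source signal with a per-ear impulse response (the time-domain form of the HRTF, often called an HRIR). A minimal sketch in Python follows; the HRIR values here are made-up placeholders for illustration, not real measurements, and a real implementation would use measured responses per source direction:

```python
import numpy as np

def apply_hrtf(mono: np.ndarray, hrir_left: np.ndarray, hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with per-ear impulse responses to produce binaural stereo."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, len(mono) + len(hrir) - 1)

# Toy example: a single impulse through HRIRs that delay and attenuate
# differently per ear -- roughly how a source off to the listener's left
# reaches the far (right) ear slightly later and quieter.
mono = np.zeros(8)
mono[0] = 1.0
hrir_l = np.array([1.0, 0.0, 0.0])  # near ear: full level, no delay
hrir_r = np.array([0.0, 0.0, 0.5])  # far ear: half level, 2 samples later
out = apply_hrtf(mono, hrir_l, hrir_r)
```

Real-time systems typically do this convolution in the frequency domain and interpolate between HRIRs as the source or the listener’s head moves.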

So while the math is well studied and the technology to apply an HRTF in real-time is readily available today, there’s still one big problem: every person needs their own custom HRTF. This involves accurately measuring each ear of each person, which isn’t easy without specialized equipment.

But now Apple says it will make use of advanced sensors in its latest iPhones to allow anyone to scan their head and ears, and create a custom spatial audio profile from that data.

Apple isn’t the first company to offer custom HRTFs based on a computer-vision model of the ear, but having it built into iOS will certainly make the technology much more widespread than it ever has been.

During the WWDC 2022 keynote, Apple announced the feature as part of the forthcoming iOS 16 update which is due out later this year. It will work on iPhones with the TrueDepth camera system, which includes the iPhone X and beyond.

But just having an accurate model of the ear isn’t enough. Apple will need to have developed an automated process to simulate the way that real sound would interact with the unique geometry of the ear. The company hasn’t specifically said this will be based on an HRTF implementation, but it seems highly likely as it’s a known quantity in the spatial audio field.


Ultimately this should result in more accurate digital spatial audio on iPhones (and very likely future Apple XR headsets). That means a sound 10 feet from your left ear will sound more like it should at that distance, making it easier to differentiate it from a sound 2 feet from your left ear, for instance.

This will pair well with the existing spatial audio capabilities of Apple products, especially when used with AirPods, which can track the movement of your head for a head-tracked spatial audio experience. Apple’s iOS and macOS both support spatial audio out of the box, which can take standard audio and make it sound as if it’s coming from speakers in your room (instead of inside your head), and can accurately play back sound that’s specially authored for spatial audio, such as Dolby Atmos tracks on Apple Music.

And there’s another potential upside to this feature, too. If Apple makes it possible for users to download their own custom HRTF profile, they may be able to take it and use it on other devices (like on a VR headset, for instance).



  • That’s the ONLY even remotely spatial-computing-related thing
    in this giant nothing-burger of an event …. lol It’s now time for everyone to
    stop wringing their hands and standing around doing nothing,
    waiting for Apple to make their move. Starting today and into the foreseeable
    future, Meta has COMPLETE & UTTER control of the XR industry.

    • Arno van Wingerde

      As far as Apple is concerned: yes… at least for now!
      But there is this company in Japan with a console and a few VR plans of its own, you know!

    • Octogod

      Apple’s videos dominated YouTube yesterday, taking the #1, #5, and #10 spots. So no, this event was a massive success, and their mobile tech indicates they’re significantly ahead of Meta on both software and hardware.

  • knuckles625

    Sony’s had the “take a picture of your ear” gimmick in their headphones app for at least a generation or two of headphones/earbuds, not to mention the flood of nameless earbuds you see on Amazon. It’s mostly a novelty, and I can’t imagine Apple actually needs the extra depth sensors to accomplish the same thing. Just another case of Apple marketing catch-up as innovation.

    • kontis

      Execution can change everything. We had smartphones (and PDAs) with more capabilities than iPhone 1 since the 90s, but they were never mainstream, because of many other flaws.

      It’s also ironic to write a comment like that on a VR website. Did you laugh at Oculus announcing their technology in 2012 because we technically already had “VR headsets” (or at least “HMDs”) for 20 years before that…?

      Technology is NOT about checkboxes or cool-sounding tech phrases. It’s a giant set of details that matter.

      HRTF and binaural audio are proven “magical” technology that even casual people know about. Mostly from the ASMR stuff, but it has well-known value. Not some obscure geeky stuff.

      • philingreat

        Thank you! 100% agree!

  • “If Apple makes it possible for users to download their own custom HRTF profile, they may be able to take it and use it on other devices”. The key word being, of course, “if”. Apple will sit on the tech exclusively for a few years. Unless of course someone in China manages to reverse-engineer it within a year.

  • VR5

    Obviously not buying an iPhone for this but getting my ear data and better spatial audio would indeed be nice additional value to VR. Hopefully this catches on and we get that feature also on Android and implemented in Unity.

  • Rainy

    This is not new technology, no matter how “revolutionary” Apple wants to claim it is. Sony has had it for years, and so have companies like Creative with Super X-Fi. This doesn’t need advanced sensors; any functional camera should be able to scan your ears well enough. The issue is that it doesn’t always portray the artist’s original intent, so sounds may be presented in an inaccurate way, but from my personal experience I love using Super X-Fi for gaming, and I feel much more present.

    I don’t see how this would work with the Apple headset, since it doesn’t have headphones; the leaked renders show speakers similar to the Quest’s, which aren’t a good candidate for spatial audio, which is most likely why the Quest itself doesn’t have it.

  • That’s a cool thing… I know, we expected something more, but it is still a cool thing