The 2D to 3D photo conversion feature coming to Vision Pro in visionOS 2.0 makes a novel capability meaningful for the first time.

Cue “Apple didn’t even do it first!” in the comments.

You’re not wrong. Over the years, seemingly a hundred different startups have promised to turn 2D photos into 3D.

Even Meta had a go at it when it added 2D to 3D photo conversion to Facebook several years ago. But they never really caught on… probably because seeing 3D photos on a smartphone isn’t that exciting, even if Facebook added a little ‘wiggle’ animation to show the depth on 2D displays.

When it comes to features that people actually want to use—it doesn’t matter who does it first. It matters who does it well.

This headline says “the real deal,” because Apple has, in fact, actually done it well with Vision Pro. The 2D to 3D conversion doesn’t just look good, the feature is actually implemented in a way that takes it beyond the novelty of previous attempts.

The feature is part of visionOS 2.0, which is currently available in a developer beta. Apple says the feature creates “spatial photos” from your existing 2D images (which of course just means stereoscopic ‘3D’).

Granted, even though it’s “just stereoscopic,” seeing your own photos in 3D really adds a layer of depth to them (figuratively and literally). While a 2D photo can remind us of memories, a 3D photo feels much closer to actually visiting the memory… or at least seeing it through a window.

In visionOS 2.0, just go to the usual Photos app, then open any photo and spot the little cube icon at the top left. Tap it and the headset analyzes the photo and converts it to 3D in just two or three seconds. Another tap returns you to the original.

The results aren’t perfect but they’re very impressive. It’s unfortunate I can’t actually show them to you here—since I have no way to embed a 3D photo in this page, and 99.9% of you are probably reading this on a 2D display anyway—but it’s the best automatic 2D to 3D photo conversion that I’ve personally seen.

The speed and accuracy are doubly impressive because the conversion happens 100% on-device. Apple isn’t sending your photos off to a server to crank out a 3D version with cloud processing resources and then sending it back to your headset. That makes the feature secure by default (and available offline), which is especially important when it comes to a dataset that’s as personal as someone’s photo library.

Across the photos you’d find in the average person’s library—pictures of people, pets, places, and occasionally things—the conversion algorithm handles a wide range of subjects very well.

While the feature works best on real-life photography, you can also use it on synthetic imagery, like digital artwork, AI-generated photos, 3D renderings, and the like. Results vary, but overall I was impressed with the feature’s ability to create plausible 3D depth even from synthetic imagery that never had any real depth in the first place.

The thing the algorithm seems to struggle with the most is highly reflective and translucent surfaces. It often ends up ‘painting’ the reflections right onto the reflecting object, rather than projecting them ‘into’ the object with correct depth.


The only major limitation at the moment is that 2D to 3D photo conversion doesn’t seem to work on panoramic images. On Vision Pro, panoramas can already be blown up and wrapped around you in a way that feels life-sized, but they would still get another layer of emotional impact from being 3D-ified.

It’s unclear why this limitation exists at present, but it’s likely either because panoramas tend to be very high resolution (and would take longer than a few seconds to convert), or Apple’s 2D to 3D algorithm needs more training on wide field-of-view imagery.

Beyond that limitation, the thing that really makes this feature… a feature (not just a ‘technical possibility’), is that it’s built right in and works in the places and ways you’d expect.

Not only can you send spatial photos to other users who can view them in 3D on their own headset, you can also start a SharePlay session and view them together—an incredible way to share moments and memories with the people that matter to you.

And it’s easy to actually get the photos you want onto your headset for viewing.

Many people will have their iCloud photos library synced with their headset, so they’ll already have all their favorite photos ready to view in 3D. I personally don’t use iCloud photos, but I was easily able to select some of my favorite photos from my iPhone and AirDrop them, which automatically opened the Photos app so they were right in front of me in the headset.

Further, you can just save any old photo to your headset—be it from Facebook, a website, or another app—and use the 2D to 3D conversion feature to view it with a new layer of intrigue.

And this is what makes this visionOS 2.0 feature different from the 2D to 3D conversion software that has come before it. It’s not that Apple has any groundbreaking quality advantage in the conversion… it’s the fact that they made the experience good enough and easy enough that people will actually want to use it.


This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. More information.

Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Brian Elliott Tate

    The quality is still the best I've seen though for most things. It can do some pretty crazy popout effects. Most 2d to 3d only make depth sunken into the image, not anything popping out of it.

    • ViRGiN

      How many millions of downloads UEVR has seen so far?

      • Tanix Tx3

        Answered wrong thread?

    • Stephen Bard

      There is nothing at all special about Apple's photo conversion. Even the incredible Lume Pad 2 glasses-free 3D tablet has incredible image depth "into" the tablet and "pop-out" a couple inches above the surface of the tablet, which is adjustable to suit your taste.

  • Stephen Bard

    Just like with so-called "spatial" photos/videos, Apple will of course try to make it seem like this 2D to 3D conversion is something revolutionary that they have invented or improved, and their ignorant fanboys don't know any better. There are various methods that many of us have been using for years that use depth maps and AI to do excellent 2D/3D conversions, such that you really never need to see any photos, videos or movies in boring 2D anymore (I now live in a media dimensionality world).

    A few years ago I stumbled across a free App Lab app for the Quest called Owl3D that allowed you to do excellent cloud 2D/3D conversion instantly for photos/art and overnight for short videos, and then share them in a gallery in the app. Unfortunately, the devs decided that they needed to actually make some money, so they switched to providing AI conversion software for us to use on our home PC (free and pay versions). So I think Owl3D is currently probably the best 2D/3D conversion software with the most settings options. A similar software for your home PC is the free IW3, and both of these are incredibly slow, with full-length movie conversion taking overnight to 2 days. If you read the unbelievably convoluted discussions on the Owl3D Discord, you'll find that there are diverse opinions as to what settings to use to produce the best depth/pop-out and how much artifact you can tolerate (and recommendations for a dozen other auxiliary softwares to further fine-tune your SBS video files).

    The Owl3D purists might scoff at my next option, but I routinely use the excellent "realtime" 2D/3D AI conversion on my incredible Lume Pad 2 glasses-free 3D tablet. I can take a 12 hour 2D video that I compiled from thousands of Artstation images or hundreds of YouTube videos and run it realtime in 3D that is "almost" as good as the kind that took many hours of high-end GPU time to produce! Moreover, with the Lume Pad 2 you can watch 3D videos with another human being instead of alone in a headset. Next I need to upscale my 12" Lume Pad 2 3D display to a 27" 3D monitor from Acer or Lenovo due out soon. Yet another conversion option is Immersity AI, that used to be LeiaPix, but their cloud video conversion is way too expensive to do things like whole movies.

    • Christian Schildwaechter

      Your never ending tirades denouncing Apple users as ignorant fanboys, shunning AVP as basically unusable and vilifying Apple as mostly concerned with claiming the ideas of others as their own, always miss an important part:

      – It's not about who did it first.
      – It's not about who has the most powerful or technically advanced implementation.
      – It's not even about who has the cheapest option.

      It's about "it just works".

      The article makes it very clear that this is where Apple's implementation shines, and that the tech itself isn't groundbreakingly new. You just choose to (always) ignore this and respond with long lists of existing and of course better options.

      Sure, there are people that will only accept the best and therefore compile everything themselves for ArchLinux. But most people wish for things to "just work", and won't be easily convinced by you adding more text and technical details about all the much more configurable alternatives to Apple's "one click" solution. Quite the opposite actually.

      • Stephen Bard

        I shall never comprehend why you are so smugly satisfied with anything that half-assed "works" without understanding the other options that actually work better at a fraction of the cost.

        • Christian Schildwaechter

          It's easy to comprehend once you stop treating your time as "free" and instead consider it part of the cost, e.g. applying ¤15 hourly minimum wage.

          – 2h finding a free solution
          – 3h reading docs, testing parameters
          – 2h searching forums for answers

          Your free solution now costs ¤105, but only if the first software already achieved what you wanted. The long software list in your comment hints at more than 100h spent on "options that actually work better at a fraction of the costs", with twice that more likely. Which gets you close to the 233h at ¤15 minimum wage needed to buy a USD 3500 AVP. And that's only for the 2D to 3D conversion feature, mostly a nice goody. If you also went looking for a "free and better" 3D movie option, virtual monitor, running tablet apps, whatever AVP is used for, you'd quickly exceed that budget.

          If you like to tinker with software as a hobby, that's fine. If all you want is a solution, free can quickly turn into "expensive" for people not considering their time to be worthless. Explains why so many prefer half-assed "it just works" over spending (lots of) time on free/better things they don't really care about.

        • James Foulk

          Why do you care what other people like? Live your own life, quit worrying about what other people do. I, for one, love AVP. Furthermore, I don’t really care what other companies are doing. So what if some may be technically better at a lower cost. I certainly don’t care, and I never will. I care about polish and the overall experience. I’m allowed to like what I like, and spend my money as I see fit. Trying to convince people in comments is an utterly pointless waste of time.

    • Hussain X

      IW3 has come a long way, even just in the past few weeks it got version 2 AI models thanks to better trained Depth Anything Version 2 (thanks to TikTok who are funding it) as well as updates from IW3 dev which makes it run significantly faster (updated software to use about half the RAM compared to before). I just watched Interstellar in glorious 3D yesterday. The almost 3 hour movie took me around 7.5 hours to complete using Any L Version 2 using 518 depth (which are high end settings) resolution on a 4070. Just a few weeks ago it might have taken 15 hours with slightly worse quality. Now it takes less time with even better quality. And if you go mid to low end settings, it'll cut down time further (maybe 1 min conversion time per 1 min video time). Even on low settings, the 3D immersion easily beats out boring old 2D.

  • psuedonymous

    Needing to use iCloud or AirDrop to get photos on and off the device (and thus relying on third party servers to massage data between them) kind of renders the whole "it's processed only on your device!" aspect moot.

    • Rodney McKay

      AirDrop is local.

      • Ben Lang

        And iCloud photos are encrypted and thus not data-harvested by Apple.

  • Brettyboy01

    I honestly could not be bothered having to walk 2-3 steps to the left/right to see the novelty of a 3D picture every time.

    • Ben Lang

      You don't, that's the beauty of 3D : )

  • They did it in the usual Apple way…