Blackmagic Design has revealed full specs and details for its new URSA Cine Immersive camera, specially designed to shoot 8K VR180 footage for the Apple Immersive Video format. Pre-orders for the $30,000 camera are open now, with shipping planned for Q1 2025. A forthcoming update to DaVinci Resolve Studio (also made by Blackmagic) adds editing tools specifically for Apple Immersive Video, including support for calibration data from the camera.

Apple Immersive Video is a 180° 3D video format intended for playback on Apple Vision Pro. Early versions of Blackmagic’s URSA Cine Immersive are likely the cameras used to film Apple Immersive Video content currently available on the headset.

Now the camera is being made available commercially, with pre-orders open for a cool $30,000. Though certainly expensive, that's in line with many other high-end cinema cameras.

The URSA Cine Immersive is specially made to capture Apple Immersive Video, featuring a pair of 180° stereo lenses, capturing 59MP (8,160 x 7,200) each, with 16 stops of dynamic range. The camera can shoot up to 90 FPS in the Blackmagic RAW format, which also embeds calibration data (unique to each camera) that’s carried into the editing process for more precise and stable footage.
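The spec-sheet numbers above are easy to sanity-check with some back-of-the-envelope arithmetic (per-eye resolution and frame rate are from the spec; the derived figures are our own calculation):

```python
# Back-of-the-envelope check of the URSA Cine Immersive spec-sheet numbers.
width, height = 8160, 7200      # per-eye resolution, per Blackmagic's spec
fps = 90                        # maximum frame rate

pixels_per_eye = width * height
megapixels_per_eye = pixels_per_eye / 1e6          # ~58.8 MP, matching the quoted ~59MP
total_megapixels = 2 * megapixels_per_eye          # ~117.5 MP across both eyes
pixels_per_second = 2 * pixels_per_eye * fps       # raw pixel throughput at 90 FPS

print(f"{megapixels_per_eye:.1f} MP per eye, {total_megapixels:.1f} MP total")
print(f"{pixels_per_second / 1e9:.2f} gigapixels/second at {fps} FPS")
```

At over ten gigapixels per second before compression, it's clear why the camera records to Blackmagic RAW rather than a conventional delivery codec.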

The forthcoming update to the DaVinci Resolve Studio editing software will include features specific to editing footage from the camera:

  • Immersive Video Viewer: Pan, tilt, and roll clips on 2D monitors or directly on Apple Vision Pro
  • Seamless Transitions: Clean master files using metadata-based bypass for Apple Vision Pro transitions
  • Export Presets: Streamlined delivery to Apple Vision Pro-ready packages

Both Blackmagic and Apple hope the release of the camera and streamlined editing workflow will make it easier for filmmakers to capture and release content in the Apple Immersive Video format.


It’s unclear if the camera and editor will work equally well for capturing VR180 footage for playback on other platforms and headsets, or if there’s something proprietary to the Apple Immersive Video format that would prevent straightforward compatibility and multi-platform releases.



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Nice!
    But I see no reason why this camera wouldn't "work" for other HMDs.

    • Correct. According to Hugh Hou it can work for any VR 180 capable headset.

      • Yeah, that guy's cool beans.
        I'm watching a video now by him about
        how to make spatial videos on a Quest 3!!

  • Sofian

    Does it fly?

  • xyzs

    Beautiful:
    16 stops
    8k per eye
    large sensor
    good price (yes, for professional production, that's a very good deal)
    That's what was needed for high quality VR video production.
    This is gonna set a new default high quality standard.

    • Christian Schildwaechter

      Similar cameras are often rented on a per-project basis instead of bought, esp. for shorter projects like most 180° productions will be. The first search result I checked for a similarly priced RED V-Raptor 8K listed rental rates between USD 800/day and USD 2,400/week, and there should be cheaper offers. With the market for this type of content still minuscule, only a few studios/filmmakers will have both enough matching projects and/or faith in near-future growth to justify outright buying one.

      • xyzs

        Yes indeed, renting is the main way to access gear in professional productions. Still, I meant the price is a great deal for production, because buying, and thus renting, will be much cheaper than in the RED/ARRI world. And the all-in-one, ready-to-go form factor of this cam makes it a no-brainer.

  • dextrovix

    Put me down for two of them.

  • sfmike

    My dream camera. As terrific as this is, I predict it will fail, as the Western entertainment industry, as well as the electronics manufacturers, has a visceral hatred of anything 3D. Only their disdain for anything involving a VR headset compares. It will be up to Asia and Europe to make any progress in 3D hardware.

    • Christian Schildwaechter

      As far as I can tell it uses the same camera body as their URSA Cine cameras available from 4K to 12K, but with a stereo lens setup mounted. The 12K version records up to 12,288*6480, while the URSA Cine Immersive lists 8160*7200 per eye, or 16320*7200 total, so there might also be an upgraded sensor involved. All this paired with software enhanced for handling 180° content, half sensor/per eye calibration etc.

      This will certainly be an extreme niche product, but it's not like cameras for professional film production sell in huge numbers anyway. So with most of it being based on an existing product, the number of units they have to sell for this to not be considered a failure should be rather low.

  • Christian Schildwaechter

    The camera of course won't be limited to creating content for only AVP. It records in a format that neither AVP nor Quest would natively play; both instead use the very efficient MV-HEVC, a multi-view video standard dating back to 2014.

    It's up to the DaVinci Resolve Studio video editing software to export the finished project to a format suitable for HMDs, and even the free DaVinci Resolve (not Studio) limited to 4K has been able to create 180° or 360° content playable on Quest for years.

    The special connection to AVP here seems to be mostly that DaVinci Resolve Studio will get an editor extension that allows the 180° content not only to be, for example, panned on a 2D monitor, but also viewed live inside AVP during the editing process without having to first export the project, which could improve the workflow a lot.

  • STL

    I'm sorry, but while the resolution sounds impressive, it is extremely low on closer look. The raw format of a simple iPhone 16 Pro already comes with 48MP. To picture the full 180° at a comparable resolution, one would need 576MP (!), and in 3D an incredible 1152MP. While this might be possible 10 or 20 years from now, all 180° or 360° cameras deliver BLURRY photos and videos as of today, without exception. And online streaming such content is nearly impossible due to the ultra-high data throughput required.
    So for now, the spatial video approach, with small yet clear windows in an immersive world, is all that's possible!

    • xyzs

      If 8K per eye is not enough (I think it is, if the lens/encoding quality is top notch), have you never heard of video upscaling anyway?

      Also, possible in 10 to 20 years? lol..

      15 years ago, a phone could barely do 640*480 crappy-quality videos; today some can shoot in 8K HDR. You really think it will take 10 to 20 years to get 16K videos with professional cameras???

      • Christian Schildwaechter

        8K/eye equals 131MP total. With current HMDs only going up to about 4K/eye (32MP total) with about 90° FoV, 8K covers the whole 180° of the video while providing the largest possible pixel size.

        Blackmagic are certainly aware of phone sensors like the Sony IMX903 48MP sensor with 1.4µm pixels used in the iPhone 16. But those don't offer anywhere near the light sensitivity they need, which is why the URSA Cine Immersive uses a massive 141MP 65mm sensor (50.81*23.32mm) with twice as large 2.9µm pixels capturing 4.3 times as many photons.
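The "4.3 times as many photons" figure in this comment follows directly from pixel geometry: light gathered per pixel scales roughly with the square of the pixel pitch. A quick check using the two pitches quoted above:

```python
# Photon-capture ratio from pixel pitch: the light-collecting area of a
# pixel scales roughly with the square of its pitch. Pitches are the ones
# quoted in the comment (1.4µm phone pixel vs. 2.9µm cinema sensor pixel).
phone_pitch_um = 1.4
cine_pitch_um = 2.9

area_ratio = (cine_pitch_um / phone_pitch_um) ** 2
print(f"Each {cine_pitch_um}µm pixel covers ~{area_ratio:.1f}x the area "
      f"of a {phone_pitch_um}µm pixel")   # ~4.3x, matching the comment
```

This is a first-order approximation; real sensitivity also depends on fill factor, microlenses, and quantum efficiency, but it shows where the quoted ratio comes from.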

        Sensors are basically silicon chips, and pixel count increased like with CPUs. But shrinking structures mean smaller pixels capturing less light, causing very noisy pictures in low light conditions. Phones compensate by merging several images from multiple cameras with tons of signal processing for impressive results, but you never see the real image, only an interpretation. Not an option for professional video captured in raw formats for fine grained post processing, requiring the actual colors to be recorded. Excellent phone cameras seriously lowered the entry bar for film makers, but comparing phones to professional cameras featuring insanely expensive sensors and lenses makes little sense.

      • STL

        It is not 8K "per eye"! It is 8K for the entire hemisphere! Watch any 8K VR video on a Quest 3. It is extremely blurry! It is far, far from being crystal clear and sharp. That's why nobody wants to watch VR video outside of porn.
        What you get per eye is a fraction of 8K only; you end up with about VGA resolution per eye in your focused FOV.

        • HindsiteGenius

          That's why he said 8K per eye is enough for now. And yes, 8K total (4K per eye) hasn't ever been enough. This is even more glaring when you realize that the leading solution advertising 8K isn't even really outputting a clean 8K feed.

          • Christian Schildwaechter

            TL;DR: 8K total isn't 4K per eye (missing units). An 8K 180° video stream (4320p) will result in a 4320p image per eye in VR. Units matter.

            This is where terminology gets messy. 8K usually means 8K UHD at 7680*4320, or something with around 8000 pixels horizontally, for most displays at a roughly 16:9 ratio. VR assumes a roughly 1:1 ratio, so it usually makes more sense to give the number of lines to avoid confusion with wider video formats; so 8K UHD is 4320p, and 4K UHD is 2160p.

            Some stupid marketing departments decided to call VR HMDs with a 2160p 8:9 display for each eye not 2K, but 4K, because this is the total horizontal resolution of the display that is completely irrelevant for VR. But 8K total is not the same as 4K per eye. There is a unit switch involved.

            Starting with an 8K UHD 4320p display, you still get 4320p per eye. 4K UHD, meaning 2160p has indeed never been enough for 180° video. 8K UHD, meaning 4320p per eye has four times the pixel count and is pretty close to the native resolution of current Quest HMDs when you include head movement in 180° video. Going for a higher video resolution wouldn't provide a lot of benefits.

            And 8K UHD is a current hardware video playback limitation of the XR2 Gen 2, which at most handles 8K (4320p) @120Hz streams. Not sure about the maximum on AVP, but given that Macs and their M-chips are highly optimized for video editing and can handle at least Blackmagic 12K raw footage, I assume that AVP, based on the M2, will be able to decode one 16K UHD/8640p video, which again would match its ~3500p native resolution quite well when split over two eyes showing an 8:9 picture plus head movement.

            The sensor in the Blackmagic Cine Immersive is actually 17K (horizontal), and the targeted resolution for VR is two parallel pictures at 8640p each. So the resulting 180° video is 8K (lines) per eye, not just 8K (horizontal) for the entire hemisphere, which would be 1/4th the resolution.
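The unit mix-up being argued in this thread can be made concrete with a quick pixel count, assuming (as the comment does) a roughly square 1:1 frame per eye:

```python
# Pixel counts per eye under the two naming conventions argued about above,
# assuming a roughly square 1:1 frame per eye as the comment does.
eye_2160p = 2160 * 2160    # "4K UHD" lines per eye -- what "8K total" marketing often delivers
eye_4320p = 4320 * 4320    # true 8K UHD lines per eye

print(f"2160p per eye: {eye_2160p / 1e6:.1f} MP")
print(f"4320p per eye: {eye_4320p / 1e6:.1f} MP")
print(f"ratio: {eye_4320p / eye_2160p:.0f}x")   # doubling the lines quadruples the pixels
```

The 4x gap is exactly the "unit switch" the comment describes: "8K" counted as total horizontal pixels versus "8K" counted as lines per eye.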

          • HindsiteGenius

            Your numbers are sound and I agree. However, I think how we determine the HMD's resolution is less important than the capturing device. People like us can sift through the BS pretty easily. HMD manufacturers are actually falling in line, as on spec sheets they advertise a per-eye resolution and sometimes even PPD.

            The problem we have is with capturing devices in the "prosumer" end of things. Like, is the R5C really giving you 4K per eye / 8K? Lots of smart people seem to think so despite simple high-school geometry saying otherwise. I recently saw a test between an R5C and a K1 Pro, and the R5C looked maybe a tiny bit better, but not 4 grand better. In fact they were almost indistinguishable. I would say 6K (12K camera) per-eye resolution downscaled to 8K can give an image that can saturate most of the HMDs currently out in the wild. I would love to see maybe a speed-boosted BM URSA 12K, since it's Super 35, using a dual fisheye lens.

          • Christian Schildwaechter

            TL;DR: With the projection and multiple conversion steps causing many of the quality issues, both going higher res and improved conversion workflows could help.

            [Disclaimer: My previous answer was mostly triggered by STL's nonsense "VGA resolution per eye in your focused FOV" statement, but since he blocked me in response to some (admittedly very snarky) corrections regarding basic economics, ended up as a reaction to your "8k total (4k per eye)".]

            A lot of the quality issues come from video recording a hemisphere to a square grid, which leads to the same problems map makers faced for centuries, with Greenland on typical world maps with Mercator projection looking the same size as the whole of Africa, while actually only covering about 1/15th of the area.

            Similarly, 180° hemisphere-to-square projection leads to some areas getting way higher effective pixel density than others. Like with maps, there are different ways of projection for video, all with different advantages and problems. And all lenses come with optical issues, causing more distortion towards the edges. You end up with an image distorted by fisheye lenses, recorded by a flat sensor, compressed with a lossy video codec, and later displayed on a flat display with shaders pre-distorting it so it looks perspectively correct when watched through the lenses of a VR HMD. Each step adds compression, distortion or artifacts.

            Ideally you'd want to start with a much higher resolution, then apply a projection keeping the pixel density mostly even and closely matching the display and optics of the device the video will be displayed on, stored in a lower resolution than recorded, but high enough that even low density areas at least match the native resolution for the actual FoV of the specific device while staying within the size limits it can handle. For the best results the first reprojection would already include the correction for the VR HMD lenses to remove another lossy step (causing trouble with added UI elements etc.)

            We aren't there yet, and with lots of steps where things can go wrong, the results from one 8K camera could look very different from those of another with the same sensor resolution. Add different parts of the sensor getting different amounts of light when recording through a fisheye lens, and capturing 180° is a messy subject with still a lot of room for improvement.
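The uneven pixel density described above can be illustrated with the common equirectangular case (a simplified model for illustration, not necessarily this camera's actual projection): every pixel row spans the full frame width, but the arc of the scene it represents shrinks with elevation, so rows away from the horizon are increasingly oversampled.

```python
import math

# Simplified equirectangular model: each pixel row covers the full frame
# width, but the circle of latitude it maps to shrinks with cos(elevation),
# so rows near the pole pack far more pixels per degree of actual scene.
def oversampling(elevation_deg):
    """Horizontal pixel density relative to the horizon row."""
    return 1.0 / math.cos(math.radians(elevation_deg))

for elev in (0, 30, 60, 80):
    print(f"{elev:>2} deg above the horizon: {oversampling(elev):.2f}x oversampled")

# Averaging cos(elevation) over the frame gives a rule-of-thumb fraction
# of equirectangular pixels doing useful, non-redundant work:
print(f"useful fraction: ~{2 / math.pi:.0%}")
```

Roughly a third of the pixels in a plain equirectangular frame are redundant, which is why better-matched projections are one of the improvement paths mentioned here.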

            Going for higher resolutions and better lenses is one option, but similar to smartphones now relying on a lot of signal processing to get great images from tiny sensors and lenses, smarter software for the existing cameras could be another way. AVP and Quest now storing 180° video using MV-HEVC alone can improve things, as this codec stores the image for the second eye only as the differences between the two eyes in a separate frame, while so far 180° stereo video in HEVC/h265 stored two views side-by-side or on top of each other in one frame, both reducing resolution and increasing file size, thereby requiring higher compression rates with more artifacts.

        • xyzs

          You don't understand, "8k" 180 degrees videos are 2*4k video coupled next to each other, this is quadruple the pixel amount…

          What this camera outputs is what you would call 16k videos…

          And, as I said, the data becomes much too heavy for headsets, past this point, a local upscaler would definitely be a better solution than streaming 2*16k videos…

    • Arno van Wingerde

      Well, if you want to see every flower on a mountain, then yes, this is nothing. The resolution of the human eye is about 576 megapixels, and if you turn around and look up and down, this number multiplies to even beyond the 1152MP you mentioned. And then you might want to zoom in as well… Clearly, we are a few years from that amount of detail. So yeah, nothing like full detail, but just as the first computer games were nothing like we have today, today's 3D VR pictures are "blurry". Enjoy that for now, laugh about it in 10-20 years' time.

      • kraeuterbutter

        There is a common misconception regarding the “576 megapixels” often attributed to the human eye. In reality, the human eye contains approximately 7 million cones for color vision and about 125 million rods for low-light, black-and-white vision, and these are not distributed evenly. Our sharpest vision is concentrated in the fovea centralis, while peripheral vision is significantly less detailed. Within the fovea, the eye can resolve around 60 to 70 pixels per degree (PPD)—sometimes slightly more—and this is what’s typically referred to as “retina resolution.”

        To reproduce that level of detail across a 180° field of view in film or digital imagery, one would need an extremely high resolution. However, this does not mean the human eye literally possesses “576 megapixels.” The distribution of photoreceptors is highly uneven, with extraordinary clarity confined to a small central region and much poorer resolution in the peripheral areas.

        • Christian Schildwaechter

          Indeed, but you could say that human vision has a 576+MP resolution, because we constantly move the eye to point the high resolution fovea at different spots, with the brain creating a perception with way higher resolution than our actual eyes.

          Our eyes have about 180° FoV without turning the eyes, 270° with, even if anything outside about 18° from our visual axis is rather blurry, with only about 7° of high resolution when looking straight forward, and anything beyond 45° unreadable. Technically our eyes can only create a somewhat clear image for about 90°, but add 90° head rotation in each direction, and you end up with more than 360° total FoV with 270° of somewhat sharp vision.

          So while our "sensors" only need a rather low (non-linear) resolution, 180°/360° video matching it needs to provide a WAY higher resolution than 576MP, because it is never clear where the high-res gaze is currently pointing. The 180°/360° covering two rotation axes makes video storage incredibly inefficient by default.

          Pretty much the only technically feasible way to get really high resolution 360° video is, similar to ETFR, first measuring eye gaze (and predicting it for the next frame), then streaming only the video for the center of the view at high resolution, with much-reduced steps beyond 25°/50°/90° and only brightness changes even noticeable when getting close to the 180° edge. This would require storing the video frames as a lot of small tiles in versions with different levels of detail, instead of in one large file like we currently do.
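The gaze-dependent tile streaming idea described above can be sketched in a few lines. Everything here is invented for illustration (tile layout, angular band thresholds, and detail-level names); real viewport-dependent streaming systems differ in the details:

```python
import math

# Hypothetical sketch of gaze-dependent tile streaming: pick a detail
# level per tile from its angular distance to the (predicted) gaze.
# Thresholds mirror the 25/50/90 degree bands mentioned in the comment.
def tile_lod(tile_center_deg, gaze_deg):
    """Choose a detail level for a tile; both arguments are (yaw, pitch)
    in degrees. Uses a small-angle flat approximation for simplicity."""
    dy = tile_center_deg[0] - gaze_deg[0]
    dp = tile_center_deg[1] - gaze_deg[1]
    angle = math.hypot(dy, dp)
    if angle <= 25:
        return "full"        # foveal region: full resolution
    if angle <= 50:
        return "half"
    if angle <= 90:
        return "quarter"
    return "luma-only"       # far periphery: brightness changes only

gaze = (10.0, -5.0)  # predicted gaze for the next frame (yaw, pitch)
print(tile_lod((12.0, -4.0), gaze))   # near gaze: full resolution
print(tile_lod((80.0, 0.0), gaze))    # periphery: reduced detail
```

A real implementation would also prefetch neighboring tiles to hide gaze-prediction misses, which is part of why this approach needs the tiled, multi-level storage format the comment describes.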

  • Amazing how 3d video went from dead in the water only to be revived by none other than Apple.

    • STL

      Spatial video! Yes. Love it. Small window, good resolution in 3D! But VR videos remain dead, if not computer generated.

      • Definitely – Have come around to the "redesign" so to speak.

        It alleviates the lack of parallax and positional… Starting to agree Spatial Video has a place with these windowed concepts rather than full immersion. Great take!

  • I'm buying 2 or 3

  • Hey Ben, regarding:

    "It’s unclear if the camera and editor will work equally well for capturing VR180 footage for playback on other platforms and headsets…"

    FWIW, I asked Hugh Hou this question on his reddit post today where he was discussing this camera. Here is his response:

    Hugh, love your work btw. I presume this camera can shoot standard SBS 180 VR out the box? What's involved to convert it for AVP? For Quest 3?

    u/hughred22
    Yes! Everyone can use it. Just fisheye to equirect – the standard workflow, covered already on my YouTube. Just shoot at 8K instead of 16K. You get 2–3 hours of filming time on Meta vs. Apple (1 hour-ish)

  • guest

    Oh yeah, stick a $30k camera on a drone and hope it doesn't crash!

  • Arno van Wingerde

    Ben, can you please take Scott's comment into the article and not make this another AVP article but a VR article?

    • Christian Schildwaechter

      The Blackmagic announcement only mentions AVP support for live reviewing the currently edited footage being added to DaVinci Resolve Studio, and there is a chance that the same isn't really technically feasible on a PC. The camera is recording in a Blackmagic raw format that DaVinci will very likely also edit in. Handling huge amounts of raw video data requires huge memory and storage bandwidths, which is why many video editing systems first convert to an internal format that the machine can handle.

      One of the major professional uses for Macs is video editing, and consequently software like Final Cut and Apple silicon are highly optimized for it, allowing for handling very high resolutions thanks to specific integrated video decoders different from/faster than those in Nvidia GPUs, extremely high memory bandwidth from the unified RAM soldered onto the SoC, and matching huge bandwidth to the SSD or external storage via Thunderbolt. The integrated accelerators are what allows MacBooks to leave PC laptops in the dust for video editing, even with the Mac running on battery and the PC plugged in.

      That won't matter for cutting 4K video, which a PC can easily handle with a PCIe 4x SSD, despite the PCIe memory-transfer bottleneck to the GPU, which usually contains only en-/decoders for formats like AV1 or HEVC. Going 8K may require being rather careful with your components, and 16K becomes impossible without specific workstation hardware. Add the need to then also re-project for an HMD, and the live preview on an HMD might simply be unfeasible on PCVR.

      All this is only about editing at high resolution, not about the created content, and includes a lot of speculation. I'm pretty sure the AVP, using the same M2 with integrated video accelerators as a Mac, could handle displaying the 16K Blackmagic raw data. But I wonder how they send all the data in real time to the AVP via WiFi, so maybe there is some conversion involved that could also allow it to work with a PCVR headset.

      • kool

        They usually run an SDI cable straight from the camera into a capture card. I'm not a camera guy, I run FX, but it's either straight to a computer or a DIT is up all night.

        • Christian Schildwaechter

          The specs for the URSA Cine line list (up to) 12Gbit/s SDI, 10Gbit/s ethernet and "high speed" WiFi, plus lots of USB-C for peripherals/displays. Blackmagic RAW bitrates top out at 1.2Gbit/s for 12K@24fps with 3:1 compression. The URSA Cine Immersive's 17K@90FPS should require about 6.4Gbit/s, making WiFi 6E at (theoretically up to) 9.6Gbit/s an option, with SDI much more reliable, and 10G ethernet readily available and cheap.
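The bitrate figure in this comment can be roughly reproduced by scaling the published 12K@24fps rate linearly by pixel count and frame rate. This is a first-order estimate only; actual Blackmagic RAW rates depend on the chosen compression ratio and scene content:

```python
# Rough bitrate estimate: scale the published 12K@24fps Blackmagic RAW
# 3:1 rate by pixel count and frame rate. Linear approximation only.
ref_pixels = 12288 * 6480        # 12K reference frame
ref_fps, ref_gbps = 24, 1.2      # published rate at 3:1 compression

imm_pixels = 2 * 8160 * 7200     # URSA Cine Immersive, both eyes
imm_fps = 90

estimate_gbps = ref_gbps * (imm_pixels / ref_pixels) * (imm_fps / ref_fps)
print(f"~{estimate_gbps:.1f} Gbit/s")   # ~6.6, in the ballpark of the figure above
```

That lands within WiFi 6E's theoretical ceiling but comfortably inside 10G ethernet, supporting the comment's conclusion that wired transport is the more reliable option.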

          This now allows creating the smallest possible portable video editing solution for high end 180° /Immersive video, fitting into a small bag: URSA Cine Immersive, maxed out Mac Mini M4 Pro with 10G ethernet, keyboard and mouse, Wifi 6E router with two 10G ethernet ports, AVP and optionally PSVR2 Sense controllers.

          Running DaVinci Resolve Studio on the Mac with AVP serving as a 32:9 (2*4K) display streamed over WiFi would allow switching from desktop DaVinci to a live 180° editing view in AVP. With Apple/Sony now bringing PSVR2 Sense compatibility to AVP for both games and high-precision XR input, maybe these could be used for editing directly within AVP. All that for only USD 40K.

  • Seiroth

    Nah, the American entertainment industry just poorly handled 3D. They didn't hate it. They were just not willing to invest into it. It didn't help that the content generators for 3D content were also not invested into it. Very few movies took the time to shoot the movie natively in 3D from the start with a keen focus to ensure the 3D shooting was done properly. Avatar 2 for example blew my mind in 3D, even though it only used the 3D effect for very specific scenes. THAT kind of content is what the American entertainment industry would have supported but they weren't going to invest when only a few directors were interested in shooting quality 3D movies. If Avatar 2 wasn't the exception and was the norm then I have no doubt we would have seen high fps video formats in 4K or 8K and it would have been truly amazing 3D.

  • Rupert Jung

    For $30,000 I would expect 120 Hz, as it's not only supported by all modern headsets but is also compatible with 60 Hz displays.