Switzerland-based CREAL is developing a light-field display which it believes will fit into VR headsets and eventually AR glasses. An earlier tech demo showed impressive fundamentals, and this week at CES 2020 the company revealed its progress in shrinking the tech toward a practical size.

Co-founded by former CERN engineers, CREAL is building a display that’s unlike anything in AR or VR headsets on the market today. The company’s display tech is the closest thing I’ve seen to a genuine light-field.

Why Light-fields Are a Big Deal

Knowing what a light-field is and why it’s important to AR and VR is key to understanding why CREAL’s tech could be a big deal, so let me drop a quick primer here:

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.
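To make the conflict concrete, here's a small back-of-the-envelope sketch. The numbers are illustrative assumptions (a typical ~63 mm interpupillary distance and a fixed ~1.5 m focal plane common in today's headsets), not specs from any particular device: vergence demand is the angle between the two eyes' lines of sight, while accommodation demand is simply one over the fixation distance (in diopters).

```python
import math

IPD_M = 0.063          # assumed interpupillary distance (meters)
SCREEN_FOCAL_M = 1.5   # assumed fixed focal distance of a typical headset

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating a point."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def accommodation_demand_diopters(distance_m):
    """Focus the eye's lens must supply: 1 / fixation distance."""
    return 1.0 / distance_m

for d in (0.25, 0.5, 1.5, 6.0):
    # In a fixed-focus headset, accommodation stays pinned at the screen's
    # focal plane while vergence tracks the virtual object's distance.
    mismatch = abs(accommodation_demand_diopters(d)
                   - accommodation_demand_diopters(SCREEN_FOCAL_M))
    print(f"object at {d:>4} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"accommodation mismatch {mismatch:.2f} D")
```

Note how the mismatch is negligible for distant objects but grows past 3 diopters at arm's length, which is why near-field content is where the conflict is felt most.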

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and—from what we know so far—seems to support a wide range of continuous focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

More simply put, almost all headsets on the market today are displaying imagery that’s an imperfect representation of how we see the real world. CREAL’s approach aims to get us several steps closer.

That’s why I was impressed when I saw the company’s tech demo at CES 2019. It was a huge, hulking box, but it generated a light-field in which, with one eye (and without eye-tracking), I could focus on objects at arbitrary depths (meaning that accommodation, the focusing of the lens of the eye, works just as it does when you’re looking at the real world).

Above is raw, through-the-lens footage of the CREAL light-field display in which you can see the camera focusing on different parts of the image. (CREAL credits the 3D asset to Daniel Bystedt).

Slimming Down for AR & VR

At CES 2020 this week, CREAL showed its latest progress toward shrinking the tech to fit into AR and VR headsets.

Photo by Road to VR

Though the latest prototype isn’t yet on a head-mount, the company has shrunk the display and projection module (the ‘optical engine’) enough that it could reasonably fit on a head-worn device. The current bottleneck keeping it on a static mount is the electronics required to drive the optical engine, which are housed in a large box.

Photo by Road to VR

Shrinking those driving electronics is the next step; on that front, the company told me it already has a significantly smaller board, which will eventually give way to an ASIC (a tiny custom chip) that could fit into a glasses-sized AR headset.

CREAL’s ‘benchmark’ tech demo | Photo by Road to VR

Looking through the CES 2020 demo, I could see that the company has replicated its light-field technology in a much smaller package, though with a smaller eye-box, narrower field of view, and lower resolution than its larger demo.

CREAL told me it intends to expand the field of view on the compact optical engine by projecting additional non-light-field imagery around the periphery.

This is very similar to the concept behind Varjo’s ‘retina resolution’ headset, which puts a high resolution display in the center of the view while filling out the periphery with lower resolution imagery. Except, where Varjo needs additional displays, CREAL says it can project the lower fidelity peripheral views from the same optical engine as the light-field itself.

The company explained that the reason for doing it this way (rather than simply showing a larger light-field) is that it reduces the computational complexity of the scene by shrinking the portion of the image which is a genuine light-field. This is ‘foveated rendering’, light-field style.

CREAL hopes to cover the entire fovea—the small portion in the center of your eye’s view which can see in high detail and color—with the light-field. The ultimate goal, then, would be to use eye-tracking to keep the central light-field portion of the view exactly aligned with the eye as it moves. If done right, this could make it feel like the entire field of view is covered by a light-field.
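The compute savings behind this foveated approach are easy to sketch. The numbers below are illustrative assumptions only (CREAL hasn’t published specs): a hypothetical 100° field of view, a 10° foveal light-field region, 36 views per light-field, and 20 pixels per degree. The idea is that rendering many views is only paid for over the small foveal region, while the periphery gets a single flat pass.

```python
# Hypothetical figures for illustration — not CREAL's published specs.
FULL_FOV_DEG = 100          # assumed total field of view
FOVEAL_FOV_DEG = 10         # assumed region covered by the true light-field
VIEWS_PER_LIGHT_FIELD = 36  # assumed number of views a light-field requires
PIXELS_PER_DEG = 20         # assumed angular resolution

def pixels(fov_deg):
    """Pixel count for a square region of the given angular size."""
    side = fov_deg * PIXELS_PER_DEG
    return side * side

# Brute force: render every view across the full field of view.
full_lf = pixels(FULL_FOV_DEG) * VIEWS_PER_LIGHT_FIELD

# Foveated: many views only over the fovea, plus one flat peripheral pass.
foveated = pixels(FOVEAL_FOV_DEG) * VIEWS_PER_LIGHT_FIELD + pixels(FULL_FOV_DEG)

print(f"full light-field: {full_lf / 1e6:.0f} Mpx per frame")
print(f"foveated light-field + flat periphery: {foveated / 1e6:.2f} Mpx per frame")
print(f"savings: {full_lf / foveated:.0f}x")
```

Even with these rough assumptions the foveated scheme cuts per-frame pixel work by more than an order of magnitude, which is the whole argument for keeping the light-field confined to the fovea.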

That’s all theoretically possible, but execution will be anything but easy.

A growing question is what level of quality the display tech can ultimately achieve. While the light-field itself is impressive, the demos so far don’t show good color representation or particularly high resolution. CREAL has been somewhat hesitant to detail exactly how their light-field display works, which makes it difficult for me to tell what might be a fundamental limitation rather than a straightforward optimization.

VR Before AR

The immediate next step, the company tells me, is to move from the current static demo to a head-mounted prototype. Further in the future the goal is to shrink things toward a truly glasses-sized AR device.

A mockup of the form-factor CREAL believes it can achieve in the long-run (this anticipates off-board compute and power). | Photo by Road to VR

Before the tech hits AR glasses though, CREAL thinks that VR headsets will be the first stop for its light-field tech, given VR’s more generous space allowances and the number of other challenges facing AR glasses (power, compute, tracking, etc).

CREAL doesn’t expect to bring its own headset to market, but is instead positioning itself to work with partners and eventually license its technology for use in their headsets. Development kits are available today for select partners, the company says, though it will likely still be a few years yet before the tech will be ready for prime time.



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • dk

    “form-factor…. in the long-run” what about a working headset in the short term

  • Mike Porter

    I think a lightfield display in this day and age, with existing established CPU and GPU tech, is needless and diminishing returns. If you are going to use eye tracking to track the fovea you can just adjust the vergence as well as accommodation with a simple varifocal display, this is complete overkill. Cool tech demo but not useful in practice by any means.

    • Raphael

      It’s a worthwhile tech and will continue to evolve.

    • Adrian Meredith

      Practical uses aren’t really what they should be focusing on, as that’s what their customers will be doing; they just need to get the tech right.

      • Mike Porter

        They absolutely should be focusing on practical uses otherwise there will be no customers since there may be almost no practical use for the customers.

        If this was for scientific research like CERN then it would be fine but they want to commercialize it. I have a feeling they don’t understand they aren’t working at CERN anymore and don’t understand they need to make stuff people and companies would have use for.

    • Mei Ling

      The varifocal lenses approach is definitely a lot quicker process than spending another half a decade trying to shrink this technology (light fields) down into a usable form factor. I think a lightfield display will have its use in the far future where it would be the more expensive and refined alternative to any product that adopts varifocal lenses.

      • Mike Porter

        It’s not just about making the tech compact. It’s about degrading spatial resolution below HD (sharing the pixels between different focal points) and also way way more GPU performance requirements.

    • Last I heard the jury is still out on exactly which type of varifocal will ultimately win the day. They should at least get it into a headset so we can see what the overall experience is really like.

      It does seem like CREAL has one hell of an uphill battle though.

      • Mike Porter

        What jury? It has been known since day 1 that a true lightfield display requires several dozen render passes for each frame. Several dozen renders vs 1 is always much much more GPU and CPU intensive, this is not a matter of opinion but a simple fact of realtime 3d graphics for lightfield displays, be it VR or other multiscopic lightfield displays.

        • And

          Why can’t they just render the areas of each layer that are in focus for that layer? The eye focusing on a different layer would just blur the other layers? Instead of rendering everything when a lot of it will never be focused on

          • Mike Porter

            You still process several render passes, too GPU intensive.

    • benz145

      Varifocal isn’t exactly a solved problem either. The most advanced varifocal headset that’s actually available today is (to my knowledge) Magic Leap, which supports just two focal planes. Both are pretty complex to do right, it seems.

      • Mike Porter

        Hi Ben,
        Yes, I’m aware of the varifocal challenges, it’s just that the challenges with lightfield displays are far more difficult, in my opinion.
        There’s also Avegant AR tech with its highspeed dithered monochrome DLP frames and also the LetinAR “pin-mirrors” that at least seem to solve the accommodation part. Until I can test the LetinAR tech, so far it seems like Facebook has the simplest solution.

        • benz145

          You could very well be right, but sometimes until the work is done and implemented, the best solution overall isn’t clear (physical keyboards on smartphones were thought to be just fine until capacitive touch was mastered). I’m glad we have people coming at it from different angles!

          • Mike Porter

            I don’t remember what the production cost and accuracy was of touchscreens at that time, but I think the situation with lightfields is different and not analogous. As you may know, lightfields require much much more processing power than any GPU can handle at current-gen VR PPD standards, and spatial multiplexing degrades resolution, so the image source must be an order of magnitude higher resolution to provide a HD image, or run at an order of magnitude faster fps to achieve a lightfield via time-multiplexing.

            As GPUs get faster it is used for improving 3d graphics and physics simulations, it’s not like in the future GPUs will achieve perfect photorealism and have spare processing power they won’t be using which will be utilized for other things such as lightfield render passes. To a lesser degree this is true with FPS. I doubt FPS of display technologies will continue to increase but not be used for its primary purpose and left to lightfields.

            I doubt the challenges of touchscreens a decade ago were as severe.

        • Jack H

          It was my understanding that CReal and Avegant do similar time-multiplexed integral light fields on DMD/ DLP driven in binary mode. Do you know that it’s the case?

      • Immersive Computing

        Magic Leap’s 2 focal planes were noticeable when it shifts between the 2, but the concept is brilliant especially once multiple depth planes are possible.

  • Marcus

    This was the article I was waiting for, thanks !
    Impressed to see they have managed to shrink this far already, great to see! Hopefully they can keep pushing the tech without the pressure of consumerisation, it needs room to grow. Any images from inside the new optics, I think the frog is the same as last year no?

  • Immersive Computing

    For any PCVR users interested in lightfields, download the free “Welcome to Lightfields” by Google. It shows how the technology works; it’s quite something in a higher resolution headset, especially the space shuttle and church.


    • Foreign Devil

      Yes! Even the fixed views were very immersive. I can’t wait until they have moving images or you are able to turn your head and look around.

      • Charles

        Great app, though I think it’s a different concept than this article is talking about. The scenes in this app were created by a special camera that takes stereo images at every possible position and orientation within a limited space. These stereo images are then presented to the user based on the user’s position and orientation.

    • Andrew Jakobs

      Thanx, gonna check it out with my vive pro.

    • benz145

      Welcome to Lightfields is a really neat demo, though it’s worth noting that a light-field can be viewed on a traditional display (as with the Google demo), but that’s different than a light-field display itself.

      • Immersive Computing

        Light field display sounds incredible!

        The Google demo feels like the “Esper” machine in the original Blade Runner, being able to move inside the scene, seeing the sun shifting through the stained glass window of the church as I move sideways is very cool.

        Here’s something else I discovered, the limited head movement can feel limiting (limits of Light field capture), but try moving further back beyond the view sphere and you become the camera with full movement, see the image. https://uploads.disquscdn.com/images/e4bd4204a461e0b0920edaec8852b1312d1a2093027df0549e0b60f0a42ab536.jpg

  • Adil H

    Current VR displays are a little far from what we see in real life, and for me lightfield and varifocal are more important than resolution and field of view.

    • Mike Porter

      what we see in real life varies greatly by individual and glasses wearers

  • Glenn

    The technology in this case is still doing the focusing for the user. The user’s eye is still focused on the same plane, and does not change as it looks from one object to another. The user’s eyes still converge on objects as though they were looking at things at different points in space, but their biological lenses do not focus differently for things that are near versus far. The vergence-accommodation conflict (which is a biological problem for the user) would therefore persist.

    • Tomas S

      It may not be clear from the article, but CREAL’s HW creates genuine light-field with full optical depth. An eye can physically change focus between virtual objects in different distances.

    • benz145

      This is incorrect. CREAL’s display is creating a genuine light-field which supports both vergence and accommodation just as a real light-field would.

      • Glenn

        Sorry. I was confused by the discussion of foveated rendering and the use of “camera” in place of “eye,” and came to the mistaken conclusion that the system was focusing at the point of convergence/fovea.

  • Bob

    Good thing is Facebook isn’t heading in this direction otherwise there would be no next generation device coming out from their labs for many many years.

    • Andrew Jakobs

      How do you know? You have no idea what they already have in their labs..

      • Bob

        No I have no idea but Facebook have already shared information about their in-house prototypes and I can assure you it isn’t lightfields.

        They could potentially have a separate team working on the technology but it isn’t on the cards for their next generation device; that again I can assure you. This suggests they aren’t heading in the direction of incorporating a lightfield display with the next major product.

  • Kim from Texas

    I wish there was a date on the through-the-lens picture. This looks like the exact same picture as 2019. Or does CREAL just need to find a new image to display? I assume that this is picture from the “benchmark” demo and not the new (2020) smaller display.

  • Creal… really? This is why it’s getting harder to check your spelling with a Google search. Every misspelled word is the name of some idiot company.

    I’ll have to see one of these depth of field displays in person to really grasp the difference between what I have now in existing headsets and what they are offering. I already feel like I have depth of field when looking around in a HMD. My eyes are focusing just like they do in the real-world. It doesn’t feel like anything is missing.

    I could use more resolution, more processing power, deeper blacks (as so many headsets are now LCD), better haptics. But out of all of my concerns related to VR, depth isn’t even on the list. I already feel like there’s depth there.

    • benz145

      Eyes focus by two means:

      – Vergence: When looking at ‘infinity’ your eyes are parallel. As you look at closer and closer objects, your eyes pivot toward the nose to keep the individual image from each eye aligned or fused. This is ‘stereo’ vision and it’s what gives us a sense of depth.

      – Accommodation: Individually, each eye has its own focus mechanism called accommodation. There’s a lens in your eye which changes shape to bend light so that it enters your eye at the correct angle to achieve focus. This angle changes depending on whether you are focusing on an object near or far.

      In the real world, Vergence and Accommodation are unconsciously synchronized as you look around.

      Most headsets today correctly support Vergence (stereo) by providing each eye with a slightly different image. But they don’t support Accommodation because the display is at a fixed distance from the eye. This creates a conflict between Vergence and Accommodation, because instead of being synchronized, one must stay static (Accommodation) while the other can change (Vergence). This is the so-called ‘Vergence-Accommodation conflict’.

      Light-fields and ‘varifocal’ display technologies can fix this issue because they represent the light such that Vergence and Accommodation work correctly in sync.

      Some say that the Vergence-Accommodation conflict can lead to eye-strain, especially when focusing on very near objects. Displays which correctly support Vergence and Accommodation can also look more real because they can show Accommodation-based depth-of-field, which is a strong depth cue that we’re used to in the real world.

      • Immersive Computing

        Vergence-accommodation conflict felt very real in the Vive to me; I remember getting cross-eyed, with mild eyestrain toward a light headache if spending too much time looking closely at near objects. I suspect the gritty-feeling SDE didn’t help with eye fatigue either though.

  • Good that they managed to shrink their technology, but it seems that they still need years to arrive at a usable form factor. And those tiny glasses at the end are just fluff to show the vision.

  • Paul Schuyler

    This is VR’s (and ultimately AR’s) one great hope: a true working solution to eliminate the visual discomforts that come with these headsets. Stereoscopy can’t do it no matter how high the resolution or how refined the lenses. AR tech is not much better, with fundamental hardware roadblocks and tradeoffs for years to come. This uses a completely novel optical solution, apparently.

  • Jack H

    Both CReal and Avegant light fields really remind me of this project by Andrew Maimone:


    Essentially it’s a Texas Instruments “DLP” projector display illuminated by an array of LEDs. Each LED shining on the DLP creates a different view. The combination of several views creates the integral imaging style light field.

    At the time the LEDs were driven in binary on/off mode and I don’t think the DLP was driven in binary mode. However, by summing LED views with variable brightness and by using the newly available binary modes in Texas Instruments DLPs it should be possible to create a much higher quality light field system.

    Having said that, there is a lot to be said for just using a Maxwellian view “retinal display” approximation such as the LetinAR or other pinhole designs, since they don’t have a computing overhead and are also comfortable “accommodative” displays.