Co-founded by former CERN engineers who contributed to the ATLAS project at the Large Hadron Collider, CREAL3D is a Switzerland-based startup that’s created an impressive light-field display that’s unlike anything in an AR or VR headset on the market today.

At CES last week we saw and wrote about lots of cool stuff. But hidden in the less obvious places we found some pretty compelling bleeding-edge projects that might not be in this year’s upcoming headsets, but surely paint a promising picture for the next next-gen of AR and VR.

One of those projects wasn’t in CES’s AR/VR section at all. It was hiding in an unexpected place—one and a half miles away, in an entirely different part of the conference—blending in as two nondescript boxes on a tiny table among a band of Swiss startups exhibiting at CES as part of the ‘Swiss Pavilion’.

It was there that I met Tomas Sluka and Tomáš Kubeš, former CERN engineers and co-founders of CREAL3D. They motioned to one of the boxes, each of which had an eyepiece to peer into. I stepped up, looked inside, and after one quick test I was immediately impressed—not with what I saw, but with how I saw it. But it’ll take me a minute to explain why.

Photo by Road to VR

CREAL3D is building a light-field display. Near as I can tell, it’s the closest thing to a real light-field that I’ve personally had a chance to see with my own eyes.

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.
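For readers new to the term: a light-field is commonly modeled as a four-dimensional function giving radiance for every position and direction of a ray crossing a plane. The toy sketch below (my own illustration, not CREAL3D’s representation; all dimensions are arbitrary) shows the idea, and why a conventional flat display is the degenerate case:

```python
# Toy model of a light field: radiance as a function of position (x, y)
# and direction (u, v) on a plane -- a 4D function per color channel.
import numpy as np

# 8x8 spatial samples, 4x4 angular samples per position, RGB radiance.
light_field = np.zeros((8, 8, 4, 4, 3))

# The ray leaving position (2, 3) in direction (1, 0) carries pure red.
light_field[2, 3, 1, 0] = [1.0, 0.0, 0.0]

# A conventional flat display collapses the angular dimensions: each pixel
# emits the same radiance in every direction, so the eye cannot refocus.
conventional = light_field.mean(axis=(2, 3))
print(conventional.shape)  # (8, 8, 3)
```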

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.
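The conflict described above can be put in rough numbers. This is a back-of-the-envelope sketch of my own (the 63 mm interpupillary distance and 1.5 m fixed focal distance are illustrative assumptions, not specs of any particular headset):

```python
# Sketch: the vergence-accommodation conflict in numbers. A typical headset
# renders everything at one fixed focal distance, while vergence follows
# the virtual object's apparent distance.
import math

IPD_M = 0.063          # assumed interpupillary distance (63 mm)
FIXED_FOCUS_M = 1.5    # assumed fixed focal distance of the display

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight for an object at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def vac_diopters(object_m: float, focus_m: float = FIXED_FOCUS_M) -> float:
    """Mismatch between vergence-implied and display focal distance, in diopters."""
    return abs(1 / object_m - 1 / focus_m)

# An object rendered at 0.5 m: the eyes converge strongly, but each eye's
# focus stays stuck at 1.5 m.
print(round(vergence_angle_deg(0.5), 2))  # ~7.21 degrees of convergence
print(round(vac_diopters(0.5), 2))        # ~1.33 diopters of conflict
```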

More simply put, almost all headsets on the market today are displaying imagery that’s an imperfect representation of how we see the real world.

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and—from what we know so far—seems to support a wide range of continuous focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.
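To make the varifocal idea concrete, here is a hypothetical sketch of how a system with discrete focal planes might pick one from eye-tracked gaze depth. The two plane distances are my own assumptions for illustration, not Magic Leap’s published values:

```python
# Hypothetical varifocal plane selection: with eye tracking providing the
# gazed depth, the system jumps to whichever focal plane is nearest in
# diopter (1/meters) space.

PLANES_D = [0.5, 3.0]  # assumed focal planes in diopters (2.0 m and ~0.33 m)

def pick_plane(gaze_depth_m: float) -> float:
    """Return the focal plane (in diopters) nearest to the gazed depth."""
    gaze_d = 1.0 / gaze_depth_m
    return min(PLANES_D, key=lambda p: abs(p - gaze_d))

print(pick_plane(1.8))  # 0.5  (far plane)
print(pick_plane(0.4))  # 3.0  (near plane)
```

A continuously varifocal design like Half Dome would instead drive the focal distance directly toward the gazed depth rather than snapping between fixed planes.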

So, back to the quick test I did when I looked through the CREAL3D lens: inside I saw a little frog on a branch very close to my eye, and behind it was a tree. After looking at the frog, I focused on the tree which came into sharp focus while the frog became blurry. Then I looked back at the frog and saw a beautiful, natural blur blossom over the tree.

Above is raw, through-the-lens footage of the CREAL3D light-field display in which you can see the camera focusing on different parts of the image. (CREAL3D credits the 3D asset to Daniel Bystedt).

Why is this impressive? Well, I knew they weren’t using eye-tracking, so I knew what I was seeing wasn’t a typical varifocal system. And I was looking through a single lens, so I knew what I was seeing wasn’t mere vergence. This was accommodation at work (the dynamic focus of each individual eye).

The only explanation for being able to properly accommodate between two objects with a single eye (and without eye-tracking) is that I was looking at a real light-field—or at least something very close to one.

That beautiful blur I saw was the area of the scene not in focus of my eye, which can only bring one plane into focus at a time. You can see the same thing right now: close one eye, hold a finger up a few inches from your eye and focus on it. Now focus on something far behind your finger and watch as your finger becomes blurry.

This happens because the light from your finger and the light from the more distant objects is entering your eye at different angles. When I looked into CREAL3D’s display, I saw the same thing, for the same reason—except I was looking at a computer generated image.
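The geometry behind that blur can be approximated with a simple estimate: the angular size of the blur disc on the retina grows with the pupil diameter and with the defocus measured in diopters. This is my own back-of-the-envelope model (the 4 mm pupil is an assumed typical value):

```python
# Sketch: approximate retinal blur for an out-of-focus object. Rays from a
# point off the focal plane land on the retina as a disc (the "circle of
# confusion") whose angular size is roughly pupil diameter x defocus.

PUPIL_MM = 4.0  # assumed pupil diameter

def blur_disc_mrad(focus_m: float, object_m: float, pupil_mm: float = PUPIL_MM) -> float:
    """Approximate angular blur (milliradians) of an object at object_m
    while the eye is focused at focus_m."""
    defocus_d = abs(1.0 / object_m - 1.0 / focus_m)
    return pupil_mm * defocus_d  # mm x diopters = mrad (factors of 1000 cancel)

# A finger at 10 cm while the eye focuses at 2 m: heavily blurred.
print(blur_disc_mrad(focus_m=2.0, object_m=0.1))  # 38.0 mrad

# The same finger while focused on it: perfectly sharp.
print(blur_disc_mrad(focus_m=0.1, object_m=0.1))  # 0.0 mrad
```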

A little experiment with the display really drove this point home. Holding my smartphone up to the lens, I could tap on the frog and my camera would bring it into focus. I could also tap the tree and the focus would switch to the tree while the frog became blurry. As far as my smartphone’s camera was concerned… these were ‘real’ objects at ‘real’ focal depths.

Through-the-lens: focusing on the tree. | Image courtesy CREAL3D

That’s the long way of saying (sorry, light-fields can be confusing) that light-fields are the ideal way to display virtual or augmented imagery—because they inherently support all of the ‘features’ of natural human vision. And it appears that CREAL3D’s display does much of the same.

But, these are huge boxes sitting on a desk. Could this tech even fit into a headset? And how does it work anyway? Founders Sluka and Kubeš weren’t willing to offer much detail on their approach, but I learned as much as I could about the capabilities (and limitations) of the system.

The ‘how’ part is the least clear at this point. Sluka would only tell me that they’re using a projector, modulating the light in some way, and that the image is not a hologram, nor are they using a microlens array. The company believes this to be a novel approach, and that their synthetic light-field is closer to an analog light-field than any other they’re aware of.

SEE ALSO
Facebook Open-sources DeepFocus Algorithm for More Realistic Varifocal VR Rendering

Sluka tells me that the system supports “hundreds of depth-planes from zero to infinity,” with a logarithmic distribution (higher density of planes closer to the eye, and lower density further). He said that it’s also possible to achieve a depth-plane ‘behind’ the eye, meaning that the system can correct for prescription eyewear.
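Spacing depth planes uniformly in diopters (inverse meters) naturally produces the distribution Sluka describes: dense near the eye and sparse toward infinity. A small sketch of that spacing (the plane count and near limit are my own assumptions for illustration):

```python
# Sketch: depth planes spaced uniformly in diopter space. Equal steps in
# 1/distance pack planes tightly near the eye and spread them out far away.

def depth_planes_m(n: int, near_m: float = 0.25) -> list:
    """n depth planes from optical infinity (0 diopters) in to near_m,
    uniformly spaced in diopters. Returns distances in meters."""
    near_d = 1.0 / near_m
    step = near_d / (n - 1)
    planes = []
    for i in range(n):
        d = i * step  # diopters: 0 (infinity) ... near_d
        planes.append(1.0 / d if d > 0 else float("inf"))
    return planes

planes = depth_planes_m(9)
# One step takes you from infinity to 2 m, while the nearest planes
# cluster tightly (0.29 m, 0.25 m, ...).
print([round(p, 2) for p in planes])
# [inf, 2.0, 1.0, 0.67, 0.5, 0.4, 0.33, 0.29, 0.25]
```

A negative-diopter plane would sit “behind” the eye in this scheme, which is how the system could compensate for a wearer’s eyeglass prescription.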

The pair also told me that they believe the tech can be readily shrunk to fit into AR and VR headsets, and that the bulky devices shown at CES were just a proof of concept. The company expects that they could have their light-field displays ready for VR headsets this year, and shrunk all the way down to glasses-sized AR headsets by 2021.

At CES CREAL3D showed a monocular and binocular (pictured) version of their light-field display. | Photo by Road to VR

As for limitations, the display currently supports only 200 levels per color channel (RGB). Increasing the field of view and the eyebox will also be a challenge because of the need to expand the scope of the light-field, though the team expects they can achieve a 100 degree field of view for VR headsets and a 60–90 degree field of view for AR headsets. I suspect that generating synthetic light-fields in real-time at high framerates will also be a computational challenge, though Sluka didn’t go into detail about the rendering process.
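For context on that color figure: 200 discrete levels per channel works out to just under 8 bits, versus the 256 levels of a standard 24-bit display:

```python
# 200 levels per channel vs. a conventional 8-bit (256-level) channel.
import math

levels = 200
print(round(math.log2(levels), 2))  # 7.64 bits per channel, vs. 8.0 for 256
print(levels ** 3)                  # 8000000 total colors, vs. ~16.7 million
```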

Through-the-lens: focusing on the near pieces. The blur seen in the background is not generated; it is ‘real’, owing to the physics of light-fields. | Image courtesy CREAL3D

It’s exciting, but early for CREAL3D. The company is a young startup with 10 members so far, and there’s still much to prove in terms of feasibility, performance, and scalability of the company’s approach to light-field displays.

Sluka holds a PhD in Science Engineering from the Technical University of Liberec in the Czech Republic. He says he’s a multidisciplinary engineer, and he has the published works to prove it. The CREAL3D team counts a handful of other PhDs among its ranks, including several from Intel’s shuttered Vaunt project.

Sluka told me that the company has raised around $1 million in the last year, and that the company is in the process of raising a $5 million round to further growth and development.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.


  • Nobody55

    Amazing! It does look very similar to holographic displays (not commercially available) which recreate the light wavefront at the position of the eyes.
    I’m pretty sure the principle is the same. It involves a laser source (their projector?) and the modulation function is the Fourier transform of the 3D surface. From there, optical diffraction recreates the correct wavefront, giving the impression of depth.

    I’ve thought for a while now that holographic displays are the future of VR/AR!

    • Lucidfeuer

      Sure but they are at least 20 years away.

      • Nate Vander Plas

        Really? I feel like we could have them in 5-10 years easily. I might not have said that before reading this article, but this looks super promising. Their projection of 2021 seems a bit overly optimistic, but 20 years?

        • dk

          It’s not 20 years for the standard, simpler approach: a display with an array of lenses in front of it (look up Nvidia’s light field display; it has a tiny form factor). Those will be feasible when we have 8K-per-eye panels at a reasonable price, plus a reasonably priced PC to drive them, which with eye tracking might not be that far off. And there might be different tech that speeds up this evolution. Maybe this tech can make it happen, if they explain how it’s better and what its specs are.

        • Lucidfeuer

          You just have to look at how slow VR headset development has been since 2013 to understand how long it unfortunately takes for fundamental research and development to become a producible component for mass markets.

      • Nobody55

        I don’t think so. The technology is there in the research labs. But it needs to pass the mass production step in order to be affordable.

        • mirak

          That would certainly be a huge plus for people who already have headsets, but it would not draw many people who weren’t interested in VR yet.

          • Nobody55

            Yeah maybe. But having real accommodation would certainly help to make VR feel more comfortable and natural.
            So people not accustomed to VR headsets could be more interested.

        • Lucidfeuer

          Unfortunately, that’s how long it takes for technology that needs iterating at a fundamental level (light-field optics, especially with vergence-accommodation, is no small feat) to eventually become operational and yieldable at the consumer-market level.

  • Jonathan

    Wow! Why is this not on the front page of a lot of technical publications?! Maybe it’s a bit early for the company, but it seems it could be a real game changer!

    • dk
      • No, it is not done like Nvidia’s. Both are true “light field” displays, but CREAL3D’s is done very differently.

        • Felix

          How are they generating the light field?

          • mirak

            With unicorn farts.

          • TheObserver

            Yup. Unicorn farts and mermaid saliva.

        • dk

          Yep, I mentioned that the ones I linked have an array of lenses, in case people haven’t heard of other light field displays.

        • Nate Vander Plas

          Karl, do you have any idea how they’re doing it?

    • mirak

      Because even Nvidia already showed prototypes.

  • namekuseijin

    this is great for movies, TV content showing on a flat screen – they only have to capture, record and transmit the whole lightfield.

    But for computer-generated imagery, generating the whole lightfield at once is not a good idea: the player will be only focusing on this or that part or depth. It’s the very reason why we need eye-tracking: so we can lower computational requirements for rendering for VR.

    Eye-tracking will eventually enable varifocal solutions too, and that is far more computationally economical than a whole light-field generated on the fly and mostly wasted, much like our own whole-screen VR rendering as well…

    • alexp700

      I suspect computationally it’s only a depth buffer that’s required, which is generated anyway. This should work with foveated rendering, as the mechanism is complementary: you just need less detail where the eye is not looking, which can be represented on a light-field display.

      • No, there is no eye tracking or foveated display used.

        • dk

          So do you have an idea what trick they’re using? They’re claiming that a 100-degree FOV VR headset is possible in the near future.
          And with a more traditional LFD with an array of lenses, getting angular resolution similar to the Rift/Vive means the panels need a massive resolution of around 8K per eye.
          What projector are they thinking of using, or is it possible because of something else they’re doing?
          Looking forward to your article about this setup.

        • mirak

          What are you talking about?
          He is not saying that.

          • The key point is that they are not currently eye-tracking, which would be required to support foveated rendering. I suspect they might in the future, to cut down the data load in a product.

      • Moe Curley

        It’s not about the ability to get one part of the image to “get blurry”. It’s about the image being composed of areas located at different focal planes, so that the eye can accommodate by naturally focusing at different distances.

        • mirak

          Not areas, light rays.

          • Yes. Magic Leap has people thinking in terms of “planes”, which act differently, and which is why Magic Leap could only show one at a time and had to track the eye.

            A true light field works with “bundles” of rays (it’s not possible to represent individual rays, loosely speaking). CREAL3D does not currently track the eye, so all the depths are available at once. I expect in the future they will track the eye to reduce the data, similar to foveated rendering.

          • Moe Curley

            Thanks for the insight.

    • Firestorm185

      Introducing: Nvidia ITX

    • mirak

      I don’t see the issue, as you can use foveated rendering and eye tracking with light fields too.

      With light fields you could also use accommodation-depth rendering.

      You don’t need to render accurately the focal depths you are not focused on, because your eyes will see them as blurry anyway.

      If varifocal systems can use eye tracking to guess the depth the eye should accommodate to, then a light-field renderer can use that too, to render less accurately the depths the eye is not accommodating to.

      Of course varifocals win on performance, but they are killed by light fields’ realism.

  • Ombra Alberto

    I hope that Oculus will buy them and invest in this technology.
    And then you put it on the CV2

    • Justin Davis

      If you want CV2 to be many thousands of dollars.

    • Exactly what I was thinking! I was like “Oculus needs to buy them out!” Heh. We don’t need a saturation of VR headsets on the market; it’s inevitable that only a few contenders will win the day. I’ve been a long-term fanboy of Oculus, but I’m not saying this out of bias: the surprising fact is that they somehow won the largest share of Steam users, and the Rift sold out this last Christmas season on Amazon and Best Buy. That was a little shocker, because I had been concerned they were falling behind, though my loyalty and belief in their company and product remain intact regardless of where they stand in the arms race. It’s just nice to know that others like their product a lot as well.

      To me it was a no-brainer: the Rift’s design, the form factor, the soft fabric outer shell, and the nifty Touch controls compared to the Vive and other HMDs made it seem like such a breakthrough product in the VR space. What I love about Oculus is that they don’t push for extremes like a 240-degree field of view or other gimmicks like Pimax and their so-called “8K” display (not truly 8K), which make headsets that much bulkier, more cumbersome, and not all that user friendly. Oculus tries to pack the best it can into a small package that, while it may not have all the bells and whistles, still feels like a premium product and is affordable for the masses. That’s the dream, after all. All these startups, even Magic Leap and StarVR, are priced for the ultra-rich consumer; they’re not meant for mass adoption (or at least not yet), which makes you realize those companies’ goals are not all that consumer friendly. Oculus, on the other hand, is on a mission to bring the best tech they can and sell it at cost, or even at a loss, to benefit the consumer.

      They can do this because they have the funding and backing of Facebook; they don’t need to rely on startup funds or private investors. An established company working to bring VR to billions of users worldwide! It was Oculus’ research and development team that brought work on the vergence-accommodation problem to the table when others were focused mostly on increasing field of view or resolution, and they were doing it with the Half Dome prototype while trying to fit it into the same lightweight form factor as the Rift. That’s pretty impressive. Things like that prove to me they’re making strides in the right direction, keeping form factor, ergonomics, and ease of use at the forefront (critical issues for VR right now) instead of working backwards and making yet another HMD that’s a giant box. I’m very confident and hopeful that Oculus will be a major contender in the VR revolution in the years to come. I was concerned they’d be an “Apple”-like company with an exclusive, closed ecosystem, but that wasn’t the case: they’ve stayed more or less open while also delivering their own software to help ensure quality of experience. That mix of “Apple” and “Microsoft” gives Facebook a reputation that feels fresh and unique, not just a repeat of the dumb business practices of so many major companies before them. Hopefully they stay true to that and keep delivering amazing things, and don’t go under like Nokia or Enron, or lose their way like post-Jobs Apple (AirPods aside).

  • Nerd from Texas

    Seems like what Magic Leap promised, but didn’t deliver

    • dk

      Yeah, so many years and billions, and what they did in the dev kit is two HoloLens waveguides that work one at a time. And these guys have a better demo to show. But implementing it in a headset for a reasonable price, and powering it with a reasonably powerful PC, is a different story.

  • brubble

    Neat-o.

  • Rosko

    Impressed, even more so if we actually see it in a product.

  • Nate Vander Plas

    This is exciting! Hopefully they really can miniaturize it and make it affordable! I’m sure there are many problems to solve on the rendering side, but it seems like that will be the easy part. I’m astounded at how clear it looks, especially for a “proof of concept!” I’m totally perplexed at how it works since they don’t use a microlens array.

  • Albert Hartman

    For young eyes only. No one over 50 has accommodation anymore, nor does anyone who wears glasses. That’s why vergence matters, and everything being in focus is not so bad.

    • Bob

      “nor does anyone who wears glasses.”

      Really?

    • mirak

      That’s total nonsense.

    • Moe Curley

      Wow. That is so far from correct.

    • Albert Hartman

      I don’t know why I’m bothering, but here’s some information. https://en.wikipedia.org/wiki/Accommodation_(eye)#/media/File:Duane_(1922)_Fig_4_modified.svg

      • Daniel Gochez

        53 here. I’ve worn glasses all my life and just lately needed a prescription for another set of glasses specifically for up close (reading). But I can still see things close to my eye as much blurrier if I focus on something in the distance, and vice versa, despite what that graph might suggest.

  • impurekind

    The fact this exists for you to try means it’s coming soon-ish, so it’s something to be excited about for sure. Once you can get stuff like this in commercial AR and VR headsets–wow!

  • MosBen

    Very cool. I wonder if you could have the 100 degree light fields in the center of view in an HMD and then more traditional screens to fill out the outer edges of the FOV to achieve something closer to a natural human FOV. Certainly interesting, and it bodes well for the future of VR.

    • maiskorn123

      smart thinking

      • MosBen

        Thanks!

  • Alan Harrington

    And this is why Magic Leap didn’t deserve to soak up all the Billions… None left for the real innovators out there.

    • brandon9271

      I was thinking the same thing. These guys are the real deal and ML are snake oil salesmen.

  • bleed gfx

    Light-field displays are the future of VR. Now the most interesting part: if you want real-time 3D graphics, it needs to be raytraced! And now Nvidia (who also works on light-field displays) steps in with RTX cards equipped with VR Link connections.

  • Ok, I am hyped. But I don’t much believe their claims on miniaturization… I think we still need years

  • The fact that these prototypes have a limited color range and are based on a projector give hints how the system works. I’d wager that they use an off-the-shelf DLP projector with a modified color wheel or perhaps additional optical elements between the micromirrors and the main lens.
    Wikipedia page on DLP projectors: https://en.wikipedia.org/wiki/Digital_Light_Processing

  • cataflic

    I’m a little bit tired of “wonderful ideas”… “prototypes”… come on… years are passing by, and every time we get these… prototypes… then again years to wait for a functioning device.

  • the display currently only supports 200 levels per color (RGB)

    What does this even mean? Are you referring to the number of code values per R, G, B channel, e.g. a 7.645-bit display?

  • Tuotuo Li

    I believe this is using the same method described here: TomoReal: Tomographic Displays.
    https://arxiv.org/pdf/1804.04619.pdf
