Holo-this, holo-that. Holograms are so bamboozling that the term often gets used colloquially to mean ‘fancy-looking 3D image’, but holograms are actually a very specific and interesting method for capturing light field scenes, one with some real advantages over other methods of displaying 3D imagery. RealView claims to be using real holography to solve a major problem inherent to today’s AR and VR headsets: the vergence-accommodation conflict. Our favorite holo-skeptic, Oliver Kreylos, examines what we know about the company’s approach so far.


Guest Article by Dr. Oliver Kreylos

Oliver is a researcher with the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). He has been developing virtual reality as a tool for scientific discovery since 1998, and is the creator of the open-source Vrui VR toolkit. He frequents reddit as /u/Doc_Ok, tweets as @okreylos, and blogs about VR-related topics at Doc-Ok.org.


RealView recently announced plans to turn their previous desktop holographic display tech into the HOLOSCOPE augmented reality headset. This new headset is similar to Magic Leap’s AR efforts in two big ways: one, it aims to address the issue of vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, and AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:

Note that there is a mistake around the 1:00 minute mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance is dialed in correctly.

Unlike pretty much everybody else using the holo- prefix or throwing the term “hologram” around, RealView vehemently claims their display is based on honest-to-goodness real interference-pattern based holograms, of the computer-generated variety. To get this out of the way: yes, that stuff actually exists. Here is a Nature article about the HoloVideo system created at MIT Media Lab.

The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and without any special effects:

I say allegedly, but I do believe this to be true. The resolution is surprisingly high and quality is surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms (which only add to the light from the real environment shining through the display’s visor).

There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, Fourier transform, is only able to create holograms of 2D images. Another method, point source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using Fourier transform, and then composes the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.
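As a toy illustration of the slice-based approach speculated above (emphatically not RealView's actual pipeline, which is unknown), one can numerically propagate each 2D slice to the hologram plane via FFT-based angular spectrum propagation, sum the fields, and interfere the result with a plane reference wave to obtain a recordable intensity pattern. All function names and parameter values here are my own assumptions, chosen only to make the sketch self-contained:

```python
import numpy as np

def propagate_angular_spectrum(field, distance, wavelength, pixel_pitch):
    """Propagate a complex 2D field over `distance` using the angular
    spectrum method (FFT, multiply by the propagation kernel, inverse FFT)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel; evanescent components are clamped to zero.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(2j * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

def multiplane_hologram(slices, depths, wavelength=532e-9, pixel_pitch=8e-6):
    """Sum the propagated fields of 2D image slices placed at different depths,
    then interfere with an on-axis unit-amplitude plane reference wave."""
    total = np.zeros(slices[0].shape, dtype=complex)
    for img, z in zip(slices, depths):
        # Treat each slice's brightness as the field amplitude at its depth plane.
        total += propagate_angular_spectrum(img.astype(complex), z, wavelength, pixel_pitch)
    reference = 1.0  # plane reference wave at normal incidence
    return np.abs(total + reference) ** 2  # real, non-negative interference pattern

# Two toy slices (bright squares) at 1 cm and 2 cm depth
near = np.zeros((128, 128)); near[40:60, 40:60] = 1.0
far = np.zeros((128, 128)); far[70:90, 70:90] = 1.0
pattern = multiplane_hologram([near, far], depths=[0.01, 0.02])
```

A real display would then quantize this pattern onto a spatial light modulator; the point is only that each depth plane contributes its own fringe structure, which is what "multiple focal planes" would mean in a hologram.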

This would literally create multiple focal planes, and allow the creation of dynamic focal planes depending on application or interaction needs, which could potentially explain both the odd language and the high quality of the holograms in the above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally relative to the holographic screen. Fortunately, in head-mounted displays the screen is bolted to the viewer’s head, solving the problem.

SEE ALSO
HoloLens Inside-out Tracking Is Game Changing for AR & VR, and No One Is Talking about It

So while RealView’s underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot the above video is never shown or seen, and a picture from the web site’s medical section shows a large apparatus that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) ‘shopped stock photos. There are no details on resolution, frame rate, brightness or other image specs, and any mention of head tracking is suspiciously absent. Even real holograms need head tracking to work if the holographic screen is moving in space by virtue of being attached to a person’s head. Also, the web site provides no details on the special scanners that are required for real-time direct in-your-hand interaction.

Finally, there is no mention of field of view. As HoloLens demonstrates, field of view is important for AR, and difficult to achieve. Maybe this photo from RealView’s web site is a veiled indication of FoV:

I’m just kidding, don’t be mad.

In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.

  • Jack H

From my reading of digital holography and other light field techniques, there are some major reasons for reducing a 4D light field to multiple depth planes, but the most important are, as you say, the easier computation (across holography, tensors, integral displays) and the problem of only a finite number of depth increments being possible, because of the greater-than-infinitesimal size of the display source pixels.

    By the way Oliver, everywhere I go I seem to run into your AR sandbox systems, most recently a demo at the BT Young Scientist Exhibition here in Dublin!

    • Jack H

To add to the previous, a major challenge of holographic reconstruction is that most embodiments involve a first display/spatial light modulator to change the phase (which gives the shape of the light) and a second for amplitude/brightness, so it is two displays instead of one, plus colour field sequential. I have seen a few that either have 6 SLM layers in one Hamamatsu LCOS device (which seemed slow and expensive) or appear able to do RGB and phase simultaneously on a single SLM, but those seemed to be lower quality and computationally intensive.
Lastly, I’m not sure if one may have to vary the focus of the last lens in a 4f holographic system to get a sweet spot for reconstruction over a large depth range.

I decided to try something much simpler: it’s just a switchable stack of polarisation-sensitive metamaterial lenses, so it’s multifocal but not a “true” light field (no polar or azimuthal angles to the ray origins). Here is a terrible-quality video I made with the first engineering sample: https://www.youtube.com/watch?v=TI7px7rd4ls

      • Ivan Onuchin

One way to reduce the amount of calculation is to know the exact position and direction of view. That’s why all such technologies are combined with eye tracking, integrated into an all-in-one optical design. https://www.youtube.com/watch?v=QBa-668ByAk

  • Dagottfr

    IMMY’s Natural Eye Optics (N.E.O) has already solved the accommodation convergence conflict.

    Utilizing one’s own eye as the only lens in the entire device, it is a direct retinal projection that emulates human vision.

    The team that fixed the Hubble/built the James Webb worked on this for almost a decade.

    I truly believe they are the only ones that will/have cracked the code with their fundamentally different and organic approach.

    • Immy did not solve accommodation/vergence conflict. Here’s a diagram showing how Immy’s display engine works: https://i2.wp.com/immyinc.com/wp-content/uploads/2017/01/neo-infographic-original1.png

Notice the standard micro OLED display at the top, and the sequence of three mirrors to create and superimpose an enlarged virtual image of the microdisplay over the user’s direct view of the environment. True, they’re not using lenses, but they’re still creating a virtual image the old-fashioned way. The user will see a floating, enlarged version of the microdisplay in their vision, at a fixed optical distance.

      They make a big deal about not using lenses, and their light “never entering another medium,” but that doesn’t change the fundamentals of how the display engine works, and doesn’t make it a “retinal display.” The core idea of a retinal display is to bypass refraction from the eye’s lens by directly projecting light onto the retina through a narrow laser beam. The primary difference is that the laser beam, by virtue of being very narrow and non-divergent, is not affected by the lens, and therefore the image on the retina is always in focus no matter how the lens is accommodated.

      The benefit of Immy’s display engine is that mirrors induce significantly less geometric distortion and chromatic aberration than lenses, and that using a half-mirror as the final optical element creates an undistorted view of the real world.

      • Jack H

Can the above 3-mirror system create a (very tight eye relief) retinal-condition display without needing either a very small first element or a pinhole aperture?

        • I don’t see how. You need light from a range of angles entering the eye through the lens’ optical center and at a very narrow diameter to bypass lens refraction, and then still hitting the retina in a very small spot to appear in focus. That implies the light needs to come in the form of narrow collimated beams. The only way to create those outside a scanning laser would be, as you say, a pinhole aperture directly on the lens or extremely close to it. Could be done via contact lens, or some pharmaceutical means to constrict the pupil.

  • Lucidfer

Not vaporware, unlike Magic Leap.

    • Erm, your comparison is based on what?

      • Lucidfer

The fact that Holoscope uses real, existing technology, and that they’re showcasing a real example of it, versus Magic Vaporware, which never made sense in terms of existing science, never showed anything, and was invested in and promoted like every vaporware scheme that has existed before.

        • Dynastius

Just saw this… so you think all the people who have seen it in person are bullshitting? My only question about Magic Leap is whether they can really miniaturize the tech enough and mass-produce it. People have actually used the thing and were blown away, but it was a huge piece of equipment.

          • Lucidfer

No, I think there’s truth, period. Science is truth; marketing PR lies are “truth” too. It’s not the first time that Google and Wired have lied about a groundbreaking technology that people invested millions in, with promises that didn’t make ONE fucking bit of sense in terms of existing technology or even science, only for it to disappear while money somehow got laundered.

    • JustNiz

I think you have that exactly the wrong way round. Magic Leap is apparently real enough: https://www.wired.com/2016/04/went-inside-magic-leaps-mysterious-hq-heres-saw/
I haven’t seen anything beyond empty claims for Holoscope, and no reports of any actual demos are to be found anywhere, leading me to suspect they don’t have any actual working hardware yet.

      • Lucidfer

Then you don’t know much about what you’re talking about. The Wired article is, from experience, quasi-proof that it’s vaporware (and it wouldn’t be the first time). There is nothing surprising about Holoscope’s technology, although it could be vapor too.

        • JustNiz

          I’d rather be wrong than a rude asshole like you.

          • Lucidfer

            Yeah I can see that, you would trade truth for a fucking selfie with a celebrity cunt.

    • VR Geek

      Are we sure this is not vaporware?

      • Lucidfer

The PR says there’s a huge probability of that. The state of scientific research says this is absolutely the case. Light field glasses are in no way a consumer-product technological reality. Now, in the current state of things, that doesn’t mean they won’t release yet another pair of unusable AR glasses to prove the point, but clearly nothing like the “revolutionary, ground-breaking, to-be a huge success” product that was overblown and over-advertised in all the suspicious PR ways possible.

  • Richard Servello

So awesome to see that so many MR devices are coming soon!!!!

  • Peter Hansen

    A “spatially correct, long lasting erection”?!

Great article, Dr. Kreylos, and I believe you are right about the multiplane interference pattern, but this does not appear to be the case in the second demo with the animated bird. I am also wondering if this is based upon the work NVIDIA and Stanford collaborated on a couple of years back, testing the practicality of multiple image planes in a VR headset. It is also noteworthy that Magic Leap demos have a similar look in their demonstration videos.

  • Looks like an incredible device!