Researchers at Stanford and Samsung Electronics have developed a display technology capable of packing in greater than 10,000 pixels per inch (ppi), a density that could one day serve the VR/AR headsets and contact lens displays of the future.

Over the years, research and design firms like JDI and INT have been racing to pave the way for ever higher pixel densities for VR/AR displays, astounding convention-goers with prototypes boasting pixel densities in the low thousands. The main idea is to reduce the perception of the dreaded “Screen Door Effect”, which feels like viewing an image in VR through a fine grid.

Last week, however, researchers at Stanford University and Samsung’s South Korea-based R&D wing, the Samsung Advanced Institute of Technology (SAIT), said they’ve developed an organic light-emitting diode (OLED) display capable of delivering greater than 10,000 ppi.

In the paper (via Stanford News), the researchers outline an RGB OLED design that is “completely reenvisioned through the introduction of nanopatterned metasurface mirrors,” taking cues from previous research done to develop an ultra-thin solar panel.

Image courtesy Stanford University, Samsung Electronics

By integrating into the OLED a base layer of reflective metal with nanoscale corrugations, called an optical metasurface, the team was able to produce miniature proof-of-concept pixels with “a higher color purity and a twofold increase in luminescence efficiency,” making the design well suited to head-worn displays.

Furthermore, the team estimates that their design could even be used to create displays upwards of 20,000 pixels per inch, although they note that there’s a trade-off in brightness when a single pixel goes below one micrometer in size.
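
For a sense of scale, pixel pitch follows directly from pixel density; the quick back-of-the-envelope sketch below (our own arithmetic, not from the paper) shows why 20,000 ppi brushes up against that one-micrometer threshold.

```python
# Pixel pitch (center-to-center spacing) from pixel density.
# 1 inch = 25,400 micrometers.
MICROMETERS_PER_INCH = 25_400

def pixel_pitch_um(ppi: float) -> float:
    """Return pixel pitch in micrometers for a given pixels-per-inch."""
    return MICROMETERS_PER_INCH / ppi

for ppi in (10_000, 20_000, 25_400):
    print(f"{ppi:>6} ppi -> {pixel_pitch_um(ppi):.2f} um pitch")
# 10,000 ppi -> 2.54 um; 20,000 ppi -> 1.27 um; only beyond ~25,400 ppi
# does the pitch itself fall below one micrometer (each subpixel is
# smaller still).
```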


Stanford materials scientist and senior author of the paper, Mark Brongersma, says the next steps will include integrating the tech into a full-size display, a task that will fall to Samsung.

It’s doubtful we’ll see any such ultra-high resolution displays in VR/AR headsets in the near term—even with the world’s leading display manufacturer on the job. Samsung is excellent at producing displays thanks to its wide reach (and economies of scale), but there’s still no immediate need to tool mass manufacturing lines for consumer products.

That said, the next generation of VR/AR devices will need a host of other complementary technologies to make good use of such an ultra-high resolution display, including reliable eye-tracking for foveated rendering as well as greater compute power to render ever more complex and photorealistic scenes, things that are certainly coming but aren’t here yet.

  • TechPassion

    Samsung Odyssey 2, this year, with this technology, please.

  • leseki9

    To clarify the research paper: they explain how metasurfaces can be used with OLED to bypass the need for color filters and achieve better light efficiency, gamut, and brightness. The high nits achieved, as well as the high PPI, are necessary for diffractive waveguide-based AR; however, such high PPI and brightness aren’t needed for VR, even for retinal resolution.
    The paper’s authors seem either not very familiar with VR optics and display technologies, or they mention AR and VR merely as potential use cases alongside other future optics.

    The paper does not address the actual issue with very high-PPI, VR-FOV-sized display panels: the high production cost and low yield of such a display. This is not the first technology that allows retinal-resolution PPIs.

    Basically, the more pixels you manufacture, the more costly each panel gets and the higher the chance that one of its pixels is dead (see the sketch below). For a 1080×1200 panel, a defect rate of 1 per million pixels is not too bad, but for a 4K×4K or 8K×8K panel it’s unacceptable; you’d need roughly 16-64x better yield. We are currently struggling even with 2K×2K panels.
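
    A rough sketch of that yield math (illustrative defect rate, assuming independent per-pixel failures; not industry data):

    ```python
    # Chance of a zero-dead-pixel panel, assuming each pixel fails
    # independently with probability p (illustrative rate only).
    def panel_stats(width: int, height: int, p: float = 1e-6):
        n = width * height
        return n * p, (1.0 - p) ** n  # expected dead pixels, P(perfect panel)

    for w, h in [(1080, 1200), (2048, 2048), (4096, 4096), (8192, 8192)]:
        dead, perfect = panel_stats(w, h)
        print(f"{w}x{h}: ~{dead:.0f} dead pixels expected, "
              f"{perfect:.2%} chance of a perfect panel")
    # 1080x1200 -> ~27% perfect panels; 2048x2048 -> ~1.5%;
    # 4096x4096 and up -> essentially zero at this defect rate.
    ```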

    • mirak

      Such high resolution is needed for lightfield displays, which are the future of VR.

      • leseki9

        Not true; lightfield displays are just one of four main groups of vergence-accommodation-conflict-free VR display technologies. All the current research suggests that even with lightfield displays, eye tracking is necessary for reasonable CPU and GPU requirements and proper image rendering, which defeats the purpose of lightfield displays over, say, varifocal displays. Research is still ongoing, but as it stands there’s no evidence to be confident that lightfield or holographic displays are the future of VR.

        • Bob

          Lightfield – too exotic, too expensive.

          Varifocal is the best and most cost effective way within the next 10 years.

          • leseki9

            I probably agree, and I’d say the same about holographic displays.
            And it’s not even just about expense. If eye tracking is solved and varifocal provides the same user experience lightfield would, and if lightfield doesn’t itself remove the need for eye tracking for correct image rendering, what’s the point in spending time and resources on lightfield? There has to be a reason; otherwise we’ll end up in the same situation as the discontinued flat-screen CRT displays, which provided a superb contrast ratio.

          • mirak

            The point of lightfields is that they don’t need eye tracking to correctly render depth of focus.

            However, foveated rendering with eye tracking would help to compute ray-traced images in real time.

          • leseki9

            Except it does need to know where the eye is looking to render correctly without artifacts, and also to bring CPU and GPU requirements down to a reasonable level. There are papers explaining the first problem.

          • mirak

            This one doesn’t need eye tracking to correct artifacts
            https://www.roadtovr.com/creal3d-light-field-display-ar-vr-ces-2019/

            You should source what you talk about.

          • leseki9

            An inexperienced eye may not notice an artifact in a controlled environment, nor are there articles that someone who isn’t an optical engineer would understand for me to just link or quote. The main issue is “pupil swim”: your eye’s lens doesn’t just rotate, it also moves. In other words, the center of rotation of your eye’s lens isn’t the center of the lens itself, and this causes artifacts. There’s a similar issue with focal surface, multifocal, and holographic displays. The easiest video is this one, though I haven’t seen one explaining the issue for lens array-based lightfield displays specifically: https://www.youtube.com/watch?v=LQwMAl9bGNY&feature=youtu.be

            When the lens array is physically much farther from your eyes, as with lightfield monitors, or when there’s an extra eyepiece, as the old CREAL prototype in your link may be using, this isn’t a big deal.

            And again, besides the above, there’s also the issue of unrealistic computational requirements. Pre-rendered videos or a CAD or 3D modelling program may run in real time, but a video game with modern graphics is just not possible, and probably won’t be in the future either.

        • mirak

          You describe exactly why it’s the future of VR.

          • leseki9

            No, as long as lightfields also rely on eye tracking there’s no advantage, as I already explained. There’s no reason to assume at this point that the eye tracking dependency will be solved.

      • kontis

        How do you know that?

        The inventor of Nvidia’s near-eye light field display has no idea if it is, but you know that. Fascinating what armchair engineering is capable of.

        • mirak

          Of course he does.
          Lightfields reproduce the way light rays reach the eyes in reality.

          It means you can accommodate like in reality and see things in focus like in reality.

        • dk

          The Nvidia guy now works at Facebook Reality Labs… and he means there is a reduction in perceived resolution with this type of lens-array stack, so to reach current-level perceived resolution you would need a high-resolution panel.

          The lightfield stack is tiny as far as form factor goes, so no more box on your face, and it solves vergence-accommodation, but to get a big FOV and good resolution you need a really high-resolution panel… and most likely eye tracking and foveated rendering.

  • mirak

    It’s still fucking Pentile RGBG OLED according to the pictures, not RGB OLED xD

    • leseki9

      RGBG can represent one pixel where two green subpixels simply share their total luminance.

      • Bob

        He’s talking about PenTile with regard to the screen door effect, which would be a non-issue with these sorts of super-high pixel density displays.

        • mirak

          Yes, but on the other hand we can argue that there’s no point using PenTile at such a high resolution.

          • Bob

            Sure, but that’s really the point: the pixel density is so high that the pixel arrangement wouldn’t matter, which leads to another topic: $$$.

          • mirak

            The theory behind PenTile is that the human eye is more sensitive to green and to luminance than to color.
            This makes sense as a way to increase perceived resolution if you lack pixels.

            But so far I think it was more a way for Samsung to advertise resolutions that could match what LCD does than a really efficient way to increase perceived resolution.

            Legally, you must have something like 50% of the subpixels of a resolution to advertise that your screen can display that resolution (see the sketch below for the subpixel arithmetic).
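
            A quick sketch of that subpixel arithmetic (the 50% legal threshold is the claim above, not verified here):

            ```python
            # Subpixels per advertised pixel: full RGB stripe vs. RGBG PenTile.
            w, h = 2560, 1440                 # example advertised resolution
            stripe = w * h * 3                # RGB stripe: 3 subpixels per pixel
            pentile = w * h * 2               # RGBG PenTile: 2 subpixels per pixel
            print(f"RGB stripe: {stripe:,} subpixels")
            print(f"PenTile:    {pentile:,} subpixels ({pentile / stripe:.0%} of stripe)")
            # PenTile ships ~67% of the stripe subpixel count for the same
            # advertised resolution, comfortably above a 50% threshold.
            ```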

          • Regarding colour:-

            There are three types of cone cells in the human retina, with peak sensitivities of approximately 420 nm for the “short-wavelength” (blue) cells, 535 nm for the “medium” (green), and 565 nm for the “long” (red).

            Regarding luminance:-

            There are 6-10 million cone cells (colour) across the retina, and about 120 million rods (luminance).

            The eye is many times more sensitive to light than colour, and colour perception depends on human physiology; 8-10% of the male population exhibits red/green colour blindness due to unequal distribution or performance of cone cell types.

            That could partly explain why PenTile works for many, but not all.

        • leseki9

          I think you’re missing the point. PenTile means pixels share subpixels; it means “reducing the number of blue subpixels with respect to the red and green subpixels” https://en.wikipedia.org/wiki/PenTile_matrix_family

          Here you don’t have that; you have the opposite: an extra green subpixel. If anything, this helps reduce screen door versus tall red, green, and blue subpixels only.

      • Adrian Meredith

        Resolution is counted by the number of green subpixels, so it’s still going to look fuzzy around the edges.

    • Andrew Jakobs

      With such a PPI, who cares what the layout is? You won’t notice it anyway.

  • kontis

    We’ve already had incredibly dense displays (microdisplays) for a long time. They don’t matter and aren’t used in VR because they’re too small to achieve a wide FOV.

    A single nice number is not enough. This is similar to all the battery breakthroughs: one showstopper and it stays in the laboratory forever, or at best finds a niche use for something specific.

    • Andrew Jakobs

      Well, microdisplays were actually used in VR back in the day; the Forte VFX1, for example.

  • Andrew Jakobs

    Such a high PPI also means a very high resolution, and there’s no way GPUs will be powerful enough in the coming years to drive such displays, certainly not at consumer prices (see the sketch below).
    Also, the newer headsets already have major improvements in SDE reduction, so even a UHD panel per eye would probably be SDE-free (assuming the panel sizes don’t really increase).
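
    A sketch of the raw pixel throughput involved (hypothetical panel size and refresh rate, just to put a number on it):

    ```python
    # Raw pixel throughput for a hypothetical 1-inch-square 10,000 ppi
    # panel per eye at 90 Hz (assumed figures, purely illustrative).
    ppi, panel_in, hz, eyes = 10_000, 1.0, 90, 2
    pixels_per_eye = (ppi * panel_in) ** 2     # 100,000,000 = 100 MP
    total = pixels_per_eye * eyes * hz         # pixels per second
    print(f"{pixels_per_eye / 1e6:.0f} MP per eye, {total / 1e9:.1f} Gpx/s")
    # ~18 Gpx/s, versus roughly 0.8 Gpx/s for a 2160x2160-per-eye
    # headset at 90 Hz; a ~20x jump before any rendering overhead.
    ```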

    • dk

      1080p can be 10-20k ppi if you shrink it small enough :P (see the arithmetic below)

      It could be used the way Varjo is doing it, or like this: https://youtu.be/-pg7vyRTj2E
      But according to Varjo, 3,000 ppi for (what was it?) a 30-by-40-degree FOV or less is enough to get rid of the SDE.
      But this is just a new type of stack and production method… they’re not saying how it could be used.
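
      The arithmetic behind that point, as a quick sketch:

      ```python
      # ppi is just pixels over physical size: any resolution can reach
      # any density if the panel is small enough.
      def panel_size_in(h_px: int, v_px: int, ppi: float):
          return h_px / ppi, v_px / ppi   # width, height in inches

      for ppi in (10_000, 20_000):
          w, h = panel_size_in(1920, 1080, ppi)
          print(f"1920x1080 at {ppi:>6} ppi -> {w:.3f} x {h:.3f} inches")
      # 0.192 x 0.108 in at 10k ppi; 0.096 x 0.054 in at 20k ppi.
      ```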

    • cirby

      The whole point is that you could have a 10,000×10,000 panel, but use in-hardware upscaling to run a 2500×2500 signal. The “resolution” would be 2500×2500 for the displayed image, but the individual pixels would basically be groups of 16 (4×4).

      With clever upscaling, you could get better-than-2500 perceived resolution: absolutely zero screen door effect, and dead pixels would be so tiny you wouldn’t even see them unless there were several clumped together.
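
      A minimal sketch of the grouping cirby describes, using plain nearest-neighbor replication (“clever upscaling” would be something smarter than this):

      ```python
      import numpy as np

      def integer_upscale(frame: np.ndarray, factor: int = 4) -> np.ndarray:
          """Replicate each source pixel into a factor x factor block."""
          return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

      # A 2500x2500 RGB signal driving a 10,000x10,000 physical pixel grid.
      signal = np.zeros((2500, 2500, 3), dtype=np.uint8)
      panel = integer_upscale(signal, factor=4)
      print(panel.shape)  # (10000, 10000, 3): each logical pixel becomes
                          # a 4x4 block of 16 physical pixels.
      ```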

  • xyzs

    It’s good to see that such high pixel densities are achievable, and with OLED.

    Now, the PenTile matrix is not what VR needs.
    Make a 5K screen with a proper RGB matrix instead and everyone will be happy.