Domain-Specific Sensors

He also revealed that Reality Labs has already begun work to this end, and has even created a prototype camera sensor specifically designed for the low-power, high-performance needs of AR glasses.

The sensor uses an array of so-called digital pixel sensors which capture digital light values at every pixel at three different light levels simultaneously. Each pixel has its own memory to store the data, and can decide which of the three values to report (instead of sending all of the data to another chip to do that work).
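
Meta hasn’t published how that per-pixel selection actually works, but the basic idea can be sketched in a few lines. The snippet below is a toy model only: the three gain levels, the 10-bit saturation threshold, and the “most sensitive unsaturated capture wins” rule are illustrative assumptions, not the prototype’s real logic.

```python
import numpy as np

# Illustrative assumptions, not Meta's published specs.
GAINS = np.array([1.0, 8.0, 64.0])   # three relative sensitivities, least to most sensitive
SATURATION = 1000                     # values near the assumed 10-bit ceiling (1023) count as clipped

def select_per_pixel(captures: np.ndarray) -> np.ndarray:
    """captures: (3, H, W) raw values recorded simultaneously at the three gains.

    Each pixel keeps only the most sensitive capture that hasn't clipped,
    normalises it by its gain, and reports that single value, so one value
    per pixel leaves the sensor instead of three.
    """
    out = captures[0] / GAINS[0]                       # fallback for very bright pixels
    for level in range(1, len(GAINS)):
        unclipped = captures[level] < SATURATION
        out = np.where(unclipped, captures[level] / GAINS[level], out)
    return out

# Toy frame with a dim pixel and a very bright pixel in the same scene.
scene = np.array([[0.5, 900.0]])                       # arbitrary linear light levels
captures = np.clip(scene * GAINS[:, None, None], 0, 1023)
print(select_per_pixel(captures))                      # both pixels recovered: [[0.5, 900.]]
```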

This doesn’t just reduce power, Abrash says, but also drastically increases the sensor’s dynamic range (its ability to capture dim and bright light levels in the same image). He shared a sample image captured with the company’s prototype sensor compared to a typical sensor to demonstrate the wide dynamic range.

Image courtesy Meta

In the image on the left, the bright bulb washes out the frame, leaving the camera unable to capture much of the scene. The image on the right, on the other hand, not only captures the extreme brightness of the lightbulb’s filament in detail, it also resolves the rest of the scene.

This wide dynamic range is essential to sensors for future AR glasses, which will need to work just as well in low-light indoor conditions as on sunny days.
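
For a rough sense of how much multi-level capture could widen dynamic range, the back-of-the-envelope below assumes a single capture spans about 60 dB and that the gain levels are spaced by factors of eight (matching the toy sketch above); Meta hasn’t disclosed the prototype’s actual gain spacing or measured dynamic range.

```python
import math

# Assumed figures for illustration only.
single_capture_db = 60.0     # dynamic range of one capture
gain_levels = [1, 8, 64]     # assumed relative sensitivities, as in the sketch above

# Extra range comes from the ratio between the most and least sensitive levels.
extension_db = 20 * math.log10(max(gain_levels) / min(gain_levels))   # ~36 dB
combined_db = single_capture_db + extension_db

print(f"Extension from multi-level capture: ~{extension_db:.0f} dB")
print(f"Combined dynamic range: ~{combined_db:.0f} dB")               # ~96 dB in this toy example
```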

Even with its HDR benefits, Abrash says Meta’s prototype sensor is significantly more power-efficient, drawing just 5mW at 30 FPS (just under 25% of what a typical sensor would draw, he claims). And it scales well too; though it would take more power, he says the sensor can capture up to 480 frames per second.
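
Those figures allow a quick back-of-the-envelope check. The arithmetic below simply restates the quoted numbers; the 480 FPS figure is a naive extrapolation that assumes energy per frame stays constant, which is not something Abrash claimed.

```python
# Back-of-the-envelope arithmetic on the quoted figures.
prototype_power_mw = 5.0                                  # quoted draw at 30 FPS
fps = 30

energy_per_frame_uj = prototype_power_mw / fps * 1000     # ~167 µJ per frame
typical_power_mw = prototype_power_mw / 0.25              # "just under 25% of typical" -> a bit over 20 mW

# Pure assumption: power at 480 FPS if energy per frame were unchanged (~80 mW).
naive_480fps_mw = energy_per_frame_uj * 480 / 1000

print(f"Energy per frame: ~{energy_per_frame_uj:.0f} µJ")
print(f"Implied typical sensor draw: ~{typical_power_mw:.0f} mW")
print(f"Naive 480 FPS estimate: ~{naive_480fps_mw:.0f} mW")
```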


But Meta wants to go even further, with more complex compute happening right on the sensor.

“For example, a shallow portion of the deep neural networks—segmentation and classification for XR workloads such as eye-tracking and hand-tracking—can be implemented on-sensor.”

But that can’t happen, Abrash says, before more hardware innovation, like the development of ultra-dense, low-power memory that would be necessary for “true on-sensor ML computing.”
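
Abrash’s quote only names the idea, so here is a hypothetical illustration of what splitting a small vision network between shallow on-sensor layers and a host-side head might look like. It uses PyTorch purely for convenience; the layer sizes, channel counts, and class count are invented and do not reflect Meta’s design.

```python
import torch
import torch.nn as nn

class OnSensorFrontEnd(nn.Module):
    """Shallow layers that could, in principle, run next to the pixel array."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=4, padding=2),
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.layers(x)

class HostSideHead(nn.Module):
    """Deeper layers that stay on the main compute chip (e.g. a classifier)."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, feats):
        return self.layers(feats)

# A 1-channel 256x256 frame (65,536 values) shrinks to a 16x32x32 feature map
# (16,384 values) before it would ever leave the hypothetical sensor.
frame = torch.randn(1, 1, 256, 256)
feats = OnSensorFrontEnd()(frame)
logits = HostSideHead()(feats)
print(feats.shape, logits.shape)   # torch.Size([1, 16, 32, 32]) torch.Size([1, 4])
```

The appeal of such a split is bandwidth: in this toy example the feature map crossing the sensor interface is a quarter of the raw pixel count, which is where savings on data movement would come from.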

While the company is experimenting with these technologies, Abrash notes that the industry at large is going to need to come together to make it happen at scale. Specifically, he says “the development of MRAM technologies by [chip makers] is a critical element for developing AR glasses.”

“Combined together in an end-to-end system, our proposed distributed architecture, and the associated technology I’ve described, have potential for enormous improvements in power, area, and form-factor,” Abrash surmises. “Improvements that are necessary to become comfortable and functional enough to be a part of daily life for a billion people.”



Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • Very fascinating in theory, not always applicable in practice. Sensor data fusion exists for a reason… if you could avoid it and do most of the computation directly on the sensors, we would already be doing it today. So I guess that this approach may help, but it is not the only thing to consider

  • xyzs

    As he described, in a modern computer chip most of the energy isn’t spent on the computation itself but on moving data between transistors. I think stacked memory directly on top of the compute logic, together with new architectures, is the way to go here.
    Easier said than done, though. If that is achieved we could see a new era of power efficiency, especially now that node-size reductions are approaching atomic limits.
    Happy to see that this problem is starting to be tackled.

  • All I know is I have faith in this guy and his team. They’ve impressed me at every turn with pretty much everything they’ve done in the VR space, both inside and outside the headsets.

  • blue5peed

    Facebook blocked me from watching this video because I was seeking through it too fast. Like wtf?! Why is that even a thing.

    • benz145

      I had a similar issue while referencing it for the report. Very weird.

      • Guest

        Mad scientist has got a captive audience and no peer-reviewed citations. Have faith when in his company!

        • guest

          Yeah, like Tesla but with a quarter trillion dollars for the next couple decades. He could be the president of America by then!

  • Jose Ferrer

    Michael, if you are reading this, where are the 140-degree FOV and 30 PPD you predicted within 5 years at Oculus Connect 3 (Oct 2016)???
    And the 4K per eye??

    Cambria will be released at the end of 2022 with just half 4K (i.e. 2160×2160 per eye) and a miserable FOV.

    Please, meet your predictions before talking about the future again.

    • Kraut

      hmm.. has he said that ALL these features will be seen at once in one headset?

      we have 140-degree FOV
      we have 4K per eye
      we have 37 PPD

      but not all in one headset so far

      • Jose Ferrer

        Yeah, he was getting us hyped in 2016 predicting that ALL of that would be in future Oculus headsets. Unfortunately most of the Oculus founders left when Facebook didn't like PC-VR

        • brandon9271

          He wasn’t wrong about the pace of technology. Those things are here. It’s not really his fault the market or corporations have different ideas. Such a headset could come out tomorrow but it would be expensive as hell and be for “enterprise.”

          • Jose Ferrer

            Big corporations can be very wrong as well. One has to be brave enough to state their ideas even if they are different from the CEO's. A successor to the Rift like Half Dome with 4K per eye and 140-degree FOV (without the varifocal thing) and a DP cable would sell like candy at the school gates.

          • kontis

            A 140-degree FOV is a huge problem to render efficiently in a rasterizer while preserving resolution in the center.

            And purely path traced games won’t be a thing any time soon.

            So this isn’t only an HMD hardware challenge. It was already a well-known problem in the DK1 era.

    • dk

      all that running on XR2 is tricky… that, plus price and production, are the limiting factors

      looks like Cambria might be as high as $1200, was delayed by a year, and still doesn’t have everything… and that is still subsidized

  • Alexander Grobe

    Hardwired algorithms cannot be updated. At the current pace of progress in computer vision and pattern recognition, hardware will become obsolete very fast.

  • Chris Leathco

    It’s a cool idea, but I am not sure our current tech level can build something at a consumer price point that would be profitable. Batteries are heavy, even tiny ones. That weight just increases with the ability to store more charge. That’s not even considering the projection system to place the AR images on the lens, the logic board to code it, memory storage, etc. I want to see research continue, but I think we are gonna need incredible advances before a consumer model becomes reality.