Meta to Ship Project Aria Gen 2 to Researchers in 2026, Paving the Way for Future AR Glasses


Meta announced it’s shipping out Project Aria Gen 2 to third-party researchers next year, which the company hopes will accelerate development of machine perception and AI technologies needed for future AR glasses and personal AI assistants.

The News

Meta debuted Project Aria Gen 1, its sensor-packed research glasses, back in 2020, using the device internally to train various AR-focused perception systems before releasing it in 2024 to third-party researchers across 300 labs in 27 countries.

Then, in February, the company announced Aria Gen 2, which Meta says includes improvements in sensing, comfort, interactivity, and on-device computation. Notably, neither generation contains a display of any kind, unlike the company’s recently launched Meta Ray-Ban Display smart glasses.

Now the company is taking applications for researchers looking to use the device, which is said to ship to qualified applicants sometime in Q2 2026. That also means applications for Aria Gen 1 are now closed, with remaining requests still to be processed.

Ahead of what Meta calls a “broad” rollout next year, the company is releasing two major resources: the Aria Gen 2 Device Whitepaper and the Aria Gen 2 Pilot Dataset.

The whitepaper details the device’s ergonomic design, expanded sensor suite, and Meta’s custom low-power co-processor for real-time perception, and compares the capabilities of Gen 1 and Gen 2.

Meanwhile, the pilot dataset provides examples of data captured by Aria Gen 2, showing its capabilities in hand and eye-tracking, sensor fusion, and environmental mapping. The dataset also includes example outputs from Meta’s own algorithms, such as hand-object interaction and 3D bounding box detection, as well as NVIDIA’s FoundationStereo for depth estimation.

Meta is accepting applications from both academic and corporate researchers for Aria Gen 2.


My Take

Meta doesn’t call Project Aria ‘AI glasses’ like it does its various generations of Ray-Ban Meta and Meta Ray-Ban Display, or even ‘smart glasses’ like you might expect, even though they’re substantively similar on the face of it. The company squarely considers them ‘research glasses’.

Cool, but why? Why does a company that already makes smart glasses with and without displays, not to mention impressive prototype AR glasses, need to put out what’s substantively the skeleton of a future device?

What Meta is attempting to do with Project Aria is actually pretty smart for a few reasons: sure, it’s putting out a framework for research teams to build on, but it’s also doing so at a comparatively lower cost than outright hiring teams to directly build out future use cases, whatever those might be.

Aria Gen 2 | Image courtesy Meta

While the company characterizes its future Aria Gen 2 rollout as “broad”, Meta is still filtering projects based on merit. That gives it a chance to guide research without really having to interface with what will likely be well over 300 teams, all of whom will use the glasses to explore how humans can more fluidly interact with an AI system that can see, hear, and know a heck of a lot more about your surroundings than you do at any given moment.

AI is also growing faster than supply chains can keep up, which I think makes an artisanal pair of smart glasses all the more necessary, letting teams get to grips with what will drive the future of AR glasses: the real crux of Meta’s next big move.

Building out an AR platform that may one day supplant the smartphone is no small task, and its iterative steps have the potential to give Meta the sort of market share the company dreamt of back in 2013, when it co-released the HTC First, colloquially known at the time as the ‘Facebook phone’.
The device was a flop, partly because the hardware was lackluster, but mostly (and I don’t think I’m alone in saying so) because people didn’t want a Facebook phone in their pockets at any price when the ecosystem offered so many other (clearly better) choices.

Looking back at the early smartphones, Apple teaches us that you don’t have to be first to be best, but it does help to have so many patents and underlying research projects that your position in the market is mostly assured. And Meta has that in spades.

This article may contain affiliate links. If you click an affiliate link and buy a product we may receive a small commission which helps support the publication. See here for more information.

Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 4,000 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • Christian Schildwaechter

    Not sure about Meta's press material, but the whitepaper absolutely calls these AI/smart glasses in the very first lines of the abstract.

    Egocentric perception from wearable multimodal devices is crucial for contextual and embodied AI. Project Aria […] is a platform to advance egocentric sensing and real-world AI. […] This paper introduces Aria Gen 2, a new generation of all-day wearable smart glasses that expand the boundaries of multimodal egocentric sensing.

    One reason they don't widely proclaim Aria 2 as smart glasses may be to avoid raising false hopes that something similar will be released to consumers soon. Aria 2 doesn't do all that much out of the box; it mostly provides a lot of sensors and analysis tools, plus online services and a software development kit for creating new applications.

    The paper (linked in the article, 17 pages, somewhat dry) is an interesting read regarding how they solve issues like tracking and voice recognition in noisy outdoor environments on a much smaller compute/power budget than available in HMDs. While Aria 1 required a (wireless) connection to a PC for pretty much everything, Aria 2 can do (simplified) 6DoF head, hand and eye tracking on device for up to eight hours, so this is about moving out of the lab, and into the world.

    And sending it out to researchers and companies all over the world isn't a cost-saving measure; it puts the device into the hands of people who will use it to develop and test a vast range of applications and use cases. That's an important part of moving out of the (Meta Reality) lab and into the world, and a win-win for both sides and for users, as nobody else offers similarly capable devices in an almost-glasses form factor to try these things on today.

    Somewhat unrelated: the HTC First/Facebook phone wasn't their only flopped attempt to lure huge numbers of people onto cheap (subsidized) Facebook devices. Around the same time in 2013, they started internet_org/Free Basics with a couple of mobile carriers to bring "free" mobile internet to people in developing countries. The problem was that the free data was limited to just a few "valuable" sites like Wikipedia, one of which was of course Facebook, with the recently acquired WhatsApp another.

    After a lot of criticism they opened it up a bit, though still only to a list of Facebook-curated sites. This led India (now 1.43B people, 85% of them mobile phone users, with smartphone use expected to grow to 1B by 2026) to ban free services unless they offered net neutrality, i.e. gave no preference to single providers, which caused Facebook/internet_org to withdraw from the Indian market in 2015.

    They've tried before, they will try again.