Meta Aims to Double, Possibly Even Triple Smart Glasses Production This Year


Meta and EssilorLuxottica may double the expected production target for their smart glasses, according to a recent Bloomberg report.

Citing people familiar with the matter, the report says Meta has proposed increasing annual capacity to 20 million units by the end of 2026, as the company hopes to capitalize on growing consumer interest in smart glasses.

Additionally, provided demand is strong, capacity could reportedly exceed 30 million units, though Bloomberg notes talks are still ongoing.

Ray-Ban creator EssilorLuxottica noted in February 2025 that it was ramping up production capacity to 10 million annual units by the end of 2026.

Meta Ray-Ban Display & Neural Band | Photo by Road to VR

The 10 million figure already represented a significant step up from the two million units sold following the 2023 release of the first-gen Ray-Ban Meta smart glasses.

Currently, Meta and EssilorLuxottica offer two fundamental types of smart glasses: audio-only, AI-centric frames, styled in both Oakley and Ray-Ban variants, and Meta Ray-Ban Display, which includes a single full-color display embedded in the right lens.


This comes amid news that Meta is pausing the international rollout of the $800 Meta Ray-Ban Display smart glasses, which were set to arrive in the UK, France, Italy, and Canada early this year. The company maintains the pause was due to “unprecedented demand and limited inventory.”

Meanwhile, Meta is laying off around 10 percent of staff at its Reality Labs XR division, according to a New York Times report. The move is seen as a strategic shift, moving focus away from VR and its metaverse ambitions toward AI and smart glasses.


Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 4,000 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • Christian Schildwaechter

    I have no doubt that smartglasses are selling much better than VR HMDs, being way less intrusive. And given that the Meta Ray-Bans are also sunglasses, they will be a better investment even if nobody uses the smart features, while a Quest sitting on a shelf only collects dust. And I actually believe that they can be very useful, and people use them at least for taking pictures.

    But I still need to see some actual usage statistics. Quest sales were sort of inflated by parents buying them as Christmas presents for their teenage kids, so there was quite some disparity between Quest owners and active Quest users once the VR enthusiasts became outnumbered by more casual and usually younger users. Meta aiming for 10M smartglasses sold each year clearly hints that there is a lot more market potential than with VR HMDs, but says very little about how they will be used, which is quite important for the success of these devices.

    90% of the Oculus Go use was for watching movies, not a good sign for mobile VR. If 90% of the use of Meta Ray-Bans is as sunglasses, and another 8% for taking pictures, I would doubt that another round of investing billions into these devices, plus the very expensive data centers for the AI driving them, will lead anywhere. Just as a reminder: in 2011 the Microsoft Kinect for Xbox 360 earned a Guinness World Record as the "Fastest-Selling Consumer Electronics Device" after selling 8M units in just 60 days. Selling something that looks like a cool idea is one thing; having it actually do what people hope it does, and keeping them using it past the initial WOW phase, is quite another.

    • Herbert Werters

      Yes, I see it exactly the same way, and I don't think smart glasses will be much more successful than VR glasses.

  • Max-Dmg

    Their software will be as sh*t as all of the rest of their software, and designed by students.

  • Foreign Devil

    I'll hold out to see what Google/Alphabet offers. I'd like prescription glasses with the best AI models that can see what I see, give me instructions when I'm doing renovations on my home for the first time, and help with live translation when I'm speaking to my Chinese in-law.

  • NL_VR

    Not interested in glasses like this until it's real AR, which is probably much further in the future.

    • silvaring

      Isn't that like being in 1996 and saying you're not interested in cell phones unless they have full-color HD displays and 3G modems? That would have been a long wait if so. I suspect you may be in the minority.

      • NL_VR

        No, because this type of smart glasses is not "AR glasses". A screen glued in front of you is not AR, not even close. You get one step closer to AR if you can anchor the screen, but that's just one step.

      • Christian Schildwaechter

        TL;DR: Waiting for tech to mature doesn't make you a luddite, and is often the much smarter move.

        My 1997 Siemens S10 active had a color display and a modem, though the resolution was 97*54 RGB pixels. With RGB here meaning a pixel could be red, green, blue or gray, with no shades of these. It still was the first to offer color at all. The GSM modem was also new, limited to 14.4kbit/sec and the reason I picked the phone. My first mobile device with an actual "hires" full color screen was a 2004 Palm Tungsten T5 with a 320*480 16bit/65K color display, the same resolution as the 2007 iPhone. The Palm could connect to a 3G phone, but I still paired it to a 2G phone that may or may not have supported EDGE with up to 236kbit/sec, I don't remember.

        So I didn't wait for FHD displays (FHD/1080p wasn't defined until 2006), nor 3G UMTS, which became available in Europe in 2004 at 384kbit/sec. Would I have recommended doing the same to others? No way. I never really used the Siemens modem, the colors just reduced the contrast. I used the hell out of my T5 and other Palms before it, but entering data with a pen either via PalmOS Graffiti handwriting or the onscreen keyboard was a pain. Connecting the Palm to my phone to go online was an unreliable disaster.

        Most people were served much better by waiting at least for the iPhone 3GS, and really for the 2010 iPhone 4, which was also the first phone used for VR with USC MxR's 2012 FOV2GO viewer, thanks to its 960*640 retina display. Being a first mover/early adopter usually comes with a high price, as the technology often isn't really useful yet, has some actual disadvantages, and regularly involves lots of configuration and debugging hassle. And early on it is rather expensive. That's fine if you have a fitting use case or treat this as a hobby, but most people are looking for something actually useful, and are often better served by waiting a decade. And it is quite astonishing how little they will often miss.

        Current smartglasses are borderline useless except for a few edge cases. If you need a wearable teleprompter, the Even Realities G1 are great. If you suffer from hearing loss, the Meta Ray-Ban Display with live transcription may be worth the money. If all you want to do is take pictures without using your phone, get a pair. If you want smartglasses to help you in the way AR glasses have been hyped for years, like showing you an instruction overlay during repairs or a virtual sofa in your living room before you order it, better wait another decade.

        I started showing people VR around 2014, back then always recommending them to wait. When the Rift CV1 released, I still mostly recommended waiting five years. The Quest 2 was probably the first HMD that I actually recommended to a few tech-savvy users, but most regular people would still be overwhelmed with guardians and passthrough and navigating the store and all the hassle involved. Maybe I'll recommend the Steam Frame to more people, if Valve manages to make it as smooth an experience as the Steam Deck.

        I'd recommend smartglasses only to people who know enough about them that they don't need my recommendation. I am personally only interested in those with displays, and there mostly those with an open source approach allowing me to influence their level of uselessness. In a few years I will start recommending them to relatives with bad hearing, but I doubt that there will be any AR glasses worth the name before 2035. The Samsung Galaxy XR is basically a pair of AI-driven smartglasses doing some AR stuff in XR HMD form, because they cannot shrink it smaller yet, and its impressive Gemini plus Google services integration is still more tech demo than actually useful, for USD 2000. Waiting is the smart move here for the vast majority of people, and they know it.

        • silvaring

          I respectfully disagree, Christian. The smart glasses we saw in 2025 are still in the teething phase, but already within a few years you can see how small and sleek they are getting. Conversational AI has achieved similar breakthroughs since ChatGPT debuted advanced voice mode just over a year ago.

          With the rate of progress we are seeing, 2035 sounds overly conservative. Of course I could be wrong, and an AR/AI winter could be coming, but I'm not sure that's the case, because much like the Nokia 6110, a suitable, world-shaping AR device only needs to do one or two things very well, in a suitable form factor, with decent battery life.

          Don't think I'm hating on VR, please. I love VR and think it will do just fine, but not as a daily driver for most of us. We already have daily drivers called smartphones, and they will be enhanced by AR, don't you think?

          • Christian Schildwaechter

            TL;DR: Miniaturization is very hard, AI will continue to improve but not as fast and without really becoming intelligent, and we still lack a lot of fundamental AR tech even without the challenge of cramming it into glasses with almost no compute power available.

            It of course all boils down to what "AR glasses worth the name" means, and how much AI/AR is expected to run locally. My test case is usually: "My kitchen sink water thing/faucet just broke, I have these tools here that I don't even know the names of and some kind of tape. How can I fix this?"

            I expect the AI bubble to burst, but mostly as a market correction with some large players like OpenAI that have no secondary income stream running out of investors, but I don't expect another AI winter, nor an AR one. AI is already very useful and powerful, just not useful and powerful enough to ever make back the trillions of USD thrown at it right now.

            Me still expecting it to take a decade is based on three factors: miniaturization, still-lacking AR functionality, and AI improvements slowing down. The latter is already somewhat visible: newer models are getting incredibly large, requiring more expensive hardware to create, with only marginal improvements. That's rather typical for new tech, with low-hanging fruit allowing for fast initial progress that then slows down. Smartphones these days don't gain big features, and VR HMD resolution made huge relative jumps in the first years only to get stuck around 2K for years, with 4K, initially expected long ago, now increasing the costs multifold.

            AI still isn't really intelligent, it is just excellent at inferring solutions from vast numbers of similar problems it found described somewhere else. But by itself it currently cannot derive/learn more systematic subjects like math, you basically have to plug in a math engine for an AI to do what a ten year old can do. Once you do, it becomes phenomenal though. So certain types of expert knowledge like the above mentioned DIY repair with a limited set of given resources will actually be quite difficult to achieve. AI can be extremely useful even without that, but I expect people to run into some hard limits that won't go away just by throwing even more hardware at it.

            Miniaturization is quite obviously a problem we already saw on VR HMDs. These are usually limited to 10-20W TDP, so what they can do is defined more by power consumption than by technological progress. Apple is the exception here, putting a desktop-class M5 into an HMD, but at the price of an external 400g battery pack and still very limited runtime. But VR HMDs are powerhouses compared to Meta's Orion prototype drawing "a few dozen milliwatts", less than 1% of what a Quest runs on. They only managed that by moving compute to a puck and only refreshing the display via push, not allowing for something like video playback or graphics animation without totally crushing battery life. And Meta said they hope to turn Orion into a product by 2030 at laptop prices, targeting a 70° FoV using silicon carbide waveguides, which would be completely sufficient for my "AR glasses worth the name" definition.

            Getting things into a glasses form factor is incredibly challenging, and today's smartglasses are basically just lightweight SoCs with microphones, speakers and cameras that delegate pretty much everything to a remote data center, with at best some local speech recognition. This of course helps, but isn't really feasible once you get into actual AR requiring object recognition and occlusion. The above-mentioned virtual sofa needs the glasses to create a 3D room model, track the current 6DoF head rotation/position and the objects in the room with their specific depth, then render the virtual sofa matching the current light conditions and cover up light sources behind the virtual sofa.

            You can do that today pretty easily with a Varjo XR-4 Focal Edition connected to a fast PC, within the 10-20ms acceptable to not be uncomfortable, at something like 1000W TDP. It should work on the 30W TDP AVP thanks to the dedicated R1 processor doing all the room and object tracking at insane speed. The Quest 3 with ~18W TDP clearly can't do it; not sure about the GXR, but probably also not. And you cannot really do that in a remote data center due to overall latency.

            A rule of thumb was that mobile GPU performance trails desktop GPUs by about a decade. The Quest currently draws about 1/50th of the XR-4 plus PC, and special optimizations like on the AVP can help a lot, but Orion being able to draw less than 1% of what the Quest currently draws puts it basically another decade behind in local processing power. And we have seen generational performance increases in chips decline due to physical limits and increasing technological challenges.

            And even if you have the XR-4 plus a powerful PC plus a fast internet connection to an AI running in a data center, you still couldn't do the "my kitchen sink faucet just broke" scenario today, because our object recognition plus scene analysis plus AI-suggested solutions simply aren't there yet, not even close. All these problems, from AI not necessarily scaling up as fast for more complex problems as before, to a glasses form factor being extremely limiting, to Meta hoping to release far more limited smartglasses at a high price and with lots of remaining technological challenges no earlier than 2030, to the real-world interpretation needed for a true AR digital assistant not even working as a tech demo, are what cause me to still place "AR glasses worth the name" at 2035.

            If what you consider AR glasses is something like Samsung's GXR in a smaller form factor, providing you with extra information when standing in front of a monument or giving you a text description with some diagrams on how to repair a faucet, then you can have that earlier, especially if you can tolerate the latency that comes with doing the actual work in a remote data center. But for me that is more like advanced Pokémon Go: a location-based service that shows things in the environment without really understanding them in any way. I'm aware that Gemini does a lot more on Android XR, but it is still mostly just presenting existing Google services in a more convenient form.

          • silvaring

            Oh believe me, I agree that true AR glasses with SLAM are probably just under a decade out. But I don't believe true AR is needed for everyone to start having smart glasses, just like I don't believe full HD screens and 3G were needed for everyone to start using cellphones.

  • Raph

    Honestly, I can’t wait for cheap Chinese alternatives.
    Meta will absolutely deserve ending up with nothing — because short-sightedness and greed always work out so well, right?

    Which is the funniest part of all this: they were VR leaders. They basically built their own console and an entire ecosystem.

    And then… they decided to be greedy idiots.

    • NL_VR

      Yep, that's because they thought that by doing this they would be far in the lead toward their end goal of putting wearable AI devices on our heads. But it's a whole different thing, and then they realized all the money spent on VR didn't put them far ahead of others. Others could literally make competing products on day one. Just look at Google and Samsung; they sat still waiting until the tech was ready. I think it made Meta turn everything upside down. They are not a gaming company.