In June 2023, Apple unveiled the Vision Pro to a standing ovation at WWDC. Tim Cook called it "the beginning of spatial computing." The device was gorgeous -- a ski-goggle-shaped headset with dual micro-OLED displays totaling 23 million pixels, an R1 chip dedicated entirely to sensor fusion, and a passthrough camera system so advanced it could render your real environment in near-real-time, at a latency human perception could barely detect. The engineering was extraordinary. The price was $3,499. The promise was that this was the future of personal computing.
Two and a half years later, Apple has essentially suspended production of the Vision Pro. Ad spending on the product has fallen over 95% year-over-year across all major markets. The company shipped approximately 390,000 units through 2025, generating roughly $1.4 billion in revenue -- a rounding error for a company that does $380 billion a year. According to multiple supply chain reports, Apple is now redirecting its spatial computing team toward a completely different product category: lightweight AI glasses with no display at all, styled after Ray-Ban Wayfarers, targeted for late 2026.
The Vision Pro did not fail because it was a bad product. It failed because it was a brilliant answer to a question nobody was asking.
The Numbers Tell a Brutal Story
Let me lay out the market data because the numbers are more damning than any opinion piece.
In 2025, the overall VR headset market declined 42.8% year-over-year. This was not a correction after a boom -- it was an acceleration of an existing downward trend. Meta shipped approximately 5.6 million Quest units, generating $2.9 billion in revenue, but even Meta's numbers were below internal targets. Sony's PSVR2 was quietly discontinued as a standalone product line. HTC's Vive division continued to shrink. The only VR company showing growth was ByteDance's Pico, and only in the Chinese domestic market.
Apple's 390,000 Vision Pro units need to be understood in context. Apple sells over 230 million iPhones per year. It sells roughly 60 million iPads. Even the Apple Watch, which took years to find its market, was moving tens of millions of units annually within a few years of launch. The Vision Pro sold fewer units in its entire lifetime than Apple sells AirPods in a single week.
But here is the number that should haunt every VR headset manufacturer: in that same period where VR headsets declined 42.8%, the smart glasses market grew 211.2%. That is not a typo. The category that puts lightweight frames on your face with cameras, speakers, and AI -- no immersive display, no isolation from the world, no neck strain -- more than tripled in a single year.
The market has spoken. And it did not say "give me a computer strapped to my face."
What Went Wrong with Vision Pro
The Vision Pro was the most technically impressive consumer electronics device I have ever reviewed. I say this without hyperbole. The display quality was staggering. The eye-tracking was precise enough to serve as a primary input mechanism. The spatial audio was so well-calibrated that virtual sound sources felt physically present in the room. The passthrough cameras, while not perfect, were generations ahead of anything Meta or HTC had produced.
None of that mattered.
The fundamental problem was weight, heat, and social isolation -- three issues that no amount of silicon could solve within the current form factor.
At 600-650 grams (depending on the light seal configuration), the Vision Pro was too heavy for extended use. Apple clearly knew this, which is why the external battery pack existed -- moving 353 grams of battery off the head and into the pocket. But even with that compromise, most users reported discomfort after 45-60 minutes. The device pressed on the forehead and cheeks. It fogged up in warm environments. It left red marks.
Thermal management was a constant constraint. The M2 chip and R1 chip together drew 10-15 watts in active use. Dissipating that much heat in an enclosed space pressed against human skin is a physics problem with no elegant solution at this scale. Users reported warmth on the forehead, and Apple's software actively throttled performance to manage thermals, which meant the experience degraded the longer you used it.
But the deepest problem was social. Put on the Vision Pro and you disappear from the room. Yes, Apple built EyeSight -- the external OLED display that shows a digital rendering of your eyes to people around you. It was meant to preserve social presence. In practice, it was uncanny and slightly creepy. Family members, colleagues, and friends consistently reported that talking to someone wearing a Vision Pro felt like talking to someone who was not really there.
The device was designed for individual spatial computing. But humans are social animals. The most successful consumer electronics products of the last two decades -- smartphones, AirPods, smartwatches -- all enhance social connection rather than replacing it. You can use an iPhone while sitting at dinner with friends. You can take an AirPod out to talk to someone. You cannot casually dip in and out of a VR headset.
The Developer Exodus
Perhaps the most telling indicator of the Vision Pro's trajectory was the developer ecosystem, or rather, its collapse.
At launch, Apple touted over 1,000 visionOS apps. By mid-2025, the growth had flatlined. Major app developers who had initially committed resources to visionOS -- Netflix, YouTube, Spotify -- either never shipped native apps or quietly stopped updating them. The economics were simple: with fewer than 400,000 devices in the wild, the entire addressable market for a visionOS app was smaller than the population of a single mid-sized city.
I spoke with several visionOS developers over the past year. The pattern was consistent. Initial excitement about the platform's technical capabilities, followed by the dawning realization that there was no user base to sell to, followed by a quiet reallocation of engineering resources back to iOS and Android. One developer told me his visionOS app had generated $11,000 in total revenue over eight months. His iOS version of the same app earned that in a weekend.
Without developers, there are no compelling apps. Without compelling apps, there is no reason to buy the hardware. Without hardware sales, there is no developer incentive. This is the death spiral that has killed every computing platform that failed to reach critical mass, from webOS to Windows Phone to Google Glass.
The Smart Glasses Inflection
While VR headsets were declining, something remarkable was happening in a category that most tech analysts had written off after Google Glass's spectacular public failure in 2014.
Meta's Ray-Ban Smart Glasses, launched in late 2023 at $299, became a quiet hit. By 2025, Meta had shipped over 10 million units across all Ray-Ban Smart Glasses generations. The product did not try to replace your computer. It did not overlay holographic interfaces onto your world. It took photos, played music, made calls, and -- with the addition of Meta AI in 2024 -- answered questions about what you were looking at.
That last feature turned out to be the killer app nobody predicted. Point your glasses at a restaurant menu in a foreign language and ask "what should I order?" Look at a broken appliance and ask "how do I fix this?" Glance at a plant in a garden center and ask "will this survive in my apartment?" The utility was modest but genuine, and critically, it did not require you to withdraw from the world to access it.
The 211.2% growth in smart glasses was not driven by a single product. Xiaomi, XREAL, and a wave of Chinese manufacturers entered the category with products ranging from $99 to $500. Even fashion brands started partnering with tech companies on smart eyewear. The category had something the VR headset market never achieved: a form factor that people were willing to wear in public.
This is the fundamental insight that I believe Apple has now internalized: the winning spatial computing device is not the one with the most pixels. It is the one people will actually put on their face every morning.
Apple's Pivot: The Ghost of Google Glass
Multiple credible reports indicate that Apple is developing lightweight AI glasses for a late 2026 launch. The device reportedly has no display -- just cameras, microphones, speakers, and an array of sensors feeding data to Apple's on-device AI models. It looks like a normal pair of glasses. You charge it overnight. You wear it all day.
This is a complete philosophical reversal for Apple's spatial computing division. The Vision Pro was about immersion -- replacing the world around you with a computed one (or at least heavily augmenting it). The glasses are about ambient intelligence -- having an AI that sees what you see and can answer questions, take notes, identify objects, provide navigation, and translate languages without you ever touching a screen.
The irony is thick. Google attempted a strikingly similar concept in 2013 with Google Glass -- which, unlike Apple's rumored device, did include a small head-up display -- and was driven out of the consumer market by privacy concerns, social stigma, and the "Glasshole" backlash. But several things have changed since then.
First, the AI capabilities are incomparably better. Google Glass had a voice assistant that could do basic web searches. Apple's on-device AI models can understand visual context, maintain conversational memory, and perform complex reasoning. The utility gap is enormous.
Second, the form factor has matured. Ray-Ban Smart Glasses proved that you can put cameras in eyewear frames without making the wearer look like a cyborg. The social acceptability barrier has been lowered by millions of people already wearing camera-equipped glasses in public.
Third, the privacy landscape has shifted. People post their entire lives on TikTok and Instagram. The idea that glasses with cameras represent a unique privacy threat has been rendered somewhat quaint by the ubiquity of smartphones that record everything anyway.
Meta Quest 4: The Last Stand of Immersive VR?
Meta is not giving up on immersive VR. The Quest 4, expected in late 2026 at around $800, represents what may be the last major investment in consumer VR headsets. Notably, Meta is ditching physical controllers entirely in favor of hand tracking -- an acknowledgment that the controller paradigm is a barrier to mainstream adoption.
The Quest 4 specs are impressive on paper: pancake optics with near-retina resolution, full-color passthrough at 90fps, a Snapdragon XR3 chip, and a weight target under 450 grams. It will be a meaningfully better product than the Quest 3.
But I am skeptical that better specs will solve the fundamental adoption problem. The Quest 3 was already a good product. The Quest 2 before it was a good product. The technology was never the bottleneck. The bottleneck was that most humans do not want to strap a computer to their face for hours at a time. Making the computer lighter, sharper, and faster does not change the underlying behavioral mismatch.
Meta itself seems to recognize this. Mark Zuckerberg has been increasingly vocal about AI glasses as the "next computing platform," and Meta's partnership with Ray-Ban receives more marketing investment than Quest. The Quest 4 may be excellent. It may also be the last of its kind.
The Hardware Physics of Face Computers
As someone who has spent two decades analyzing consumer hardware, let me explain why the form factor problem is so intractable.
A VR headset must accomplish several things simultaneously: display high-resolution images at close range to both eyes, track head and eye movement with sub-millisecond latency, render passthrough video of the real world, process complex 3D graphics, manage thermal output, and do all of this while balancing on a human face that is sensitive to pressure, heat, and weight.
The human head tolerates approximately 300-400 grams of front-facing weight for about an hour before discomfort sets in. A pair of regular eyeglasses weighs 25-40 grams. Ski goggles weigh 150-200 grams. The Vision Pro, at 600+ grams, sat at roughly twice the comfort threshold for extended wear.
Reducing weight requires reducing battery size (less runtime), reducing display size (smaller field of view), reducing compute power (worse graphics), or breakthrough materials science that does not yet exist at consumer price points. This is not a software problem. It is a physics problem. And physics problems do not get solved by software updates.
Smart glasses, by contrast, face a fundamentally easier physics problem. Without immersive displays, without high-powered GPUs, without the need for sub-millisecond visual processing, the power budget drops from 10-15 watts to 1-2 watts. That means smaller batteries, less heat, less weight. A smart glasses frame can weigh 45-55 grams -- barely more than regular prescription glasses. You can wear them all day without thinking about it.
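The power-to-battery arithmetic above can be sketched in a few lines. The energy density, duty cycle, and runtime figures below are illustrative assumptions (cell-level numbers, ignoring pack casing and electronics, which in practice add substantial mass), not measured values:

```python
# Back-of-envelope battery mass for the two face-computer power budgets
# discussed above. All inputs are rough illustrative assumptions.

WH_PER_KG = 250.0  # assumed cell-level Li-ion energy density (Wh/kg)

def battery_grams(avg_watts: float, hours: float,
                  wh_per_kg: float = WH_PER_KG) -> float:
    """Battery mass in grams needed to sustain avg_watts for hours."""
    return avg_watts * hours / wh_per_kg * 1000.0

# Immersive headset: continuous ~12 W draw (midpoint of the 10-15 W
# range cited above) over a ~2.5-hour session.
headset = battery_grams(12.0, 2.5)

# Display-less smart glasses: ~1.5 W peak, but heavily duty-cycled --
# assume the camera/AI path is active ~10% of the time, so ~0.15 W average
# over a 12-hour day.
glasses = battery_grams(0.15, 12.0)

print(f"Headset battery (cell-level):  ~{headset:.0f} g")  # ~120 g
print(f"Glasses battery (cell-level):  ~{glasses:.0f} g")  # ~7 g
```

At real pack-level densities the headset figure roughly doubles or triples, which is consistent with why the Vision Pro's 353-gram battery had to move off the head entirely, while a few grams of cells disappear into a glasses temple.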
What Dies and What Survives
I do not believe immersive VR is dead as a technology. It has clear and valuable applications in enterprise training, surgical planning, architectural visualization, and high-end gaming. What I believe is dead is the vision of immersive VR as a mainstream consumer computing platform. The Vision Pro was the most ambitious and technically accomplished attempt to make that vision real, and its commercial failure is the strongest evidence yet that the market does not want it.
What survives -- and what I believe will define the next decade of personal computing -- is ambient spatial computing. AI glasses that sit on your face like regular glasses, that see what you see, that understand context, that answer questions, that take notes, that translate, that navigate. Not a replacement for reality but a layer on top of it. Not immersive but pervasive.
Apple's pivot from Vision Pro to AI glasses is not a retreat. It is a correction. The company spent an estimated $3.5 billion developing the Vision Pro and its underlying technology. Much of that investment -- the sensor fusion algorithms, the spatial audio processing, the computer vision models, the eye-tracking calibration -- transfers directly to a glasses form factor. The Vision Pro was, in hindsight, the world's most expensive proof of concept.
The future of computing on your face is not about putting you inside a computer. It is about putting a computer quietly into the background of your life. Apple learned this lesson the hard way. The rest of the industry should learn it from Apple's $3.5 billion mistake.
Kenji Murakami is a robotics and hardware engineering expert. He specializes in consumer electronics, sensor systems, and human-machine interaction. He has analyzed product teardowns and hardware design trends across the computing industry for over two decades.