In 2015, the predictions were breathtaking in their confidence. Elon Musk promised full self-driving capability by 2018. Lyft's co-founder declared the majority of rides would be autonomous by 2021. Google's self-driving car project -- later Waymo -- was logging millions of test miles, and industry analysts were projecting that autonomous vehicles would be a $7 trillion market by 2050. The narrative was irresistible: human drivers were dangerous, inefficient, and soon to be obsolete.
It is now 2026, and I still cannot hail a self-driving taxi in any city where I actually need one.
The gap between what was promised and what has been delivered is not merely a story of delayed timelines. It is a story of fundamental misunderstanding -- about what driving actually requires, about how technology scales, and about the difference between a demo that works 99% of the time and a product that must work 99.99% of the time. That last fraction of a percent has consumed billions of dollars, destroyed multiple companies, and humbled some of the most formidable engineering organizations on the planet.
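To see why that last fraction of a percent is so punishing, it helps to run the arithmetic at fleet scale. The sketch below uses purely illustrative assumptions -- a hypothetical annual fleet mileage and an assumed rate of decision points per mile, neither drawn from any real deployment -- but the orders of magnitude are the point.

```python
# Back-of-envelope arithmetic on why 99% vs 99.99% matters at fleet scale.
# Every number here is an illustrative assumption, not a measured figure.

FLEET_MILES_PER_YEAR = 50_000_000  # hypothetical annual fleet mileage
DECISIONS_PER_MILE = 10            # assumed situations per mile requiring a correct decision

def expected_mishandled(per_decision_reliability: float) -> float:
    """Expected number of mishandled situations per year across the fleet."""
    decisions = FLEET_MILES_PER_YEAR * DECISIONS_PER_MILE
    return decisions * (1.0 - per_decision_reliability)

for reliability in (0.99, 0.9999, 0.999999):
    print(f"{reliability:.6f} -> {expected_mishandled(reliability):12,.0f} mishandled situations/year")

# 0.990000 ->    5,000,000 mishandled situations/year
# 0.999900 ->       50,000 mishandled situations/year
# 0.999999 ->          500 mishandled situations/year
```

Each hundredfold gain in per-decision reliability buys a hundredfold reduction in incidents, and each successive hundredfold is harder to find than the last.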
The Demo Problem
The autonomous vehicle industry has always been extraordinarily good at demos. In controlled conditions -- clear weather, well-mapped roads, sparse traffic, predetermined routes -- self-driving cars are genuinely impressive. They brake smoothly, navigate intersections, yield to pedestrians, and maintain lane discipline with a precision that most human drivers cannot match.
The problem is that driving does not occur in controlled conditions. It occurs in construction zones where lane markings have been painted over three times. It occurs on roads where a traffic light is obscured by an overgrown tree branch. It occurs when an unmarked police car is directing traffic using hand gestures that violate the standard signal vocabulary. It occurs when a plastic bag is tumbling across the highway and the system must decide in milliseconds whether that object is a bag or a child.
The industry term for these situations is "edge cases," and the fundamental insight that has reshaped the entire field over the past decade is this: edge cases are not rare. They are the normal operating environment for a vehicle that drives in the real world. A human driver encounters dozens of situations per hour that require contextual reasoning, social negotiation, and prediction based on incomplete information. Each of those situations is, from the perspective of a machine learning system, an edge case.
Chris Urmson, who led Google's self-driving car project from 2013 to 2016, put it well in a moment of candor: "The problem is not that the last 10% is hard. The problem is that there's a new last 10% behind every last 10% you solve."
What Waymo Actually Achieved -- and What It Didn't
Waymo deserves credit. It is the only company that has deployed a genuinely driverless commercial service at meaningful scale. Its robotaxis operate in parts of Phoenix, San Francisco, and Los Angeles, and by most accounts they provide a functional, if occasionally awkward, ride experience. In 2025, Waymo completed over 150,000 paid rides per week across its service areas.
But look closer and the limitations are instructive. Waymo's Phoenix territory is a carefully selected grid of wide, well-maintained roads with clear lane markings, predictable traffic patterns, and minimal weather variation. San Francisco's deployment, while more ambitious, is still restricted to specific zones and operating hours. The vehicles occasionally freeze in traffic when confronted with novel scenarios and must wait for remote human operators to intervene.
More importantly, Waymo has spent an estimated $5.5 billion to reach this point, and the unit economics remain deeply challenging. Each Waymo vehicle costs roughly $200,000 to build and equip with sensors, compared to $35,000 for a typical rideshare vehicle. The LiDAR arrays, cameras, radar units, and compute hardware require regular maintenance and calibration. The fleet requires a substantial team of remote operators, map maintainers, and field technicians.
Waymo is real. But it is not the revolution that was promised. It is a carefully managed, geographically constrained, enormously expensive service that works in a handful of favorable environments. Scaling it to, say, all of New York City -- with its double-parked delivery trucks, jaywalking pedestrians, aggressive lane changes, and winter snow obscuring lane markings -- remains a problem of a fundamentally different character.
The Cruise Implosion
If Waymo represents the best-case scenario, Cruise represents the cautionary tale. General Motors' autonomous vehicle subsidiary operated a driverless taxi service in San Francisco beginning in late 2022, projecting confidence and moving aggressively to scale. Then, in October 2023, a Cruise vehicle struck a pedestrian who had just been hit by another car, then dragged her twenty feet before coming to a stop. The incident was horrific, but what followed was arguably worse: Cruise executives initially withheld the full dashcam footage from regulators, triggering a crisis of trust that proved fatal.
California suspended Cruise's driverless permits. The company recalled its entire fleet. An internal investigation revealed a culture of prioritizing speed over safety. By 2024, GM had slashed Cruise's spending, laid off roughly a quarter of the workforce, and pivoted to a more modest strategy using vehicles with human safety operators.
The Cruise story is not merely a corporate failure. It crystallized a concern that the public and regulators had been quietly harboring: when an autonomous vehicle makes a mistake, who is accountable? The driver who doesn't exist? The engineer who trained the model? The executive who decided the system was ready? The regulatory framework for answering these questions remains profoundly underdeveloped.
The Long Tail of Edge Cases
Let me describe some scenarios that remain genuinely unsolved, or at best inconsistently handled, by current autonomous driving systems.
Construction zones. Temporary signs, human flaggers, shifted lanes, and equipment encroaching on the roadway. These environments change daily and cannot be pre-mapped with the precision that most systems require. Waymo's vehicles in San Francisco have been documented stopping and waiting indefinitely when confronted with novel construction configurations.
Adverse weather. Rain degrades camera and LiDAR performance. Snow obscures lane markings and road edges. Fog reduces sensor range. Ice changes vehicle dynamics in ways that require entirely different control strategies. Most autonomous vehicle testing has occurred in Sun Belt cities for a reason.
Social negotiation. A four-way stop where all drivers arrived simultaneously. A narrow street where two cars must negotiate who backs up. A parking lot where drivers communicate through eye contact, hand waves, and subtle vehicle positioning. These interactions rely on social conventions that vary by culture, region, and individual personality. No machine learning model has convincingly replicated this capability.
Unpredictable actors. Children chasing a ball toward the street. A cyclist weaving between lanes. An intoxicated pedestrian stepping off the curb. A dog running unleashed. A vehicle driving the wrong way. Each of these requires not just detection but prediction of intent, and prediction of intent requires a model of human (or animal) behavior that goes far beyond pattern matching.
Sensor degradation. Bug splatter on cameras. Mud on LiDAR housings. Misaligned sensors after hitting a pothole. Sensor fusion systems that produce conflicting readings. A human driver with dirty glasses squints and drives more cautiously. An autonomous system with degraded sensors may not even know it is degraded.
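That last point is worth making concrete. Below is a minimal sketch -- in Python, with invented thresholds, window sizes, and object labels, and not modeled on any vendor's actual stack -- of one way a system can at least notice that something is wrong: monitor how often its camera and LiDAR pipelines agree on what they see, and flag the vehicle when agreement drifts below a floor.

```python
from collections import deque

# Minimal sketch of cross-sensor agreement monitoring. All thresholds, window
# sizes, and object labels are invented for illustration.

class SensorAgreementMonitor:
    def __init__(self, window: int = 200, alert_threshold: float = 0.7):
        self.agreements = deque(maxlen=window)  # per-frame overlap between camera and LiDAR tracks
        self.alert_threshold = alert_threshold

    def update(self, camera_tracks: set, lidar_tracks: set) -> bool:
        """Record how well the two sensors agreed this frame; return True when
        the rolling average suggests degradation (dirt, misalignment, weather).
        A production system would also wait for a full window before alerting."""
        union = camera_tracks | lidar_tracks
        overlap = camera_tracks & lidar_tracks
        self.agreements.append(len(overlap) / len(union) if union else 1.0)
        rolling = sum(self.agreements) / len(self.agreements)
        return rolling < self.alert_threshold

monitor = SensorAgreementMonitor()
if monitor.update({"ped_1", "car_7"}, {"car_7"}):  # LiDAR missed the pedestrian
    print("Sensor agreement low: slow down, request cleaning/recalibration, or pull over")
```

The hard part, of course, is not writing the monitor. It is deciding what the vehicle should do, at speed and in traffic, when the monitor fires.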
The Regulatory Maze
The regulatory landscape for autonomous vehicles in the United States is a patchwork of contradictions. There is no federal framework governing the deployment of driverless vehicles. Instead, regulation occurs at the state level, with each state adopting different rules, testing requirements, and liability frameworks.
California requires detailed incident reporting and permits. Arizona has been famously permissive, which is why Waymo launched there first. Texas has minimal specific AV regulation. New York effectively prohibits autonomous vehicle operation through existing vehicle-and-traffic law. A company seeking to operate nationally must navigate fifty different regulatory regimes, many of which were written with no conception of autonomous vehicles.
The absence of federal standards has created a bizarre situation. A Waymo vehicle that is legally authorized to operate without a driver in Phoenix would be illegal to operate on the streets of Manhattan. The same technology, the same safety record, the same vehicle -- but radically different legal status depending on which side of a state line it sits.
International regulation is even more fragmented. China is pursuing its own framework, with companies like Baidu's Apollo operating robotaxis in Wuhan and Beijing under rules that bear little resemblance to American or European standards. The European Union has begun developing type-approval regulations for autonomous vehicles, but implementation timelines extend well into the 2030s.
Tesla's Parallel Universe
No discussion of autonomous driving is complete without addressing Tesla, though it occupies a peculiar position in the landscape. Tesla's "Full Self-Driving" feature is not, by any engineering definition, full self-driving. It is a Level 2 advanced driver assistance system that requires continuous human supervision. Yet it is marketed under a name that implies autonomy, installed in over two million vehicles, and used daily by drivers who -- Tesla's legal disclaimers notwithstanding -- treat it as though it were autonomous.
The results are predictable and documented. NHTSA has investigated hundreds of crashes involving Tesla's Autopilot and FSD systems. The fundamental problem is automation complacency: humans are terrible at monitoring systems that work well most of the time. When FSD handles 95% of driving situations competently, the driver disengages. When the 5% arrives -- a scenario the system cannot handle -- the driver is checking their phone, looking at the scenery, or simply not paying attention.
Tesla's approach -- camera-only perception, massive fleet data collection, and iterative neural network improvements -- is technically fascinating and may eventually achieve genuine autonomy. But the company's persistent conflation of "impressive demo" with "ready for deployment" has, in my view, set back public trust in autonomous vehicles by years.
The Economics Nobody Wants to Discuss
The original pitch for autonomous vehicles was economic as much as technological. Remove the driver, the argument went, and the cost per mile drops dramatically. Robotaxis would be cheaper than owning a car. Autonomous trucking would slash logistics costs. The savings would be so overwhelming that adoption would be rapid and inevitable.
The reality has been considerably less convenient. The cost of the autonomous driving stack -- sensors, compute, redundant systems, mapping infrastructure, remote monitoring -- adds $100,000 to $200,000 per vehicle. That cost must be amortized over the vehicle's operational life, and current sensor lifespans are measured in years, not decades. The operational overhead of maintaining high-definition maps, monitoring fleet health, managing remote interventions, and responding to edge case incidents is substantial.
Waymo has not disclosed its per-ride economics, but independent analysts estimate the cost per ride-mile at $3 to $5, compared to $1.50 to $2.50 for a human-driven Uber. The gap is closing, but it has not closed, and the path to profitability requires not just technological improvement but massive scale -- which requires solving the very edge case and regulatory problems that have constrained deployment.
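The shape of those estimates is easy to reproduce with a back-of-envelope calculation. The figures below are round assumptions of mine, chosen only to land inside the ranges quoted above -- Waymo has not disclosed any of them.

```python
# Rough cost-per-paid-mile sketch. Every figure is an assumption chosen to sit
# inside the ranges quoted in the text, not a disclosed number.

AV_STACK_COST = 150_000               # assumed sensors + compute + redundancy, USD
BASE_VEHICLE_COST = 35_000            # typical rideshare vehicle, USD
LIFETIME_MILES = 300_000              # assumed service life
PAID_UTILIZATION = 0.55               # assumed fraction of miles with a paying rider
OPS_OVERHEAD_PER_MILE = 0.90          # assumed remote ops, mapping, maintenance, USD/mile
ENERGY_AND_INSURANCE_PER_MILE = 0.45  # assumed, USD/mile

hardware_per_mile = (AV_STACK_COST + BASE_VEHICLE_COST) / LIFETIME_MILES
all_in_per_mile = hardware_per_mile + OPS_OVERHEAD_PER_MILE + ENERGY_AND_INSURANCE_PER_MILE
per_paid_mile = all_in_per_mile / PAID_UTILIZATION

print(f"hardware amortization: ${hardware_per_mile:.2f} per mile")
print(f"all-in operating cost: ${all_in_per_mile:.2f} per mile")
print(f"cost per paid mile:    ${per_paid_mile:.2f}")
# hardware amortization: $0.62 per mile
# all-in operating cost: $1.97 per mile
# cost per paid mile:    $3.58
```

Under those assumptions the hardware is not even the dominant term; the per-mile operational overhead and the empty miles between riders are. That is why waiting for sensor prices to fall is not, on its own, a path to parity with a human-driven Uber.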
Autonomous trucking faces similar challenges, compounded by the fact that long-haul routes traverse multiple states, each with different regulations, and include loading dock scenarios that require complex low-speed maneuvering.
A Realistic Timeline
So where does this leave us? I have spent fifteen years working in this field, and I will offer what I believe is a realistic assessment, stripped of both hype and despair.
2026-2028: Waymo continues gradual expansion to additional cities, likely targeting Austin, Miami, and Atlanta -- Sun Belt cities with favorable weather and road infrastructure. Regulatory battles will remain city-by-city. Unit economics will improve but will not reach parity with human drivers.
2028-2032: Autonomous trucking achieves limited commercial deployment on specific highway corridors, likely starting with the I-10 and I-45 corridors in Texas. Hub-to-hub trucking, where vehicles operate autonomously on highways and human drivers handle the first and last miles, becomes viable. This hybrid model sidesteps the hardest urban driving problems.
2032-2035: Regulatory frameworks begin to mature. Federal standards emerge in the US. The EU implements type-approval for specific vehicle categories. Sensor costs decline by 60-70% from current levels. Truly driverless operation expands to 15-20 major US metropolitan areas.
2035 and beyond: Autonomous vehicles become a normalized part of the transportation landscape in favorable geographies, coexisting with human drivers rather than replacing them. Full replacement of human driving -- the original promise -- recedes to a horizon that is perpetually fifteen years away, much as it has been since 2015.
The lesson is not that autonomous driving is impossible. Waymo has proven that it is possible. The lesson is that the distance between "possible in a controlled environment" and "ubiquitous in every environment" is not a gap to be crossed with a few more years of engineering. It is a chasm that requires solving problems we are still learning how to define.
The self-driving future is coming. It is just coming much more slowly, much more expensively, and in a much more constrained form than anyone who watched those gleaming demo videos in 2015 was led to believe. And perhaps that is not a failure. Perhaps it is what responsible engineering looks like when the stakes are measured in human lives.
Tobias Keller is a transportation systems engineer and autonomous vehicle researcher based in Munich. He has spent fifteen years working on perception systems, sensor fusion, and safety validation for autonomous vehicles across both industry and academic research. His work has appeared in IEEE Transactions on Intelligent Transportation Systems, the Journal of Field Robotics, and Automotive Engineering International.