The AI Layoff Scam: Why Companies Are Firing People for Technology That Doesn't Work Yet
Let me tell you what is actually happening in corporate America right now, because the press releases are lying to you.
In January 2026 alone, 35,105 tech workers lost their jobs. That is 82 separate layoff events. More than 1,100 people per day walking out of offices with cardboard boxes, their badges deactivated before lunch. And the reason given, again and again, in earnings calls and internal memos and carefully worded LinkedIn posts from CEOs? Artificial intelligence.
AI is coming. AI will handle it. AI makes this role redundant.
Except here is the part they leave out: the AI they are citing does not work yet.
The Numbers Don't Lie, But the Companies Do
Harvard Business Review published a piece in late 2025 that should have been a bombshell: "Companies Are Laying Off Workers Because of AI's Potential -- Not Its Performance." Read that headline again. Not its performance. Its potential. Companies are firing real people today based on a technology that might do their job someday, theoretically, if everything works out.
A Resume Builder survey found that 44% of hiring managers expect AI to be the top driver of layoffs in 2026. Not the top driver of productivity gains. Not the top driver of new product development. The top driver of firing people. That is not a technology strategy. That is a cost-cutting strategy wearing a technology costume.
The data from 2025 is even more damning. Approximately 55,000 layoffs were explicitly attributed to AI. But when researchers at Forrester dug into the specifics, they found that the AI capabilities supposedly replacing those workers were, in many cases, speculative. Demos, not deployments. Prototypes, not production systems. Roadmap slides, not shipped software.
Forrester's own projection? Half of those AI-attributed layoffs will be quietly rehired -- offshore, at lower salaries, doing the same work the "AI" was supposed to handle. That is not an AI transformation. That is labor arbitrage with better PR.
Amazon and the Art of "AI Restructuring"
Let us talk about Amazon, because they have turned this into an art form.
In January 2026, Amazon announced another round of staggering layoffs across multiple divisions, citing "AI restructuring" as the primary driver. The language was precise and deliberate: roles were being "reimagined for an AI-first future," positions were "evolving beyond traditional human workflows," teams were being "right-sized for intelligent automation."
Translation: we are firing people.
But look at what Amazon is actually deploying. Their warehouse automation, while impressive, still requires extensive human oversight. Their customer service AI handles basic queries but escalates anything complex to -- you guessed it -- human agents. Their coding assistant tools assist developers but do not replace them. The gap between the AI capability Amazon actually has and the AI capability Amazon is using to justify headcount reduction is enormous.
This is not unique to Amazon. CBS News reported in early 2026 that an increasing number of companies are "pointing to AI" when announcing layoffs, even when the connection between the technology and the eliminated roles is tenuous at best. It has become the acceptable excuse, the boardroom-approved euphemism.
The Legal Question Nobody Is Asking
Here is where it gets interesting from a labor law perspective, because I have been waiting for someone to litigate this, and the cases are starting to form.
Can you legally fire someone because a technology might do their job in the future? The answer is more complicated than most employers realize.
In most US states, employment is at-will, meaning an employer can terminate for any reason or no reason, provided it is not an illegal reason (discrimination, retaliation, etc.). So on its face, "we are replacing your role with AI" is a legal basis for termination. But there are several wrinkles.
First: the WARN Act. The Worker Adjustment and Retraining Notification Act requires employers with 100 or more full-time employees to provide 60 days' notice before a plant closing affecting 50 or more workers at a single site, or a mass layoff of 500 or more workers -- or 50 to 499, if they make up at least one-third of the site's workforce. Many of these AI-attributed layoffs are structured specifically to avoid WARN Act thresholds -- spreading terminations across locations, staggering them over time, classifying them as "restructurings" rather than "layoffs." If a pattern emerges showing that companies systematically circumvented WARN Act protections using AI as a pretext, plaintiffs' attorneys will have a field day.
Second: disparate impact. If AI-attributed layoffs disproportionately affect workers over 40, or workers of a particular race or gender, the Age Discrimination in Employment Act (ADEA) and Title VII of the Civil Rights Act come into play -- regardless of the stated reason. Early data suggests that AI layoffs are hitting two groups hardest: entry-level workers (whose roles are deemed most "automatable") and high-salary senior workers (whose compensation makes them expensive targets for cost-cutting disguised as modernization). The age distribution of those high-salary workers skews older. That is a disparate impact case waiting to happen.
Third: implied contract and good faith. In states that recognize implied contracts or a duty of good faith and fair dealing in employment, firing someone for a technology that does not yet perform the stated function could be challenged. If an employer represents to investors that AI will replace certain functions, collects the stock price bump, fires the workers, and then quietly rehires offshore labor to do the same work -- that is not an AI transformation. That is fraud-adjacent, and the terminated workers have a colorable argument that the stated reason for their termination was pretextual.
Fourth: collective action. The National Labor Relations Act protects workers' rights to engage in concerted activity, including discussing and protesting working conditions. If workers organize to challenge AI-attributed layoffs and the employer retaliates, that is an unfair labor practice. We are already seeing nascent organizing efforts among tech workers specifically around AI displacement, and the NLRB under the current administration has signaled receptivity to these claims.
The "AI Washing" of Corporate Layoffs
There is a term gaining traction among labor economists: "AI washing." It is a deliberate analogy to greenwashing -- the practice of making exaggerated or misleading claims about environmental responsibility.
AI washing works like this: a company needs to cut costs. Maybe revenue is flat. Maybe margins are compressed. Maybe the board wants a leaner operation. In previous decades, the company would announce layoffs and cite "market conditions" or "strategic realignment." The stock would take a hit. Analysts would ask hard questions.
But now there is a magic word: AI.
Announce the same layoffs and frame them as an "AI-driven transformation," and suddenly the stock goes up. Analysts praise the "forward-thinking leadership." The CEO gets a profile in a business magazine about "embracing the future." The laid-off workers are collateral damage in a narrative that benefits everyone at the top.
The incentive structure is perverse. Wall Street rewards companies for saying the word "AI" in the context of headcount reduction. It does not reward them for actually deploying functional AI. The announcement of AI replacement is worth more than the reality of AI replacement. So companies announce, collect the premium, and figure out the technology later -- if ever.
Meanwhile, the workers are gone.
Who Gets Hit Hardest
The distribution of AI-attributed layoffs is not random. Two groups are absorbing a disproportionate share of the damage.
Entry-level workers are the first targets because their roles -- data entry, basic customer service, content moderation, junior administrative tasks -- are the ones most plausibly handled by current AI tools. The word "plausibly" is doing heavy lifting in that sentence. These tools can handle simple, repetitive tasks in controlled environments. They struggle with edge cases, context, nuance, and anything that requires genuine judgment. But the plausibility is enough for a press release.
The destruction of entry-level positions has a cascading effect that companies either do not understand or do not care about. Entry-level jobs are how people enter industries. They are how skills are built, networks are formed, institutional knowledge is transmitted. Eliminate the entry level and you do not get a leaner organization -- you get an organization that cannot develop its own talent pipeline. Five years from now, these same companies will be complaining about a "skills gap" that they manufactured.
High-salary senior workers are the second target, for a simpler reason: they are expensive. A senior program manager making $180,000 is a tempting line item to eliminate when you can tell the board that "AI-augmented workflows" will absorb their responsibilities. Never mind that no AI system currently deployed can manage cross-functional programs, navigate organizational politics, or make judgment calls about resource allocation under uncertainty. The salary savings are immediate and the AI deployment is somebody else's problem.
The Rehiring Shell Game
Forrester's projection that half of AI-attributed layoffs will be quietly rehired deserves more attention than it has received.
Here is how the shell game works. Company X announces 500 layoffs, citing AI transformation. Stock rises. CEO bonus triggers. Three months later, a contractor in Manila or Hyderabad or Krakow is hired to do the exact same work, at one-third the salary, with no benefits, no job security, and no visibility in the headcount numbers that analysts track.
The AI did not replace the worker. The AI story replaced the worker. The actual work still requires a human. It just requires a cheaper human in a jurisdiction with fewer labor protections.
This is not new. Offshoring has been a corporate strategy for decades. What is new is the packaging. Calling it "AI transformation" instead of "offshoring" avoids the political and reputational backlash that offshoring attracted in the 2000s and 2010s. It is the same playbook with a shinier cover.
What Workers Can Actually Do
If you are a worker facing an AI-attributed layoff, here is concrete advice from a labor law perspective.
Document everything. Save emails, Slack messages, internal communications about AI deployment. If the company claims AI will do your job, document what AI tools are actually in use and what they can actually do. The gap between the claim and the reality is your evidence.
Request your personnel file. In many states, you have a legal right to your complete personnel file. Performance reviews, disciplinary records, and internal assessments that contradict the narrative of AI replacement are powerful evidence in any subsequent legal action.
Talk to your coworkers. Concerted activity is protected under the NLRA. If multiple workers are being laid off under the AI pretext, collective action -- whether through a union, a class action, or organized public pressure -- is more effective than individual complaints.
Consult a labor attorney before signing a severance agreement. Many severance packages include broad release clauses that waive your right to sue. An attorney can advise you on what you are giving up and whether the severance offer is adequate given the circumstances.
File for unemployment immediately. Do not let the company's characterization of your departure as "voluntary" or "mutual" go unchallenged if it was neither. AI-attributed layoffs are terminations, and you are entitled to unemployment benefits.
The Reckoning Is Coming
Here is my prediction, and I will stake my professional reputation on it: the AI layoff bubble will produce a wave of litigation within 18 months.
Age discrimination suits will be the leading edge. Disparate impact claims will follow. WARN Act violations will generate class actions. And at least one major case will center on the question of whether an employer can fire workers based on a technology that demonstrably does not perform the functions it was claimed to perform.
The legal infrastructure is not ready for this. Courts are slow, and AI literacy among judges is uneven at best. But the factual patterns are clear, the affected worker populations are large, and the plaintiffs' bar smells blood.
Corporate America has spent two years telling workers, investors, and the public that AI can do things it cannot do. They have used that narrative to eliminate tens of thousands of jobs. When the gap between the claim and the reality becomes undeniable -- and it will -- the legal and reputational consequences will be severe.
More than 1,100 people per day. In January alone. For a technology that does not work yet.
That is not a transformation. That is a scam. And the law, eventually, will catch up.
Declan Murphy is a labor law expert and employment rights advocate specializing in technology-driven workforce displacement, collective bargaining, and corporate accountability.