On the AI jobs debate.
Date
April 2026
Service
Two sides
Client
Self
Project Overview
The public conversation about AI and knowledge work has settled into two positions.
The optimist position says AI creates more jobs than it destroys. The argument rests on the historical pattern of automation. The industrial revolution eliminated blacksmithing and produced factory work. The internet eliminated travel agents and produced web developers. Each wave traded an old category for a new one, and net employment held or grew. Every major tech executive currently on record is making some version of this argument.
The pessimist position says AI is structurally different. Previous automation waves eliminated tasks within jobs. AI targets the input itself, which is human cognition. The “just learn the new thing” response stops working when the new thing gets automated before anyone can retrain for it. Law, medicine, consulting, accounting, finance, marketing, design, engineering, and education all employ millions of people doing work that is cognitively similar to what AI already does adequately. The price of that work compresses as adoption spreads.
Both positions are partly right.
The optimist case often carries a sales shape. The companies selling tools benefit when customers believe there will still be workers to give the tools to. If the workers disappear, the customers disappear.
That alone does not refute the optimist case, but it does argue for reading the optimist stance as advocacy more than prediction. And who says advocacy can't be prediction?
The pessimist case overclaims its certainty. Historical patterns are clear only in hindsight. In the middle of the transition, the industrial revolution did not present itself as "blacksmithing down, factory work up"; factory work was not a standardized category until long after the shift was underway. The same is happening now, only faster and less evenly distributed.
Validation work is already expanding in scope, because there is a gap between the moment AI delivers complex output and the moment a person takes accountability for it. An agent does not carry the same accountability as a person: a person's livelihood depends on the work, and that dependency shapes how the work gets done. Accountability moves up the chain, in varying degrees, toward the people who build and deploy the agents. New forms of data work are emerging for the same reason. Most of these roles do not exist yet.
Some industries will hold out longer because of regulation or trust requirements. The direction is structural, but the pace and distribution are open questions. The transition will not be even.
What is already visible is useful. Anthropic has partnered with Accenture. OpenAI has partnered with McKinsey, BCG, and others. At the onset of the AI wave, that specific industry was widely written off as dead. More arrangements of this shape will follow.
1. Can we credibly start defining new roles?
2. Can we trace them back into modifications to our education process?
3. In the short term, who gets trained for the new work, and how fast? What has this looked like historically?


