There is a dangerous illusion at the heart of the Agentic AI revolution – an illusion so compelling that many of its architects are too entranced to see the cliff edge they’re marching us toward. It is the belief that we can automate the bottom while preserving the top. That we can replace the juniors, the assistants, the interns, the entry-level analysts and copywriters and designers – and still somehow end up with expert strategists, creative directors, senior engineers and C-suite leaders a decade from now.
This is, to put it bluntly, nonsense.
It is the equivalent of burning the fields in spring and expecting a harvest in autumn. And the people most aggressively pushing for this machine-mediated future don’t seem to understand their own business model, or how the fragile ecology of human expertise actually works.
The Vanishing Ladder
Every skilled practitioner in every industry – from architecture to medicine to film-making to finance – learns through doing. Through apprenticeship. Through repetition. Through error.
Junior doctors do night shifts. Junior lawyers draft boilerplate. Junior designers obsess over margins. Junior journalists sit through town council meetings. Junior data analysts clean filthy datasets. These are not inefficiencies to be eradicated. They are crucibles. They are the furnace in which good judgment is forged.
But the Agentic AI mindset sees these as “low-value tasks” – the perfect ground for automation. “Let the machine do the boring bit,” they say, “free the human to focus on high-level thinking.” What they miss is that high-level thinking emerges from low-level doing. It is not innate. It is accumulated.
You don’t get a top strategist who has never written a deck.
You don’t get a principal engineer who has never debugged legacy code.
You don’t get a great novelist who has never struggled with a terrible first draft.
Remove the rungs at the bottom of the ladder, and no one climbs to the top.
The Fantasy of the Fully Formed Expert
There is an implicit fantasy underlying the Agentic AI push: that experts can be preserved like wine, or 3D-printed on demand. In reality, expertise is more like a muscle – it needs constant tension and development. But more dangerously, it needs roots. And those roots are put down in the minor leagues.
This is where the philosophical provocation begins. The people pushing Agentic AI often frame their narrative through the lens of utilitarianism – the idea that if a machine can do a task faster, cheaper, or more efficiently, then it is morally and economically right to let it.
But this utilitarian argument assumes a very narrow definition of value – immediate productivity. It does not account for latent value, or future capacity. In Aristotle’s terms, it is concerned only with actuality, not with potentiality.
When we cut junior roles, we are cutting away potential. We are erasing the conditions under which future excellence might emerge. The field might look clean. But it is sterile.
False Economies and Invisible Debt
Let’s make this more real.
In 2023, a major advertising agency proudly unveiled its new AI copy assistant. It could generate first drafts, social posts, product descriptions – all the “low-level” stuff. The agency began cutting junior copywriter roles. Six months later, the senior creatives complained: “The quality of the work has dropped.” Why? Because there were no juniors to bounce ideas off, no one to catch a tone error before it reached the client, no one to correct the AI’s slightly-off rendering of the brand voice. Senior staff found themselves redoing more work, not less.
They had “saved” money. But like all false economies, it came at a cost they hadn’t measured: cognitive load, creative friction, time. Their human pyramid had lost its base. And without a base, the top wobbles.
In the world of software engineering, we see similar effects. GitHub Copilot, ChatGPT, and other code-completion agents have radically accelerated the output of junior developers. But those same juniors now often ship code they don’t fully understand. One senior engineer I spoke to said, “We’re creating a generation of devs who can paste but not parse.” They can prompt, but not debug. Worse, they can’t design systems – because they never built one from scratch.
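To make “paste but not parse” concrete, here is a minimal, hypothetical sketch in Python – not from any real assistant, just the kind of snippet one might plausibly suggest. It passes a quick demo yet hides a classic bug that only debugging experience teaches you to spot:

```python
# Hypothetical illustration of "paste but not parse": this function looks
# reasonable and works on the first call, but the mutable default argument
# is shared across every call -- a classic Python pitfall that accepting a
# suggestion at face value will never surface.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("draft"))   # ['draft']
print(add_tag("final"))   # ['draft', 'final'] -- state leaked between calls

# The idiomatic fix, obvious once you have been burned by it at least once:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

A developer who has debugged this failure once will never ship it again. A developer who has only ever accepted suggestions has no way of knowing it is there.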
The Apprenticeship Paradox
The heart of the issue is what we might call the Apprenticeship Paradox: the tasks most easily automated are precisely the tasks most essential to learning.
An apprentice carpenter sweeps the floor, but while sweeping, they watch. They listen. They learn about the grain of wood, the order of operations, the rhythm of the shop. A young editor marks up transcripts and learns cadence, timing, pacing. An intern manages the dull Excel sheets and slowly begins to glimpse the patterns that make a business run.
These are not inefficiencies. They are scaffolding. Remove them, and the edifice collapses.
This was exactly the argument made by philosopher Matthew Crawford in Shop Class as Soulcraft – a beautiful and underrated book about the cognitive dignity of manual and skilled labor. Crawford argued that the erosion of hands-on learning in favor of “knowledge work” had created a crisis of competence. He wasn’t writing about AI. But he may as well have been.
The Illusion of the Agent
Let’s push further into provocation.
Agentic AI, in its current form, is not a tool. It is a proxy. It doesn’t amplify human creativity – it simulates it. That’s a crucial distinction.
You don’t learn from watching a proxy perform. You learn from doing. The feedback loop is personal. Immediate. Emotional. An AI writing tool does not tell you why the headline doesn’t work. A human mentor does.
This is the real risk. Not just the automation of tasks – but the automation of feedback. The system that replaces you doesn’t teach you. It replaces you silently. It does the job without explanation, without mentorship, without reflection. And without reflection, there is no growth.
So the junior writer never becomes a senior.
The junior analyst never becomes a strategist.
The junior engineer never becomes a systems thinker.
They don’t exist.
A Business Model That Cannibalises Itself
Here’s where the business model critique comes in – and it’s a damning one.
The companies most aggressively pursuing Agentic AI often depend on deep domain knowledge. Consultancies, law firms, creative agencies, engineering firms – all built on layered expertise, institutional memory, and judgment honed over years. They are selling “seniority” as a product. But they are ripping up the pipeline that produces it.
It is a self-cannibalising model. The short-term gain in margin is offset by the long-term erosion of talent. Like a bank paying out more in dividends than it earns in interest – it works until it doesn’t.
In startups, we call this eating your seed corn. In war, we call it burning your reserves. In ecosystems, we call it collapse.
What Comes Next?
To be clear, this is not an anti-AI screed. AI, used wisely, can support learning. It can augment junior roles. It can make apprenticeship more accessible, more global, more supported. But that requires intentionality.
It requires that we design AI systems that are teachers, not just agents.
It requires that we measure learning velocity, not just task completion.
It requires that we think in decades, not in quarters.
In short, it requires wisdom. And wisdom is in short supply in the current gold rush.
If we don’t course-correct, the future is predictable. We will have systems no one understands. Expertise will become rarer, more precious, more siloed. The gap between competent generalists and true experts will widen. And as today’s seniors retire, we will find ourselves with no one left to teach – and no one left to lead.
The Final Irony
Let us end with a bitter irony.
The great selling point of Agentic AI is that it frees humans to do “higher-value work.” But the very definition of high-value work depends on mastery. And mastery depends on time, tutelage, and toil.
If we automate the apprenticeship, we don’t liberate humans. We infantilize them. We freeze them in place.
We won’t get a future of experts. We’ll get a future of prompt monkeys – humans whose only skill is knowing what to type into a box.
And when the machines need tuning? When the model hallucinates? When the client says “this just doesn’t feel right”? When the AI fails?
There will be no one left who knows how to fix it.
Not because humans are stupid. But because we got too clever for our own good – and too impatient to let wisdom grow.