The agentic org chart is two people deep

Here's the thing about org charts: they're not just administrative artefacts. They're a theory of work. Every box and line is an assumption about who needs to tell whom what to do, and why. Draw a tall pyramid and you're saying that execution is expensive, specialised, and hard to coordinate without layers of people translating intent down the chain. That assumption has been true for most of human organisational history.

It's ceasing to be true right now.

I've spent the last couple of years working with companies building agentic systems, and I keep seeing the same thing. The org structures people are defending were designed for a world where humans performed the bulk of execution work. That world is changing faster than most leadership teams are willing to admit.

What the old shape assumed

The traditional hierarchy exists because coordination is hard. A CEO has a vision, but they can't personally talk to every engineer, every support rep, every analyst. So you add managers. Then managers of managers. Each layer's job is to take intent from above and translate it into specific tasks for the people below. Useful work. Genuinely useful, when the execution layer is made of people who need direction, context, and feedback to function.

Middle management, in this model, is the translation layer. The person who turns a strategic priority into a sprint backlog. The team lead who turns a support philosophy into a ticket-handling process. The legal coordinator who turns a GC's instincts into a review checklist. Translation work. Real work.

But here's what's shifted. AI agents translate intent into tasks themselves. You give a well-configured agent system a goal, you give it context and constraints, and it decomposes the problem, plans the steps, and executes. Not perfectly, and not without supervision. But the supervision needed is much lighter than what it replaces, and it operates at a completely different altitude. You're not managing tasks. You're managing outcomes.
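The shape of that intent-to-task translation can be made concrete. Here's a toy sketch in Python: in a real system a model would do the planning, but a lookup table stands in for it here so the control flow is visible. The goal, the playbook, and the step names are all invented for illustration.

```python
# Toy sketch of the translation layer an agent performs: goal in,
# decomposed steps out, with only supervision points surfaced to a human.
# A real system would call a model to plan; a lookup table stands in here.

def plan(goal: str) -> list[str]:
    """Decompose a goal into concrete steps (a model call in a real system)."""
    playbook = {
        "ship weekly changelog": [
            "collect merged PRs",
            "draft summary",
            "request human review",   # a supervision point, not a task
            "publish",
        ],
    }
    return playbook.get(goal, ["escalate: no playbook for this goal"])

def run(goal: str) -> list[str]:
    """Execute the plan, flagging only the steps that need a human."""
    log = []
    for step in plan(goal):
        if step.startswith(("request human", "escalate")):
            log.append(f"NEEDS HUMAN: {step}")   # outcome-level oversight
        else:
            log.append(f"done: {step}")
    return log
```

The point of the sketch is the ratio: most steps run without a person in the loop, and the human's attention is reserved for the flagged ones.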

The shape that's emerging

I'd argue the natural shape of an AI-first organisation is two people deep.

A principal decides what's worth doing. This is a genuinely senior role: setting direction, holding context, making judgement calls, owning accountability. Call it founder, executive, or lead strategist depending on your context. The title matters less than the function. This person has to be good.

An operator runs the agent fleet. This person configures, monitors, and improves the systems that do the work. They catch errors. They know when to intervene. They understand what the agents can and can't handle. They're part systems thinker, part quality control, part infrastructure owner. This role is new and genuinely skilled.

Between them? Not much. The translation layer shrinks because agents handle translation.

What this looks like in practice

Take a product team. Twelve people: a product manager, a few engineers, a designer, a researcher, a couple of QA specialists, a data analyst, a scrum master, and some support roles. Standard mid-size product squad. In an agentic setup, that's closer to three: a product lead who owns the roadmap and makes prioritisation calls, an operator who runs the agent systems handling spec writing, code generation, test coverage, and analytics, and maybe a designer doing the work that still genuinely benefits from a human eye. The other nine aren't fired in a weekend. But over a two-year horizon, you're not backfilling them when they leave.

Or take a customer support team of 40. That team exists because each support interaction takes human time, and volume is high. With a well-built agent fleet, you might run the same volume with five people: an operator managing the systems, two or three specialists handling the genuinely complex cases the agents escalate, and a lead who owns the overall customer experience and makes calls on policy. The other 35 positions don't get posted again.

Legal review is instructive. One general counsel with a well-configured agent system can cover a review workload that previously kept three or four junior lawyers busy. The GC supplies judgement: what's a real risk, what matters given the company's specific situation, when to push back on a clause. The agent handles the reading, summarisation, clause comparison, and first-pass flagging. The ratio of output per senior person goes up sharply.

None of these examples require AGI. They're possible right now, with current systems, for any company willing to build them.

What the remaining jobs look like

The jobs that persist are senior jobs. That sounds like good news, and in some ways it is. The work that remains is more interesting: more judgement, more ambiguity, more real decisions. You're not approving PRs or triaging tickets. You're deciding what to build, who to serve, what risks to carry, when to stop.

The operator role is genuinely new and underappreciated. These people need to understand how to write effective agent instructions, how to design evaluation systems that catch errors before they compound, how to debug agentic pipelines, and how to know when a workflow is trustworthy enough to run with less oversight. That's a real craft. We don't have great names for it yet, and we're certainly not training people for it in business schools.
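One piece of that craft, the evaluation system, is easy to sketch. The idea is that every agent output passes through cheap, explicit checks before it flows downstream, so errors are caught before they compound. This is an illustrative minimum, not a real framework; the check names and the length threshold are invented.

```python
# Illustrative sketch of an operator's evaluation gate: agent output is
# screened by cheap checks before anything downstream consumes it.
# All check names and thresholds here are invented examples.

from typing import Callable, Optional

Check = Callable[[str], Optional[str]]   # returns an error message, or None if OK

def not_empty(output: str) -> Optional[str]:
    return "empty output" if not output.strip() else None

def no_placeholder(output: str) -> Optional[str]:
    return "unresolved placeholder" if ("TODO" in output or "[TBD]" in output) else None

def under_limit(output: str) -> Optional[str]:
    return "output too long" if len(output) > 2000 else None

def gate(output: str, checks: list[Check]) -> list[str]:
    """Run every check; an empty list means the output may proceed."""
    return [err for check in checks if (err := check(output)) is not None]

CHECKS = [not_empty, no_placeholder, under_limit]
```

Real evaluation layers are richer than keyword and length checks, but the design choice is the same: the gate is explicit, versioned, and owned by the operator, so "trustworthy enough to run with less oversight" becomes a measurable claim rather than a feeling.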

The jobs that struggle to persist are coordination jobs. Roles that exist primarily to translate someone else's intent into someone else's action. Roles that exist to write status updates, run standups, own the ticket queue, or chase approvals. I don't say this with relish. A lot of good people have built careers in exactly these roles, and they'll need to reorient. The honest thing is to name it.

Why companies are slow to redraw the chart

There's an obvious political reason. The people who run companies often got there by being good at navigating large, complex organisations. They have personal networks built into the existing shape. Shrinking the hierarchy shrinks their domain, and in many cases, their identity.

There's also a subtler reason. Rebuilding a team around agentic systems requires making explicit what was previously implicit. A good middle manager carried an enormous amount of tacit knowledge about how things get done, what the real priorities are, and which rules are actually enforced. When you remove that layer, you have to encode that knowledge somewhere. Usually in agent instructions and evaluation criteria. That work is unglamorous, takes time, and requires the principal to actually articulate things they've never had to write down before.
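What "encoding that knowledge" looks like in practice is mundane: rules a manager used to carry in their head, written down as explicit instructions and escalation criteria. A sketch, with every rule, threshold, and keyword invented for illustration:

```python
# Sketch of tacit knowledge made explicit: the kind of unwritten rules a
# middle manager carried, rendered as agent instructions and escalation
# criteria. Every rule and trigger below is a hypothetical example.

AGENT_BRIEF = {
    "priorities": [
        "enterprise-customer tickets before free-tier tickets",
        "anything mentioning data loss is urgent regardless of tier",
    ],
    "hard_rules": [
        "never promise a ship date",
        "never quote a discount above 10% without escalation",
    ],
    "escalate_when": [
        "customer mentions legal action",
        "refund requested above 500 USD",
    ],
}

def must_escalate(ticket: str) -> bool:
    """First-pass triage against the explicit rules (keyword match standing
    in for a model judgement in this toy version)."""
    triggers = ("legal action", "data loss")
    return any(t in ticket.lower() for t in triggers)
```

None of this is hard to type. What's hard is that the principal has to decide, in writing, which of the old implicit rules were real. That's the unglamorous work the paragraph above describes.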

Most organisations find it easier to add an AI tool to the existing structure than to restructure around it. So they do. And they get modest gains, and tell themselves they're keeping up.

They're not.

The question worth sitting with

I've seen a few organisations move fast on this. They tend to share one quality: the people at the top understand what agents can actually do, not in the abstract but in detail, and they've built the willingness to act on what that implies.

The companies that are stalling tend to share a different quality. Everyone can see what's happening. The senior people know that the translation layer is becoming automatable. The middle managers know their roles are changing. Nobody wants to be the one to say it out loud.

The question isn't whether agentic systems will flatten your organisation. That's in motion regardless. The question is whether your existing team already knows it's happening, and is quietly hoping no one notices.