Strategy is what's left when code and design become free
For most of the last twenty years, the people who got things done ran the show. Engineers, designers, and delivery leads were what companies actually needed. Strategy was fine, but always downstream of execution. The best ideas in the room could still lose to someone who shipped faster. The cost of making things kept strategy in its place.
That constraint is collapsing.
When a single operator with AI agents can ship in a day what used to take a squad a quarter, something structural changes. The scarcity moves. Making stops being the hard part. What to make decides whether you win or waste a year of agent-hours on the wrong problem.
That's the entire case for strategy as a serious discipline again. Not strategy as a slide deck. Strategy as the actual rate-limiting work.
How execution became the measure of everything
Strategy fell out of fashion for a reason. Shipping software was expensive. A product team of ten, running for six months, cost real money and carried real risk. Velocity was precious. The teams that figured out how to move fast, cut scope, and ship iteratively survived. Everyone else got disrupted.
So the field optimised. Agile, lean product, two-pizza teams, sprint velocity as a health metric. Good advice. It worked. The mistake was assuming the underlying constraint would stay fixed.
What changes when making gets cheap
A concrete version. A year ago, a new onboarding flow took a front-end engineer two weeks, a designer a week, a PM coordinating throughout, plus QA and an analyst setting up tracking. Six weeks of effort across five people.
Today, a capable operator with a good agentic coding system can do a version of that in a day or two. The interface gets built. The copy gets written. The tracking gets instrumented. Review still required. Human judgement still required at key points. But the effort-to-output ratio has shifted by an order of magnitude.
Now consider the question that actually matters: which onboarding flow should you build? For which users? Solving for which behaviour? Measured how? That takes the same careful thinking it always did. Probably more, now that you can act on the answer so quickly. A bad answer used to be caught slowly, by the friction of execution. Now it gets shipped fast, iterated on fast, and compounds fast.
The skills that actually matter now
Strategy in practice splits into four things.
Foresight. The ability to read what's emerging before it's obvious. Most organisations are good at understanding the present and projecting it forward in a straight line. Foresight means catching the signals that suggest the line is about to bend. Which customer complaints are early warnings of a shift? Which competitor move signals where the market is heading? It requires wide reading, intellectual honesty, and a willingness to sit with ambiguity.
Optionality. Staying light on commitments until you have enough information to commit well. This runs against most organisational instincts. Companies want plans. Boards want roadmaps. Teams want to know what they're building next quarter. The strategist's job is to withstand that pressure, and to know which decisions are easily reversible and which aren't. Reversible decisions should move fast. Irreversible ones deserve more time.
Judgement. Deciding what's worth optimising for. Most strategy falls apart here. Companies confuse activity with direction. They measure what's measurable rather than what matters. They optimise for engagement when they should be optimising for trust, or for short-term conversion when they should be building for long-term retention. Good judgement means naming what you're actually trying to achieve, being honest about the trade-offs, and holding that line when the pressure to chase a different metric arrives.
Measurement design. Defining what "done" means clearly enough that an agent fleet can work toward it. This one's new and more demanding than it sounds. Agents will optimise hard for whatever metric you give them. If the metric is wrong, they'll optimise hard for the wrong thing, quickly and at scale. Defining good success criteria, with the right leading indicators and the right guardrails, is now strategy-level work. It used to sit in analytics teams and get treated as operational. It should sit at the top.
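To make the measurement-design point concrete, here is a minimal sketch of what a machine-checkable definition of "done" might look like: a primary metric with a target, plus guardrail metrics that must not degrade. All names and thresholds are hypothetical illustrations, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class SuccessCriteria:
    """A machine-checkable definition of 'done' for an agent-driven change.

    Metric names and thresholds are illustrative placeholders.
    """
    primary_metric: str           # what you are actually trying to move
    target: float                 # minimum acceptable lift, e.g. 0.05 = +5%
    guardrails: dict[str, float]  # metrics that must not fall below a floor

    def evaluate(self, results: dict[str, float]) -> tuple[bool, list[str]]:
        """Pass only if the primary target is hit AND no guardrail is breached."""
        failures = []
        if results.get(self.primary_metric, 0.0) < self.target:
            failures.append(f"primary metric {self.primary_metric} below target")
        for metric, floor in self.guardrails.items():
            if results.get(metric, 0.0) < floor:
                failures.append(f"guardrail breached: {metric} below {floor}")
        return (not failures, failures)

# Example: optimise activation, but refuse wins that trade away trust.
criteria = SuccessCriteria(
    primary_metric="activation_rate_lift",
    target=0.05,
    guardrails={"support_csat": 4.2, "retention_rate_7d": 0.40},
)

ok, reasons = criteria.evaluate(
    {"activation_rate_lift": 0.08, "support_csat": 4.5, "retention_rate_7d": 0.35}
)
# The activation target is met, but the retention guardrail is breached,
# so this result fails despite the headline win.
```

The design choice worth noting: the guardrails are part of the success definition itself, not a separate review step. An agent fleet pointed at `criteria` can't claim a win by sacrificing a metric the strategist has named as non-negotiable.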
What most companies are still doing
I've talked to a lot of product leaders this past year. Most are genuinely excited about what AI systems can do. They're using them to accelerate existing teams. They're shipping faster, which feels good.
Shipping faster only helps if you're heading in the right direction. If your strategy is fuzzy, moving faster gets you to the wrong place sooner. The discipline of knowing which direction to point hasn't improved at the same rate as the speed at which you can move.
A few companies get this. Anthropic is unusually deliberate about which capabilities to build and which to hold back, and why. That's strategy work at the highest level, and it will likely determine their position over the next decade more than any individual product decision.
The companies that struggle will be the ones treating AI as a speed upgrade without asking what they're speeding toward.
Execution was the last war
Execution still matters. Ships still need to leave the harbour. But "we're great at shipping" is no longer a differentiated position. It's table stakes, the same way "we have electricity" stopped being a selling point once everyone had electricity.
The question that separates good companies from great ones is shifting back to what you're building, for whom, and why it matters. That used to be a question you could answer loosely because execution constraints kept you honest. Bad ideas got filtered out by resource scarcity. Take away the constraint and the question gets sharper.
Most companies are still talking about execution as the differentiator. That was the right answer for a long time. It was the last war. The next one is about knowing what to build in the first place, and most companies haven't started preparing for it yet.