
March 25, 2026

AI Didn't Move the Task. It Moved the Constraint.

Everyone keeps debating prompting vs. agents vs. skills. That's the wrong conversation. The constraint moved - and most people haven't noticed yet.

Yes, this is another AI take. You've seen the pattern - someone coins a term, the timeline catches up, everyone rewrites their bio. Prompting. Agents. Skills. I'm doing it too. That's not hypocrisy. It's just how attention works.

But here's what this whole conversation keeps circling without landing on.

Every few months, a new vocabulary captures the discourse. Right now it's skills over agents. Before that it was agents over prompts. Before that it was prompts over everything.

Each wave isn't wrong exactly. There are real distinctions between these things. But watch what the argument is actually about: it's always about technique. About the right way to construct an interaction with the model. About methodology at the task layer.

And that's the tell.

When the field keeps debating the same thing with trending buzzwords, it usually means something deeper hasn't been named yet. The debate about how to do the thing is louder than the debate about what the thing is actually for. People are optimizing a layer that no longer bottlenecks them - because that's the familiar place to have expertise. That's where authority used to live.

The jargon cycle isn't progress. It's displacement activity.

The Constraint Moved

For most of human history, the limiting factor in getting things done was execution. You might have a clear mental model of what needed to happen - but you still had to do it yourself, or manage the people doing it, or build the systems to do it at scale. Zoom-out thinking was a luxury. It cost time you didn't have. The smartest people in most fields learned to operate efficiently at the task level because that's where the work actually lived.

AI didn't just make certain tasks faster. It moved the bottleneck.

Execution is now accessible in a way it has never been before. Not free, not perfect - but accessible enough that it stops being the thing holding you back. And when the bottleneck moves, the leverage point moves with it.

If execution is no longer what limits you, then the mental model IS the work. How you frame the problem. What context you bring to it. What you're actually trying to solve versus what you think you're trying to solve. These were always important. Now they're primary.

This is why the agents vs. skills debate keeps feeling slightly off. It's two people arguing about hammers vs. drills while the real question is what they're trying to build.

This Has Happened Before

This isn't new. It's a pattern.

When calculators became cheap and widespread, the insight wasn't "learn calculator techniques." The insight - which took years to fully absorb - was that mathematical intuition suddenly mattered more than computational speed. Knowing what to calculate, and why, and whether the answer made sense - that became the job. The constraint moved from arithmetic to judgment.

The people closest to the old tools were often the last to see it. Not because they were slow - but because proximity to execution made it harder to see execution as the thing that had changed.

The same shift happened in design when digital tools collapsed production costs. In music when distribution became essentially free. In writing when publishing did. Each time, a wave of discourse about the new tools. Each time, a slower realization that the tools weren't the point - the model you brought to them was.

AI is that shift, happening faster and across more domains simultaneously than any previous version of it.

The Model Is the Leverage

This matters beyond technology. Beyond software, beyond startups, beyond anyone who would describe themselves as working in AI.

A surgeon thinking about patient outcomes. A policy analyst framing legislation. A teacher designing a curriculum. A founder deciding what to build next. The mental model you bring is increasingly what separates good outcomes from average ones.

The question isn't which AI framework you use. It's whether you've zoomed out far enough to see what problem you're actually solving. That's a harder question. It doesn't fit in a LinkedIn post as neatly as "skills over agents." It doesn't generate the same engagement.

But it's the question that compounds.

The next vocabulary wave is already forming. Someone is coining the term right now. It'll be partially right, partially useful, and mostly a rearrangement of the same underlying confusion.

Don't chase. Sit with it. Apply.