ai · engineering · organisations

Everyone Is Becoming an AI Engineer (Whether They Know It or Not)


There are two competing visions of how AI will reshape work.

Most organisations are starting with one. I think there's a better sequence.

The Delegation Model

This is the vision most leadership teams have internalised. It looks familiar because it maps onto existing hierarchies:

Manager → AI agents → work gets done

In this model, AI agents are essentially digital workers. You delegate tasks to them. A new specialist role emerges: the "AI engineer", who builds and maintains these agents. Everyone else carries on as before, just with some robot colleagues.

It's a comforting vision. It fits neatly into org charts and business cases.

And it's not wrong, exactly. But I think it's incomplete without something else happening first.

The Amplification Model

There's another way to look at this. Instead of AI sitting beneath you in a hierarchy, it sits between you and your work:

You ↔ AI ↔ work

In this model, "AI engineer" isn't a job title. It's a literacy requirement.

A software developer is still a software developer. A designer is still a designer. They're just working with power tools instead of hand tools.

A carpenter with a nail gun is still a carpenter. They still need to understand joinery, materials, and design. They're just faster, and they can take on bigger projects.

The skill ceiling goes up. It doesn't disappear.

The Literacy Problem

Here's what I'm seeing in practice: people aren't close enough to the physics of how AI actually works.

They don't understand context windows. They don't know what context engineering means, or why it matters. They can't explain why one agent might need to talk to another agent, or when that's useful versus when it's wasteful.

So they try things based on what they've been told AI "should" be able to do. They're disappointed when it doesn't work. They conclude that AI isn't ready, or isn't useful for their domain.

But the problem isn't the technology. It's the mismatch between expectation and capability.
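One piece of those physics, made concrete: a context window is the fixed budget of tokens a model can attend to in a single request, and context engineering is deciding what fills it. Here's a minimal sketch of that trade-off in Python; the ~4-characters-per-token heuristic and the numbers are assumptions for illustration, not any particular model's limits.

```python
# Illustrative sketch of context-window budgeting. The token estimate is a
# crude heuristic (~4 characters per token); real models ship tokenizers.

CONTEXT_WINDOW = 8_000        # assumed total token budget for one request
RESERVED_FOR_REPLY = 1_000    # leave headroom for the model's answer


def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token."""
    return max(1, len(text) // 4)


def pack_context(system_prompt: str, documents: list[str], question: str) -> list[str]:
    """Greedily include documents until the budget runs out.

    This is context engineering in miniature: the model only ever sees
    what fits, so choosing what to include *is* the job.
    """
    budget = CONTEXT_WINDOW - RESERVED_FOR_REPLY
    budget -= estimate_tokens(system_prompt) + estimate_tokens(question)

    included = []
    for doc in documents:  # assume callers sort by relevance first
        cost = estimate_tokens(doc)
        if cost > budget:
            continue       # dropped silently; the model never knows it existed
        included.append(doc)
        budget -= cost
    return included
```

Once you've watched documents get silently dropped because the budget ran out, a whole class of "the AI ignored half my input" disappointments stops being mysterious.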

Imagination-First vs Capability-First

Most AI initiatives I see across the industry follow this pattern:

  1. Leadership identifies a use case that "should" benefit from AI
  2. Teams are commissioned to make AI fit that use case
  3. Nobody deeply understands the underlying mechanics
  4. It doesn't work reliably
  5. Time, money, and goodwill are wasted
  6. Everyone moves on to the next shiny thing

This is imagination-first thinking. You start with what you want AI to do, then try to bend reality to match.

The alternative is capability-first thinking:

  1. People learn what AI actually is and how it works
  2. From that grounded understanding, they identify what AI can do reliably
  3. They build small, tested workflows within their actual jobs
  4. These get refined through real use
  5. Only then do proven capabilities get packaged into products or solutions

The difference is simple:

  • Imagination-first: "We need an AI solution for X"
  • Capability-first: "AI can reliably do Y. Where does Y fit?" (a sketch follows below)
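What does "reliably" mean in practice? Something you can measure before you promise anything. Here's the kind of small harness a capability-first team might write; `call_model` is a hypothetical stand-in for whatever model client you actually use, and the cases and threshold are illustrative, not a recommendation.

```python
# A minimal reliability check for one narrow capability ("Y").
# call_model is a hypothetical stand-in for your real model client.

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire up your actual model API here")


# Real inputs from the workflow, paired with the answer a human would accept.
CASES = [
    ("Invoice dated 3 March 2024, payment due in 30 days.", "2024-04-02"),
    ("Payment due 14/02/2025.", "2025-02-14"),
    # ... more examples drawn from real work, including the ugly ones
]


def is_reliable(threshold: float = 0.95) -> bool:
    """Run the capability over known cases and report the success rate."""
    passed = 0
    for text, expected in CASES:
        answer = call_model(f"Extract the due date as YYYY-MM-DD: {text}")
        if answer.strip() == expected:
            passed += 1
    rate = passed / len(CASES)
    print(f"{passed}/{len(CASES)} correct ({rate:.0%})")
    return rate >= threshold
```

If the harness says 60%, you've learned something cheaply. If it says 98%, you have a Y worth packaging, which is exactly the hand-off described in the next section.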

Two Tracks, In Order

Organisations need both exploration and productisation. But the order matters.

Track 1: Learning

Give people embedded AI capabilities. Let them experiment within their actual workflows. Build genuine understanding of what works and what doesn't. Keep stakes low and learning density high.

Track 2: Packaging

Take things that have been proven to work reliably, by real people doing real work, and turn them into repeatable solutions.

The mistake is skipping Track 1. Jumping straight to productisation based on vendor promises and conference keynotes. Building infrastructure for use cases that never had a realistic chance of working, because nobody understood the constraints.

The Opportunity

If you take the amplification model seriously, something interesting happens. Your entire workforce becomes a testing ground.

Thousands of people, learning what AI can and can't do, in the context of their actual jobs.

That's not a cost centre. That's a competitive advantage.

The organisations that win won't be the ones with the biggest AI budgets or the most agents deployed. They'll be the ones whose people genuinely understand what they're working with.

Everyone is becoming an AI engineer. The question is whether your organisation helps them become a good one.