The leverage · 9 minute read

AI is a system, not a magic box

Why first-principles thinking outlasts every tool that comes along.

There is a particular tiredness I keep meeting in coaching calls.

Across from me is someone who has done the work. They’ve read the prompt-engineering book. They’ve taken the GPT course, the Claude course, the no-code agent course. They have a paid plan on three platforms. They built something useful in February and shipped it. Then a new model came out, and a new tool, and their workflow stopped working the way it used to. The book they bought already feels dated, and now they’re back on a webinar trying to keep up.

They aren’t failing. They’re running on a treadmill that gets faster every quarter. They feel busy. They don’t feel like they’re building anything that compounds.

What they’re missing isn’t a tool. It’s a way of seeing.

The magic-box delusion

The dominant frame in nearly every “AI for professionals” piece written in the last three years is: here is the magic box. Push these buttons in this order. Memorise these prompts. Subscribe to this platform. Outputs will come out.

It’s seductive because it’s simple. You don’t have to understand anything — you just have to memorise the ritual. And it works, briefly. Long enough to feel competent.

Then the model updates. The pricing changes. The platform you invested in pivots, or shuts down, or gets bought. The prompt that produced gold last quarter now produces something flatter. And you’re back at the start, looking for the next ritual.

The treadmill is the symptom. The cause is the frame. If you treat AI as a magic box, you are forever a user of someone else’s product, and your competence has the half-life of their roadmap.

What is AI, actually?

Stop and look at what is happening when ChatGPT or Claude writes a brief for you. There isn’t one thing happening. There are five.

One: the model. A trained language model that predicts the next token, given everything before it. It does not know things in the way you know things. It continues patterns. The patterns it has been exposed to are extraordinary, and dense, and often correct. But the mechanism is prediction, not knowledge.

Two: the context. The set of words, files, instructions, and prior turns the model is allowed to see for this specific task. The model can only operate on what is in front of it. What you put in front of it — in what order, with what framing — matters more than which model you chose.

Three: the memory. What persists between runs. Some products give you stateless context (every conversation starts fresh). Others give you working memory (the model remembers things you told it last week). Memory is a design choice, not a given.

Four: the tool use. Whether the model can do things outside its head — search the web, run code, read a file, send an email, query a database. A model with no tools is a sophisticated typewriter. A model with tools is something closer to an analyst with a laptop.

Five: the judgement. What you bring. Which of three drafts is right for this client. Which question to ask differently. When to reject the output entirely. The model doesn’t have your taste. It can’t. Taste is the part you encode by deciding what good looks like.

That is a system. Five parts. The output you receive is a function of how those five parts have been arranged — usually by someone else, on your behalf, hidden behind a single-button interface.
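
If it helps to see the shape, here is a minimal sketch in Python. Every name in it (`AISystem`, `call_model`) is hypothetical, mine rather than any platform’s real API; the point is only that each of the five parts is a separate, deliberate design choice.

```python
from dataclasses import dataclass, field

def call_model(model, prompt, tools=()):
    # Stand-in for whichever provider's API you happen to use this quarter.
    return f"[{model} draft: {len(tools)} tools, {len(prompt)} chars of context]"

@dataclass
class AISystem:
    model: str                                    # 1. the model: predicts, doesn't know
    context: list = field(default_factory=list)   # 2. the context: what it sees this run
    memory: dict = field(default_factory=dict)    # 3. the memory: what persists between runs
    tools: list = field(default_factory=list)     # 4. the tools: what it can do outside its head

    def run(self, task, judge):
        # Assembling context is the design work: what goes in, and in what order.
        prompt = "\n".join([
            *self.context,
            *(f"{k}: {v}" for k, v in self.memory.items()),
            task,
        ])
        draft = call_model(self.model, prompt, tools=self.tools)
        return judge(draft)                       # 5. the judgement: always yours

system = AISystem(
    model="whichever-model-is-current",
    context=["House style: plain, warm, specific."],
    memory={"client": "prefers short briefs"},
)
print(system.run("Draft the brief.", judge=lambda draft: draft))
```

Swap the model string and nothing else in the design moves.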

Once you can name the parts, you stop being a user. You start being a designer.

Why tool-chasing breaks you

Here is the cost of treating AI as a magic box: every time the box changes, your competence resets.

A short timeline of the last three years:

  • 2023: ChatGPT goes mainstream. Prompt engineering is the discipline of the moment. Books are written.
  • 2024: Claude, GPT-4o, open-source Llama models. Function calling. The first real wave of agent platforms.
  • 2025: Reasoning models. Multimodal. Long context. Computer use. Agent frameworks reshuffle every quarter.
  • 2026: Agents are the default. Models can run tasks for hours. The platforms you bought eighteen months ago are mostly gone or unrecognisable.

If your competence has been “I know how to use this tool,” you have re-learned your work four times in three years. You are tired for a reason.

If your competence is “I know how to design a system that uses whichever model is in front of me,” you have absorbed each release as background noise. The tool is interchangeable. The system is yours.

The first principles that don’t change

Across every release I have watched, four things have stayed true. They were true in 2023. They will be true in 2027. If you build on them, you stop chasing.

Models predict, they don’t know. The output you see is a continuation of patterns, not a retrieval of facts. That single fact explains most of what feels surprising about AI — the hallucinations, the context-dependence, the way one small wording change produces wildly different results. You stop being surprised once you stop expecting a database.

Context is scarce, and it is the leverage point. Models can only see what you place in front of them, and what you choose to place there, in what order, with what framing, is by far the largest single factor in output quality. Bigger than which model. Bigger than which platform. Most of the difference between a senior expert’s AI output and a junior’s is the context they thought to provide.

Judgement is yours, and it stays yours. The model doesn’t know which of three drafts is right for your client, in this moment, in this market. You do. The work of evaluating, rejecting, and choosing is not something that gets automated by a more powerful model. The more powerful the model, the more important the taste.

Iteration beats perfection. The first output is rarely the right one. Treating AI like a vending machine — input prompt, receive deliverable — misses the discipline that actually produces good work. Ask. Read. Refine. Reject. Ask differently. The work is in the loop, not in the prompt.
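
If it helps to see that loop written down, here is a sketch. Both callables (`ask` and `judge`) are hypothetical stand-ins, not a real API; the discipline lives in the structure, not in any particular prompt.

```python
def refine(ask, judge, question, max_rounds=5):
    # ask: sends a question to your system and returns a draft.
    # judge: returns True to accept, or a string of feedback to refine with.
    for _ in range(max_rounds):
        draft = ask(question)      # Ask.
        verdict = judge(draft)     # Read.
        if verdict is True:
            return draft           # Accept: the loop did its work.
        # Refine, or ask differently, carrying the feedback forward.
        question = f"{question}\n\nRevise: {verdict}"
    return None                    # Reject entirely: no round was good enough.
```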

These four don’t move with model releases. They are first principles in the literal sense: the things from which everything else can be derived. Anchor your practice to them, and the next quarter’s release becomes a question of which part of your system it changes, not whether you have to start over.

How does first-principles thinking compound with trained instinct?

Here is the move that closes the loop with the moat.

Your trained instinct — the lived experience nobody else has — is the moat. First-principles understanding of AI is the leverage. The two compound when they meet.

Without a moat, first-principles AI knowledge produces fast generic output. Useful, sometimes. Not differentiated. The world already has more of that than it can absorb.

Without first principles, a deep moat stays bottled up. You have twenty years of judgement that nobody outside the room can access, and no way to externalise it through a system you understand. Tools are alien. AI feels like someone else’s game.

Moat × leverage = output that is unmistakably yours, produced at a scale you couldn’t have managed before. AI agents that read the room the way you read the room. Drafts that already sound like you. Decisions that carry your taste, even when you weren’t in the room.

That is what we are after. Not a faster way to make generic content. A way to take the system that’s been running in your head for twenty years and turn it into one that runs while your hands aren’t on it.

What does this look like in practice?

A few quiet shifts.

Stop reading prompt-engineering guides. They date in months. Read primary sources from the labs that build the models — the Anthropic research blog and the OpenAI research page tell you more about where the systems are going than any third-party course will.

For every project you bring AI into, name the five parts. What is the model. What is the context. What memory exists. What tools does it have. What judgement do I bring. Make those choices deliberately, not by default.

Build for a specific decision, not a generic deliverable. “Help me draft proposals” is a weak system. “Help me decide whether this prospect is the right fit for my practice, given how I’ve handled the last forty similar conversations” is the beginning of a real one.
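
The difference shows even in a toy sketch. `load_notes` here is a hypothetical stand-in for your own records, not a real library:

```python
def load_notes(path):
    # Stand-in: in practice, read your last forty call notes from wherever they live.
    return [
        "Declined: budget mismatch, wanted speed over depth",
        "Accepted: clear scope, decision-maker in the room",
    ]

weak = "Help me draft proposals."

strong = "\n".join([
    "Decide: is this prospect the right fit for my practice?",
    "Here is how I judged similar conversations:",
    *(f"- {note}" for note in load_notes("prospect_calls/")),
    "Apply the same standards to the brief below.",
])
```

The weak version asks for a deliverable. The strong version assembles your prior judgement into context before asking for a decision.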

Iterate in writing. The act of refining what you ask — and why — is the act of encoding judgement. Skip it, and you ship someone else’s work in your name. Do it properly, and the system starts to sound like you.

This is the work of the second phase of The Anchor Method Charting Course. We don’t teach tools. The tools are interchangeable. We teach the system, then encode your judgement into it. If that sounds like the route you want, the cohort is built around this exact arc.


What I want you to leave with is this:

AI is not a magic box. It is a system. Once you can name the parts, the system stops feeling alien, and the next release stops feeling like a threat. Tools change. The four first principles — prediction, context, judgement, iteration — do not.

You don’t need to chase. You need to design. The system you’re designing has a moat at the centre of it that nobody else can copy: the way you see your work. AI is the leverage that takes that moat and makes it bigger.

The treadmill is optional. Step off.

with care,
Soh Wan Wei