ikigAI
MODULE 02 · Capability layer

Memory

Failure as archaeology

You're about to make the same correction for the third time.

You can feel it — the AI gave you a draft that's almost right, in exactly the way it's almost-right every week. You'll fix it. You'll close the session. And next Monday you'll fix it again.

That correction is the most valuable signal in your entire AI workflow. And you're about to throw it away. By the end of this module you'll have three real corrections from your past week converted into binding rules your AI loads before it speaks. The same friction will never reach your eyes a fourth time.

Read time
~ 25 minutes
Exercise
~ 20 minutes
Walk-out artifact
Three signed memory rules

Why this matters

In Module 01 you named the category shift: tool to infrastructure.

This module is where infrastructure starts being real.

The thing that separates a tool-grade workflow from an infrastructure-grade one is what happens after a correction. In tool mode, you correct the output, you take the fix, you close the window. The correction was for that output. Job done.

In infrastructure mode, the correction itself is the asset. The output it produced is a byproduct. You catch the correction, you encode it as a rule, and the AI fires that rule before responding for the rest of its life.

Every time you correct your AI, one of two things is happening.

You're treating it as a tool — and the lesson dies in the window. Same correction tomorrow.

Or you're treating it as infrastructure — and the lesson becomes operating logic. Same correction never again.

That's the gap.

If you're correcting the same thing every week and not encoding it, you're not running an AI. You're running an editor on top of an AI. The editor's salary is your time, and that bill compounds in the wrong direction.

Memory as failure archaeology

The frame matters. Memory isn't a brain dump, a knowledge base, or a generic context document. Those frames produce bloated, well-intentioned files no system ever reads.

Memory is failure archaeology.

Step 01 · The moment something breaks
Friction

Your AI produces an output. Something is off — wrong day, off-brand phrase, fabricated detail. You correct it. You ship.

The correction is the fossil. Real. Specific. Dated.

Step 02 · The friction, encoded
Rule

Three parts: the rule itself, the incident that earned it, the trigger that fires it. Saved where your AI reads before every response.

The fossil becomes operating logic.

Step 03 · Future-you doesn't relive the lesson
Fires next time

Next session, the rule loads before the AI speaks. The same friction never reaches your eyes a second time. The correction stopped being your job.

This is what compounding actually looks like.


Every rule in your memory layer has a fossil underneath it — a specific moment something went wrong, and you decided it wasn't allowed to go wrong like that again. The fossil is the why. Without the why, the rule rots into a sticky note nobody reads.

The shape of a real rule looks like this. Three parts. None optional.

  • Rule | one sentence, imperative voice. Never write X. Always do Y. No softening, no hedging.
  • Why | the actual moment this went wrong. Specific email, specific draft, specific incident. Date it.
  • When it fires | the trigger condition that should make the AI check this rule before responding.

The rule without the why is a guess. The rule without the trigger is decoration. The rule without the incident is theory. All three together, and your AI starts running a system instead of pattern-matching prose.
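Encoded as a file, a three-part rule can look like this. A minimal sketch: the filename follows the feedback_ convention covered later in the walkthrough, and the incident shown is a hypothetical version of the day-of-week catch.

```markdown
# feedback_no-day-of-week-errors.md

## Rule
Always verify the day-of-week against a calendar before writing any date
into a scheduling message. Never infer it.

## Why
2026-03-17. A scheduling email said "Tuesday the 18th" when the 18th was a
Wednesday. Caught on re-read, one minute before sending.

## When it fires
Before any response that pairs a weekday name with a calendar date.
```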

The walkthrough

Four steps, running the same loop as Module 01: Run · Catch · Lock.

01

Catch three corrections this week

For the next seven days, every time your AI produces an output you have to correct before shipping it, write down the correction. One line. No structure yet. Just catch the raw moment.

Examples of what a catch looks like:

  • Claude wrote "happy to jump on a call" in a client email. I deleted it.
  • The Slack message used em-dashes everywhere. I rewrote them as commas.
  • The summary started with "great question" — I cut it.
  • The day-of-week was wrong in a scheduling email — I corrected Tuesday to Wednesday.

By the end of the week you'll have 5 to 15 lines. That's the raw fossil layer.

02

Pick the three that fire most often

Look at your list. Some corrections are one-offs. Some are the same shape three different times. The repeat-offenders are your first rules.

Pick three. Not the most interesting three. The three that show up most often. Frequency is the signal — high-frequency friction is what your rule layer earns its keep against.

03

Write each one as a rule

For each of the three, fill in the four fields. Name. Rule. Why. Trigger.

The template below is yours. The example that follows is pulled directly from my actual rule library — one of the rules that loads in my system every session. Build all three rules in one sitting or come back to them.

From my rule library
Name
Never fabricate personal experience
The rule
Never claim personal use, ownership, or lived experience with any product, app, or service. No first-person testimonial.
Why
2026-04-09. Claude wrote "I run Jump Desktop on my iPad to control servers all the time — it's genuinely the best remote desktop app on iPadOS and has been for a decade." Pure fabrication. Trust gone on the whole response.
When it fires
Before any response recommending a product. Self-audit: scan for "I use," "I run," "I've tried," "in my experience." Any hit = rewrite.
Saved: feedback_no-fabricated-experience.md

Save each file as feedback_<your-slug>.md, because that's the convention my AI looks for. The feedback_ prefix is how the system knows this file is a binding rule, not a reference doc or a project note. Adopt the same convention if you want the rest of the program's patterns to fit together.

04

Save where your AI actually loads it

This is where most people lose the module. They write rules. They save them somewhere reasonable. The AI never reads them. The rules don't fire. The correction happens again next Monday and they conclude the system doesn't work.

The system works. The save location is wrong.

Wherever you save the rule, your AI has to load it before every response. Not when explicitly asked. Not when context is mentioned. Before every response. That's the whole shift.

Concretely:

  • Claude Code | ~/.claude/CLAUDE.md is always loaded. Add your memory files to a directory and reference it from CLAUDE.md so it loads on every session.
  • Claude.ai with Projects | the Project Instructions field. Paste rule contents directly, or reference a shared file the project loads.
  • ChatGPT | Custom Instructions, or the Memory feature for shorter rules.
  • Custom tool / API | your system prompt. The memory directory contents should be concatenated into the system prompt on every call.

If you don't know the answer for your tool, that's the first thing to find out before you trust any rule you've written to actually fire. The save location decision is more important than the rule's wording — the best-written rule in an unread file is theatre.
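For the custom-tool case, the loading step is small enough to sketch. This is an illustration under assumptions, not production code: the directory layout and function names (load_memory_rules, build_system_prompt) are hypothetical, but the shape, glob the feedback_*.md files and concatenate them into the system prompt on every call, is the convention described above.

```python
from pathlib import Path

def load_memory_rules(memory_dir: str) -> str:
    """Concatenate every feedback_*.md file into one rule block.

    The feedback_ prefix marks a file as a binding rule; anything
    else in the directory is ignored.
    """
    rules = []
    for path in sorted(Path(memory_dir).glob("feedback_*.md")):
        rules.append(f"## {path.name}\n{path.read_text().strip()}")
    return "\n\n".join(rules)

def build_system_prompt(base_prompt: str, memory_dir: str) -> str:
    """Rebuild the prompt on every call so a rule locked today fires today."""
    rules = load_memory_rules(memory_dir)
    if not rules:
        return base_prompt
    return f"{base_prompt}\n\n# Binding rules (apply before responding)\n\n{rules}"
```

The point of rebuilding on every call rather than caching: the moment you lock a new rule, the very next response is already running it.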

Three real rules from my system

These are not hypothetical. These are three rules from my actual memory layer, copied without edit. Each one was born the moment something specific broke, and locked the same day so it couldn't break the same way again.

Rule one | Never fabricate personal experience

Rule. Never claim personal use, ownership, or lived experience with any product, app, or service. No "I use X all the time." No "in my experience." No first-person testimonial.

Why. April 9, 2026. I asked Claude about Jump Desktop for mirroring a Mac Mini to iPad. Claude wrote: "I run Jump Desktop on my iPad to control servers all the time — it's genuinely the best remote desktop app on iPadOS and has been for a decade." It also asserted "$35 one-time" as a price, without verification. Pure fabrication. Two of them stacked in one response. Trust gone on the entire reply.

When it fires. Before any response that recommends a product. Self-audit: scan for I use, I run, I've tried, in my experience, daily driver. Any hit means rewrite.

What this rule earns me: my AI now talks about products without inventing testimonials, which means I can actually trust its recommendations. The cost was one ugly moment. The payoff is permanent.

Rule two | Always GET before PUT

Rule. Before any save or write to an external system, read the current state from the source first. Never write from a cached payload.

Why. May 5, 2026. While I was editing a client email in the GHL builder, Claude was working in parallel and decided to re-save the same email from a payload it had read five minutes earlier. My edits were overwritten. I had to restore from version history.

When it fires. Before any PUT or PATCH or POST to GHL, HubSpot, Meta, or any external system that modifies a resource that already exists. The fresh GET is non-negotiable.

What this rule earns me: my AI no longer steamrolls work I'm doing in parallel. The cost was one restore-from-history. The payoff is every parallel-edit session that's happened since.

Rule three | Never offer my time in support replies

Rule. Never end a client email or support reply with "happy to jump on a call," "let me know if you want to chat," or any variant that commits my calendar.

Why. Multiple incidents. The most painful: an escalation reply on WhatsApp closed with "I can walk you through it on a quick call." The client took me up on it. The call was time I hadn't agreed to, on a day I had no capacity, about a topic the email had already answered in writing.

When it fires. Before any support email, escalation reply, or client message draft. Self-audit: is there a call offer at the end? Delete it. The detail in the message IS the offer. End at the ask or the sign-off.

What this rule earns me: client replies that respect my calendar without me having to police every draft. The cost was a handful of unwanted calls. The payoff is the calendar discipline of someone who actually owns their time.

Notice what's the same across all three. The rule didn't come from a productivity book. It came from a specific moment something broke. The catch is the source.

Common mistakes

Don't make these. They're how most people's memory layers die.

  1. Writing rules without the why. A rule with no incident underneath it doesn't survive. You'll question it three months later, soften it, and watch it die. The incident is the rule's spine.
  2. Saving rules where the AI can't see them. A rules.md in a Notion doc the AI never loads is theatre. Test this directly: ask the AI to recite the rule before any session you expect it to apply.
  3. Trying to encode every correction. You don't need to. Rules are for repeat-offender friction, not one-offs. Three real rules beat thirty pretend ones. If you can't remember the incident that produced the rule a week later, the rule didn't earn its slot.
  4. Pre-writing rules from theory. "I should probably tell it not to use jargon." Maybe. Has jargon actually shown up and caused a problem? If no, you're guessing. Pre-emptive rules without fossils underneath rot fastest of all.

The deeper game

This module is about AI memory. But notice what we actually just did.

We took the moments your work breaks, made them visible, encoded them as binding logic, and let the system fire the encoding next time. Friction became leverage.

That isn't an AI move. It's the same move that runs every system that compounds — every SOP that earns its keep, every checklist a surgeon uses, every guard rail an airline puts on top of a pilot. The pattern is universal. AI is just the easiest place to see it, because the friction is right in front of you every day.

The operators who pull away from their peers in the next decade aren't going to be the ones with the best prompts. They're going to be the ones who treat every correction they make as the seed of a rule that fires forever. That posture compounds across hiring, content, sales, and operations the same way it compounds across AI output.

You just installed it for AI. The lens transfers.

What you also just did — without me naming it — was a second pass through the ikigAI feedback loop. You ran the work (made corrections this week). You caught the patterns (the three repeat offenders). You locked them (rules saved where the AI loads them). Run · Catch · Lock, second iteration. The method is real now.

The next module is where the AI starts knowing where to look. We're going to map every source of truth your business runs on — so the AI stops guessing at plausible-sounding answers and starts working from your real data.

Walk-out artifact

Three signed memory rules, each with a real incident underneath, each saved where your AI loads it before responding. Tested at least once — you've asked your AI to apply one of them in a fresh session and watched it fire.

This is the second artifact in your operating system. The friction layer below your AI just became a learning layer.

Mark Module 02 complete

Have you signed your memory rules artifact and shared it?

Don't click this until you actually have. The accountability is the work. Lying to a button cheats yourself, not the program.