ikigAI | Code
MODULE 04 | Identity layer

Identity

Voice as infrastructure

A draft just went out under your name. It was good enough to ship. It was not good enough to be yours.

If you read it back tomorrow, you'd wince at one of three things — a phrase you'd never say, a structure you'd never use, or a tone that landed almost-but-not-quite like a fluent stranger doing an impression of you. The receiver probably didn't notice. Over months, they will. The brand voice that took you a decade to refine in your own writing is leaking, one slightly-off draft at a time.

This module is where the leak stops. By the end, your AI will have your voice as a loaded standard, not a target it's aiming at.

Read time
~ 25 minutes
Exercise
~ 30 minutes
Walk-out artifact
Your voice DNA file

Why this matters

In Module 02 you locked the rules that catch your AI when it breaks. In Module 03 you mapped where the truth lives. This module locks the thing that sits on top of both: the voice your AI speaks in.

Voice isn't garnish. Voice is the entire surface area of your business. Every email, every Slack message, every ad, every reply, every welcome SMS. The voice is the brand contact point your customers actually feel. If your AI speaks in a generic-assistant voice and you ship anything it drafts, you're bleeding brand consistency in micro-doses on every output that goes out.

Most operators try to fix this by tightening the prompt every time. "Write this in my voice. Sound conversational. Don't be salesy. Keep it short." That works for one draft, partially, and then resets the next session. It's a tool-grade workflow. It's editing your AI live instead of building the voice into the operating layer.

Infrastructure-grade voice is different. The AI loads your voice DNA before it speaks. The output you get is closer to your voice on the first draft than anything you could prompt for, every session, forever. Edits drop by 60 to 80 percent. The drafts stop feeling like a stranger's impression of you.

The shift: stop telling your AI how to sound. Start telling it who it is when it's working for you.

Voice DNA, in four parts

There are four things your AI needs to know about your voice. Not forty. Four.

  • Tone north star. One paragraph in plain English describing the dial. Where the AI is allowed to sit on the warm-to-blunt axis, on the conversational-to-formal axis. The exact line you don't want it to cross in either direction. The vibe a thoughtful friend would name if they read three of your emails in a row.
  • Banned phrases and patterns. The words and shapes that should never leave the AI's mouth while wearing your brand. Specific. Listed. "Great question." "Brilliant." "Happy to jump on a call." Em-dash as a separator. Whatever your equivalents are.
  • Required moves. The signature patterns the AI must hit. The structural moves your writing always makes. "Pipe separator in titles." "Acknowledgment before data." "Show the working: context → evidence → analysis → conclusion." These are the positive shape of your voice, not just the negative space.
  • Sentence rhythm. Two or three sentences from your actual writing, copied without edit. The AI pattern-matches shape far better than it follows instructions about shape. Showing it real sentences from real Mondays beats describing them ten different ways.

Four fields. Together, they cover 80 percent of what makes a draft feel like yours. The remaining 20 percent is taste, and taste is what your edits are for — but the edits drop from 60 percent of a draft to 15 percent. That gap is the entire compounding effect of this module.

The walkthrough

Run · Catch · Lock, fourth iteration. The work is in the catch step.

01

Pull your voice corpus

Spend 20 minutes pulling 10 pieces of writing that sound unmistakably like you. The bar is high — not "close enough," not "I wrote it," but "if a stranger read this, they would know it was me writing."

Sources:

  • Emails you've sent that you felt good about
  • Slack messages where you said something the way you'd want it said
  • Captions, posts, or articles you've published
  • Voice memos you've transcribed
  • DM replies you've sent friends about work topics
  • Anything you've written that someone said "this sounds so you" about

Paste them all into one file. Name it voice-corpus.md or whatever — this isn't the artifact, it's the raw material. We're mining it next.

02

Extract the banned list

Read the corpus straight through. Notice what's not there.

You don't open emails with "Hope you're having a great week!" — that's a ban. You don't use "circle back." You don't end with "happy to chat further." You don't pile on three exclamation marks.

Write down 5 to 15 specific phrases or patterns that appear nowhere in your real writing and that your AI keeps producing anyway. The bans are the AI's tells. The drafts that don't sound like you almost always contain one of these.

Be ruthless about specificity. "Don't be salesy" is not a ban. "Never use 'unlock your potential'" is. The AI can't enforce vibes. It can enforce strings.
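Because bans are literal strings, they can be checked mechanically before anything ships. A minimal sketch in Python — the phrases in `BANNED` are illustrative placeholders, not a canonical list; swap in your own bans from your corpus:

```python
# Sketch: enforce the banned list as literal strings, not vibes.
# BANNED holds example phrases only -- replace with your own extractions.
BANNED = [
    "great question",
    "circle back",
    "happy to chat further",
    "unlock your potential",
]

def banned_hits(draft: str) -> list[str]:
    """Return every banned phrase that appears in the draft, case-insensitively."""
    lowered = draft.lower()
    return [phrase for phrase in BANNED if phrase in lowered]

print(banned_hits("Great question! Let's circle back next week."))
# ['great question', 'circle back']
```

A draft that returns an empty list hasn't passed the voice test — it has only cleared the floor. The bans catch the AI's tells; they don't supply your required moves.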

03

Extract the required moves

Now look at what is there. The patterns that show up repeatedly. The structural choices your writing makes that a generic-voice draft wouldn't.

Examples of what to look for:

  • I always lead with the answer, then explain. That's a required move.
  • I structure replies as: acknowledge → data → next step. Required move.
  • I never use the word "just" as a softener. That's a ban, but the inverse — I make claims without softening — is the positive required move.
  • I use pipe separators in titles, not em-dashes. Required move.

Aim for 5 to 10 required moves. These are the spine of your voice. The bans are the surface; the required moves are the structure underneath.

04

Build the DNA file

Take the tone north star, the banned list, the required moves, and the sentence rhythm samples. Drop them into the template.

Your voice DNA
Your voice is layered — the first pass is never the last word. Come back as you catch more patterns.
See my voice DNA (abbreviated)
Tone north star
Warm by default, never cheerleading. Both extremes fail — cheerleading kills trust, cold-blunt kills collaboration. Conversational acknowledgment first, then data. Match Mario's depth. Lead with evidence, retract when wrong, never quietly adjust.
Banned (zero tolerance)
  • "great question," "amazing," "brilliant"
  • Em-dash as separator in titles — use pipe instead
  • "happy to jump on a call" / any time-offer in client replies
  • First-person product testimonial — never "I use," "I run"
  • "Worth flagging" / "worth noting" / appended commentary sections
Required moves
  • Pipe separator in titles and headings
  • Acknowledgment before data, especially in fire-drill mode
  • Show the working: context → evidence → analysis → conclusion
  • Verify day-of-week programmatically before any day+date pairing
  • One question per turn in design mode (max ~6 lines)
Sentence rhythm sample
"The model isn't the layer that's broken. You're running a tool-grade workflow at a moment that demands infrastructure-grade discipline."

Save the downloaded voice-dna.md where your AI loads memory from. Same rule as Modules 02 and 03 — if the AI doesn't load it before responding, it doesn't apply. Test by asking your AI to draft something in your voice in a fresh session. If the first draft already sounds 70 percent like you, the DNA is loading. If it sounds the same as before, the file is sitting somewhere the AI never reads.

Three real elements from my voice DNA

Same pattern as Modules 02 and 03. These are three pieces of my actual voice DNA, copied without edit, each born from a specific failure.

Element one | Banned | "Great question," "amazing," "brilliant"

The pattern. AI assistants love to start replies with sycophantic affirmation. "Great question!" "That's a brilliant point." "Amazing — let me walk you through it." It feels supportive. It is corrosive.

Why it's banned. Cheerleading destroys trust at a chemical level for me. The moment a reply opens that way, I assume the rest is going to be empty calories — performance of helpfulness instead of helpfulness. If my AI talks to me that way, it'll talk to my clients that way, and they'll feel the same chemical recoil.

How it fires. Self-audit before sending: does this reply open with affirmation about the prompt? Rewrite. Acknowledge the substance of what was asked, then deliver. "Understood — here's what the data shows" beats "Great question — here's what the data shows" every time, because it acknowledges the person without flattering them.

Element two | Required | Pipe separator in titles, never em-dash

The pattern. Every title, heading, file name, and section label uses a pipe (|) as the separator between elements, never an em-dash (—).

Why it's required. Em-dashes belong inside sentences as rhythm devices. When they're used as title separators, they pull stylistic weight a structural element shouldn't carry. The pipe is neutral, structural, and unmistakably mine. Anyone who reads more than two pages of my work absorbs it as a tell.

How it fires. Before any output with a title or heading, scan for em-dashes used as separators. Rewrite as pipes. Inside-sentence em-dashes are fine — those are rhythm, not structure. The two uses are different, and my AI and I both keep them distinct.
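That scan is mechanical too. A minimal sketch, assuming markdown-style `#` headings — it replaces em-dashes used as separators inside heading lines and leaves in-sentence em-dashes alone:

```python
import re

def fix_heading_separators(line: str) -> str:
    """Swap em-dash separators for pipes, but only in heading lines."""
    if line.lstrip().startswith("#"):  # markdown heading: structural, gets pipes
        return re.sub("\\s+\u2014\\s+", " | ", line)
    return line  # body text: em-dashes are rhythm, not structure -- leave them

print(fix_heading_separators("## Module 04 \u2014 Identity layer"))
# ## Module 04 | Identity layer
```

The heading test (`startswith("#")`) is the assumption doing the work here; if your drafts mark titles differently, that check is what you'd adapt.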

Element three | Required | Acknowledgment before data

The pattern. Every response acknowledges the person and the moment before delivering the data. One short sentence, conversational, not flattering.

Why it's required. I run in two modes. Most of the time I'm executing — and even then, the cold-data-first reply reads as a disappointed professor giving up on his student. In fire-drill mode (cost panic, breakage, overwhelm), the cold reply is actively destabilising. Both extremes fail. Blunt is not the opposite of cheerleading — warmth without flattery is.

How it fires. Before any reply, especially in stress mode, the first sentence acknowledges the situation in plain language. "Yeah, that's rough — here's what I'm seeing." "Got you. Pulling the data now." Acknowledge the person, then deliver the data. The acknowledgment is the warmth dial; banning cheerleading is the ceiling, not the floor.

Notice the shape. The banned element is the surface. The required elements are the structure. The combination is what makes the voice mine instead of generic.

Common mistakes

The three failures I see most often.

  1. Describing the voice instead of showing it. "Friendly but professional" is not voice DNA. "Conversational but direct" is not voice DNA. The AI cannot pattern-match adjectives — it can only pattern-match strings. Specific bans and specific required moves are the strings. Adjectives are the start of an essay, not the end of a voice file.
  2. Pretending you have one voice when you have several. You probably write to clients differently than you write to your team. Your sales voice is not your support voice. Your captions are not your emails. If you try to write one voice DNA file that covers everything, it collapses into vagueness. Pick the voice that produces the highest-volume drafts — usually support/email — and build for that one first. Module 05 is where you build the others as named modes.
  3. Treating the corpus as the artifact. The corpus is the raw material. The DNA file is the artifact. Don't skip the extraction step and drop a folder of old emails into your AI's context — that's pattern-matching surface noise instead of distilling the spine. The extraction is the work.

The deeper game

This module is about AI voice. But notice what we actually just did.

We took the thing every operator says they have — "I have a strong brand voice" — and turned it into a file. A real, written, loaded artifact. The voice that used to live exclusively in your head, that your team couldn't inherit and your AI couldn't reproduce, now sits in a file that loads before any draft.

That move — externalising the voice from the head to the file — is the same move that lets you actually scale a team. The reason your delegation breaks down isn't that your people aren't talented. It's that the voice was never written down, so they were operating from impression instead of specification. The voice DNA file fixes that for AI. It also fixes it for the next hire you onboard.

This is the unlock most operators wait too long to install. They keep being the voice because being the voice is the part they're proudest of. They're running a tool-grade workflow at a moment that demands an infrastructure-grade voice file.

AI is the easiest place to see this, because the AI is the first "hire" that can't be onboarded through proximity. It needs the file or it produces a stranger's impression. The forcing function is, again, a gift.

You just installed it for AI. The DNA transfers.

Fourth pass through the ikigAI feedback loop. You ran the work (pulled the corpus). You caught the patterns (banned, required, rhythm). You locked the voice (saved where the AI loads it). Run · Catch · Lock, fourth iteration. The operating system now has friction rules, a source-of-truth map, and a voice.

The next module is where the voice splits into modes. Customer-service-mode AI does not sound the same as strategic-thinking-mode AI. You're going to encode two or three different versions of your AI persona — each with its own job, its own rules, and its own trigger.

Walk-out artifact

A voice DNA file with four parts — tone north star, banned phrases and patterns, required moves, and sentence rhythm samples. Built from a real corpus of your own writing, not from description. Saved where your AI loads memory. Tested at least once — a fresh session draft already sounds closer to your voice than before.

This is the fourth artifact in your operating system. The identity layer is now legible to the AI.

Mark Module 04 complete

Have you signed your voice DNA artifact and shared it?

Don't click this until you actually have. The accountability is the work. Lying to a button cheats yourself, not the program.