AI Productivity Framework

Worldview Recursive Compression

How to Make AI Think Like You

Building the frameworks that patch LLM reasoning with your domain expertise

By Scott Farrell | LeverageAI

What You'll Learn

  • Why AI outputs feel generic—and the mechanism to fix it
  • The compression ladder that turns expertise into reusable frameworks
  • How to build your first kernel this week—with templates and checklists
  • The compounding flywheel that creates 6x productivity advantage

TL;DR

  • AI outputs feel generic because models reason from stale, averaged patterns—not your expertise.
  • Your frameworks act as worldview patches that override the model's defaults with your accumulated judgment.
  • The compression ladder: World → You → Frameworks → OS → Outputs → back. Each cycle improves all future outputs.
  • 6x productivity advantage: 3x faster × 2x win rate. The gap widens each year as your kernel compounds.
  • Start this week: 3-5 frameworks, four files, minimum viable kernel. What do you know that AI doesn't?
01
Part I: The Spine

The Stale Worldview Problem

Why your AI keeps suggesting chatbots—and what that reveals about a deeper architectural issue

"My AI keeps suggesting I build a chatbot. For everything. Customer service? Chatbot. Sales enablement? Chatbot. Knowledge management? You guessed it—chatbot."

If you've spent years developing expertise in your field, this frustration will be familiar. You know better answers exist—you just can't get the AI to produce them. The outputs feel competent but generic, like advice from a junior consultant who's read all the blog posts but never done the work.

This isn't a prompting problem. It's a worldview problem. And understanding why it happens is the first step to fixing it.

The Frozen Worldview

~2024: the knowledge cutoff for current LLMs. Understanding often lags even further—research shows comprehension can reflect patterns from 2015 or earlier.

Source: ArXiv, "Is Your LLM Outdated?"

LLMs are trained on data with a knowledge cutoff—typically 12-18 months behind the current date. But the problem runs deeper than just "outdated facts."

Research reveals a troubling pattern: even when models can recall recent events, their understanding often reflects patterns from years earlier. The surface knowledge may be current, but the deep reasoning draws on older paradigms.

"A knowledge cutoff date represents the point in time beyond which an LLM has no inherent knowledge of events, developments, or information. Unlike humans who continuously learn, LLMs have their knowledge 'frozen' at a specific temporal point."
— AllMo.AI, "List of Large Language Model Cut-Off Dates"

When you ask AI for strategic advice, it's not drawing on cutting-edge thinking. It's averaging across millions of blog posts, forum discussions, and articles—producing what you might call "generic internet consultant soup."

The model's worldview is frozen—and it's reasoning from patterns that may already be obsolete in your domain.

The Generic Output Problem

What Generic Looks Like vs. What You Know

❌ Without Your Frameworks

  • "Build a chatbot" (universal recommendation)
  • "Create a dashboard" (for any analytics question)
  • "Automate routine tasks" (no diagnosis of which)
  • Standard best practices from 2023 articles
  • Advice that could apply to any business

✓ What You KNOW the Answer Should Be

  • "Redesign this process based on AI maturity"
  • "Augment this role given change capacity"
  • "Apply the Autonomy Ladder for phased rollout"
  • Recommendations reflecting YOUR patterns
  • Context-specific, defensible reasoning

The 7-8/10 Trap

Outputs feel "pretty good"—not bad enough to reject outright. They're polished, articulate, well-structured. But something is off—you can't quite articulate what. You spend hours iterating, refining, adjusting. The output gets to 8/10 but never quite reaches 9/10 or 10/10.

Why This Happens

The Model Doesn't Know What You Know

You have 10-20+ years of domain experience. You've seen hundreds of similar situations. You've developed intuitions about what works and what doesn't. You can diagnose a situation in minutes that would take a junior person hours.

None of this is in the model's training data.

The model knows what the INTERNET knows about your domain—the average of all blog posts, all forum discussions, all articles. It doesn't know YOUR specific patterns, YOUR frameworks, YOUR judgment calls.

Think about what happens when you're in the room with a junior consultant. They might suggest "implement a chatbot" based on what they've read. But you'd intervene: "Wait, have you considered their AI maturity level? This company needs process redesign, not automation."

Your frameworks are the missing input. Without them, the AI operates like that junior consultant—well-read but lacking your pattern recognition.

The Cost of Generic

Immediate Costs

  • Time wasted iterating on outputs that never feel right
  • Client recommendations you can't fully defend
  • Missed opportunities where better frameworks would have found better solutions
  • Team members can't replicate your quality—your patterns are tacit

Compound Costs

  • Every output without frameworks is a missed learning opportunity
  • Knowledge stays trapped in your head instead of being externalised
  • Model upgrades just produce generic outputs faster
  • Competitors who build frameworks compound their advantage

10x: the improvement left on the table. Without frameworks, you cap at incremental automation (5-10% gains). With frameworks, you unlock transformative reimagination (60-90% gains).

The model isn't broken. The prompting isn't the problem. The issue is: you haven't told the AI what YOU know.

The Before/After Contrast

Consider a concrete scenario: A client asks for AI implementation recommendations. Same model. Same task. Watch what changes based on the inputs:

Same Model, Same Task, Different Inputs

❌ Without Your Frameworks

  • • "Consider implementing a customer service chatbot"
  • • "Build a dashboard for real-time analytics"
  • • "Automate routine email responses"
  • • "Start with a proof of concept to demonstrate value"

Analysis: Generic, could apply to any company, no diagnosis of specific situation

✓ With Your Frameworks Loaded

  • • "Based on your R2 AI maturity, start with human-in-the-loop assist before autonomy"
  • • "Your change capacity is 2/5—surgical intervention on one process, not transformation"
  • • "The Autonomy Ladder suggests Phase 2 (assist) before Phase 4 (act)"
  • • "Given your 50-person team with no IT, Level 5 agentic systems are too risky"

Analysis: Specific, diagnostic, applies YOUR pattern recognition to THEIR context

"The difference isn't prompting skill. It's whether the model has access to YOUR worldview."

Same model, same capabilities. The variable is what you put in context. Your frameworks act as a worldview override—replacing "generic internet average" with "YOUR accumulated expertise."

Key Takeaways

  1. LLMs have frozen worldviews: Training data cutoffs mean models reason from patterns 12-18+ months old—and understanding lags even further.
  2. Generic outputs aren't the model's fault: Without your frameworks, AI averages across "generic internet consultant soup."
  3. The 7-8/10 trap is real: Outputs feel "pretty good" but never quite right because they lack your specific perspective.
  4. Your expertise isn't in the training data: 20 years of pattern recognition, judgment calls, and frameworks exist only in your head.
  5. The cost compounds: Every output without frameworks is a missed opportunity to build a reusable asset.
  6. The before/after gap is dramatic: Same model produces "chatbot" vs "process redesign based on R2 maturity" depending on what you feed it.

This chapter named the frustration: AI outputs feel generic because models reason from stale, averaged patterns—not your expertise.

But what can you do about it? The answer isn't better prompting or waiting for smarter models. The answer is patching the model's worldview with your frameworks.

Next: Chapter 2 — Frameworks as Worldview Patches →

Chapter References

  1. AllMo.AI, "List of Large Language Model Cut-Off Dates" — allmo.ai/articles/list-of-large-language-model-cut-off-dates
  2. ArXiv, "Is Your LLM Outdated?" — arxiv.org/html/2405.08460v3
02
Part I: The Spine

Frameworks as Worldview Patches

How your compressed expertise overrides the model's generic reasoning

"An expert with 20 years of experience looks at a business problem and sees patterns invisible to a generalist. Within minutes, they've diagnosed the situation and know three approaches that will fail and one that might work. The AI has none of this. Unless you give it."

Your expertise isn't just "knowledge"—it's compressed pattern recognition. The AI knows everything on the internet circa 2024. What it doesn't know: YOUR 20 years of judgment calls, YOUR frameworks, YOUR hard-won intuitions about what actually works.

What Expertise Actually Is

Pattern Recognition, Not Just Information

Experts don't know more facts—they see patterns faster. Years of experience compress into intuitions. "That feels risky for a small team" represents hundreds of observations crystallised into a heuristic. "This company isn't ready for agentic AI" encodes pattern matching against 50+ similar engagements.

90% of a consulting firm's expertise is tacit knowledge—embedded in consultants' heads, shaped by lived experience, and rarely written down.

Source: Starmind, "Unlocking Tacit Knowledge in Consulting Firms"

The Tacit Knowledge Problem

"In management consulting, firms don't sell products; they sell expertise. Yet research shows that up to 90% of a firm's expertise is tacit knowledge, embedded in consultants' heads, shaped by years of lived experience, and rarely written down."
— Starmind, "Unlocking Tacit Knowledge in Consulting Firms"

This is exactly what the AI is missing. Generic prompting can't access tacit knowledge because it was never written down. The model has no visibility into the judgment you apply instinctively.

The McKinsey/BCG Pattern

Consulting firms' real value isn't individual consultants—it's codified frameworks developed over decades. These frameworks compress hundreds of engagements into reusable patterns. New consultants inherit decades of pattern recognition on day one.

Key insight: You can build the same asset—and AI accelerates the process dramatically.

Frameworks as Worldview Patches

How Patches Work

LLMs reason from training data—their default "worldview." When you load frameworks into context, you're providing an alternative basis for reasoning. The model now reasons FROM your frameworks, not just generic patterns. It's like a software patch that updates how the system thinks.

The Compression Pattern

You're not just "writing down what you know." You're compressing intuitions into explicit decision rules. Each framework captures 100+ hours of hard-won pattern recognition.

From Intuition to Framework

Before Compression (Tacit)

"I have a feeling this is too risky for them"

After Compression (Explicit)
If team < 10 people
AND no dedicated IT
AND Level 5+ autonomy required
→ reject (change capacity insufficient)

The intuition becomes a testable rule. The rule can be applied by AI (or by team members). The tacit becomes explicit and reusable.
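As a minimal sketch of what "testable" means in practice, the rule above can be written as a small Python check. The field names and thresholds here are illustrative assumptions, not a fixed schema:

def change_capacity_check(team_size: int, has_dedicated_it: bool, autonomy_level: int) -> str:
    # Illustrative encoding of the tacit rule above (assumed thresholds)
    if team_size < 10 and not has_dedicated_it and autonomy_level >= 5:
        return "reject: change capacity insufficient"
    return "proceed to full readiness assessment"

# Example: a 7-person team, no dedicated IT, asking for Level 5 autonomy
print(change_capacity_check(team_size=7, has_dedicated_it=False, autonomy_level=5))

Once the rule lives in frameworks.md (or in a small helper like this), AI and team members apply the same threshold you would.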

Frameworks vs Prompts

"This is just good prompting with extra steps." That's the common misconception. But the difference is fundamental:

Prompts vs Frameworks

  • Prompts are instructions; frameworks are worldviews.
  • Prompts tell the model what to do; frameworks change how the model thinks.
  • Prompts work per-output; frameworks are reusable across outputs.
  • Prompts improve one output; frameworks improve all future outputs.
  • Prompts deliver linear value; frameworks deliver compound value.

The Basis Vectors Metaphor

"Transformer-based LLMs develop an internal geometry that is both semantically structured and computationally efficient... concepts can be represented as linear combinations of basis vectors in the model's hidden representation space."
— ArXiv, "Large Language Models Encode Semantics"

The model's default basis vectors equal the average of the internet. "AI implementation" becomes a weighted average of all AI implementation advice online—resulting in generic recommendations that lack specificity.

Your Frameworks as Custom Basis Vectors

When you load frameworks into context, you provide new dimensions. "AI maturity assessment" becomes YOUR specific diagnostic. "Readiness criteria" becomes YOUR thresholds. "Recommendation patterns" become YOUR solution shapes. The model now reasons in YOUR dimensions, not generic ones.

Practical Implication

5-10 sharp frameworks > 50 vague ones.

Each framework is a precise dimension. Combinations of frameworks produce rich, specific outputs—like how RGB (3 colours) can produce millions of specific colours.

Context as Worldview Override

"Context is not just data—it's a worldview override. What you put in context literally changes how the model thinks."
Surface Understanding

"Give AI more context → get better outputs"

True but shallow. Treats context as background information.

Deep Understanding

"Context is a temporary fine-tune"

You're not giving it information—you're giving it YOUR worldview. The model thinks differently with your frameworks loaded.

The Two Flywheels

Flywheel 1: Representation

Each output teaches you what frameworks are missing. Frameworks get sharper over time. Better frameworks → better outputs → better frameworks.

Flywheel 2: Process

Better tooling → easier to load frameworks consistently. Consistent loading → more outputs use frameworks. More outputs → more learning → better frameworks.

Both flywheels compound together.

What Goes in the Kernel

The Four Core Files

1. marketing.md — Who You Are

Purpose: Voice characteristics, positioning, philosophy

Effect: Ensures outputs sound like YOU, not generic consultant-speak

Note: Internal guidance for AI, not external marketing copy

2. frameworks.md — How You Think

Purpose: Decision patterns, diagnostic tools, methodology

Effect: The explicit frameworks that compress your expertise

Pattern: "When I see X, I apply Y"

3. constraints.md — What You Don't Do

Purpose: Boundaries, risk thresholds, exclusions

Example: "Never recommend Level 5 agentic systems to companies without IT"

Effect: Prevents outputs that violate your judgment

4. style.md — How You Communicate

Purpose: Tone, formatting preferences, terminology

Contents: Document structures, heading patterns

Effect: Ensures consistency across all outputs

Why Files, Not Just Prompts

  • Files persist; prompts are ephemeral
  • Files can be version controlled
  • Files can be shared with teams
  • Files can be updated based on learning
  • Files are the "source code"—outputs are the "compiled binaries" (sketched below)
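A minimal sketch of what loading those files can look like, assuming a kernel/ directory containing the four files above; generate() is a placeholder for whatever model call you use, not a specific vendor API:

from pathlib import Path

KERNEL_FILES = ["marketing.md", "frameworks.md", "constraints.md", "style.md"]

def load_kernel(kernel_dir: str = "kernel") -> str:
    # Concatenate the kernel files into one context block
    sections = []
    for name in KERNEL_FILES:
        path = Path(kernel_dir) / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

def build_prompt(task: str) -> str:
    # Every output starts from the same worldview; only the task changes
    return f"{load_kernel()}\n\n## Task\n{task}"

# generate(build_prompt("Draft an AI readiness assessment for this client"))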

Key Takeaways

  1. Expertise is compressed pattern recognition: Years of experience crystallise into intuitions that AI lacks by default.
  2. 90% of expertise is tacit: Never written down, never in training data, invisible to AI.
  3. Frameworks are worldview patches: They temporarily override the model's generic priors with YOUR specific perspective.
  4. Compression is the work: Turning "this feels risky" into explicit rules is hard—but each framework captures 100+ hours of pattern recognition.
  5. Frameworks ≠ prompts: Prompts are instructions (linear value). Frameworks are worldviews (compound value).
  6. Context is a temporary fine-tune: You're not giving the model information—you're changing how it thinks.
  7. The kernel is four files: marketing.md (identity), frameworks.md (methodology), constraints.md (boundaries), style.md (communication).

This chapter explained the WHAT: frameworks are worldview patches that override the model's generic reasoning with your expertise.

But HOW does this compression actually work? What are the levels of compression, and how do outputs feed back to improve frameworks?

Next: Chapter 3 — The Compression Ladder →

Chapter References

  1. Starmind, "Unlocking Tacit Knowledge in Consulting Firms" — starmind.ai/blog/unlocking-tacit-knowledge-in-consulting-firms
  2. ArXiv, "Large Language Models Encode Semantics" — arxiv.org/html/2507.09709v1
03
Part I: The Spine

The Compression Ladder

Five levels of deliberate compression—and where most people get stuck

"Most AI users operate at levels 1 and 2. They have mental models in their heads but never externalise them. When they prompt AI, they're asking a model with an outdated worldview to guess what they would think—then spending hours correcting the results."

The leverage comes from reaching levels 3 and 4. This requires deliberate compression work. Most people skip it because it feels like overhead—it's actually the highest-leverage activity you can do.

The Five Levels of Compression

THE COMPRESSION LADDER

LEVEL 1 WORLD → YOU
MASSIVE COMPRESSION

Reading, experimenting, talking, building mental models

LEVEL 2 YOU → FRAMEWORKS
SIGNIFICANT COMPRESSION

Articles, checklists, diagrams, "what we reject and why"

LEVEL 3 FRAMEWORKS → OS
MODERATE COMPRESSION

Markdown OS with your kernel baked in (marketing.md, frameworks.md, constraints.md)

LEVEL 4 OS → ARTEFACTS
EXPANSION

Generated outputs: proposals, documents, code

LEVEL 5 ARTEFACTS → FRAMEWORKS
RECURSIVE LOOP

Outputs reveal gaps; lessons encode back into kernel

Levels 1-3: The Kernel

Durable, appreciating. This is where to invest. These assets compound over time.

Level 4: The Output

Ephemeral, regenerable. Most people over-invest here—nursing outputs instead of improving the kernel.

Level 1: World → You

Years of reading, experimenting, observing, failing. Pattern recognition develops through repetition. "I've seen this before" becomes automatic. This is massive compression: thousands of hours condense into implicit mental models.

The Problem

This compression is invisible. You can't articulate what you know. Others can't access your patterns. AI certainly can't access them. What gets lost includes intuitions you can't explain, "gut feelings" about risk, subtle pattern matching, and exception handling for edge cases.

Level 2: You → Frameworks

This is where implicit becomes explicit. "It feels risky" transforms into specific criteria for risk. "This approach usually works" becomes a documented pattern. Tacit knowledge becomes articulated doctrine.

The Compression Work

Before (Tacit)

"I have a sense this company isn't ready for autonomous AI"

After (Explicit)
Framework: AI Readiness Assessment
Indicators of insufficient readiness:
• Team < 10 AND no dedicated IT
• Change capacity score < 3/5
• Previous AI failures without post-mortem
• Leadership alignment < 7/10
If 2+ indicators → Phase 1-2 only

Why Most People Skip This

  • Feels like overhead—"I already know this"
  • Externalising is hard cognitive work
  • Immediate payoff isn't visible
  • Pressure to produce outputs feels more urgent

But the leverage of Level 2 compression is enormous. The upfront investment pays dividends on every subsequent output.

Level 3: Frameworks → Operating System

Individual frameworks combine into a coherent system. The kernel takes shape: marketing.md, frameworks.md, constraints.md. Loading the kernel becomes a repeatable process. "Your thinking" becomes a loadable context.

Computer's OS

  • Provides consistent interface for all applications
  • Handles common operations automatically
  • Enforces system-wide constraints
  • New applications inherit OS capabilities

Your Kernel

  • Provides consistent voice for all outputs
  • Handles common diagnostic patterns automatically
  • Enforces your constraints and boundaries
  • New outputs inherit your expertise

What a Mature Kernel Looks Like
kernel/
├── marketing.md # Who we are, voice, positioning
├── frameworks.md # 10-15 decision frameworks
├── constraints.md # What we never recommend
├── style.md # How we communicate
├── patterns.md # Common solution shapes
└── learned.md # Lessons from recent projects

Level 4: Operating System → Artefacts

The kernel gets applied to specific context. AI generates outputs using your frameworks. Each output is a "rendering" of your worldview applied to their situation.

This is expansion, not compression. Levels 1-3 compress complexity. Level 4 expands it back out—but now it's expansion THROUGH your lens. Generic kernel + specific context = custom output.

"You don't fix each binary individually (that's maintenance hell). You fix the source code once (all future binaries are better)."

The key insight: Outputs are regenerable. If you have the kernel, you can regenerate outputs. If the kernel improves, regenerated outputs are better. This is why Level 4 is "ephemeral"—it can always be reproduced.

Level 5: Artefacts → Back to Frameworks

The recursive loop. Outputs teach you what's missing. Good outputs validate frameworks. Bad outputs reveal gaps. The lesson goes into frameworks.md, not just the output.

What Gets Encoded

  • • "This framework needed an exception for healthcare compliance" → update
  • • "Clients with <20 employees need simpler diagnostics" → new variation
  • • "The risk threshold was too high" → adjust criteria
  • • "This pattern keeps recurring" → new framework
Kernel → Output → Evaluate → Extract → Encode → Kernel improves → Better outputs → ↻
Each pass compresses new learnings, refines existing frameworks. The kernel gets sharper over time.
"This is worldview recursive compression. Each pass compresses and refines your doctrine. The kernel gets sharper over time."

Where Most People Get Stuck

Stuck at Levels 1-2

Symptoms:

  • • "I know this stuff, I just don't have time to write it down"
  • • Mental models stay mental
  • • Each output requires re-explaining philosophy
  • • New team members can't replicate quality

Cost:

  • AI can't access your expertise
  • Knowledge trapped in your head
  • No compound advantage

Stuck at Level 4

Symptoms:

  • Spending hours iterating on outputs
  • Making same corrections repeatedly
  • Knowledge accumulates in output histories
  • New outputs don't benefit from previous learning

Cost:

  • Output-by-output maintenance
  • No improvement to the system
  • Linear, not compound, returns

The Solution: Invest in Levels 2-3

  • Do the compression work ONCE
  • Let Level 5 handle continuous improvement
  • Treat Level 4 as ephemeral—outputs are regenerable
  • PR the kernel, not the output

Key Takeaways

  1. Five levels of compression: World → You → Frameworks → OS → Artefacts → back
  2. Levels 1-3 are durable (the kernel): This is where to invest. These assets appreciate over time.
  3. Level 4 is ephemeral (outputs): Outputs can be regenerated. Don't over-invest in them.
  4. Level 5 closes the loop: Artefacts teach you, lessons encode back, the kernel improves, better outputs follow.
  5. Most people stop at Level 2: They have mental models but never externalise them. This is the gap.
  6. The stuck pattern is at Level 4: Iterating on outputs instead of improving the kernel.
  7. The leverage is at Levels 2-3: Do the compression work to build a loadable operating system.

The compression ladder shows the WHAT—five levels from raw experience to refined kernel and back.

But how do you actually BUILD the kernel? What's the architecture for two-pass compilation?

Next: Chapter 4 — Two-Pass Compilation →

04
Part I: The Spine

Two-Pass Compilation

The architecture that transforms generic outputs into you-shaped systems

"I built a proposal compiler—a Markdown operating system that generates custom 30-page proposals on spec. It works. The outputs are amazing. Then I realised: I forgot to compile myself into it first."

The diagnosis: The system behaved like a generic engineer, not a "me-shaped engineer." I had to go back and iterate on the generated system—hacking on the output instead of fixing the recipe.

The lesson: There are TWO compilation passes, not one. Skip the first, and you're stuck nursing outputs.

The Compiler Metaphor

Software Compilation → AI Workflow

  • Source code → Your kernel (judgment, constraints, frameworks)
  • Compiler → The AI system
  • Compiled binary → Generated artifact (code, document, proposal)
  • What you maintain → The source (kernel), not the output

In traditional software, you "compile" source into executable. In AI-assisted work, you "compile" your worldview into the AI, then IT compiles outputs. This is TWO compilation passes, not one.

Pass 1: Compile Yourself

Pass 1 Input:
├── marketing.md — who we are, voice characteristics
├── frameworks.md — thinking tools, decision patterns
├── constraints.md — what we never recommend
├── patterns.md — go-to solution shapes
└── style.md — how we talk, lay out documents

What Comes Out

  • A builder that thinks in YOUR dialect
  • A "you-shaped engineer"
  • File structure, tone, examples default to YOUR style
  • The system itself reflects your methodology

Pass 2: Compile Outputs

Pass 2 Input:
├── You-shaped builder (from Pass 1)
├── Client/project context
├── Research and data
└── Problem to be solved
Pass 2 Output:

  • Customised artifact (proposal, code, document)
  • Built through YOUR lenses and frameworks
  • Consistent with your voice and constraints
  • Client-specific but methodology-consistent

Why Both Passes Matter

Without Pass 1 (Common)

  • Generic system
  • Generic voice
  • Generic frameworks applied
  • Output feels 7-8/10, needs extensive iteration

With Pass 1

  • You-shaped system
  • Your voice by default
  • Your frameworks built in
  • Output feels 8-9/10, minimal iteration needed

Case Study: The Proposal Compiler

I built a Markdown operating system for generating custom 30-page proposals—it uses Tavily for research, Markdown files that call other Markdown files, and small Python helpers for data extraction. It works. But something was missing.

Symptoms of Missing Pass 1

  • Tone was wrong—too generic
  • Structure didn't reflect my diagnostic sequence
  • Examples didn't default to my style
  • Had to iterate extensively to get "my" feel

The Fix

Re-run the entire build with an updated prompt. Treat the generated OS as ephemeral. Delete and regenerate from improved kernel. Don't nurse the broken output—fix the source.

The Before/After Gap

Same Model, Same Task, Different Architecture

"What AI should this company implement?"

❌ Without Pass 1 (Kernel Not Compiled)

  • • "Consider a customer service chatbot"
  • • "Build a dashboard for analytics"
  • • "Automate routine emails"
  • • Standard 2023 best practices

✓ With Pass 1 (Kernel Compiled Into Builder)

  • • "Based on the Three-Lens Framework, CEO sees efficiency, HR sees threat, IT sees risk—alignment needed first"
  • • "AI maturity is R2—recommend human-in-the-loop assist before autonomy"
  • • "Change capacity is 2/5—surgical intervention on ONE process"
  • • "Apply Autonomy Ladder Phase 2: AI suggests, human approves"
"Do a chatbot" vs "Surgically replace this process based on R2 maturity"—same model, different kernel.

Your Kernel Patches the Model's Worldview

Models are trained on data from 2023-2024. Your frameworks represent 2025 thinking (or beyond). The model's worldview is outdated. Your kernel acts as a "frontier patch."

Model's Default → Your Kernel Override

  • Generic AI advice → Your specific methodology
  • Average consulting patterns → Your accumulated judgment
  • 2024 best practices → Your 2025 frontier thinking
  • One-size-fits-all → Context-specific application

Prompts Shrink

Without Kernel

Every prompt re-explains your philosophy, your constraints, your voice, your frameworks...

With Kernel

"Use the Market Maturity Ladder (see frameworks.md)"

Prompts become references, not explanations.

Investment Allocation

Wrong Allocation (Common): 10% kernel, 90% output iteration

Right Allocation: 60-70% kernel, 30-40% output

Key Takeaways

  1. Two-pass compilation: First compile your worldview into the builder (Pass 1); then the builder compiles outputs (Pass 2).
  2. Pass 1 is critical: Without it, you get a generic system. With it, you get a you-shaped system.
  3. The Proposal Compiler mistake: Forgot marketing.md and frameworks.md—got 7-8/10 outputs instead of 9/10.
  4. Before/after gap is dramatic: "Do a chatbot" vs "Surgically replace this process based on R2 maturity."
  5. Your kernel patches the model's worldview: Models trained on 2024 data; your frameworks represent 2025 thinking.
  6. Prompts shrink with kernels: Reference frameworks.md instead of explaining philosophy each time.
  7. The loop is recursive: Outputs teach you; lessons encode back; kernel improves; outputs improve.

Part I has established the doctrine: the stale worldview problem (Ch1), frameworks as worldview patches (Ch2), the compression ladder (Ch3), and two-pass compilation (Ch4).

Now it's time to see the doctrine in action. Part II shows a complete worked example.

Next: Part II — The Flagship →
Chapter 5: The Content Flywheel in Action

05
Part II: The Flagship

The Content Flywheel in Action

A worked example where each turn of the wheel improves the next

"This ebook is itself a demonstration of the doctrine. Every chapter you're reading was generated through a kernel that was refined through previous chapters. The system improved as it ran."

Part II takes the abstract doctrine from Part I and shows it working. The worked example IS the content pipeline. You're not just reading about the flywheel—you're watching it in real-time.

THE CONTENT FLYWHEEL

Each turn makes the next one better

1. IDEA

Rough concept worth exploring

2. PRE-THINK

Structured thinking about the thinking

3. RESEARCH

Deep dive using authoritative sources

4. SYNTHESIS

AI generates draft using kernel + research

5. REFINEMENT

You polish and improve

6. ENCODING

Lessons go back into kernel

7. INDEX

Final piece goes into RAG for future reference → Next idea starts with improved kernel ↻

Why It's a Flywheel, Not a Pipeline

Pipeline: Linear, one-way, each item processed independently.
Flywheel: Circular, momentum builds, each turn improves the next.

The key difference: feedback loops that compound.

Stage 1: Idea

A rough concept worth exploring surfaces. Could come from client conversation, reading, observation. Doesn't need to be fully formed—just a spark.

Stage 2: Pre-Think

Pre-thinking is meta-cognition—thinking about the thinking before generating. What voice should this piece use? What should we include and exclude? What's the angle? What frameworks apply?

Stage 3: Research

For This Ebook

External research:

  • LLM knowledge cutoff dates
  • Tacit knowledge in consulting (90%)
  • Domain-specific AI performance (2x)

Internal research (RAG):

  • The Proposal Compiler patterns
  • Stop Nursing Your AI Outputs doctrine
  • Context Engineering principles

Deep dive using authoritative sources. External research (web, academic papers, industry reports). Internal research (your previous content, existing frameworks).

The Research Pattern

  1. Broad search first (surface diverse patterns)
  2. Follow interesting threads (not narrow topic search)
  3. Look for transferable frameworks
  4. Capture with citations for later reference

Stage 4: Synthesis

AI generates draft using kernel + research. Kernel provides voice, frameworks, constraints. Research provides evidence, examples, citations. Output emerges through YOUR lenses.

Before/After: Kernel Impact

Without Kernel (Generic Synthesis)

  • "Here are some ways AI can help with productivity..."
  • Stock advice, generic framing
  • No specific frameworks applied
  • Could have been written by anyone

With Kernel (Kernel-Guided)

  • "The compression ladder reveals why outputs feel generic..."
  • Specific to the doctrine being taught
  • Frameworks named and applied
  • Unmistakably "our" voice

Stage 5: Refinement

You polish and improve the draft. Fix inaccuracies, adjust tone, sharpen arguments. Add nuance the AI missed. Make it truly yours.

10x improvement in editing efficiency: without the kernel, editing consumes roughly 50% of total time; with the kernel, roughly 5%.
The risk: Judgment you apply during refinement stays in the output, not the kernel. The fix: Stage 6—extract and encode lessons.

Stage 6: Encoding

Lessons from refinement go back into the kernel. "I keep adding this nuance" → encode it. "This example always works" → add to patterns.md. "AI keeps getting this wrong" → add to constraints.md.

What I Fixed → Where It Goes

  • Voice was too academic → style.md update
  • Missing exception for small teams → frameworks.md addition
  • Structure didn't flow → patterns.md new template
  • Research citations inconsistent → process.md improvement

The compounding effect: Each encoding improves ALL future outputs. 10 encodings × 100 future outputs = 1,000 improvements. The kernel appreciates with each use.

Stage 7: Index

Final piece goes into RAG/vector database. Becomes searchable for future content. Links to frameworks.md themes. Available for synthesis in future work.

  • Prevents reinventing the wheel
  • Enables consistent terminology
  • Allows future work to build on past work
  • Creates institutional memory (even for solo practitioners)
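As a sketch of the simplest possible index, here is a toy keyword-overlap lookup over a folder of published pieces; a real setup would use an embedding model and a vector database, but the loop has the same shape (the directory name and the scoring are illustrative assumptions):

from pathlib import Path

def tokenise(text: str) -> set[str]:
    # Crude keyword extraction; stands in for a proper embedding
    return {w.lower().strip(".,;:!?\"'()") for w in text.split() if len(w) > 3}

def index_library(library_dir: str = "published") -> dict[str, set[str]]:
    # Map each finished piece to its keyword set
    return {p.name: tokenise(p.read_text()) for p in Path(library_dir).glob("*.md")}

def related_pieces(query: str, index: dict[str, set[str]], top_n: int = 3) -> list[str]:
    # Return the past pieces that overlap most with the new idea
    q = tokenise(query)
    return sorted(index, key=lambda name: len(q & index[name]), reverse=True)[:top_n]

# index = index_library()
# print(related_pieces("autonomy ladder for small teams", index))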

The Meta-Demonstration

You're not just reading about the flywheel—you're seeing it work. The quality of this content demonstrates the doctrine. If this ebook is useful, it validates the approach. The next ebook will be better because of lessons from this one.

Key Takeaways

  1. Seven stages: Idea → Pre-Think → Research → Synthesis → Refinement → Encoding → Index
  2. Flywheel, not pipeline: Each turn improves the next. Momentum compounds.
  3. Pre-Think separates understanding from creation: Don't do two jobs at once.
  4. 10x editing improvement: With kernel, editing drops from 50% to 5% of time.
  5. Encoding is critical: Lessons must go into the kernel, not just the output.
  6. Indexing enables compounding: Future work builds on past work.
  7. This ebook IS the worked example: You're watching the flywheel in action.

The content flywheel shows the STAGES of worldview compression in practice.

But what about the COMPOUNDING? How do outputs improve the kernel, and how does that multiply over time?

Next: Chapter 6 — Worldview Recursive Compression →

06
Part II: The Flagship

Worldview Recursive Compression

How outputs feed back into the kernel—and why this creates exponential, not linear, returns

"Proposal 1 took 10 hours at 40% win rate. Proposal 100 took 3 hours at 80% win rate. Same person, same market, same model. The difference: 50-60 kernel improvements accumulated along the way."

The content flywheel (Chapter 5) showed you the stages. This chapter is about the compounding—the mechanism that transforms linear effort into exponential returns.

By the time competitors start building their first kernel, you'll be generations ahead. Not because you're smarter—because you've run more cycles through the recursive loop.

The Feedback Loop Mechanism

"Recursive" isn't jargon—it describes a specific architecture where outputs feed back into inputs, creating a self-improving system.

THE RECURSIVE COMPRESSION LOOP

KERNEL (frameworks.md, marketing.md, constraints.md)
→ OUTPUT (proposals, articles, code, documents)
→ LESSONS (what worked, what failed, what's missing)
→ back into the KERNEL

GENERATE → EVALUATE → EXTRACT → ENCODE ↻

How Each Pass Compresses

  1. Generate output using the current kernel
  2. Evaluate the output (good? bad? missing something?)
  3. Extract the lesson ("this framework needed an exception")
  4. Encode the lesson into the kernel
  5. Generate the next output—now benefiting from the improved kernel
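A minimal sketch of one turn of the loop as code; generate and evaluate are placeholders for your model call and your own review, and kernel/learned.md is an assumed location for encoded lessons:

from pathlib import Path

def encode_lesson(lesson: str, kernel_file: str = "kernel/learned.md") -> None:
    # Append the extracted lesson so every future output inherits it
    Path(kernel_file).parent.mkdir(parents=True, exist_ok=True)
    with open(kernel_file, "a") as f:
        f.write(f"- {lesson}\n")

def recursive_pass(task: str, generate, evaluate) -> str:
    output = generate(task)      # 1. generate from the current kernel
    lesson = evaluate(output)    # 2-3. review the output; return a lesson or None
    if lesson:
        encode_lesson(lesson)    # 4. the fix goes into the kernel, not just the output
    return output                # 5. the next call benefits from the updated kernel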

Good Outputs Validate, Bad Outputs Reveal

Both good and bad outputs contribute to the flywheel—but in different ways.

When Outputs Are Good ✓

  • Confirms your framework is working
  • Validates the pattern you encoded
  • Increases confidence in that decision rule
  • May warrant strengthening the framework

When Outputs Are Bad ✗

  • Reveals a gap in your frameworks
  • Shows an exception you hadn't considered
  • Exposes a missing constraint
  • Creates opportunity for encoding

"When you generate a good output, it validates your framework. When you generate a bad output, you learn what's missing. The lesson goes back into frameworks.md, not just into the output."

The Key Discipline

❌ Wrong: fix the output, move on. The lesson helps once.

✓ Right: fix the kernel, then regenerate. The lesson helps forever.

The Compounding Math

6x productivity advantage: 3x faster × 2x win rate = 6x vs competitors without kernels.

The math is simple but the implications are profound. Each framework improvement doesn't just help one output—it helps all future outputs.

# Framework-Level Compounding
Fix 1 framework → Helps 100 future proposals
Fix 5 frameworks → 500 proposal-improvements
Each improvement stacks (multiple fixes benefit same proposal)
By proposal 100, you're operating with 50-60 kernel improvements

Worked Example: Proposal Evolution

  • Proposal 1: 5 frameworks, untested; 10 hours; 70% quality; 40% win rate
  • Proposal 10: 6 frameworks, 1 refined; 8 hours; 80% quality; 50% win rate
  • Proposal 50: 10 frameworks, 40+ improvements; 4 hours; 90% quality; 65% win rate
  • Proposal 100: 12 frameworks, 60+ improvements; 3 hours; 95% quality; 80% win rate

By proposal 100, you're operating with 50-60 kernel improvements. Competitors starting from scratch are 100 proposals behind. The gap compounds—it doesn't narrow.

Version Control for Your Kernel

Your kernel is intellectual property. Treat it like source code—with version history, changelogs, and the ability to correlate changes with outcomes.

# frameworks.md changelog
**v1.0** (2025-01-15): Initial kernel
- 5 core frameworks
- 10 hours average proposal time
**v1.1** (2025-04-20): Post-10 proposals
- Added "AI Readiness Assessment" framework
- Updated Three-Lens for founder-led companies
- 8 hours average proposal time
**v1.2** (2025-07-15): Post-30 proposals
- Removed "Build vs Buy" framework (never used)
- Added Healthcare Compliance sub-framework
- 6 hours average proposal time
**v2.0** (2025-12-01): Post-50 proposals (major revision)
- Consolidated 7 frameworks to 5 (removed redundancy)
- Added 3 new industry-specific variations
- 4 hours average proposal time

The Flywheel Acceleration

The flywheel doesn't spin at constant speed. It accelerates as the kernel matures.

Early Flywheel (Heavy, Slow)

• Few frameworks, untested

• Each output requires significant iteration

• Learning is fast but encoding is slow

• Momentum building—feels like overhead

Mid Flywheel (Building Speed)

• Frameworks maturing

• Outputs require less iteration

• Encoding becomes routine

• Momentum visible—starting to feel helpful

Mature Flywheel (Self-Sustaining)

• Frameworks comprehensive

• Outputs mostly right first time

• Encoding is incremental refinement

• Momentum compounding and accelerating

The Turning Point

Around 20-30 outputs: The kernel becomes "good enough" that iteration drops dramatically. More energy goes to encoding than fixing. The flywheel starts to feel like it's helping, not hindering.

This is when skeptics become believers.

Why Competitors Can't Catch Up

100 proposals behind: where competitors start when you've run 100 cycles through the loop.

The competitive dynamics are stark. If you have 100 proposals with 60 kernel improvements, a competitor starting now has 0 proposals and 0 improvements. Even with the same model and same methodology, they're 100 cycles behind.

The Learning Advantage

  • You: generate 80-100 proposals/year (systematic, fast)
  • Competitor: generates 10 proposals/year (manual, slow)
  • Each year, you learn 8-10x faster
  • In 2 years: 200 proposals vs 20—ten times the accumulated learning cycles

"Anyone can use Claude or GPT. The tools are commoditized. Your frameworks are unique—distilled from YOUR experience, encoding YOUR risk posture and philosophy. The frameworks are the moat. Competitors can't simply copy them because they don't have your accumulated pattern recognition."

What Copying Gets You

  • Copy the OUTPUT: they get one proposal
  • Copy the KERNEL: they get your v1.0 (you're on v2.3)
  • Copy the PROCESS: they still lack 100 cycles of learning

The moat isn't a single artifact—it's the compounding flywheel itself.

Practical Encoding Rituals

Encoding doesn't happen automatically. Build rituals at three time horizons:

Per-Output Encoding
5-10 minutes

After each output, ask:

  • What worked that should be standard?
  • What failed that needs a constraint?
  • What exception did I handle manually?
  • What should AI have known?

Quarterly Review (10-20 outputs)
2-3 hours

Every quarter, review:

  • Which frameworks are never used? Remove.
  • Which patterns keep recurring? Codify.
  • What new context has changed thresholds?
  • Where are the remaining friction points?

Annual Revision (50+ outputs)
4-8 hours

Once per year:

  • Major kernel restructure if needed
  • Consolidate redundant frameworks
  • Update for industry/capability changes
  • Reset baseline measurements

Key Takeaways

  1. The loop is recursive: Generate → Evaluate → Extract → Encode → Generate (better). Each pass compresses learning.
  2. Good outputs validate; bad outputs reveal: Both feed the kernel. Neither is wasted.
  3. The math compounds: Fix 1 framework → 100 future improvements. By output 100, you have 50-60 kernel improvements.
  4. 6x productivity advantage: 3x faster × 2x win rate = 6x vs competitors without kernels.
  5. Version control your kernel: Track changes, correlate with outcomes, enable rollback.
  6. The flywheel accelerates: Slow and heavy early, self-sustaining later. Turning point around 20-30 outputs.
  7. Competitors can't catch up: The moat is the compounding process, not any single artifact.

Part II showed the doctrine in action through the content flywheel (Ch5) and the compounding mechanism (Ch6).

Now Part III applies the same doctrine to different domains. The principle is identical—only the context changes.

Next: Part III — Variants → Chapter 7: Code Generation

07
Part III: Variants

Variant: Code Generation

Same doctrine, different domain—design documents as kernel, code as ephemeral output

"A bug is found in production. The old approach: patch the code, add a band-aid fix, accumulate technical debt. The new approach: ask 'what did the design doc miss?', update the design, regenerate the code. The design doc is the kernel. The code is ephemeral."

Part III applies the same doctrine to different domains. The compression ladder works identically—only the artifacts differ. In this chapter: code generation.

This isn't speculative. Industry leaders from GitHub to Thoughtworks are converging on the same conclusion: the specification is becoming the source of truth.

THE SPEC-DRIVEN MOVEMENT

"We're moving from 'code is the source of truth' to 'intent is the source of truth.'"

— GitHub Blog, "Spec-driven development with AI"

"The spec becomes the source of truth and determines what gets built."

— Martin Fowler, Thoughtworks

"Tessl Framework takes a more radical approach in which the specification itself becomes the maintained artifact, rather than the code."

— Thoughtworks Technology Radar, 2025

Code as Ephemeral Artifact

The paradigm shift requires a fundamental change in how we think about software artifacts.

Dimension: Old Pattern → New Pattern

  • Primary artifact: Code → Design document
  • Documentation role: Afterthought describing code → Input to generation
  • Source of truth: What the code does → What the design specifies
  • When they diverge: Update docs to match code → Fix design, regenerate code
  • Learning captured in: Code patterns & comments → Design docs & canon files
  • Review focus: Line-by-line code review → Design review before coding

The Technical Kernel

The compression ladder (from Chapter 3) applies directly to code, with technical artifacts replacing business artifacts.

COMPRESSION LADDER FOR CODE

  • World → You: Years of coding experience, patterns observed, mistakes made
  • You → Frameworks: coding_guidelines.md, architecture patterns, "what we reject and why"
  • Frameworks → OS: Project-specific canon files, team standards, infrastructure.md
  • OS → Artefacts: Generated code
  • Artefacts → Kernel: Bugs reveal missing specs, update design docs, encode lessons

The Technical Kernel Files

# technical_kernel/
├── infrastructure.md
# Enterprise-level patterns
# Cloud architecture, security, data flow
├── coding_guidelines.md
# Code-level standards
# Naming conventions, error handling, testing patterns
├── team_guidelines.md
# Team-specific choices
# Framework preferences, tooling, PR workflow
├── learned.md
# Recent lessons
# "Don't use library X for Y because..."
└── PR.md
# Current work context
# What we're building now, constraints

Hierarchy matters: Enterprise → Team → Personal → Current. Each level inherits from above and adds specificity. Enterprise patterns cascade to all teams, team choices override generic enterprise defaults, and PR.md is the most specific (this pull request).
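A minimal sketch of that cascade, assuming the file layout above: files are loaded from most general to most specific, so the guidance closest to the current task appears last in context:

from pathlib import Path

# Most general first, most specific last (layout assumed from the tree above)
CASCADE = [
    "technical_kernel/infrastructure.md",     # enterprise patterns
    "technical_kernel/coding_guidelines.md",  # code-level standards
    "technical_kernel/team_guidelines.md",    # team-specific choices
    "technical_kernel/learned.md",            # recent lessons
    "technical_kernel/PR.md",                 # current work context
]

def build_context(task: str) -> str:
    parts = [Path(p).read_text() for p in CASCADE if Path(p).exists()]
    parts.append(f"## Task\n{task}")
    return "\n\n".join(parts)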

The Bug Fix Decision

How you respond to bugs reveals which paradigm you're operating in.

❌ Old Approach: Nurse the Code

Scenario: Bug found in production

  • Patch the bug in the code directly
  • Add band-aid fixes and edge case handlers
  • Code accumulates complexity over time
  • Design and implementation drift apart

Outcome: Technical debt compounds, system harder to reason about

✓ New Approach: Fix the Design

Scenario: Bug found in production

  • Ask: what did the design doc miss?
  • Update the design to address root cause
  • Regenerate code from updated design
  • Design remains source of truth

Outcome: Design captures learning, code stays aligned with intent

The Two-Pass Pattern Applied to Bugs

Pass 1: Fix the design (the kernel)

Pass 2: Regenerate the code (the output)

Same pattern as Chapter 4, different domain.

First-Pass Accuracy Compounds

17-point accuracy improvement: 82% first-pass accuracy with the kernel vs 65% without, compounding into roughly a 3x productivity multiplier.

When first-pass code is accurate, a virtuous cycle begins. Few retries needed means context stays clean (no failed attempts polluting history). Clean context keeps accuracy high or improves it. Quality doesn't just add—it multiplies.

The Vicious Cycle (Without Kernel)

When first-pass code is wrong, multiple retries pollute the context with failed attempts. Polluted context leads to more errors on the next task. Quality degrades over the session.

Measured Impact

Without Kernel

  • First-pass accuracy: ~65%
  • Average task completion: 2.3 attempts
  • Session productivity decline: after 15-20 tasks
  • Context at task 20: ~75% polluted

With Kernel

  • First-pass accuracy: ~82%
  • Average task completion: 1.3 attempts
  • Session productivity: still sharp after 30+ tasks
  • Context at task 20: ~40% polluted

Spec-Driven Development

The workflow is straightforward once you internalize the paradigm shift:

  1. Write the design doc — intent, constraints, interface, error handling
  2. AI generates code from the design
  3. Test the code
  4. Extract learnings
  5. Update the design doc
  6. (If needed) Regenerate code from the updated design
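A sketch of that outer loop under stated assumptions: generate_code stands in for whatever model or agent call you use, design.md and src/feature.py are illustrative paths, and pytest is the assumed test runner:

import subprocess
from pathlib import Path

def generate_code(design: str) -> str:
    # Placeholder for the model/agent call that turns a design doc into code
    raise NotImplementedError

def spec_driven_cycle(design_path: str = "design.md", target: str = "src/feature.py") -> bool:
    design = Path(design_path).read_text()            # 1. the design doc is the input
    Path(target).write_text(generate_code(design))    # 2. code is generated, not hand-edited
    result = subprocess.run(["pytest", "-q"])         # 3. test the generated code
    if result.returncode != 0:
        # 4-5. extract the lesson, update the design doc, then regenerate
        print(f"Tests failed: update {design_path} and re-run this cycle")
        return False
    return True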

The Delete-and-Regenerate Test

The Test

Can you delete this code and regenerate it from design alone?

If YES:

The knowledge is in the kernel where it belongs

If NO:

The knowledge is trapped in the code—extract it

When to Apply

  • Before major refactors
  • When requirements change significantly
  • When new team members join
  • Quarterly, for core systems

What Gets Extracted

When regeneration fails, ask:

  • What decision was made in the code that isn't in the design?
  • What edge case handling is undocumented?
  • What implicit knowledge would a new developer need?

Extract these into design docs. Now regeneration works.

The Compounding Advantage in Code

# The Same Math (from Ch6)
Fix 1 design pattern → Helps 100 future components
Fix 5 design patterns → 500 component-improvements
By component 100, you're operating with 50+ kernel improvements

What Compounds

Architecture patterns

Once encoded, applied to all new code

Error handling

Standard patterns reduce per-component effort

Testing patterns

Consistent approach across codebase

Naming conventions

Less decision fatigue, more consistency

The Technical Debt Inversion

Old Pattern

Code accumulates debt over time

(entropy increases)

New Pattern

Design docs accumulate wisdom over time

(kernel improves)

With kernel: Each project improves future projects
Without kernel: Each project adds more to maintain

Key Takeaways

  1. Code is ephemeral; design is durable: The design doc is the kernel, code is the output.
  2. Same compression ladder, different artifacts: World → You → Frameworks → Technical Kernel → Code → back.
  3. When bugs are found, fix the design: Then regenerate. Don't nurse the code.
  4. First-pass accuracy compounds: 17-point improvement → 3x productivity via cleaner context.
  5. Spec-driven development: Design doc → AI generates → Test → Extract learnings → Update design.
  6. The delete-and-regenerate test: If you can't regenerate from design, knowledge is trapped.
  7. The math is identical: Fix 1 pattern → 100 future improvements. Kernel approach inverts technical debt.

Chapter 7 applied the doctrine to code generation—design docs as kernel, code as ephemeral output.

Chapter 8 applies the same pattern to a different domain: proposals and consulting work.

Next: Chapter 8 — Variant: Proposals and Consulting

08
Part III: Variants

Variant: Proposals and Consulting

Same doctrine, different domain—the Marketplace of One and industrial-scale bespoke proposals

"Win rate went from 40% to 80%. Proposal time dropped from 12 hours to 3.5 hours. Same market, same services, same person. The difference: a kernel that encoded 20 years of consulting pattern recognition into explicit frameworks."

Chapter 8 applies the compression ladder to consulting work. Same doctrine, different context: proposals become ephemeral outputs, your expertise becomes the durable kernel.

The flagship implementation: a Proposal Compiler that generates custom 30-page proposals at industrial scale.

The Marketplace of One

Traditional consulting strategy forces a trade-off:

Niche Strategy

Pick a segment, get good at it, but limit your market. Deep expertise, narrow reach.

Generalist Strategy

Serve everyone, but with generic solutions. Broad reach, shallow relevance.

The Third Option: Marketplace of One

YOUR KERNEL (constant)
├── marketing.md (your positioning)
├── frameworks.md (your methodology)
├── constraints.md (your boundaries)
└── patterns.md (your solution shapes)
+ CLIENT CONTEXT (variable)
= BESPOKE PROPOSAL (unique to this client)

Every proposal is custom. Every proposal uses the same methodology. Scalable AND specific.

"That's not 'niching'; it's industrial-scale bespoke."

The Proposal Compiler System

A Markdown operating system for generating custom 30-page proposals. Uses research tools for client discovery, Markdown files that call other Markdown files, and the kernel loaded before every generation.

The Four Kernel Files

marketing.md — Who You Are

• Voice characteristics, positioning, philosophy

• Ensures proposals sound like you

• Internal guidance for AI, not external copy

frameworks.md — How You Think

• Diagnostic frameworks (AI Readiness Assessment, Three-Lens)

• Decision patterns (Build vs Buy, Autonomy Ladder)

• The methodology that makes you distinct

constraints.md — What You Don't Do

• Risk thresholds ("Never Level 5+ for companies without IT")

• Deal-breakers ("Skip if change capacity < 2/5")

• Protects your reputation AND their outcomes

patterns.md — Your Solution Shapes

• Common architectures you recommend

• Phased rollout patterns

• Integration approaches

THE TWO-PASS APPLICATION

1️⃣
Pass 1: Compile Kernel

Load kernel into proposal generator

2️⃣
Pass 2: Generate Proposal

Apply kernel + client research → custom output

The Research Stage

Research becomes diagnostic when guided by frameworks. Instead of generic "What does this company do?", your kernel directs specific questions.

What Gets Researched

  • Company background and context
  • Industry-specific challenges
  • Current tech stack and capabilities
  • Leadership priorities and pain points
  • Competitive landscape

How Kernel Shapes Research

  • • "Check indicators for Autonomy Ladder phase"
  • • "Assess change capacity using our 5-point scale"
  • • "Identify Three-Lens alignment gaps"

The Synthesis Stage

AI generates the proposal using your kernel (voice, methodology, constraints), client research (context, specifics), and template patterns (structure, sections).

Before/After (Revisiting Ch1 Contrast)

❌ Without Kernel

  • • "Consider implementing a customer service chatbot"
  • • Generic advice that could apply to any company
  • • Can't defend reasoning to client

✓ With Kernel

  • • "Based on your R2 AI maturity, start with human-in-the-loop assist before autonomy"
  • • Specific recommendations using YOUR frameworks
  • • Reasoning is transparent and defensible

The Quality Difference

Metric: Without Kernel → With Kernel

  • First draft quality: 7-8/10 → 8-9/10
  • Iteration needed: 4-6 rounds → 1-2 rounds
  • Time to final: 10-12 hours → 3-4 hours
  • Client perception: "Generic" → "They really understand us"

The Unfair Advantage

6x productivity advantage: 3.4x faster × 2x win rate.

  • Speed: 12 hours → 3.5 hours (3.4x faster)
  • Quality: 70% → 95% (1.4x better)
  • Win rate: 40% → 80% (2x better)

Where the Advantage Comes From

  • Speed: Kernel-guided generation means less iteration
  • Quality: Frameworks ensure nothing important is missed
  • Win rate: Proposals feel custom AND methodologically rigorous

The Gap Widens

  • You: 80-100 proposals/year (systematic)
  • Competitor: 10-20 proposals/year (manual)
  • Each year: 4-5x more learning cycles
  • In 2 years: 200+ cycles vs 30 cycles

The Compounding Flywheel

Per-Proposal Learning

After each proposal, ask:

  • What worked that should be standard?
  • What client-specific insight generalises?
  • What framework needed an exception?
  • What was missing from research prompts?

Proposal-to-Kernel Encoding

What I Learned → Where It Goes

  • Healthcare needs compliance section → frameworks.md (Healthcare variation)
  • Small teams can't absorb Level 5 → constraints.md (team size threshold)
  • Founders care about different things → patterns.md (founder-led template)
  • Research always misses competitive context → research_prompts.md (competitor section)

# The Flywheel Math (from Ch6)
Fix 1 framework → Helps 100 future proposals
Proposal 1: 10 hours, 40% win rate, 5 frameworks
Proposal 50: 4 hours, 65% win rate, 50+ improvements
Proposal 100: 3 hours, 80% win rate, 60+ improvements

The Moat

"Anyone can use Claude or GPT. The tools are commoditized. Your frameworks are unique—distilled from YOUR experience, encoding YOUR risk posture and philosophy. The frameworks are the moat."

What's Actually Protected

Your Frameworks

20 years of pattern recognition, encoded

Your Judgment

Risk thresholds calibrated by experience

Your Voice

Positioning that clients recognise

Your Learning

100 cycles of compounding improvement

What Copying Gets Competitors

  • Copy the OUTPUT: helps them once
  • Copy the KERNEL (v1.0): they get your starting point (you're on v2.3)
  • Copy the PROCESS: they still lack your 100 cycles

The moat is the compounding flywheel, not any single artifact.

Key Takeaways

  1. Marketplace of One: Same kernel + variable client context = infinite custom outputs. No trade-off between scale and specificity.
  2. Four kernel files: marketing.md (identity), frameworks.md (methodology), constraints.md (boundaries), patterns.md (solutions).
  3. Research is diagnostic: Kernel frameworks DIRECT what you research. Research becomes input, not filler.
  4. 6x productivity advantage: 3.4x faster × 2x win rate. The numbers are dramatic.
  5. The flywheel compounds: Proposal 1 (10 hrs, 40%) → Proposal 100 (3 hrs, 80%).
  6. The moat is the flywheel: Not the tools, not single outputs—the compounding learning process.
  7. Same doctrine, different context: Everything from Part I applies. The compression ladder works for proposals exactly as it works for content and code.

Chapters 7-8 showed the doctrine applied to code generation and proposals.

Now it's time for action. Chapter 9 gives you a practical path to build your kernel this week.

Next: Chapter 9 — Build Your Kernel This Week

09
Part III: Variants (Action Chapter)

Build Your Kernel This Week

Practical steps to start building your first kernel—not someday, this week

"What do you keep explaining repeatedly? What do you wish junior team members understood? What does AI keep getting wrong in ways that frustrate you? That gap—between what you know and what AI produces—is your first framework waiting to be written."

This chapter is action, not explanation. The doctrine is established (Parts I-II). The variants are shown (Chapters 7-8). Now: practical steps to start building.

You don't need 50 frameworks. You need 3-5 to see the flywheel working.

The Minimum Viable Kernel

❌ Common Mistake

Trying to document everything before starting. "I'll build my kernel when I have time to do it properly."

Reality: you never have that time. Start now with a minimum viable kernel.

✓ Better Approach

Start with 3-5 core frameworks, add as gaps emerge. Build it this week, improve iteratively.

Fix: 3-5 frameworks this week, expand from there

What "Minimum Viable" Means

  • Enough to noticeably improve AI outputs
  • Enough to see the flywheel working
  • Small enough to build in a week
  • Specific enough to actually guide AI reasoning

The Framework Template

Use this structure to encode your expertise into explicit, reusable frameworks:

FRAMEWORK TEMPLATE
Framework Name:
[What it diagnoses or decides]
When to use:
[Trigger conditions—when should this be applied?]
Inputs required:
[What data/context do you need?]
Process:
[Step-by-step application—how do you use it?]
Outputs:
[What decision or diagnosis does it yield?]
Failure modes:
[When does this framework NOT apply? Exceptions?]

Example: AI Readiness Assessment

Framework: AI Readiness Assessment

When to use:

When evaluating whether a company is ready for AI implementation, especially agentic or autonomous systems.

Inputs required:

  • Team size and IT capabilities
  • Previous AI/tech project history
  • Leadership alignment (CEO, HR, IT perspectives)
  • Change management capacity
  • Current tech stack

Process:

  1. Score each of the 5 readiness dimensions (1-5 scale)
  2. Calculate the composite readiness score
  3. Map to an Autonomy Ladder phase recommendation
  4. Identify the top 2-3 gaps to address (see the sketch after this example)

Outputs:

  • Readiness score (1-5 composite)
  • Recommended Autonomy Ladder phase (1-6)
  • Gap analysis with priorities
  • Go/No-Go recommendation

Failure modes:

  • Doesn't apply to pure technology evaluation (non-AI projects)
  • Needs adaptation for enterprise vs SMB contexts
  • Skip if client has a recent successful AI deployment (update baseline)
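
A minimal sketch of how this process can run mechanically is below (Python; the dimension names, the phase mapping, and the go/no-go threshold are illustrative assumptions, not the author's calibrated values):

# readiness_score.py — illustrative sketch of the AI Readiness Assessment process
# Dimension names, phase mapping, and thresholds are assumptions for illustration only.

DIMENSIONS = ["team_capability", "project_history", "leadership_alignment",
              "change_capacity", "tech_stack"]

def assess(scores):
    """Score 5 dimensions (1-5), compute a composite, map to an Autonomy Ladder phase."""
    composite = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

    # Illustrative mapping from composite score to an Autonomy Ladder phase (1-6)
    phase = min(6, max(1, round(composite * 1.2)))

    # Top 2-3 gaps: the lowest-scoring dimensions
    gaps = sorted(DIMENSIONS, key=lambda d: scores[d])[:3]

    return {
        "composite": round(composite, 1),
        "recommended_phase": phase,
        "gaps": gaps,
        "go": composite >= 2.5,   # illustrative go/no-go threshold
    }

print(assess({"team_capability": 2, "project_history": 3, "leadership_alignment": 4,
              "change_capacity": 2, "tech_stack": 3}))

The exact numbers matter less than the fact that, once written down, the scoring rules can be applied the same way every time, by you or by the AI.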

The Four Core Files

Your kernel starts with four files. Each serves a distinct purpose:

1. marketing.md — Who You Are

Purpose: Ensure AI outputs sound like YOU, not generic consultant-speak

What goes in:

  • Your positioning (what makes you distinct)
  • Your philosophy (beliefs about your domain)
  • Your voice characteristics
  • Your values (what you stand for)

## Philosophy
- AI should augment humans, not replace
- Start surgical, earn the right to go broad
- Visible reasoning beats black-box

2. frameworks.md — How You Think

Purpose: Your diagnostic and decision frameworks, explicitly encoded

What goes in:

  • 3-5 core frameworks (minimum viable)
  • 10-15 frameworks (mature kernel)
  • Each framework written using the template from Section 9.2

## Diagnostic Frameworks
- AI Readiness Assessment
- Three-Lens Framework
- Change Capacity Evaluation

3. constraints.md — What You Don't Do

Purpose: Boundaries, risk thresholds, deal-breakers

What goes in:

  • Hard constraints (never cross these)
  • Soft constraints (flag but don't reject)
  • Risk thresholds (when to recommend against)

## Hard Constraints
- Never Level 5+ for companies without IT
- Never promise <4 weeks for agentic systems
- Never accept misaligned leadership

4. style.md — How You Communicate

Purpose: Formatting, tone, terminology consistency

What goes in:

  • Document structure preferences
  • Terminology choices
  • Formatting patterns
  • Length guidelines

## Structure
- Lead with recommendation, then evidence
- Bullet points, not walls of text
- Key Takeaways at end of each section

The Action Checklist: This Week

Build your minimum viable kernel in 7 days. Total time: 10-15 hours.

1-2
Days 1-2: Start marketing.md (2-3 hours)
  • □ Write your positioning in 2-3 sentences
  • □ List 3-5 beliefs/philosophy points
  • □ Describe your voice in 3-4 characteristics
  • □ Don't overthink—capture what exists in your head
3
Day 3: List your recurring patterns (1 hour)
  • □ What do you explain repeatedly?
  • □ What do juniors get wrong?
  • □ What does AI keep missing?
  • □ Write 5-7 bullet points
4-5
Days 4-5: Build 1 framework (2-3 hours)
  • □ Pick the pattern you use most often
  • □ Use the template from Section 9.2
  • □ Include failure modes (when it doesn't apply)
  • □ Test it: give AI the framework + a scenario, evaluate output
6-7
Days 6-7: Build 2 more frameworks + test (4-5 hours)
  • □ Apply same process to next two patterns
  • □ Create constraints.md with 3-5 entries
  • □ Create style.md with basic preferences
  • □ Load all four files into AI context and test (see the sketch below)
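
As a final step, here is a minimal sketch of what "load all four files into AI context" can look like (Python; the kernel/ directory name is an assumption, and the last step is left to whichever LLM client or chat interface you already use):

# load_kernel.py — assemble the four kernel files into one system prompt
from pathlib import Path

KERNEL_FILES = ["marketing.md", "frameworks.md", "constraints.md", "style.md"]

def build_system_prompt(kernel_dir="kernel"):
    """Concatenate the kernel files, clearly delimited, for use as a system prompt."""
    sections = []
    for name in KERNEL_FILES:
        text = Path(kernel_dir, name).read_text(encoding="utf-8")
        sections.append(f"=== {name} ===\n{text.strip()}")
    return "\n\n".join(sections)

system_prompt = build_system_prompt()
print(f"Kernel loaded: {len(KERNEL_FILES)} files, {len(system_prompt)} characters")

# Pass system_prompt as the system message to your LLM client of choice, generate
# a test output, then regenerate the same request without the kernel and compare.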

Minimum Viable Kernel Checklist

□ marketing.md created

□ Positioning (2-3 sentences)

□ Philosophy (3-5 beliefs)

□ Voice (3-4 characteristics)

□ frameworks.md created

□ 1 diagnostic framework (using template)

□ 1 decision framework (using template)

□ 1 implementation pattern (using template)

□ constraints.md created

□ 2-3 hard constraints

□ 1-2 soft constraints

□ 1-2 risk thresholds

□ style.md created

□ Document structure preference

□ Key terminology

□ Basic formatting rules

□ First test completed

□ Loaded kernel into AI context

□ Generated a test output

□ Compared to output without kernel

□ Noted 1-2 gaps for next iteration

Week 2 and Beyond

Per-Output Encoding

5 minutes after each output

  • What worked? What failed?
  • Add to constraints.md if something went wrong
  • Add to frameworks.md if a new pattern emerged

Monthly Review

1-2 hours per month

  • Which frameworks were never used? Remove them.
  • Which patterns keep recurring? Add them.
  • What has changed? Update thresholds.

Quarterly Audit

2-3 hours per quarter

  • Is the kernel still accurate?
  • Redundant frameworks? Consolidate.
  • Has your positioning evolved? Update it.

Common Mistakes to Avoid

Mistake 1: Over-Engineering Before Starting

Symptom: "I'll build my kernel when I have time to do it properly"

Fix: 3-5 frameworks this week, improve iteratively

Mistake 2: Frameworks Too Vague

Symptom: "Be strategic and consider the client's needs"

Fix: Specific trigger conditions, specific outputs, failure modes

Mistake 3: Frameworks Too Rigid

Symptom: "Always do X, never do Y, exactly Z steps"

Fix: Include "When this doesn't apply" section

Mistake 4: Encoding in Outputs Instead of Kernel

Symptom: Fixing the same thing in every output

Fix: If you fix it twice, encode it in the kernel

Mistake 5: Never Testing

Symptom: Kernel files exist but aren't loaded into AI context

Fix: Make loading kernel part of your workflow

The Starting Question

"What's one thing you know that AI doesn't?"

  • That's your first framework
  • Write it down using the template
  • Test it by loading it and generating
  • Evaluate: did it change the output?

The gap between what you know and what AI produces—that gap is exactly what the kernel fills.

Every time you notice the gap, you have a choice:

Fix the OUTPUT

Helps once

Fix the KERNEL

Helps forever

Choose the kernel.

Key Takeaways

  • 1 Minimum viable kernel means 3-5 frameworks: don't over-engineer. Start small, improve iteratively.
  • 2 Four core files: marketing.md (identity), frameworks.md (methodology), constraints.md (boundaries), style.md (communication).
  • 3 Use the template: Framework Name, When to use, Inputs, Process, Outputs, Failure modes.
  • 4 Build it this week: Days 1-2 marketing, Days 3-5 first framework, Days 6-7 remaining frameworks + test.
  • 5 Encode, don't nurse: If you fix something twice, put it in the kernel.
  • 6 The starting question: What do you know that AI doesn't? That's your first framework.
  • 7 Choose the kernel: Every time you notice a gap, fix the kernel (helps forever) not just the output (helps once).

Ebook Conclusion

Part I: The Spine

  1. The Stale Worldview Problem
  2. Frameworks as Worldview Patches
  3. The Compression Ladder
  4. Two-Pass Compilation

Part II: The Flagship

  5. The Content Flywheel in Action
  6. Worldview Recursive Compression

Part III: The Variants

  7. Code Generation
  8. Proposals and Consulting
  9. Build Your Kernel

The Core Message

Your AI outputs feel generic because the model reasons from stale, averaged patterns—not your expertise.

The fix isn't better prompting or bigger models.

The fix is compiling your worldview into frameworks that patch the model's reasoning.

The compression ladder creates a recursive loop where each cycle improves all future outputs.

Start today.

What's one thing you know that AI doesn't?

That's your first framework.

Appendix

References & Sources

A complete bibliography of external research, industry analysis, and practitioner frameworks

This ebook draws on a combination of primary research from major consulting firms and research institutions, industry analysis and commentary, and practitioner frameworks developed through enterprise AI transformation consulting. Sources are organized by type below.

Primary Research: LLM Knowledge & Training

AllMo.AI — "List of Large Language Model Cut-Off Dates"

Comprehensive documentation of knowledge cutoff dates for major LLMs, explaining how frozen training data affects model reasoning.

https://allmo.ai/articles/list-of-large-language-model-cut-off-dates

ArXiv — "Is Your LLM Outdated?"

Research on how LLM understanding lags behind knowledge cutoff dates, with comprehension patterns reflecting data from years earlier.

https://arxiv.org/html/2405.08460v3

ArXiv — "Large Language Models Encode Semantics"

Research on how transformer-based LLMs develop internal geometry for semantic representation, supporting the "basis vectors" metaphor for framework loading.

https://arxiv.org/html/2507.09709v1

Consulting Firm Research

Starmind — "Unlocking Tacit Knowledge in Consulting Firms"

Research on tacit knowledge in management consulting, including the finding that up to 90% of firm expertise is embedded in consultants' heads and rarely written down.

https://starmind.ai/blog/unlocking-tacit-knowledge-in-consulting-firms

Industry Analysis: Spec-Driven Development

GitHub Blog — "Spec-driven development with AI"

GitHub's analysis of the shift from "code is the source of truth" to "intent is the source of truth" in AI-assisted development.

https://github.blog/engineering/spec-driven-development-with-ai/

Martin Fowler, Thoughtworks — "Understanding Spec-Driven Development"

Thoughtworks' perspective on specifications becoming the maintained artifact rather than code.

https://martinfowler.com/

Thoughtworks Technology Radar, Volume 33 (2025)

Analysis of the Tessl Framework and spec-as-source approaches where specifications become primary maintained artifacts.

https://www.thoughtworks.com/radar

Chris Poel — "Software Engineering + AI = Future" (Medium)

Practitioner perspective on weekly regeneration from specs becoming normal practice.

https://medium.com/@chrispoel

Code Quality & AI Research

Veracode — "2025 GenAI Code Security Report"

Security analysis of AI-generated code and implications for development workflows.

https://www.veracode.com/resources/state-of-software-security

GitClear — "AI Copilot Code Quality: 2025 Research"

Research on code quality metrics when using AI coding assistants.

https://www.gitclear.com/

LeverageAI / Scott Farrell

Practitioner frameworks and interpretive analysis developed through enterprise AI transformation consulting. These sources inform the author's frameworks presented throughout the ebook.

Stop Nursing Your AI Outputs. Nuke Them and Regenerate.

Core doctrine on generation recipes as durable assets, the compilation stack, and output regeneration philosophy.

https://leverageai.com.au/stop-nursing-your-ai-outputs-nuke-them-and-regenerate/

The Proposal Compiler

Detailed case study of the Marketplace of One pattern, kernel building methodology, and compounding flywheel mathematics.

https://leverageai.com.au/wp-content/media/The_Proposal_Compiler_ebook.html

A Blueprint for Future Software Teams

Design documents as gospel, Definition of Done v2.0, model upgrade flywheel, and organizational learning patterns.

https://leverageai.com.au/a-blueprint-for-future-software-teams/

The AI Learning Flywheel: 10X Your Capabilities in 6 Months

Four-stage learning flywheel and compounding learning patterns for AI-assisted work.

https://leverageai.com.au/the-ai-learning-flywheel-10x-your-capabilities-in-6-months/

Context Engineering: Why Building AI Agents Feels Like Programming on a VIC-20 Again

Context management principles and the virtuous/vicious cycles of first-pass accuracy.

https://leverageai.com.au/context-engineering-why-building-ai-agents-feels-like-programming-on-a-vic-20-again/

Stop Picking a Niche. Send Bespoke Proposals Instead.

Economic analysis of the Marketplace of One strategy and customization economics inversion.

https://leverageai.com.au/stop-picking-a-niche-send-bespoke-proposals-instead/

Pre-Thinking Prompting: Why Your AI Outputs Fail

The two-job trap and meta-cognition patterns for AI-assisted work.

https://leverageai.com.au/pre-thinking-prompting-why-your-ai-outputs-fail-and-how-to-fix-them/

Frameworks Referenced in This Ebook

Key frameworks developed by the author and referenced throughout the text:

The Compression Ladder

World → You → Frameworks → OS → Artefacts → back

Two-Pass Compilation

Compile yourself first, then compile outputs

Worldview Recursive Compression

The compounding mechanism for kernel improvement

Marketplace of One

Same kernel + variable context = infinite custom outputs

AI Readiness Assessment

Five-dimension diagnostic for AI implementation readiness

Three-Lens Framework

CEO/HR/IT alignment diagnostic

Autonomy Ladder

Six-phase AI autonomy progression model

The Content Flywheel

Seven-stage content generation pipeline

Note on Research Methodology

This ebook integrates primary research from academic and industry sources with practitioner frameworks developed through direct consulting experience. External sources are cited inline throughout the text using formal attribution. The author's own frameworks and interpretive analysis are presented as author voice without inline citation, with underlying sources listed in this references chapter for transparency.

Research was compiled between October 2024 and December 2025. Some links may require subscription access. All statistics and claims from external sources are attributed to their original publications.

For questions about methodology or sources, contact: scott@leverageai.com.au