LeverageAI Framework Series

Progressive Resolution

The Diffusion Architecture for Complex Work

Why documents collapse in editing—and how to architect them so they don't.

A coarse-to-fine approach borrowed from AI image generation, applied to writing, code, research, and proposals.

What You'll Learn

  • Why complex work collapses—the Jenga problem explained
  • The L0-L5 Resolution Ladder for stabilizing structure before detail
  • Think/Do Loop metacognition for controlling when to back up
  • Exception protocol: Detect → Escalate → Refactor → Recompile
  • Practical applications for documents, code, research, and proposals

By Scott Farrell

LeverageAI

01
Part I: The Architecture

The Jenga Problem

Why Complex Work Collapses

"You're not failing at editing. You're committing too early."

You're deep into Chapter 7 of a proposal. Something feels off about the argument flow. You trace it back—the problem is in Chapter 3.

You fix Chapter 3. But now Chapter 4 doesn't follow logically. You adjust Chapter 4, and suddenly Chapter 6 contradicts the new framing. What started as one fix has become a game of Jenga—each correction threatening to topple the whole structure.

The sinking realisation: you might need to rewrite half the document.

If this sounds familiar, you're not alone. This pattern—where fixing one thing breaks three others—is so common in complex writing that most people assume it's inevitable. It isn't. The cascade happens for a specific, preventable reason.

The Cascade Problem

What's Actually Happening

Documents—especially complex ones—aren't linear tapes. They're networks of dependencies:

  • Chapter 3 sets up concepts that Chapter 5 builds on
  • Chapter 4 assumes a framing established in Chapter 2
  • Chapter 7's recommendations depend on Chapter 3's analysis

When you change one node, ripples propagate through every connected node. This is why "just fix that one section" almost never works for structural problems.
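
A toy sketch makes the ripple concrete (hypothetical chapter names; plain Python, no libraries needed):

```python
# Chapter dependencies as a simple graph: each entry lists
# the earlier chapters a chapter builds on (hypothetical structure).
DEPENDS_ON = {
    "ch4": ["ch2"],   # Chapter 4 assumes Chapter 2's framing
    "ch5": ["ch3"],   # Chapter 5 builds on Chapter 3's concepts
    "ch6": ["ch4"],
    "ch7": ["ch3"],   # Chapter 7's recommendations depend on Chapter 3
}

def affected_by(changed):
    """Return every chapter that must be re-checked when `changed` changes."""
    hit, frontier = set(), [changed]
    while frontier:
        node = frontier.pop()
        for chapter, deps in DEPENDS_ON.items():
            if node in deps and chapter not in hit:
                hit.add(chapter)
                frontier.append(chapter)
    return hit

print(affected_by("ch3"))  # {'ch5', 'ch7'}: one change, two re-checks
```

Run it on "ch2" and the ripple runs further: Chapter 4, then Chapter 6 through Chapter 4. That transitive spread is the cascade.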

Why Linear Thinking Fails

We read documents linearly, so we think we should write them linearly. The mental model looks like this:

outline → draft → edit → polish

But this is like building a house by starting with the paint colour. You've made high-resolution decisions (word choices, sentence structures, paragraph flows) before low-resolution decisions (what the chapter is about, what evidence it needs, how it connects to other chapters) have stabilised.

The Premature Commitment Pattern

The Resolution Mismatch

High-Resolution Commitments

Polished prose, specific phrasings, formatted sections, word-by-word editing

Made too early

Low-Resolution Commitments

Overall structure, key claims, evidence requirements, chapter purposes

Should stabilise first

The mistake: locking in high-resolution before low-resolution stabilises. Every polished paragraph becomes a constraint on future changes. By the time you've written 30 pages, you're not free to restructure—you're playing defensive Jenga.

The 100x Rule: What Software Learned the Hard Way

Industry Evidence

"The Systems Sciences Institute at IBM has reported that 'the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase.'"
— Celerity, "The True Cost of a Software Bug"
Cost Escalation by Development Stage
Stage Typical Cost Multiplier
Requirements $100 1x
Design $500-2,000 5-20x
Coding $2,000-5,000 20-50x
Testing $5,000-15,000 50-150x
Production $15,000-100,000+ 150-1000x
Source: Ranger.net, "How AI QA Prevents Costly Late-Stage Bugs"

Why Costs Explode Exponentially

  • Rework compounds: Fix in production → change code → update tests → fix documentation → retrain users → handle support tickets
  • Opportunity cost: Time spent fixing cascades = time not spent building value
  • Hidden dependencies: Late-stage changes reveal dependencies you didn't know existed
  • Context switching: Engineers who wrote the code months ago now have to reload all context
"Due to exponentially escalating costs of rework, meta-work (work about work), responsibility hand-offs, interruptions, and eventually customer support, bug triage, and maintenance, fixing a production bug may cost 100x more than fixing a bug at design time."
— Eric Elliott, Medium, "The Outrageous Cost of Skipping TDD & Code Reviews"

The Document Parallel

The economics are identical for documents—just compressed in time:

  • Fix a structural problem at outline stage: 5-minute adjustment
  • Fix the same problem after 30 pages of prose: hours of cascading rework

Documents don't have "production releases," but they have the equivalent: the moment you've invested significant time in polished prose.

Multi-Scale Objects (Maps, Not Scrolls)

What Documents Actually Are

A document is not a tape you read from start to finish. It's a multi-scale object with structure at different resolutions:

Whole-document level

Thesis, major moves, overall arc

Chapter level

Purpose, key claims, evidence needs

Section level

Argument flow, transitions, supporting points

Paragraph level

Specific claims, phrasings, citations

Each scale has its own coherence requirements. Changes at one scale ripple to other scales.

Why We Miss This

  • Reading is linear: We experience documents as streams of text
  • Writing feels linear: We start at the beginning and work forward
  • Editing feels local: We zoom in on sentences and paragraphs

None of these habits prepare us to see documents as multi-dimensional structures.

The Real Unit of Change

"Fix the argument in Chapter 3" doesn't mean changing words. It means changing:

  • Chapter 3's relationship to the thesis
  • Chapter 3's setup for Chapters 4-7
  • Chapter 3's evidence requirements
  • Downstream chapters' assumptions about what Chapter 3 established

Until you see documents as relationship graphs, you'll keep being surprised by cascades.

The Failure Mode We're Solving

What Goes Wrong (The Pattern)

The Linear Workflow Failure Pattern
1 Start writing prose too early — Feels productive; actually bakes in premature commitments
2 Discover structural problem mid-document — Research reveals missing evidence, or argument doesn't flow
3 Attempt local fix — "I'll just rewrite that section"
4 Cascade begins — Adjacent sections now don't connect properly
5 Escalating rework — Each fix creates new inconsistencies
6 Eventual abandonment or "good enough" — Either scrap it or ship something that doesn't quite work

Why "Better Editing" Doesn't Fix This

  • The problem isn't that you edit poorly—it's that you're editing the wrong thing
  • Prose-level editing can't fix structure-level problems
  • No amount of sentence polish makes a fundamentally incoherent argument coherent
  • You're treating symptoms while the disease spreads

What We Need Instead

The Solution Preview

  • A way to make high-level structural decisions before committing to low-level prose
  • A systematic method to check structural coherence at each resolution
  • A protocol for when structural problems emerge: back up, don't patch

This is progressive resolution—and it's exactly how other complex systems get built (images, software, compilers).

Chapter Takeaways

1

Key insight

Documents are coupled systems. Changes at one point cascade unpredictably. The Jenga problem isn't a skills failure—it's an architecture failure.

2

The pattern

High-resolution commitments (prose) made before low-resolution structure (intent, claims, evidence) stabilises → expensive cascading rework when structure changes.

3

What to do differently

Delay prose until structure stabilises. When structure changes, don't patch prose—back up and refactor at the structural level.

We've defined the problem: premature commitment causes cascades. But what's the alternative to linear workflow?

There's a system that solves this exact problem: it's called diffusion, and it powers the AI image generators that can create coherent, detailed images from pure noise.

Next: What image generation teaches us about building complex work →

02
Part I: The Architecture

The Diffusion Insight

What Image Generation Teaches Us

The most sophisticated image generators in the world don't draw pictures left-to-right, pixel-by-pixel. They start with pure noise—and progressively refine.

Stable Diffusion, DALL-E, Midjourney—these systems can create photorealistic images from text descriptions. But they don't work like humans drawing: starting at one corner, filling in details as they go.

Instead, they begin with complete randomness—visual static—and through successive passes, shapes emerge, then structure, then detail, then texture. At no point do they commit to individual pixels until the overall composition has stabilised.

The insight: This is exactly how complex documents should be built.

How Diffusion Models Work

The Process (Simplified)

The Diffusion Generation Process

1. Pure Noise: random pixels

2. Rough Shapes: global structure

3. Structure: composition emerges

4. Detail: fine features

5. Final Image: texture & polish

  1. Start with noise: The image begins as pure Gaussian noise—random pixels with no structure
  2. Forward diffusion (training): During training, the model learns by watching real images gradually corrupted into noise
  3. Reverse diffusion (generation): To generate, the model reverses the process—predicting and removing noise step by step
  4. Progressive refinement: Each step reveals more structure, from coarse shapes to fine details
"At their core, Diffusion Models are generative models. In computer vision tasks specifically, they work first by successively adding gaussian noise to training image data. Once the original data is fully noised, the model learns how to completely reverse the noising process, called denoising. This denoising process aims to iteratively recreate the coarse to fine features of the original image."
— Paperspace, "Generating Images with Stable Diffusion"
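
A minimal sketch of that generation loop, with `predict_noise` standing in for the trained network (an illustration of the idea, not any particular model's API):

```python
import numpy as np

def predict_noise(x, t):
    # Stand-in for the trained denoising network (a U-Net in
    # Stable Diffusion); this toy version just damps the signal.
    return 0.1 * x

def generate(steps=50, shape=(64, 64)):
    x = np.random.randn(*shape)        # step 1: pure Gaussian noise
    for t in reversed(range(steps)):   # step 3: reverse diffusion
        x = x - predict_noise(x, t)    # predict and remove noise;
        # coarse structure stabilises in early steps, fine detail
        # emerges in later ones (step 4)
    return x

image = generate()
```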

Coarse to Fine, Not Left to Right

What Doesn't Happen

The model doesn't decide "this pixel is red" and move on. It doesn't commit to corner-by-corner detail.

What Does Happen

Global composition stabilises first (rough shapes, overall layout), then progressively finer details emerge.

Each pass adds resolution, but never commits to high-resolution details before low-resolution structure is stable.

"In forward diffusion, an image is progressively corrupted by introducing noise until it becomes completely random noise... Reverse diffusion employs a series of Markov Chains to recover the data from the Gaussian noise by gradually removing the predicted noise at each time step. This iterative refinement process generates a realistic image with fine-grained details."
— Picsellia, "Exploring Stable Diffusion"

Why Diffusion Beats Other Approaches

The Key Advantages

1. Preserves Semantic Structure

Global meaning stabilises before local details are rendered. The generated content "aligns with the original input and maintains its intended meaning."

"It preserves the semantic structure of the input data. This leads to more coherent and consistent images, where the generated content aligns with the original input and maintains its intended meaning." — Picsellia
2. Avoids Mode Collapse

Mode collapse occurs when generative models produce limited, repetitive outputs. Diffusion's progressive approach ensures diversity and variation. By not committing to details too early, the model explores more possibilities.

"It overcomes the problem of Mode Collapse, by ensuring a wider range of features and variations in the generated images. Mode Collapse is a common issue in GAN, it refers to the phenomenon where a generative model produces limited or repetitive samples, ignoring the diversity present in the data distribution." — Picsellia
3. Produces Exceptional Quality

Iterative denoising captures "intricate details and realistic textures." Results "closely look like the target distribution." Quality comes from respecting the coarse-to-fine architecture.

Why These Properties Matter for Documents

Diffusion Advantage Document Parallel
Preserves semantic structure Maintains overall thesis coherence
Avoids mode collapse Prevents getting stuck in repetitive framing
Exceptional quality through iteration Polished prose after structure stabilises
Coarse before fine Intent before prose

Latent Space: Working in Compressed Resolution

The Efficiency Trick

Raw images are huge: millions of pixels at high resolution. Diffusion in pixel space would be computationally expensive. The solution: work in "latent space"—a compressed representation.

"The latent diffusion architecture reduces memory usage and computing complexity by applying the diffusion process to a lower-dimensional latent space. This distinguishes latent diffusion models like Stable Diffusion from traditional ones: they generate compressed image representations instead of using the Pixel space."
— viso.ai, "Master Stable Diffusion"

What Latent Space Means

  • Instead of manipulating individual pixels (high resolution), the model works with compressed features (low resolution)
  • Structure and meaning are preserved in the compressed space
  • Only when composition is stable does the model expand to full pixel resolution (a sketch of this workflow follows below)
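
A sketch of the same idea, with toy stand-ins for the model and decoder (real systems use a trained denoiser and VAE decoder):

```python
import numpy as np

def denoise_step(z, t):
    return 0.9 * z                    # toy stand-in for the denoising model

def decode(latent):
    # Stand-in for a VAE decoder: expand latent back to pixel space.
    return np.kron(latent, np.ones((8, 8)))   # toy 8x upsampling

z = np.random.randn(8, 8)        # start in the compressed latent space
for t in reversed(range(50)):
    z = denoise_step(z, t)       # all structural refinement happens here
pixels = decode(z)               # expand to full resolution only at the end
```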

The Writing Parallel

1 Image Generation

Latent space = compressed image features

Pixel space = final detailed image

Work in latent space first, expand to pixels later

2 Document Writing

Latent space = outline, chapter cards, structural notes

Pixel space = final prose with all the details

Work in structure first, expand to prose later

Trying to write prose directly = working in pixel space from the start

Translation to Complex Work

Your Document's "Noise"

  • In diffusion, noise is the starting point—pure randomness
  • In writing, your "noise" is the initial intent: vague ideas, general direction, raw inspiration
  • Just like diffusion refines noise into an image, progressive resolution refines intent into a polished document

The Refinement Passes

Diffusion Pass Document Pass
Noise → Rough shapes Intent → Thesis + major moves
Shapes → Structure Thesis → Chapter purposes
Structure → Details Chapters → Section arguments
Details → Texture Sections → Polished prose

The Critical Constraint

Diffusion:

Never lock in pixel colours until composition is stable

Writing:

Never lock in prose until structure is stable

Both systems work because they refuse to commit at high resolution before low resolution stabilises.

The Diffusion Mindset for Writing

What to Change

Old Mental Model (Linear)
  • "I'll start at the beginning and write my way through"
  • "Once I write something, I'll edit it until it's good"
  • "The draft is the deliverable in rough form"
New Mental Model (Diffusion)
  • "I'll start with global structure and progressively add detail"
  • "Structure stabilises before prose happens"
  • "The draft is a resolution layer, not a rough version of the final"

Practical Implications

1

Start with noise, not prose

Your first pass is intent and structure, not sentences

2

Multiple passes at increasing resolution

Each pass adds detail, not length

3

Don't polish what isn't stable

If structure might change, don't invest in prose

4

Respect the sequence

Coarse to fine, always

What This Enables

  • Structural problems caught when they're cheap to fix
  • Prose written with clear constraints (knows its purpose, evidence, connections)
  • Changes propagate cleanly because structure is explicit
  • Coherence across the whole document, not just within paragraphs

Chapter Takeaways

1

Key insight

Diffusion models succeed because they refuse to commit to high-resolution details (pixels) until low-resolution structure (composition) stabilises. Writing can work the same way.

2

The architecture

Coarse → Fine, with each resolution layer stabilising before the next begins.

3

The constraint

Never lock in prose until structure is stable—just as diffusion never locks in pixels until composition is stable.

Diffusion gives us the mental model: coarse to fine, progressive refinement. But what exactly are the "resolution layers" for documents?

Next: The Resolution Ladder—a concrete framework with six layers from Intent (L0) to Prose (L5) →

03
Part I: The Architecture

The Resolution Ladder

L0 to L5: Six Layers of Commitment

Complex work has resolution layers—like image resolution, but for meaning and commitment. Each layer has a different cost to change.

We've seen the problem (Jenga cascades) and the solution concept (diffusion's coarse-to-fine). Now we need a concrete framework: what are the actual resolution layers for documents?

This chapter defines the Resolution Ladder—six layers from Intent to Prose, each with distinct characteristics and change costs.

The Six Resolution Layers

Overview

The Resolution Ladder

From Intent to Prose: Six Layers of Progressive Commitment

Layer Name What It Contains Cost to Change
L0 Intent Why you're writing, for whom, success criteria, what's out of scope Very low
L1 Whole-Doc Silhouette One-page summary: thesis, major moves, evidence inventory Low
L2 Chapter Cards Each chapter's purpose, key claims, dependencies, evidence needs Medium
L3 Section Skeletons Headings, bullet arguments, transitions—not prose yet Medium
L4 Paragraph Plans Each paragraph's job, what evidence it uses Higher
L5 Prose + Citations Actual sentences, polished writing Highest
The principle: Don't advance to a higher resolution until the current layer passes its stabilisation gate.
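
One way to hold the ladder as data, for reference in later sketches (names and costs are taken from the table above):

```python
from dataclasses import dataclass

@dataclass
class Layer:
    level: int
    name: str
    contains: str
    change_cost: str

LADDER = [
    Layer(0, "Intent", "why, audience, success criteria, scope", "very low"),
    Layer(1, "Whole-Doc Silhouette", "thesis, major moves, evidence inventory", "low"),
    Layer(2, "Chapter Cards", "purpose, key claims, dependencies, evidence needs", "medium"),
    Layer(3, "Section Skeletons", "headings, bullet arguments, transitions", "medium"),
    Layer(4, "Paragraph Plans", "each paragraph's job and evidence", "higher"),
    Layer(5, "Prose + Citations", "polished sentences and sources", "highest"),
]
```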

L0 — Intent: The Invariant

What L0 Contains

Why you're writing

What outcome should this produce?

For whom

Who is the reader? What do they already know/believe?

Success criteria

What does "good" look like? How will you know it worked?

Out of scope

What are you explicitly NOT covering?

Why L0 Matters

  • Intent is the gravity well that keeps everything aligned
  • Without stable intent, every other decision floats
  • Changes at L0 cascade to everything—but L0 changes are cheap because nothing else exists yet

Example L0 (Strategy Document)

Intent Specification

Intent:

- Why: Help CFO make kill/fix/double-down decisions on AI portfolio

- For whom: Skeptical CFO, $500K AI spend, needs board-ready material

- Success: CFO can defend decisions at next board meeting

- Out of scope: Technical implementation, vendor selection, hiring decisions

Stabilisation Gate for L0

Before advancing to L1, verify:

☐ Purpose is clear in one sentence

☐ Audience is specific enough to make decisions

☐ Success criteria are concrete

☐ Scope is bounded

L1 — Whole-Doc Silhouette: The One-Page Draft

What L1 Contains

Thesis

The core claim or argument in one paragraph

Major Moves

The sequence of arguments/sections (not chapters yet)

Evidence Inventory

What you have, what you need, what's uncertain

Why L1 Matters

  • L1 is the "blurry whole picture" that fits in your head (and in a context window)
  • At L1, you can evaluate whether the document will satisfy intent

Example L1 (Strategy Document)

Whole-Document Silhouette

Thesis:

Current AI spend is fragmented across orphaned pilots;

consolidation under governance will convert waste into compound returns

Major moves:

1. Current state: audit shows X% waste, Y% overlap, Z failure rate

2. Why this happens: misaligned incentives, missing governance

3. Framework: The AI Investment Steward model

4. Recommendations: kill/fix/double-down decisions with rationale

5. Roadmap: 90-day implementation with milestones

Evidence inventory:

Have: audit data, industry benchmarks, internal metrics

Need: competitor analysis, specific ROI projections

Uncertain: HR readiness data (may need to soften claims)

Stabilisation Gate for L1

Before advancing to L2, verify:

☐ Thesis serves intent

☐ Major moves build toward thesis

☐ Evidence inventory is complete enough

☐ The whole document is visible in the silhouette

L2 — Chapter Cards: The Evidence Contract

What L2 Contains

For each chapter, you create a "card" that specifies:

  • Purpose: What this chapter does in service of the thesis
  • Key claims: What this chapter asserts
  • Dependencies: What earlier chapters must establish for this one to work
  • Evidence needs: What data/quotes/examples this chapter requires

Why L2 Matters

L2 is where you make explicit commitments about evidence. Before writing a chapter, you should know what it needs to prove and what proof you have.

If evidence is missing, you discover it at L2 (cheap) not L5 (expensive).

Example Chapter Card

Chapter Card: Chapter 3 — Current State Analysis

Purpose:

Establish baseline AI spend and failure modes to create urgency

Key claims:

  • X% of current spend is duplicated across teams
  • Y% of pilots have stalled without clear kill/proceed decision
  • Total waste estimated at $Z per year

Dependencies:

  • Chapter 2 must have established why AI portfolio thinking matters
  • Reader must understand the Three Traps framework (from Ch2)

Evidence needs:

  • Internal audit data (have)
  • Industry failure rate benchmarks (have: McKinsey/Gartner)
  • Specific examples of pilot stalls (need: 2-3 cases)

Evidence budget: 3 data points, 2 case references, 1 framework reference

The Evidence Budget Concept

At L2, you don't need final quotes—you need to know what kind of evidence each chapter requires.

  • L2-level thinking: "Need industry benchmarks"
  • L5-level thinking: "Need McKinsey 2025 State of AI, page 47, second paragraph"

L2 is about knowing the category; L5 is about having the specific instance.
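
A chapter card is structured enough to encode directly. A minimal sketch, with field names of our own choosing, mirroring the example card above:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    status: str                     # "have" | "need" | "uncertain"

@dataclass
class ChapterCard:
    purpose: str
    key_claims: list[str]
    dependencies: list[str]
    evidence: list[Evidence]

card = ChapterCard(
    purpose="Establish baseline AI spend and failure modes to create urgency",
    key_claims=["X% of spend is duplicated across teams"],
    dependencies=["Ch2 establishes why portfolio thinking matters"],
    evidence=[
        Evidence("internal audit data", "have"),
        Evidence("industry failure-rate benchmarks (McKinsey/Gartner)", "have"),
        Evidence("2-3 pilot-stall cases", "need"),
    ],
)

# The L2 discovery this enables: missing evidence surfaces here, not at L5.
missing = [e.description for e in card.evidence if e.status == "need"]
```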

L3 — Section Skeletons: Arguments Before Prose

What L3 Contains

Section headings

The bones of each chapter

Bullet arguments

What each section will say (not how it says it)

Transitions

How sections connect to each other

Why L3 Matters

L3 is the last layer before prose—it's your final check on argument structure. At L3, you catch: missing arguments, illogical sequences, redundant sections.

Fixing these at L3 costs minutes; fixing them at L5 costs hours.

Example Section Skeleton

Chapter 3: Current State Analysis — Section Skeleton

## 3.1 The Audit Methodology

- How we gathered data (audit process, interviews, spend analysis)

- What we measured (initiatives, spend, status, outcomes)

- Limitations and caveats

## 3.2 What We Found: The Numbers

- Total spend: $485K across 12 initiatives

- Breakdown by department and initiative type

- Headline: 7 of 12 in pilot purgatory

[TRANSITION: Numbers tell part of the story. Patterns tell the rest.]

## 3.3 The Patterns We See

- Pattern 1: Duplication (3 teams, same problem)

- Pattern 2: Pilot stalls (no clear criteria)

- Pattern 3: Shadow AI (ungoverned tools)

[TRANSITION: These patterns aren't random—they trace to systemic causes.]

L4 — Paragraph Plans: The Last Mile Before Prose

What L4 Contains

  • Each paragraph's job: What it needs to accomplish
  • Evidence assignment: Which specific evidence this paragraph uses
  • Length guidance: Roughly how much space this gets

Why L4 Matters

L4 is where you assign evidence to specific locations. At L4, you discover if you have enough evidence (or too much). L4 transforms the abstract skeleton into concrete writing tasks.

Example Paragraph Plan

Section 3.2: What We Found: The Numbers

Para 1 (2-3 sentences): Open with headline number

Job: Establish total AI spend to create context

Evidence: Internal audit data - "$485,000 across 12 initiatives"

Tone: Factual, no judgment yet

Para 2 (4-5 sentences): The duplication problem

Job: Show waste from overlap

Evidence: Audit finding - "Three teams building chatbots independently"

Quote: Include internal observation if available

Para 3 (3-4 sentences): The stalled pilots

Job: Show decision paralysis

Evidence: "7 of 12 initiatives in 'pilot' status for 6+ months"

Link: Connect to "One-Error Death Spiral" trap

Para 4 (2 sentences): Industry context

Job: Normalise (this isn't just us)

Evidence: McKinsey stat on pilot failure rates

Tone: "This is common—and fixable"

L5 — Prose + Citations: The Final Layer

What L5 Contains

  • Actual sentences, polished writing
  • Specific citations with sources
  • Formatting, voice, readability

Why L5 Is Last

  • L5 is expensive to change—polished prose takes effort
  • By the time you reach L5, you should know exactly what each paragraph needs to do
  • L5 is about how to say it, not what to say—that was decided at L3-L4

The "Only Prose After Stabilisation" Rule

If you're tempted to write prose before L4, ask: "Do I know what this paragraph needs to do?"

If the answer is fuzzy, you're not ready for L5. Writing prose too early = creating expensive commitments you may need to unwind.

Why the Ladder Works

Change Costs at Each Layer

Layer What Changes Look Like Time Cost
L0 "Actually, this is for the Board, not the CFO" 5 minutes
L1 "Let's swap the order of these major moves" 15 minutes
L2 "Chapter 4 needs a different evidence type" 30 minutes
L3 "These two sections should be merged" 1 hour
L4 "This paragraph needs a different example" 30 minutes
L5 "Rewrite this section's prose from scratch" 3+ hours

What "Stabilise" Means

  • Stabilisation ≠ perfection
  • Stabilisation = "good enough that we won't need to undo it"
  • You can always refine later, but you shouldn't have to restructure

Chapter Takeaways

1

Key insight

Documents have six resolution layers, from Intent (L0) to Prose (L5). Each layer has different change costs. Stabilise each layer before advancing.

2

The framework

L0 (Intent) → L1 (Silhouette) → L2 (Chapter Cards) → L3 (Section Skeletons) → L4 (Paragraph Plans) → L5 (Prose)

3

The rule

Don't advance to the next resolution layer until the current one passes its stabilisation gate. Evidence planning (L2) happens before evidence placement (L4).

We have the resolution layers—but who decides when to advance and when to back up? Progressive resolution needs a supervisor: a metacognitive controller.

Next: The Think Loop and Do Loop—how to manage your own resolution transitions →

04
Part I: The Architecture

The Metacognitive Controller

Think Loop and Do Loop

The Resolution Ladder tells you what layers exist. But who decides when to advance and when to back up? You need a supervisor—a metacognitive controller.

Having a framework is necessary but not sufficient. You also need a way to operate the framework: to decide what resolution you're working at and whether to continue, advance, or retreat.

This is metacognition—thinking about thinking. In progressive resolution, metacognition takes the form of two interacting loops.

Two Loops, Not One

The Standard Model (Single Loop)

How most people work:

  1. Receive task
  2. Do work
  3. Check if work is done
  4. If not done, do more work
  5. Repeat until finished

Problem:

This loop only checks "is the work done?"—not "is this the right work?" or "am I working at the right level?"

The Progressive Resolution Model (Dual Loop)

Do Loop (Object-Level)
  • Generate the next artifact at current resolution
  • Execute the task as understood
  • Produce outputs
Think Loop (Meta-Level)
  • Watch the Do Loop
  • Ask: Are we solving the right problem?
  • Ask: At the right resolution?
  • Ask: With the right constraints?
  • Decide: Continue, advance, or back up?

The Do Loop: Getting Work Done

What the Do Loop Does

  • Executes within a resolution layer
  • Drafts, writes, organises, polishes
  • Follows the current plan
  • Produces artifacts (intents, silhouettes, cards, skeletons, paragraphs, prose)

Do Loop Characteristics

Focused

Works on the current task

Local

Doesn't question the bigger picture

Productive

Creates output

Myopic

Can lose sight of whether output serves the goal

When the Do Loop Runs Alone

Without a Think Loop:

  • You draft prose for hours without checking if the structure makes sense
  • You polish sentences while the argument has a hole
  • You finish a chapter only to realise it doesn't serve the thesis
  • You hit "done" on something that doesn't meet intent

The Think Loop: Watching the Work

What the Think Loop Does

  • Monitors the Do Loop
  • Runs gate checks
  • Decides resolution transitions (advance or back up)
  • Maintains alignment with intent

Think Loop Questions

The Think Loop periodically asks:

  1. Alignment: Does what we're producing still satisfy the intent?
  2. Resolution fit: Are we working at the right layer, or should we zoom in/out?
  3. Dependency coherence: Are our assumptions about other layers still valid?
  4. Evidence status: Do we have what we need, or are we hoping it appears?
  5. Scope discipline: Are we staying within bounds, or has scope crept?

When to Run the Think Loop

Before starting a new layer

Is the previous layer stable?

After completing a draft at any layer

Does it pass its gate?

When stuck or struggling

Is the struggle because we're at the wrong resolution?

When something feels off

Trust the instinct and check alignment

Gate Checks: What the Think Loop Evaluates

L0 → L1 Gate

Before advancing from Intent to Silhouette:

☐ Purpose is clear in one sentence

☐ Audience is specific enough to make decisions

☐ Success criteria are concrete

☐ Scope is bounded

L1 → L2 Gate

Before advancing from Silhouette to Chapter Cards:

☐ Thesis serves intent

☐ Major moves build toward thesis

☐ Evidence inventory is complete enough

☐ The whole document is visible in the silhouette

L2 → L3 Gate

Before advancing from Chapter Cards to Section Skeletons:

☐ Every chapter serves the thesis

☐ Dependencies are consistent

☐ Evidence is planned for every key claim

☐ Missing evidence has explicit tasks, not hopes

L3 → L4 Gate

Before advancing from Skeletons to Paragraph Plans:

☐ Sections flow logically

☐ No argument gaps

☐ Transitions connect smoothly

☐ Skeleton matches chapter card's purpose

L4 → L5 Gate

Before advancing from Paragraph Plans to Prose:

☐ Every paragraph has a clear job

☐ Evidence is assigned to specific paragraphs

☐ Evidence actually exists (confirmed, not assumed)

☐ Paragraph sequence flows
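
These gates are mechanical enough to run as a checklist routine. A sketch (checklist text from the gates above; the yes/no answers come from your own judgment, or an AI reviewer's):

```python
GATES = {
    "L0->L1": ["purpose is clear in one sentence",
               "audience is specific enough to make decisions",
               "success criteria are concrete",
               "scope is bounded"],
    "L4->L5": ["every paragraph has a clear job",
               "evidence is assigned to specific paragraphs",
               "evidence actually exists (confirmed, not assumed)",
               "paragraph sequence flows"],
}

def run_gate(name, answers):
    """Return (passed, failed_items) for one stabilisation gate."""
    failed = [item for item in GATES[name] if not answers.get(item, False)]
    return (len(failed) == 0, failed)

# Example: every L0 check answered "yes" -> the gate passes.
passed, failed = run_gate("L0->L1", {item: True for item in GATES["L0->L1"]})
```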

The Think Loop's Three Decisions

Decision 1: Continue

Conditions:

Gate passes, work is progressing, no red flags

Action:

Keep working at current resolution

Decision 2: Advance

Conditions:

Current layer is stable, gate passes, ready for more detail

Action:

Move to next resolution layer

Decision 3: Back Up

Conditions:

Gate fails, structural problem detected, can't proceed cleanly

Action:

Return to appropriate lower resolution layer
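
Put together, the two loops form a small control system. A sketch, with the gate stubbed out (in practice the gate is the checklists above):

```python
import random

def do_loop(layer):
    """Object-level: produce the artifact at the current resolution."""
    print(f"drafting at L{layer}")

def gate_passes(layer):
    # Stub: in practice, run the layer's checklist (human or AI judgment).
    return random.random() > 0.2

layer = 0
while layer <= 5:
    do_loop(layer)                # Do Loop: focused, local, productive
    if gate_passes(layer):        # Think Loop: decide
        layer += 1                # advance
    else:
        layer = max(0, layer - 1) # back up (simplified: one layer)
```

The simplification here (back up exactly one layer) is where the next chapter's Exception Protocol takes over: it decides how far back to go.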

Why Backing Up Is Hard (And Essential)

The Sunk Cost Trap

  • You've written 10 pages of prose
  • You realise Chapter 3's framing is wrong
  • The temptation: "I'll just adjust the prose—I can't throw this away"
  • The reality: Patching L5 when the problem is at L2 creates hidden inconsistencies

When to Back Up

Signal What It Means Back Up To
Prose doesn't flow despite editing Structure problem L3 or L2
Evidence doesn't support the claim Claim problem L2
Chapter doesn't serve thesis Thesis or chapter problem L1 or L2
Not sure what document is for Intent problem L0
Everything feels off Unknown—check from L0 L0

The Counterintuitive Truth

Backing up feels like losing progress. But:

  • Time spent patching wrong-layer problems: hours with poor results
  • Time spent backing up and fixing at the right layer: fraction of the time, clean results

Backing up is the faster path when the problem is structural.

Running the Loops in Practice

1 Pattern: Time-Boxed Think Loop
  • Do Loop runs for 30-60 minutes
  • Think Loop interrupts: "Gate check—how are we doing?"
  • If clear, continue. If fuzzy, run fuller assessment.
2 Pattern: Transition-Triggered Think Loop
  • Whenever you're about to advance layers, run Think Loop
  • Before starting L3 after finishing L2: "Is L2 actually stable?"
  • Before writing prose: "Is the paragraph plan solid?"
3 Pattern: Discomfort-Triggered Think Loop
  • When work feels hard in a bad way: pause and run Think Loop
  • "Am I struggling because this is difficult, or because I'm at the wrong resolution?"
  • Often, struggle signals a layer mismatch
4 Pattern: Completion Think Loop
  • After finishing any artifact, run Think Loop
  • "Does this pass its gate?"
  • "Am I ready to advance?"

The Think Loop's Vocabulary

Internal Dialogue

Train yourself to ask these questions:

  • ? "What resolution am I working at right now?"
  • ? "Is this the right resolution for what I'm doing?"
  • ? "Does the previous layer support what I'm trying to do here?"
  • ? "If this changes, what else changes?"
  • ? "Am I polishing something that might need restructuring?"

Team Vocabulary

When working with others (or with AI):

  • "Let's check what resolution we're at"
  • "Should we back up a layer?"
  • "Is L2 stable enough to proceed?"
  • "That feels like an L5 concern—are we ready for L5?"

Chapter Takeaways

1

Key insight

Progressive resolution needs a metacognitive controller—a Think Loop that watches the Do Loop and decides when to continue, advance, or back up.

2

The two loops

Do Loop (object-level work), Think Loop (meta-level supervision)

3

The Think Loop's job

Run gate checks, maintain alignment with intent, decide resolution transitions. Its job is NOT to "be smart"—it's to control resolution, scope, and commitment.

We know how to advance through layers (Think Loop gives permission). But what about when things go wrong? When a gate fails?

Next: The Exception Protocol—Detect, Escalate, Refactor, Recompile—the systematic way to handle problems without creating cascades →

05
Part I: The Architecture

The Exception Protocol

Detect → Escalate → Refactor → Recompile

Gates fail. Evidence disappears. Structures break. What then?

The Think Loop's gate checks will sometimes say "no." A claim can't be supported. A chapter doesn't serve the thesis. The flow doesn't work. When this happens, you need a systematic response—not panic, not patching, not hoping it resolves itself.

This chapter defines the Exception Protocol: the four-step process for handling problems without creating cascades.

"Don't glue the wobbling block; rebuild the layer above it."

Why Local Patching Fails

The Instinct

  1. You discover a problem at L5 (prose doesn't flow)
  2. Natural response: "I'll rewrite this paragraph"
  3. If that doesn't work: "I'll rewrite this section"
  4. Still not working: "I'll rewrite the whole chapter"

Why This Creates Debt

  • You're patching at the level where symptoms appear, not where causes live
  • Prose doesn't flow because the structure has a problem
  • Rewriting prose without fixing structure = hiding the problem
  • The inconsistency becomes invisible but doesn't disappear
  • Eventually, it surfaces somewhere else—or creates subtle incoherence readers can feel but can't name

The Compounding Effect

Fred Brooks documented this pattern decades ago: fixing a defect has a 20% to 50% chance of introducing another defect. Each patch creates new stress points that require additional patches.

The Patching Spiral
1

Patch 1

Makes prose work locally, but strains adjacent prose

2

Patch 2

Fixes adjacent prose, but now the section feels forced

3

Patch 3

Smooths the section, but now it doesn't quite match the chapter's purpose

N

Patch N

You've spent hours polishing something that still doesn't work, and you can't see why anymore

The Exception Protocol

Overview

When a gate fails or a structural problem emerges:

1

DETECT

Signal that a constraint has failed

2

ESCALATE

Move up to where the problem is representable

3

REFACTOR

Fix the problem at that layer

4

RECOMPILE

Regenerate downstream artifacts

Why This Order Matters

  • Detect without escalate = vague unease without action
  • Escalate without refactor = knowing what's wrong but not fixing it
  • Refactor without recompile = fixed upstream but stale downstream

The sequence is a complete response; skip steps and problems persist.
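
The protocol reads naturally as a single routine. A sketch (the escalation map anticipates the failure-type table below; detection happens before the call, and refactoring remains a human or AI judgment passed in as a function):

```python
ESCALATE_TO = {
    "evidence gap": 2,          # fix the claim on the chapter card
    "structural flaw": 3,       # fix the section skeleton
    "dependency violation": 2,
    "intent drift": 0,
    "scope creep": 0,
}

def handle(failure_type, artifacts, refactor, regenerate):
    """Detect -> Escalate -> Refactor -> Recompile, as one routine.

    Step 1 (detect) happens upstream: the failure arrives already named.
    `artifacts` maps layer number to its current artifact."""
    target = ESCALATE_TO[failure_type]               # 2. escalate
    artifacts[target] = refactor(artifacts[target])  # 3. refactor the blueprint
    for layer in range(target + 1, 6):               # 4. recompile downstream:
        artifacts[layer] = regenerate(layer, artifacts[layer - 1])  # regenerate, never edit

artifacts = {i: f"L{i} artifact" for i in range(6)}
handle("evidence gap", artifacts,
       refactor=lambda a: a + " (claim corrected)",
       regenerate=lambda n, up: f"L{n} regenerated from: {up}")
```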

Step 1: Detect

What Detection Looks Like

Signals that something has failed:

  • Gate check fails (explicit)
  • Writing feels unusually hard (implicit)
  • You keep rewriting the same section (implicit)
  • Prose "sounds right" but doesn't quite make sense (implicit)
  • Evidence doesn't actually support the claim you want to make (explicit)

Types of Failures

Failure Type Example Symptom
Evidence gap "I need a stat but can't find one" Claim feels unsupported
Structural flaw "These sections don't connect" Flow problems
Intent drift "This doesn't feel like what we set out to write" Vague wrongness
Scope creep "We're now covering X which wasn't in scope" Expanding without decision
Dependency violation "This assumes something we didn't establish" Logical gaps
The Key Move

Name the failure. Vague discomfort is hard to act on. "I detected an evidence gap in the Chapter 4 claim about ROI" is actionable.

Step 2: Escalate

What Escalation Means

  • Identify the resolution layer where the problem is representable
  • This is usually higher (lower number) than where you noticed it
  • The goal: find the layer where fixing the problem is cheap and complete

Escalation Examples

Noticed At Problem Type Escalate To
L5 (prose) Flow doesn't work despite editing L3 (section skeleton)
L5 (prose) Claim can't be supported L2 (chapter card)
L4 (para plan) Don't know what this paragraph should do L3 (section skeleton)
L3 (skeleton) Chapter doesn't serve thesis L1 (silhouette)
L2 (chapter card) Not sure who this is for L0 (intent)
The Escalation Question

Ask: "At what resolution layer does this problem become a simple, contained statement?"

  • • "The prose doesn't flow" = vague (might be L5, might be L2)
  • • "Section 3.2 and 3.3 are redundant" = clear (L3 problem)
  • • "Chapter 4's purpose is unclear" = clear (L2 problem)
  • • "We're not sure if this is for the CFO or the Board" = clear (L0 problem)

Step 3: Refactor

What Refactoring Means

  • Change the artifact at the escalation level to fix the problem
  • Don't change downstream artifacts yet—they depend on this layer
  • Make the change clean and complete at this level

Refactoring Examples

Escalated to L2 (Chapter Card):

Old: "Chapter 4 claims 40% ROI"

Problem: Evidence only supports 15-25% range

Refactored: "Chapter 4 claims 15-25% ROI" or "Chapter 4 claims directional improvement without quantifying"

Escalated to L3 (Section Skeleton):

Old: Section 3.2 and 3.3 cover overlapping ground

Problem: Redundancy

Refactored: Merge into single section, reassign points clearly

Escalated to L1 (Silhouette):

Old: Major move 3 is "detailed implementation plan"

Problem: Implementation is out of scope per L0

Refactored: Major move 3 becomes "high-level roadmap only"

The Refactoring Mindset

You're not fixing prose—you're fixing the blueprint. Changes at this level are architectural, not cosmetic. A good refactor makes the downstream work obvious.

Step 4: Recompile

What Recompilation Means

  • Regenerate all downstream artifacts that depended on what you changed
  • Don't try to "edit" downstream—regenerate from the new state

Recompilation Scope

If you refactored L2:
  • → Regenerate L3 (section skeletons) for affected chapters
  • → Regenerate L4 (paragraph plans) for affected sections
  • → Regenerate L5 (prose) for affected paragraphs
If you refactored L1:
  • → Regenerate L2 (chapter cards) that depended on changed elements
  • → Then cascade: L3, L4, L5 for affected areas

Why Regenerate, Not Edit

  • Edited downstream artifacts carry assumptions from the old state
  • "Let me just tweak the prose to reflect the new claim" = likely to miss subtle inconsistencies
  • Regeneration from the new blueprint ensures alignment
  • This is why AI-assisted work makes progressive resolution powerful: regeneration is cheap—when the cost of generating and regenerating approaches zero, it becomes cheaper to rebuild than to patch

The Protocol in Action: A Scenario

Scenario: Writing Chapter 4

You're at L5, writing prose. Something feels wrong—the argument isn't landing.

1

Step 1: Detect

You pause and name it: "The claim that AI saves 40% costs isn't supported by the evidence I have. I've been trying to write around it."

2

Step 2: Escalate

The problem isn't prose—it's the claim. Escalate to L2 (Chapter 4's card).

Looking at the card: "Key claims: AI deployment saves 40% operational costs."

3

Step 3: Refactor

Options: (A) Find evidence that supports 40%, (B) Change claim to what evidence supports: "15-25% cost reduction", (C) Change claim to qualitative.

You choose B. Refactored L2 claim: "Key claims: AI deployment reduces operational costs by 15-25%."

4

Step 4: Recompile

With the new L2 in place:

  • L3: Update section skeleton to reference the lower number
  • L4: Adjust paragraph plans to present evidence for 15-25%
  • L5: Regenerate prose with accurate claim

Result: 30 minutes of protocol execution → Clean, supportable argument → No hidden inconsistencies

(Contrast: hours of prose patching that never quite works)

When NOT to Use the Protocol

Simple Fixes Stay Local

Not every problem requires escalation:

  • Typo in prose → fix at L5
  • Awkward phrasing → fix at L5
  • Missing word → fix at L5
The Test

Ask: "Is this problem contained, or does it reveal something upstream?"

  • Contained: fix locally
  • Upstream implication: run the protocol

When in doubt, escalate—it's cheaper to check upstream than to patch and hope.

Chapter Takeaways

1

Key insight

When something breaks, don't patch at the level where you noticed it. Back up to where the problem is representable, fix it there, and regenerate downstream.

2

The protocol

Detect (name the failure) → Escalate (find the right layer) → Refactor (fix the blueprint) → Recompile (regenerate downstream)

3

The mantra

"Don't glue the wobbling block; rebuild the layer above it."

We've defined the architecture (Resolution Ladder), the controller (Think/Do Loop), and the exception handler (Detect→Escalate→Refactor→Recompile). But this isn't new—software has been doing this for decades.

Next: The software parallel—why progressive resolution is proven architecture, not experimental theory →

06
Part I: The Architecture

The Software Parallel

Why This Isn't New

Progressive resolution isn't experimental theory. Software engineering has proven this architecture for 50 years.

The patterns we've described—resolution layers, stabilisation gates, escalation protocols—map directly to how software is built. Compilers, development methodologies, cost research: all point to the same conclusion.

This chapter grounds progressive resolution in software evidence. Not because writing is software, but because both are instances of complex work that fails when you commit to details before structure stabilises.

The Mapping: Software ↔ Documents

Resolution Layers in Both Domains

Software ↔ Document Layer Mapping
Layer: Software Engineering ↔ Document Writing

L0: Requirements / Specification (what we're building, for whom, success criteria) ↔ Intent + Audience (why we're writing, for whom, what success looks like)

L1: System Architecture (high-level components, data flow, major subsystems) ↔ Whole-Doc Silhouette (thesis, major moves, evidence inventory)

L2: Module / Interface Design (component responsibilities, contracts between modules) ↔ Chapter Cards (each chapter's purpose, claims, dependencies)

L3: Function Design (function signatures, algorithms, internal logic outline) ↔ Section Skeletons (headings, bullet arguments, transitions)

L4: Detailed Implementation Plan (pseudocode, data structures, edge cases) ↔ Paragraph Plans (each paragraph's job, evidence assigned)

L5: Code (actual implementation in a programming language) ↔ Prose + Citations (actual sentences, polished writing)

Tests: Unit / Integration Tests (does the code do what the spec says?) ↔ Coherence Checks (does prose match intent? claims supported?)

The Same Principle

In both domains: higher-numbered layers depend on lower-numbered layers. Changes cascade downward. Stabilise structure before committing to detail.

Intermediate Representation: Why Compilers Use Layers

How Compilers Work

"An intermediate representation (IR) is the data structure or code used internally by a compiler or virtual machine to represent source code. Use of an intermediate representation such as this allows compiler systems like the GNU Compiler Collection and LLVM to be used by many different source languages to generate code for many different target architectures."
— Wikipedia, "Intermediate representation"

The Compilation Pipeline

SRC (Source Code): human-readable

AST (Abstract Syntax Tree): parsed structure

IR (Intermediate Representation): optimisable form

ASM (Assembly): architecture-specific

BIN (Machine Code): executable

"The compilation process successively lowers the code through a series of such representations, taking advantage of the useful properties of each representation."
— Cornell CS, "Intermediate Representations"

Why Compilers Don't Skip Layers

  • High-level optimisations are applied at IR (intermediate) level
  • Target-specific details only applied at later stages
  • Changes at source level propagate cleanly through each representation
  • Jumping from source to machine code would be intractable
The Parallel

Document IR = your L1-L3 artifacts (silhouette, chapter cards, skeletons). These are intermediate representations that exist between intent and prose, allowing you to optimise structure before committing to final form.
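
A hand-written toy lowering of one statement shows the idea (illustrative data only, not a real compiler):

```python
# One statement, lowered through successive representations.
source = "a = 2 + 3"

# AST: parsed structure, still shaped like the source
ast_form = ("assign", "a", ("add", ("num", 2), ("num", 3)))

# IR: a flat, optimisable form; high-level optimisation happens here
ir = [("const", "t0", 2),
      ("const", "t1", 3),
      ("add",   "t2", "t0", "t1"),
      ("store", "a",  "t2")]

# Constant folding at the IR level, before any target-specific commitment
ir_optimised = [("const", "t0", 5),
                ("store", "a",  "t0")]

# Assembly: architecture-specific detail committed last (toy syntax)
asm = ["MOV r0, #5", "STR r0, [a]"]
```

The structural decision (fold 2 + 3 into 5) was made cheaply in the intermediate form; only then was the expensive, target-specific form generated. That is the L1-L3 workflow in compiler clothing.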

The 100x Rule: Why Late Changes Cost More

Industry Research

"The Systems Sciences Institute at IBM has reported that 'the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase.'"
— Celerity, "The True Cost of a Software Bug"

Why the Multiplier Exists

"Due to exponentially escalating costs of rework, meta-work (work about work), responsibility hand-offs, interruptions, and eventually customer support, bug triage, and maintenance, fixing a production bug may cost 100x more than fixing a bug at design time, and over 15x more than fixing a bug at implementation time."
— Eric Elliott, Medium

The Document Implication

The cost curve applies to documents just as it does to software:

  • Fix at intent (L0): 5 minutes
  • Fix at silhouette (L1): 15 minutes
  • Fix at chapter cards (L2): 30 minutes
  • Fix at prose (L5): hours of cascading rework

The same structural problem at L5 costs 10-100x more time than at L1.

Requirements and Success: The Research

Clear Requirements Before Development

"One standout statistic was that projects with clear requirements documented before development started were 97 percent more likely to succeed."
— The Register, "268% higher failure rates for Agile software projects"
"Putting a specification in place before development begins can result in a 50 percent increase in success, and making sure the requirements are accurate to the real-world problem can lead to a 57 percent increase."
— The Register, "268% higher failure rates for Agile software projects"

Waterfall vs. Iterative

"The Standish Group's Chaos Report provides additional context, documenting that waterfall projects experience a forty-nine percent failure rate compared to agile projects' ten to eleven percent failure rate."
— CoreStory, "Specification-Driven Development as an Enabler of Agile Methodology"

Why This Isn't Waterfall

The Misconception

"Resolution layers" might sound like waterfall—planning everything upfront before doing anything. But there's a crucial difference:

Waterfall
  • Lock structure early
  • Don't revisit requirements
  • Specifications are immutable
  • Linear, one-way progression
Progressive Resolution
  • Iterate at each layer until stable
  • Back up when needed
  • Structures are changeable (just cheaper early)
  • Advance only when stable, retreat when not
"Waterfall's failure stemmed not from the existence of specifications but from their immutability once created. The expense of modifying specifications in waterfall projects created lock-in effects that prevented adaptation to changing requirements. In contrast, modern specification-driven development treats specifications as mutable, low-cost artifacts that can be modified as frequently as needed."
— CoreStory, "Specification-Driven Development"

The Key Difference

  1. Waterfall: Lock structure → can't change → failure when reality doesn't match plan
  2. Progressive Resolution: Iterate freely at each layer → advance when stable → back up when needed → structure serves reality
"Specification-driven development does not represent a return to waterfall methodology but rather the removal of technical constraints that have historically limited agile methodology's effectiveness... When artificial intelligence can generate implementations from specifications in minutes rather than months, the cost of iteration approaches zero, enabling true responsiveness to changing requirements."
— CoreStory, "Specification-Driven Development"

Connection: Context Engineering

Progressive resolution connects to another principle: effective AI context management requires a tiered memory architecture—like CPU cache hierarchies—rather than monolithic context, with data organised by access frequency. The resolution layers create natural memory tiers.

L1 (Working Context) = Current section prose (L4-L5)

L2 (Reference Context) = Structure and cards (L1-L3)

L3 (Archive) = Full document history (accessed rarely)
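
A sketch of tier-aware prompt assembly (the structure and field names here are hypothetical, following the mapping above):

```python
doc = {
    "intent":     "Why / for whom / success criteria (L0)",
    "silhouette": "Thesis + major moves (L1)",
    "cards":      {"ch3": "Purpose, claims, evidence needs (L2)"},
    "sections":   {"3.2": "Skeleton and paragraph plans (L3-L4)"},
}

def build_context(chapter, section):
    """Assemble working + reference tiers into one prompt; the archive
    tier (full document history) is retrieved only on demand."""
    return "\n\n".join([
        doc["intent"],            # invariant: always in context
        doc["silhouette"],        # reference tier
        doc["cards"][chapter],    # reference tier
        doc["sections"][section], # working tier: the prose being drafted
    ])

prompt = build_context("ch3", "3.2")
```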

For a deeper dive on context management patterns, see Context Engineering: Why Building AI Agents Feels Like Programming on a VIC-20 Again.

Chapter Takeaways

1

Key insight

Progressive resolution is proven architecture—software engineering has used this pattern for decades (compiler IR, requirements-first development, cost curve research).

2

The evidence

100x cost for late-stage fixes. 97% more success with clear requirements. Compilers use intermediate representations for the same reasons we use document layers.

3

Not waterfall

Progressive resolution isn't "plan everything first." It's "iterate freely at each layer, advance when stable, back up when needed." AI makes regeneration cheap enough for this to work.

We've built the complete architecture: Resolution Ladder, Think/Do Loop, Exception Protocol, and grounded it in software evidence.

Now it's time to see it in action with a complete worked example.

Next: Building a strategy document with progressive resolution—from intent to polished deliverable →

07
Part II: In Practice

The Worked Example

Building a Strategy Document with Progressive Resolution

Theory is nice. Now let's build something.

This chapter walks through a complete worked example: a consultant building a 20-page AI strategy recommendation for a CFO. We'll trace the document from intent through polished prose, showing how each resolution layer functions and how the exception protocol handles problems that emerge.

L0 — Lock Intent

Starting Point

Before any structure, lock the invariants: why are we writing this, for whom, what does success look like?

L0: Intent Specification

Why:

Help CFO make kill/fix/double-down decisions on AI portfolio

For whom:

Skeptical CFO, $500K AI spend, needs board-ready material

Secondary: Board members (will see excerpts)

Success criteria:

CFO can defend recommendations at next board meeting

Clear decision framework for each initiative

ROI case substantiated with evidence

Out of scope:

Technical implementation details

Vendor selection

Hiring recommendations

Gate Check: L0

Before advancing:

Purpose is clear in one sentence: "Help CFO make portfolio decisions"

Audience is specific: Skeptical CFO needing board material

Success is concrete: CFO can defend at board meeting

Scope is bounded: No implementation, no vendors, no hiring

Gate passes → Advance to L1

L1 — Draft the Silhouette

The One-Page Draft

L1: Whole-Document Silhouette

Thesis:

Current AI spend is fragmented across orphaned pilots.

Consolidation under portfolio governance will convert waste into compound returns.

Major moves:

1. Executive Summary

Key findings, recommendations, ROI case

2. Context: Why AI Portfolio Thinking

Set up why scattered pilots fail, introduce portfolio lens

3. Current State Analysis

Audit findings, waste patterns, failure modes

4. The AI Portfolio Framework

Decision framework for kill/fix/double-down

5. Recommendations

Specific decisions for each initiative

6. Implementation Roadmap

90-day milestones, governance structure

Evidence inventory:

HAVE: Internal audit data, industry benchmarks (McKinsey/Gartner)

NEED: ROI projections, competitor analysis

UNCERTAIN: HR readiness data (may need to soften claims)

Gate Check: L1

Before advancing:

Thesis serves intent: Yes—enables portfolio decisions

Major moves build toward thesis: Context → Analysis → Framework → Recommendations

~ Evidence inventory: Some gaps—create research task for ROI projections

Whole document visible in silhouette: Yes

Gate passes (with research task noted) → Advance to L2

L2 — Create Chapter Cards

Example Chapter Card

Chapter Card: Chapter 3 — Current State Analysis

Purpose:

Establish baseline AI spend and failure modes to create urgency for portfolio approach

Key claims:

  • 35% of current AI spend is duplicated across teams
  • 7 of 12 pilots have stalled without clear kill/proceed decision
  • Total waste estimated at $175K per year

Dependencies:

  • Chapter 2 must establish why portfolio thinking matters
  • Reader must understand "pilot purgatory" concept from Ch2

Evidence needs:

  • Internal audit data (have)
  • Industry failure rate benchmarks (have: McKinsey)
  • Specific pilot stall examples (need: 2-3 cases)
Chapter Card: Chapter 4 — The AI Portfolio Framework

Purpose:

Provide decision framework that CFO can apply to any AI initiative

Key claims:

  • Three-lens evaluation: Strategic fit, Technical feasibility, ROI potential
  • Clear decision matrix: Kill / Fix / Double-Down
  • Framework produces defensible, consistent decisions

Dependencies:

  • Chapter 3 must have established current problems to solve

Evidence needs:

  • Industry framework support (have: portfolio theory literature)
  • Example application to 2-3 pilots (can construct from audit)

Gate Check: L2

Before advancing:

Every chapter serves thesis

Dependencies are consistent (no circular refs, all setups exist)

Evidence planned for key claims

~ Missing evidence has explicit tasks: Need pilot stall examples

Gate passes → Advance to L3

L3 — Build Section Skeletons

Example Skeleton

L3: Chapter 3 Section Skeleton

## 3.1 Audit Methodology

- Brief: how we gathered data (interviews, spend analysis)

- Scope: 12 initiatives across 4 departments

- Limitations caveat (3 sentences)

## 3.2 The Numbers

- Total spend: $485K across 12 initiatives

- Department breakdown (table)

- Headline: 7 of 12 in pilot purgatory

[TRANSITION: Numbers tell part. Patterns tell the rest.]

## 3.3 The Patterns

- Pattern 1: Duplication (3 chatbot projects)

- Pattern 2: Pilot stalls (no criteria for proceed/kill)

- Pattern 3: Shadow AI (ungoverned tools)

[TRANSITION: Patterns trace to systemic causes → Ch4 addresses]

## 3.4 What This Costs

- Direct waste: $175K/year estimate

- Opportunity cost: unable to compound learnings

- Risk: competitors moving faster

Exception Protocol in Action

The Problem Emerges

At L4 (paragraph planning for Chapter 5):

Realise the ROI claim for Initiative 7 is unsupported. The chapter card says "projected 25% cost reduction" but evidence only supports 10-15% range.

Running the Protocol

1

DETECT

"The ROI claim for Initiative 7 (25% reduction) cannot be supported by available evidence. I've been trying to write around it but the paragraph doesn't land."

2

ESCALATE

This isn't an L4 problem (paragraph wording). The claim itself is in the L2 chapter card. Escalate to L2.

Checking L2: "Key claims: Initiative 7 delivers 25% cost reduction"

3

REFACTOR (at L2)

Options considered:

  • A. Find better evidence (none available)
  • B. Change claim to match evidence: "10-15% cost reduction"
  • C. Reframe as qualitative: "meaningful operational improvement"

Decision: Option B. Update chapter card claim.

Refactored L2:

"Key claims: Initiative 7 delivers 10-15% cost reduction (conservative estimate based on industry benchmarks)"

4

RECOMPILE

With corrected L2:

  • ✓ Check L1: Does 10-15% still serve thesis? Yes—still demonstrates value
  • ✓ Regenerate L3: Update section skeleton references
  • ✓ Regenerate L4: Update paragraph plans for this claim
  • ✓ Regenerate L5: Write prose with accurate claim

Result:

  • 30 minutes total (including investigation)
  • Clean, supportable claim
  • No hidden inconsistencies
  • CFO can defend the number at board meeting

Contrast: hours of prose patching trying to make 25% "sound right" without evidence

L4 → L5: Final Generation

Sample Paragraph Plan (L4)

Section 3.2 Paragraph Plans

Para 1: Open with headline

Job: Establish total spend context

Evidence: "$485,000 across 12 initiatives"

Length: 2-3 sentences, factual tone

Para 2: The duplication pattern

Job: Show waste from overlap

Evidence: "Three teams building customer service chatbots independently"

Length: 3-4 sentences

Para 3: Industry context

Job: Normalise ("this is common")

Evidence: McKinsey pilot failure stat

Length: 2 sentences

Sample Prose Output (L5)

Our audit identified $485,000 in AI-related spending distributed across 12 distinct initiatives. This figure represents direct technology costs; when fully-loaded with internal time allocation, the true investment approaches $600,000 annually.

Perhaps most striking: three separate teams are independently building customer service chatbots. Each team selected different platforms, trained separate models on similar data, and operates without awareness of parallel efforts. Conservative estimates place the duplication cost at $95,000—funds that could instead accelerate a consolidated, higher-quality solution.

This fragmentation mirrors industry patterns. McKinsey's 2025 State of AI survey found that 62% of enterprise AI initiatives remain stuck in pilot phases,6 unable to demonstrate production value or secure ongoing investment.

Sidebar: What If We'd Written Prose First?

The Alternative Timeline

Day 1: Start writing Chapter 3. Get 2,000 words in.

Day 2: Write Chapter 4. Realise the framework doesn't quite match how Chapter 3 set things up.

Day 3: Go back and "adjust" Chapter 3 to fit Chapter 4.

Day 4: Write Chapter 5. The ROI claim doesn't have evidence. Try to write around it.

Day 5: Discover Chapter 5's hedged language doesn't match Chapter 4's confident framework.

Day 6: Rewrite portions of Chapters 3, 4, and 5. Things still feel off.

Day 7: Show draft to colleague. "The structure is confusing."

Day 8-10: Major restructure. Most of the prose gets thrown out.

Total time: 10+ days. Half the prose discarded. Frustration throughout.

Progressive Resolution Timeline

Day 1: L0 (30 min) + L1 (2 hrs)

Day 2: L2 chapter cards (3 hrs)

Day 3: L3 skeletons (2 hrs) + catch evidence gap, run exception protocol (30 min)

Day 4-5: L4 + L5 writing (prose flows because structure is solid)

Day 6: Review, minor polish

Total time: 6 days. Minimal rework. Confidence throughout.

Chapter Takeaways

1

The sequence

L0 (intent) → L1 (silhouette) → L2 (chapter cards) → L3 (skeletons) → L4 (paragraph plans) → L5 (prose). Each layer with gate checks before advancing.

2

Exception protocol in practice

When the ROI claim failed at L4, we escalated to L2, refactored the claim, and recompiled downstream. 30 minutes vs. hours of prose patching.

3

The payoff

Structure work at L0-L3 feels slower but prevents the cascade rewrites that make linear workflows fail. Total time is less, confidence is higher.

We've seen progressive resolution work for a strategy document. But there's a deeper reason this architecture succeeds with AI assistance: it solves the context window problem.

Next: Why this architecture works with AI—the context window benefits of progressive resolution →

08
Part II: In Practice

Context Window Benefits

Why This Works with AI

Progressive resolution isn't just good craft—it's a context window hack that turns the model's biggest weakness into a design constraint you can exploit.

AI language models have finite context windows—a limit on how much text they can "see" at once. Complex documents push these limits. Without structure, AI loses coherence across long work.

Progressive resolution solves this by design. This chapter explains why the architecture that prevents Jenga cascades also happens to be the architecture that enables long-form AI-assisted work.

The Context Problem

What Context Windows Are

  • Context window = how much text the AI can "see" at once
  • Measured in tokens (roughly ¾ of a word)
  • Current models: 128K-200K tokens7 (large, but not infinite)
  • An 80-page document: ~40,000 tokens (fits technically)
  • An 80-page document + instructions + history + formatting: pushes limits

Why It's Not Just About Size

Even if everything technically fits:

Attention Dilution

Model attends less precisely to earlier content as context grows8

Coherence Degradation

Consistency drops as context grows larger

Instruction Following

Later instructions may override earlier ones

Lost Structure

The document's architecture becomes noise

The Research

"Without external knowledge graphs to provide structured memory, LLMs inevitably lose coherence over time—their flat context windows simply cannot maintain the complex relationships and temporal consistency needed for reliable long-term operation."
— Anthony Alcaraz, LinkedIn
"All examined LLMs exhibit significant performance degradation in multi-turn interactions. The 'LLMs Get Lost' study demonstrates an average 39% drop in performance when tasks span multiple conversation turns rather than single prompts."
— Anthony Alcaraz, LinkedIn

What This Means for Documents

Without structure:

1 Write Chapter 1 → send to AI
2 Write Chapter 2 → AI has "forgotten" Chapter 1's nuances
3 Write Chapter 7 → AI has no real sense of Chapters 1-6
Final product: locally coherent paragraphs, globally incoherent document

How Progressive Resolution Solves It

The Key Insight

Low-resolution layers compress the entire document into something that always fits.

What Always Fits in Context

Layer → Content → Typical Size
L0 → Intent spec → ~200 tokens
L1 → Whole-doc silhouette → ~500 tokens
L2 → All chapter cards → ~1,500 tokens
Total persistent context → ~2,200 tokens

What swaps in per task:

  • Current section's L3 skeleton
  • Current section's L4 paragraph plans
  • Current section's draft prose
  • Neighbouring sections for continuity
  • Relevant evidence excerpts

The Architecture

Context Window Architecture

CONTEXT WINDOW
├── PERSISTENT HEADER (always present, ~2-3K tokens)
│   ├── Intent spec (L0)
│   ├── Whole-doc silhouette (L1)
│   ├── All chapter cards (L2)
│   └── Key constraints / voice guidelines
└── WORKING SET (swapped per task, ~5-15K tokens)
    ├── Current section skeleton (L3)
    ├── Current paragraph plans (L4)
    ├── Current prose draft (L5)
    ├── Neighbouring sections for continuity
    └── Relevant evidence/citations
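To make the split concrete, here is a minimal sketch of assembling such a context in code (TypeScript; every type, field, and function name is an illustrative assumption, not part of the framework or any real API):

// Assemble the context window: persistent header (L0-L2) + per-task working set (L3-L5)
interface PersistentHeader { intent: string; silhouette: string; chapterCards: string; voiceGuidelines: string; }
interface WorkingSet { skeleton: string; paragraphPlans: string; draft: string; neighbours: string; evidence: string; }

function buildContext(header: PersistentHeader, work: WorkingSet, task: string): string {
  return [
    "[PERSISTENT HEADER]",
    header.intent, header.silhouette, header.chapterCards, header.voiceGuidelines,
    "[WORKING SET]",
    work.skeleton, work.paragraphPlans, work.draft, work.neighbours, work.evidence,
    "[TASK]",
    task,
  ].join("\n\n");
}

The header stays constant across every call; only the working set and the task change.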

Why This Works

1. Global Coherence

Intent (L0) and structure (L1-L2) are always present

2. Local Precision

Working set has full detail for current task

3. Efficient Use

No wasted tokens on sections not being worked

4. Continuity

Neighbouring sections prevent abrupt transitions

5. Constraint Compliance

Voice and guidelines always visible

"Seeing the Future"

What Low-Res Enables

With L0-L2 always in context, the AI (and you) can:

  • See the whole document's structure while writing any section
  • Check if current prose serves the thesis
  • Verify section serves its chapter card's purpose
  • Ensure consistency with what comes before and after
  • Catch drift before it happens

The Closed Loop

At any point in writing, you can evaluate:

  • ? "Does this paragraph serve its job (from L4)?"
  • ? "Does this section serve its chapter (from L2)?"
  • ? "Does this chapter serve the thesis (from L1)?"
  • ? "Does this document serve the intent (from L0)?"

Without low-res layers present, these questions can't be answered accurately—you're relying on memory and hope.

The "Future State" Visibility

The low-res silhouette is essentially a compressed future state of the document. You can evaluate "if we execute this plan, will the outcome satisfy intent?" before committing to expensive prose work.

This is why progressive resolution feels like "seeing into the future"—because the low-res plan makes the future state visible and evaluable.

Practical Patterns

Pattern 1: Prompt Structure

Prompt Template for AI-Assisted Writing

[PERSISTENT HEADER]

Intent: {L0 spec}

Thesis: {from L1}

Current chapter purpose: {from L2 card}

Previous section summary: {1-2 sentences}

[WORKING SET]

Section skeleton:

{L3 for this section}

Paragraph plan:

{L4 for current paragraphs}

Evidence available:

{relevant citations}

[TASK]

Write the prose for paragraphs 3-4 of this section.

Maintain voice: {guidelines}

Length target: {from L4}

Pattern 2: Continuity Bridging

When moving to a new section:

  • Include last 2-3 paragraphs of previous section
  • Include skeleton of next section
  • Ensure AI sees the seams, not just the current piece (a sketch of this bridging step follows)
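A minimal sketch of that bridging step (TypeScript; the function and parameter names are assumptions for illustration):

// Build bridging context when moving to a new section
function bridgeContext(previousParagraphs: string[], nextSkeleton: string): string {
  const tail = previousParagraphs.slice(-3).join("\n\n"); // last 2-3 paragraphs of the previous section
  return `Previous section (tail):\n${tail}\n\nUpcoming section skeleton:\n${nextSkeleton}`;
}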

Pattern 3: Coherence Checks

Periodically:

  • Load full document outline + current chapter prose
  • Ask AI: "Identify any inconsistencies between this chapter and the overall thesis"
  • Flag issues for exception protocol (a code sketch of this sweep follows)
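Sketched as code (TypeScript; `callModel` is a stand-in for whatever client you use to reach the model, not a real API):

// Periodic coherence sweep: outline + current chapter vs. the overall thesis
async function coherenceCheck(
  outline: string,
  chapterProse: string,
  callModel: (prompt: string) => Promise<string>
): Promise<string> {
  return callModel(
    `Document outline (L1-L2):\n${outline}\n\n` +
    `Current chapter prose (L5):\n${chapterProse}\n\n` +
    `Identify any inconsistencies between this chapter and the overall thesis.`
  );
}

Anything the sweep flags feeds the exception protocol, not a prose patch.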

Pattern 4: Evidence Injection

Don't keep all evidence in context:

  • Keep evidence assignments in L4 paragraph plans (lightweight)
  • Inject actual evidence only when writing the paragraph that uses it
  • Reduces context bloat, maintains precision (one way to wire this up is sketched below)
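One way to wire that up (TypeScript; the `ParagraphPlan` shape and `evidenceLibrary` map are illustrative assumptions):

// Inject only the evidence a given paragraph actually uses
interface ParagraphPlan { job: string; evidenceIds: string[]; }

function injectEvidence(plan: ParagraphPlan, evidenceLibrary: Map<string, string>): string {
  const excerpts = plan.evidenceIds
    .map(id => evidenceLibrary.get(id))
    .filter((e): e is string => e !== undefined); // skip ids with no stored excerpt
  return `Paragraph job: ${plan.job}\n\nEvidence:\n${excerpts.join("\n")}`;
}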

What This Enables

1. Documents Longer Than Context
  • Each section written with global structure visible
  • Continuity maintained through bridging
  • Coherence checked through periodic sweeps
  • No section is "forgotten"—its purpose lives in L2
2. Consistent Voice Across Length
  • Voice guidelines in persistent header
  • Every prose generation sees the same tone requirements
  • No drift across chapters
  • Revision for voice consistency is minimal
3. Claim-Evidence Alignment
  • Chapter cards (L2) always present
  • Every prose passage knows what it must claim
  • Evidence requirements are explicit
  • No "orphan claims" or "orphan evidence"
4. Efficient Regeneration
  • When the exception protocol triggers, only regenerate affected sections
  • Persistent header stays constant
  • Regeneration is surgical, not global

Why AI Makes Progressive Resolution Powerful

The Pre-AI Tradeoff

Before AI-assisted writing:

  • Working at low resolution = manual overhead
  • Creating chapter cards = time spent not writing
  • Regenerating prose = expensive rewriting

Result: People skipped structure work because prose was the bottleneck.

The Post-AI Shift

With AI-assisted writing:

  • Working at low resolution = cheap planning
  • Creating chapter cards = fast structured thinking
  • Regenerating prose = trivially cheap

Result: Structure work now pays off because regeneration is cheap.

The Inversion

Progressive resolution becomes more economical, not less, with AI:

  • Time spent on L0-L2: pays off 10x at L5
  • Exception protocol regeneration: costs minutes, not hours
  • Low-res planning: now the highest-leverage activity

Chapter Takeaways

1

Key insight

Progressive resolution solves the context window problem by keeping low-resolution structure (L0-L2) permanently in context while swapping high-resolution detail (L3-L5) per task.

2

The architecture

Persistent header (~2-3K tokens of structure) + Working set (~5-15K tokens of current detail) = coherent long-form documents within finite context.

3

The benefit

"See the future" (evaluate the whole document) while working on any section.

We've built the framework and seen it work with AI. Now: how does this same architecture apply to other domains?

Next: Progressive resolution for code generation (spec-first development) →

09
Part III: Applications

Code Generation

Spec-First Development

Developers already think in progressive resolution—they just don't call it that. Spec-first development is the same architecture applied to code.

If you've ever written a specification before coding, you've done L0-L2 work. If you've ever refactored by changing the spec and regenerating, you've run the exception protocol.

This chapter makes the pattern explicit and shows how AI-assisted coding amplifies its power.

The same Jenga problem that plagues documents plagues codebases. Same architecture, same solution.

The Code Jenga Problem

Symptoms

1 You're in file 7 when you realise the data model in file 3 is wrong
2 Fix file 3, now files 4-6 have type mismatches
3 Fix the types, now the API contract doesn't match
4 Fix the API, now the tests fail
5 Fix the tests... and you've spent your day playing whack-a-mole

Why It Happens

  • High-resolution commitments: Detailed implementation code
  • Low-resolution structure: Architecture, data models, API contracts
  • Same pattern: committing to implementation before architecture stabilises

The Difference from Documents

Code Has:
  • Syntax checks (it "compiles")
  • Tests for verification
  • Change tracking (git)
But:

The cascade problem is identical.

Structural changes in code propagate just like structural changes in documents.

The Code Resolution Ladder

Mapping to Documents

Document → Code Layer Mapping
Document Layer → Code Equivalent
L0: Intent → What problem are we solving?
L1: Silhouette → System architecture
L2: Chapter Cards → Module/component specs
L3: Section Skeletons → Function signatures, interfaces
L4: Paragraph Plans → Pseudocode, algorithm outlines
L5: Prose → Implementation code

The Code Flow

spec.md → architecture.md → interfaces.ts → implementation.ts → tests

Each layer stabilises before advancing.

Why This Matches

spec.md = L0-L2: What are we building, for whom, what's the architecture?

interfaces = L3: What are the contracts between components?

implementation = L4-L5: How does each function work?

tests = Verification: Does high-res match low-res intent?

Evidence: Spec-First Works

The METR Study

METR's randomized trial found that experienced developers using AI tools took 19% longer to complete tasks than without AI. Jumping straight to L5 (code) without L0-L2 (spec) makes you slower, even with AI assistance.

Why Skipping Structure Costs Time

Without Spec:

→ AI generates plausible-looking code that doesn't quite fit

→ You edit the code to fit your mental model

→ The edits introduce inconsistencies

→ You debug the inconsistencies

→ You realise the architecture is wrong

→ You restructure... and wish you'd started with a spec

With Spec:

→ Spec forces you to think through what you're building

→ AI generates code that matches the spec

→ Mismatches are caught at spec level (cheap)

→ Implementation flows from clear contracts

→ Time "lost" on spec pays back 5-10x in reduced debugging

The Code Exception Protocol

The Problem

You're implementing Chapter 9 of your codebase (a new feature). You realise the data model doesn't support what you need.

Apply the Protocol

1

DETECT

"The User model doesn't have a lastLoginAt field, and the feature requires it."

2

ESCALATE

This isn't an L5 (implementation) problem. It's an L3 (interface/model) problem. Possibly L2 (component spec) if we need to decide how login tracking works.

3

REFACTOR (at L2-L3)

  • Update component spec: "User model includes login tracking"
  • Update interface: Add lastLoginAt: Date to User type
  • Consider: Is this a database migration? What's the impact?
4

RECOMPILE (L4-L5)

  • Regenerate affected implementation code
  • Update tests for new field
  • Run migrations

Why Regenerate, Not Patch

Traditional Patching:
→ Add field to model
→ Add field to serialisation
→ Add field to API response
→ Update tests
→ Hope you didn't miss anything
AI-Assisted Regeneration:
→ Change the spec
→ Regenerate affected modules
→ Diff to verify the change propagated correctly
→ Tests validate alignment

The AI difference: Regeneration cost approaches patching cost, so the "cleaner" approach (refactor and regenerate) is now also the faster approach.

Spec-First Development Pattern

The Specification as Compression

"The spec IS the compression. Writing the specification IS the thinking work. The spec IS compressed understanding. AI receives high-density input → produces high-density output."

What Goes in the Spec (L0-L2)

# Feature Spec: User Login Tracking

## Purpose (L0)

Track user login times for security and analytics.

## Requirements (L1)

- Store last login timestamp

- Store login count

- Accessible via user profile API

- Update on each successful authentication

## Component Design (L2)

- User model: add lastLoginAt, loginCount fields

- Auth service: update fields on successful login

- Profile API: expose lastLoginAt in response

- Migration: add columns with null defaults, backfill

## Interface (L3)

- User.lastLoginAt: Date | null

- User.loginCount: number

- ProfileResponse includes lastLoginAt

What the AI Generates (L4-L5)

From this spec, AI can generate:

  • Database migration
  • Model definition changes
  • Service layer updates
  • API endpoint modifications
  • Test scaffolds

Key: The spec is ~30 lines. The generated code is ~300 lines. The spec is where thinking happens; code is where execution happens.
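For instance, the L3 interface lines above might compile down to code roughly like this (a sketch of plausible output, not a prescribed implementation; the `save` callback is an assumption standing in for your persistence layer):

// interfaces.ts: compiled from the L3 spec above
interface User {
  id: string;
  lastLoginAt: Date | null; // added per spec
  loginCount: number;       // added per spec
}

// Auth service (L5): update tracking fields on each successful login
async function onLoginSuccess(user: User, save: (u: User) => Promise<void>): Promise<void> {
  user.lastLoginAt = new Date();
  user.loginCount += 1;
  await save(user);
}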

Why Code Degrades Without Progressive Resolution

The Degradation Pattern

Early in Session:
  • Follows project conventions exactly
  • Handles edge cases unprompted
  • Uses appropriate abstractions
Late in Session:
  • Reverts to generic patterns
  • Misses established conventions
  • Suggests approaches you rejected

The Root Cause

Without low-resolution structure (spec, conventions) in context:

  • AI generates locally coherent code
  • But locally coherent ≠ globally consistent
  • Style drifts, patterns diverge, conventions erode
  • The codebase becomes a patchwork of mini-styles

The Fix

Keep structure (spec, conventions, patterns) in context permanently:

  • spec.md = what we're building (L0-L2)
  • conventions.md = how we write code (style guide)
  • patterns.md = our architectural decisions

This is the same "persistent header" pattern from Chapter 8, applied to code.

Practical Application

For Individual Developers

1

Before coding

Write a spec (even 10 lines helps)

2

Define interfaces first

Types, contracts, signatures before implementation

3

When stuck

Back up to spec—is the spec clear?

4

When refactoring

Change spec first, regenerate implementation

For Teams

Spec review before implementation review: Catch issues at L2, not L5
Shared conventions in context: Every AI session gets the team's patterns
Regeneration as workflow: "The spec changed, regenerate affected modules"
Tests as gates: Don't merge until tests confirm spec alignment (a sketch follows)
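A test-as-gate can be tiny. Reusing the sketched User model and onLoginSuccess from earlier in this chapter (illustrative only):

// Gate: merge only when the implementation matches the spec
import { strict as assert } from "assert";

async function specAlignmentGate(): Promise<void> {
  const user: User = { id: "u1", lastLoginAt: null, loginCount: 0 };
  await onLoginSuccess(user, async () => {}); // no-op save for the test
  assert.ok(user.lastLoginAt instanceof Date); // spec: store last login timestamp
  assert.equal(user.loginCount, 1);            // spec: store login count
}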

The Nuke and Regenerate Connection

When the exception protocol triggers in code, you're doing what we call "Nuke and Regenerate":

"When AI makes recompilation cheap enough to approach patching speed, the calculation inverts. Nuke and Regenerate becomes the faster path, not just the cleaner one."

The mindset shift:

  • Old: "I spent an hour writing this function—I'll debug it"
  • New: "I spent an hour writing this function—the spec was wrong—I'll regenerate from the fixed spec"

The time you "lose" is the time you would have spent debugging a wrong-spec implementation.

For more on regeneration economics, see Stop Nursing Your AI Outputs.

Chapter Takeaways

1

Key insight

Spec-first development is progressive resolution for code. Specification = L0-L2. Implementation = L3-L5. Same architecture, same benefits.

2

The evidence

Developers who skip spec work are 19% slower even with AI. Structure before detail pays off.

3

The action

Write specs before code. When something breaks, change the spec, not the code. Regenerate implementation from corrected specs.

Progressive resolution works for documents (Ch7) and code (Ch9). What about research—where the "document" is accumulated knowledge?

Next: Progressive resolution for research workflows (synthesis over accumulation) →

10
Part III: Applications

Research and Analysis

Synthesis Over Accumulation

Research fails when it accumulates facts without synthesising understanding. The Jenga problem for research isn't cascading edits—it's context bloat that drowns insight.

Research feels like it should be additive: more sources = more knowledge. But without structure, more sources = more noise.

The same progressive resolution architecture that prevents document cascades prevents research drowning.

"Facts accumulate linearly. Understanding requires synthesis. As context grows, synthesis degrades."

The Research Jenga Problem

How Research Fails

1 Start broad: Search for sources on the topic
2 Accumulate: Read Source 1, note interesting points
3 More accumulation: Read Sources 2-10, add more notes
4 Growing pile: 50 bullet points, no structure
5 Attempt synthesis: "Now let me write up what I found"
6 Overwhelm: Too much material, conflicting claims, no clear thread
7 Force it: Write something that mentions everything but argues nothing
8 Result: "Report" that reads like a bibliography with transitions

Why This Pattern Persists

  • Research feels productive while accumulating
  • Stopping to structure feels like "not doing research"
  • The synthesis failure only shows up at the end
  • By then, the investment feels too large to restart

The Parallel to Documents

Document Failure → Research Failure
Write prose before structure → Gather facts before framework
Realise structure wrong late → Realise frame wrong late
Patch prose to fix structure → Force-fit facts into weak frame
Jenga cascade → Research drowning

The Research Resolution Ladder

Mapping to Document Layers

Document → Research Layer Mapping
Document Layer → Research Equivalent
L0: Intent → Research question + success criteria
L1: Silhouette → Hypothesis + evidence needs
L2: Chapter Cards → Themes + claims-to-support
L3: Section Skeletons → Organised findings by theme
L4: Paragraph Plans → Specific evidence for claims
L5: Prose → Synthesised write-up

The Research Flow

sources.md → findings.md → synthesis.md → kernel_update.md

sources.md = Raw material (L5 equivalent—high-resolution detail)

findings.md = Organised themes (L3 equivalent)

synthesis.md = Patterns and conclusions (L1-L2 equivalent)

kernel_update.md = Compressed takeaways (L0 equivalent)

The Inversion

Notice the direction reversal:

  • In documents: Work from L0 (intent) down to L5 (prose)
  • In research: Collect at L5 (sources), compress up to L0 (kernel)

But the principle is the same: You can't work effectively at one resolution without the adjacent resolutions stabilised. Research without a hypothesis (L1) wanders. Synthesis without organised findings (L3) forces.

Progressive Research Architecture

Step 1: Define the Research Question (L0)

Before searching:

  • What question are we answering?
  • What would a good answer look like?
  • What would disconfirm our hypothesis?
  • What's out of scope?

Step 2: Hypothesis + Evidence Needs (L1)

Before reading deeply:

  • Tentative thesis: "We expect to find X"
  • Evidence we need: "To support this, we need..."
  • Counter-evidence to seek: "To challenge this, we look for..."

Step 3: Themed Findings Organisation (L2-L3)

As sources come in:

  • Assign findings to themes (don't just list)
  • Note which theme each finding supports
  • Flag contradictions explicitly
  • Identify gaps in coverage (a minimal data shape for these bins is sketched below)
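If you want the bins machine-readable, the shape can be as simple as this (TypeScript; all field names are illustrative assumptions):

// Theme bins: each finding is assigned to the claim it supports
interface Finding { source: string; quote: string; whyItSupports: string; }
interface ThemeBin {
  theme: string;
  claim: string;            // what this theme will argue
  findings: Finding[];
  contradictions: string[]; // flagged explicitly, not buried
  gaps: string[];           // coverage still missing
}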

Step 4: Synthesis (L1 Updated)

Before writing:

  • Update hypothesis based on findings
  • State the argument the evidence supports
  • Acknowledge what remains uncertain

Step 5: Write-Up (L4-L5)

Now write:

  • Each section has assigned evidence
  • No orphan facts (everything serves the argument)
  • No orphan claims (everything has support)

Checkpoint Discipline as Gates

The Research Gate Pattern

What "Done" Actually Means

Gates at Each Layer

L0 → L1 Gate (Before searching broadly)

☐ Research question is specific (not "learn about X")

☐ Success criteria defined

☐ Hypothesis stated (even if tentative)

L1 → L2 Gate (Before deep reading)

☐ Evidence needs are explicit

☐ Counter-evidence needs are explicit

☐ Scope boundaries clear

L2 → L3 Gate (After initial source collection)

☐ Sources assigned to themes

☐ Coverage gaps identified

☐ Contradictions flagged

L3 → L4 Gate (Before writing)

☐ Each theme has sufficient evidence

☐ Synthesis reflects the actual evidence (not the hoped-for evidence)

☐ Uncertainties documented

The Exception Protocol for Research

Detecting Failures

Signal → What It Means
"I have 50 sources but don't know what to argue" → L1 failure (no hypothesis)
"The evidence doesn't support what I thought" → L1 needs update, not forcing
"I keep finding more but never feel ready to write" → L2 failure (no themes)
"My write-up feels like a list" → L3 failure (no structure)

Applying the Protocol

Example: Evidence Contradicts Hypothesis

Halfway through research, you realise the evidence contradicts your hypothesis.

Old way:

Ignore contradiction, cherry-pick supporting evidence, write weak synthesis.

Protocol way:

  1. Detect: "Evidence contradicts hypothesis"
  2. Escalate: This is an L1 problem (hypothesis, not findings)
  3. Refactor: Update hypothesis to reflect what evidence actually shows
  4. Recompile: Re-organise findings under new hypothesis, write from there

The Payoff:

  • Research that actually argues something (not just surveys)
  • Synthesis that reflects evidence (not wishful thinking)
  • Time saved by not forcing incompatible frames

Facts vs. Understanding

The Critical Distinction

Facts (What Accumulates)

Source A says X, Source B says Y...

List grows, understanding doesn't

Understanding (What Should Accumulate)

The pattern: X and Y are manifestations of underlying principle P...

Compression creates insight

What This Means

  • Collecting facts = gathering high-resolution detail
  • Building understanding = creating low-resolution structure
  • Both are needed, but understanding must guide collection, not follow it

The Compression Principle

Research Succeeds When:
  • Facts compress into patterns
  • Patterns compress into principles
  • Principles compress into actionable insight
Research Fails When:
  • Facts pile up without compression
  • No patterns emerge because no frame guides collection
  • "Insight" is just a summary of facts

Practical Patterns

Pattern 1: Research Question Template

# Research Question (before starting)

Question: [Specific question, not "learn about X"]

Hypothesis: [Tentative answer to test]

Would confirm: [Evidence that would support hypothesis]

Would challenge: [Evidence that would challenge hypothesis]

Out of scope: [What we're not investigating]

Pattern 2: Theme Bins

# Theme: [Name]

Claim this theme supports: [What you'll argue]

Evidence:

- Source A: "[Quote]" — supports because...

- Source B: "[Quote]" — supports because...

Tensions:

- Source C says X, but Source D says Y. Resolution: ...

Pattern 3: Synthesis Checkpoint

# Synthesis Checkpoint (before writing)

Hypothesis update: [Has it changed? How?]

Strongest evidence for: [Top 3]

Strongest evidence against: [Top 3]

Remaining uncertainties: [What we don't know]

Argument we can make: [Given above, what's defensible?]

Chapter Takeaways

1

Key insight

Research fails not from too few sources but from accumulating facts without synthesising understanding. Progressive resolution prevents research drowning by structuring collection around hypotheses and themes.

2

The architecture

Research question (L0) → Hypothesis + evidence needs (L1) → Themes (L2-L3) → Evidence-to-claim mapping (L4) → Write-up (L5)

3

The distinction

Facts accumulate linearly. Understanding requires compression. Research without structure produces lists, not arguments.

Progressive resolution works for documents, code, and research. One more variant: proposals at scale.

Next: How progressive resolution turns proposals into "compiled artifacts" →

11
Part III: Applications

Proposal Generation

Compiled Artifacts at Scale

Every proposal feels custom. But the structure repeats. Progressive resolution reveals proposals as compiled artifacts—generated from frameworks, not created from scratch.

Consultants, agencies, and service providers write proposals constantly. Each proposal seems unique (different client, different problem, different scope). Yet the patterns repeat: context → diagnosis → solution → value → terms.

What if you could maintain the "source code" and compile each proposal?

Proposals are the clearest case for progressive resolution because the kernel (your frameworks) is already stable. You're just compiling it for each client.

The Proposal Paradox

The Perceived Uniqueness

  • "Every client is different"
  • "We can't use templates—it would feel generic"
  • "Proposals need to be bespoke"

The Actual Structure

Proposal Section: What Varies → What Stays Constant
Context: Client situation → Industry patterns
Diagnosis: Specific symptoms → Diagnostic framework
Solution: Tailored scope → Core methodology
Value: Client-specific ROI → Value model
Terms: Engagement specifics → Pricing structure

The Insight

20%

What varies (client-specific details)

80%

What stays constant (frameworks, methodology, patterns)

Progressive resolution perspective: The 80% is L0-L2. The 20% is L3-L5. Stabilise the frameworks; compile the proposals.

The Proposal Resolution Ladder

Mapping

Proposal Resolution Layers
Layer → Proposal Equivalent
L0: Intent → Your firm's positioning + what this proposal must achieve
L1: Silhouette → This client's situation → your diagnosis → proposed solution
L2: Chapter Cards → Each section's purpose + evidence needs for this client
L3: Section Skeletons → Argument structure per section
L4: Paragraph Plans → Specific claims + client evidence
L5: Prose → Polished proposal text

The Two-Layer Kernel

For proposals, L0 is actually two layers:

Global L0 (Your Firm's Kernel)
  • Your positioning and worldview
  • Your frameworks and methodology
  • Your constraints (what you never recommend)
  • Your patterns (go-to solution shapes)

Stable across all proposals11

Client L0 (This Specific Proposal)
  • This client's situation
  • What this proposal must achieve
  • Why they're talking to you
  • What success looks like for them

Varies per proposal

Frameworks as Source Code

The Compiler Metaphor

"Strategic frameworks serve as reusable source code that can be compiled infinitely into bespoke proposals, where each proposal is a regenerable binary rather than a unique artifact, creating compound returns through kernel improvement."

What This Means

Source code = Your frameworks, methodology, patterns, constraints

Compiler = AI (with progressive resolution prompting)

Binary = The finished proposal

What you maintain = The source (frameworks), not the output (proposals)

The Economic Insight

Traditional Approach
  • → Write proposal → client says no → proposal dies
  • → Next proposal: start from scratch (or copy-paste and hope)
  • → Each proposal is independent; no compounding
Kernel Approach
  • → Proposal fails → learn from it → update frameworks
  • → Next proposal: compiled from improved kernel
  • → Each proposal strengthens the kernel; compounding value

The Proposal Workflow

Step 1: Load the Kernel (Global L0)

# Kernel Files (your permanent assets)

- marketing.md: Who we are, voice, positioning

- frameworks.md: Thinking tools, diagnostic patterns

- constraints.md: What we never recommend

- patterns.md: Go-to solution shapes

- style.md: How we write, lay out documents

This is your "persistent header"—loaded for every proposal.
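Loading it can be as mundane as concatenating the files (a sketch assuming Node.js and the kernel file names listed above):

// Build the persistent header from kernel files
import { readFileSync } from "fs";

const KERNEL_FILES = ["marketing.md", "frameworks.md", "constraints.md", "patterns.md", "style.md"];

const persistentHeader: string = KERNEL_FILES
  .map(f => `## ${f}\n${readFileSync(f, "utf8")}`)
  .join("\n\n");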

Step 2: Define Client Intent (Client L0)

# Client Brief

Client: [Name, industry, size]

Situation: [What's happening, why they're looking]

Urgency: [Why now?]

Decision-makers: [Who's reading, what do they care about]

Proposal goal: [What action do we want them to take?]

Competition: [Who else are they talking to? What's our edge?]

Step 3: Draft Silhouette (L1)

# Proposal Silhouette (the one-page proposal plan)

Opening hook: [Why this client, why now, why us]

Diagnosis: [What we see in their situation]

- Pattern 1 from frameworks.md

- Pattern 2 from frameworks.md

- Pattern 3 specific to them

Solution: [What we propose]

- Core methodology from patterns.md

- Tailored scope for their situation

- Phasing/timeline

Value case: [Why it's worth it]

- ROI model applied to their numbers

- Risk if they don't act

Terms: [What we're asking]

- Investment

- Timeline

- Next steps

Step 4: Chapter Cards (L2)

For each section, specify:

  • What this section must achieve
  • What evidence/examples to include
  • What framework elements to surface
  • What client-specific data to weave in

Step 5: Generate (L3-L5)

With kernel + client brief + silhouette + cards:

  • AI can generate section skeletons
  • AI can generate paragraph plans
  • AI can generate prose

Key: The generation isn't from scratch—it's from structured intent through stable frameworks.
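Put concretely, "compilation" here is structured assembly (TypeScript sketch; every parameter name is an assumption):

// Compile a generation request from the kernel plus the client-specific layers
function compileProposalPrompt(
  kernel: string,       // global L0: frameworks, constraints, voice
  clientBrief: string,  // client L0
  silhouette: string,   // L1
  sectionCards: string, // L2
  task: string          // e.g. "Draft the Diagnosis section skeleton (L3)"
): string {
  return [
    "[KERNEL]", kernel,
    "[CLIENT BRIEF]", clientBrief,
    "[SILHOUETTE]", silhouette,
    "[SECTION CARDS]", sectionCards,
    "[TASK]", task,
  ].join("\n\n");
}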

When Proposals Fail (And How to Fix Them)

The Usual Failure Mode

You realise in Section 4 that Section 2's framing doesn't quite fit. You rewrite Section 2. Now Section 3 feels disconnected. You're playing Jenga.

Apply the Protocol

1

DETECT

"Section 2 framing doesn't support what Section 4 needs to argue"

2

ESCALATE

Is this an L5 problem (wording) or an L2 problem (section purpose)?

  • If the section's job is wrong: L2
  • If the argument flow is broken: L3
  • If specific claims don't land: L4
3

REFACTOR

Update the chapter card or silhouette

"Section 2 now sets up X instead of Y"

4

RECOMPILE

Regenerate Section 2's prose from corrected structure

Why Regeneration Works for Proposals

  • Proposals are short (10-20 pages typically)
  • Regenerating a section: minutes
  • Patching a misaligned section: can take longer and leave traces
  • The math favours refactor + regenerate

Compound Returns from Kernel Improvement

The Flywheel

Each proposal is an opportunity:

  • Did the client respond positively? Why?
  • Did an objection reveal a framework gap?
  • Did a competitor beat us on something?

Feed learnings back into the kernel:

  • Strengthen frameworks that worked
  • Address gaps that surfaced
  • Remove patterns that didn't land

Next proposal: Compiled from an improved kernel.

This kernel improvement pattern is covered in depth in Worldview Recursive Compression.

The Compounding Calculation

Traditional (Linear)
  • • Proposal 1: 10 hours
  • • Proposal 10: 10 hours (no improvement)
  • • Proposal 100: 10 hours
Kernel-Driven (Compounding)
  • • Proposal 1: 10 hours (plus kernel setup)
  • • Proposal 10: 6 hours (kernel refined)
  • • Proposal 100: 3 hours (mostly compilation)10

Why Scale Matters

At proposal 100:

  • Linear approach: spent 1,000 hours, have 100 disconnected proposals
  • Kernel approach: spent 500 hours, have 100 proposals + a valuable kernel asset

The kernel becomes intellectual property. The proposals are just instances.

Practical Patterns

Pattern 1: Kernel-First Session Start

Every proposal session:

[Load kernel files: marketing.md, frameworks.md, patterns.md, constraints.md]

[Load client brief]

[Ask: What patterns from our frameworks apply to this client?]

Pattern 2: Template-As-Scaffold

Use templates not as fill-in-the-blank, but as L2-L3 scaffolds:

  • Template defines section purposes (L2)
  • Template provides skeleton structure (L3)
  • Content is generated/compiled (L4-L5)

Pattern 3: Feedback Loop

After every proposal (win or lose):

  • What worked? (Add to frameworks)
  • What didn't land? (Update or remove from patterns)
  • What objection surprised us? (Add to constraints or FAQ handling)
  • What competitor advantage did we see? (Sharpen positioning)

Pattern 4: Version Control for Kernels

Treat kernel files like code:

  • Track changes
  • Know why things changed
  • Be able to revert if something stops working
  • Different "branches" for different client types if needed

Chapter Takeaways

1

Key insight

Proposals are compiled artifacts, not unique creations. Your frameworks are source code; each proposal is a binary. Maintain the source, regenerate the binaries.

2

The architecture

Global kernel (L0) + Client brief (L0) → Silhouette (L1) → Section cards (L2) → Skeletons (L3) → Prose (L4-L5)

3

The economics

Linear proposal work doesn't compound. Kernel-driven work compounds with every proposal, creating durable intellectual property.

We've seen progressive resolution applied to documents, code, research, and proposals. It's time to synthesise: what's the paradigm shift, and what do you do next?

Next: The closing chapter—from linear to diffusion, and your action checklist →

12
Closing

The Paradigm Shift

From Linear to Diffusion

We began with a question: Why do complex documents keep falling apart?

We end with an answer: You were committing too early.

This chapter synthesises the framework, states the paradigm shift in clear terms, provides an action checklist for immediate application, and points forward to what this enables.

From Jenga to diffusion. From cascade to coherence. From patching to recompiling.

The Paradigm Shift

The Old Model

Writing is linear.

The Assumed Workflow:

  1. Outline
  2. Draft
  3. Edit
  4. Polish
  5. Ship

What Actually Happens:

  1. Outline (rush through)
  2. Draft (make detailed commitments early)
  3. Discover structural problems (late)
  4. Edit (patch, patch, patch)
  5. Polish (lipstick on structural problems)
  6. Ship ("good enough")

Why it fails: You made high-resolution commitments (prose) before low-resolution structure (intent, claims, evidence) stabilised. Every downstream artifact inherits upstream assumptions. Change upstream, and downstream cascades.

The New Model

Writing is diffusion.

The Architecture:

  1. Start with intent (L0)
  2. Stabilise silhouette (L1)
  3. Stabilise chapter structure (L2)
  4. Build section skeletons (L3)
  5. Plan paragraphs (L4)
  6. Write prose (L5)
  7. When something breaks: back up, don't patch

Why it works: You delay expensive commitments until cheap-to-change structure stabilises. Each layer has a gate. Problems caught early are cheap to fix. AI makes regeneration nearly free.

What You Now Have

Ch3
The Resolution Ladder

Six layers with increasing commitment cost:

L0: Intent → L1: Silhouette → L2: Chapter Cards → L3: Section Skeletons → L4: Paragraph Plans → L5: Prose

The rule: Don't advance until the current layer passes its gate.

Ch4
The Think/Do Loop

Two loops working together:

  • Do Loop: Execute at current resolution
  • Think Loop: Supervise—decide to continue, advance, or back up

The job: Control resolution and commitment, not just productivity.

Ch5
The Exception Protocol

When gates fail:

  1. Detect (name the failure)
  2. Escalate (find the right layer)
  3. Refactor (fix the blueprint)
  4. Recompile (regenerate downstream)

The mantra: Don't glue the wobbling block; rebuild the layer above it.

Ch6
The Grounding

This isn't new—software proved it:

  • 100x cost for late-stage fixes5
  • 97% more success with requirements first12
  • Compilers use intermediate representations for the same reason
Ch8
The AI Benefits

Progressive resolution solves the context window problem:

  • Low-res structure fits permanently (persistent header)
  • High-res detail swaps in per task (working set)
  • Long-form coherence without pretending the whole document fits
9-11
The Applications

Same architecture, different domains:

  • Code: Spec-first development
  • Research: Hypothesis-guided collection
  • Proposals: Kernel-compiled artifacts

What Changes

For Individuals
  • Stop: Starting with prose when structure isn't stable
  • Start: Working through resolution layers with explicit gates
  • When stuck: Ask "Am I at the right resolution?" before pushing harder
For Teams
  • Vocabulary: "What resolution are we at?" / "Is L2 stable?" / "Should we back up?"
  • Review practice: Review structure (L1-L2) before reviewing prose (L5)
  • Collaboration: Share intent specs, chapter cards, skeletons—not just drafts
For AI-Assisted Work
  • Prompting: Include persistent header (L0-L2) + working set (current section)
  • Regeneration: When something's wrong, change the blueprint and regenerate
  • Coherence: Check structure-to-prose alignment periodically

What This Isn't

It's Not Waterfall
  • Waterfall: Lock structure early, don't change it
  • Progressive resolution: Iterate freely within and across layers, but advance only when stable
  • AI makes iteration cheap; the question is when to commit, not whether to iterate
It's Not "Just Plan Better"
  • "Plan better" is one layer (L1)
  • The insight: every layer needs to stabilise before advancing
  • And when you fail, you back up to the right layer, not patch at the wrong one
It's Not Only for Long Documents
  • Short pieces benefit too (just fewer layers to work through)
  • The judgment: How complex is the work? More complexity = more layers needed
  • Simple email? L5 is fine. 50-page proposal? Full ladder.

The One-Line Summary

Stop playing Jenga. Start thinking in resolution.

More fully: Complex work fails when you commit to high-resolution details before low-resolution structure stabilises. Progressive resolution—coarse-to-fine with stabilisation gates—prevents cascading failures by treating documents (and code, and research, and proposals) like diffusion: noise → shapes → structure → detail.

Action Checklist

Before Your Next Complex Document

☐ Write the intent spec (L0): audience, purpose, success criteria

☐ Draft a one-page silhouette (L1) and let it stabilise

☐ Create chapter cards (L2): purpose, key claims, dependencies, evidence needs

During Writing

☐ Run the gate check before advancing each layer

☐ Keep L0-L2 in view (persistent header) while drafting

☐ When stuck, ask "Am I at the right resolution?" before pushing harder

When Something Breaks

☐ Detect: name the failure precisely

☐ Escalate: find the layer where the defect actually lives

☐ Refactor the blueprint, then recompile downstream

After Delivery

☐ Feed learnings back into your frameworks (the kernel)

Where This Leads

Immediate: Better Documents
  • Less rework
  • Cleaner structures
  • Defensible claims
  • Faster execution (counterintuitively)
Medium-term: Portable Skill
  • Same architecture works for code, research, proposals, plans
  • Vocabulary transfers ("resolution layer," "stabilisation gate," "exception protocol")
  • Teams can coordinate more effectively
Long-term: Compound Returns
  • Each project improves your frameworks
  • The "kernel" becomes an asset
  • You stop nursing outputs and start building systems

The Closing

We started with a game of Jenga—one fix threatening to topple everything.

We end with an architecture borrowed from image generation, validated by software engineering, and applicable to any complex work: progressive resolution.

The problem was never your editing skills.

The problem was commitment timing.

Now you know when to commit—and when to back up.

Stop playing Jenga.

Start thinking in resolution.

TL;DR

  • The problem: Complex documents collapse because you commit to details before structure stabilises
  • The solution: Progressive resolution—work coarse-to-fine with stabilisation gates at each layer
  • The framework: L0 (Intent) → L1 (Silhouette) → L2 (Chapter Cards) → L3 (Skeletons) → L4 (Paragraph Plans) → L5 (Prose)
  • When it breaks: Detect → Escalate → Refactor → Recompile (don't patch prose; fix structure)
  • The insight: AI makes regeneration cheap—structure work now has the highest ROI
R
Appendix

References & Sources

Complete bibliography of external research, industry analysis, and frameworks referenced throughout this ebook.

This ebook draws on peer-reviewed research, consulting firm analyses, industry publications, and practitioner frameworks. External sources are cited throughout the text; author frameworks are presented as interpretive analysis and listed here for transparency.

Numbered Inline Citations

1 How AI QA Prevents Costly Late-Stage Bugs

Cost escalation table by development stage—supports the 100x rule for late-stage fixes. (Chapter 1)

https://www.ranger.net/post/ai-qa-prevents-late-stage-bugs

2 Exploring Stable Diffusion

Picsellia's analysis of diffusion model quality: "exceptional visual quality, capturing intricate details and realistic textures." (Chapter 2)

https://www.picsellia.com/post/exploring-stable-diffusion-revolutionizing-image-to-image-generation-in-computer-vision

3 The Mythical Man-Month at 50

Analysis of Brooks's finding that fixing a defect has a 20% to 50% chance of introducing another defect—supporting why local patching compounds problems. (Chapter 5)

https://kieranpotts.com/mythical-man-month-50

4 Does User-Driven Design Still Need User-Centered Design?

UX Tigers analysis of how AI-driven regeneration shifts the economic optimal strategy: "the cost of generating and regenerating code drops to near zero" making correction cheaper than prevention. (Chapter 5)

https://www.uxtigers.com/post/2025-answers

5 The True Cost of a Software Bug

IBM Systems Sciences Institute research reporting the 100x cost multiplier for fixing bugs in production vs. design phase—applied to document resolution layers. (Chapter 6)

https://www.celerity.com/insights/the-true-cost-of-a-software-bug

6 The State of AI in 2025: Agents, Innovation, and Transformation

McKinsey Global Survey (1,993 participants) showing nearly two-thirds of organizations still in experimentation/pilot phases with AI, only 33% scaling. (Chapter 7)

https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

7 Effective Context Engineering for AI Agents

Anthropic engineering documentation on context window sizes: frontier models support 200K+ token context windows, up from 4K-8K tokens in earlier generations. (Chapter 8)

https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents

8 Lost in the Middle: How Language Models Use Long Contexts

Stanford/UC Berkeley research demonstrating that LLMs perform worse with larger contexts on certain tasks—a "lost in the middle" phenomenon where information buried in long contexts gets ignored as attention diffuses. (Chapter 8)

https://arxiv.org/abs/2307.03172

9 Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

METR randomized controlled trial finding that experienced developers using AI tools took 19% longer to complete tasks than without AI—despite perceiving themselves as 20% faster. Supports the case for specification-first development. (Chapter 9)

https://arxiv.org/abs/2507.09089

10 AI Reduces Proposal Development Time by 70%

Consulting Success industry analysis showing AI-assisted proposal workflows reduce development time by up to 70% while improving win rates—supporting the kernel-driven compounding economics. (Chapter 11)

https://www.consultingsuccess.com/consulting-business-models

11 Unlocking Tacit Knowledge in Consulting Firms

Starmind research finding that up to 90% of firm expertise is embedded in consultants' heads and rarely written down—explaining why kernel capture creates durable intellectual property. (Chapter 11)

https://starmind.ai/blog/unlocking-tacit-knowledge-in-consulting-firms

12 268% Higher Failure Rates for Agile Software Projects

The Register analysis showing projects with clear requirements documented before development started were 97% more likely to succeed—validating specification-first approaches. (Chapter 12)

https://www.theregister.com/2024/06/05/agile_failure_rates/

Primary Research

How AI QA Prevents Costly Late-Stage Bugs

Cost escalation table by development stage—supports the 100x rule for late-stage fixes.

https://www.ranger.net/post/ai-qa-prevents-late-stage-bugs

The True Cost of a Software Bug

IBM Systems Science Institute research on the 100x cost multiplier for production bugs vs. design-phase fixes.

https://www.celerity.com/insights/the-true-cost-of-a-software-bug

Generating Images with Stable Diffusion

Technical explanation of coarse-to-fine denoising process in diffusion models.

https://blog.paperspace.com/generating-images-with-stable-diffusion/

Exploring Stable Diffusion

Reverse diffusion explanation and why diffusion beats other generative approaches.

https://www.picsellia.com/post/exploring-stable-diffusion-revolutionizing-image-to-image-generation-in-computer-vision

Master Stable Diffusion

Latent space compression explanation—working in compressed representation before expanding to full resolution.

https://viso.ai/deep-learning/stable-diffusion/

Intermediate Representation (Wikipedia)

Definition of intermediate representation in compiler design—foundational concept for the software parallel.

https://en.wikipedia.org/wiki/Intermediate_representation

Intermediate Representations (Cornell CS)

Academic explanation of layered compilation process and why each layer stabilizes before advancing.

https://www.cs.cornell.edu/courses/cs4120/2023sp/notes/ir/

268% Higher Failure Rates for Agile Software Projects

The Register analysis showing projects are 97% more likely to succeed with requirements defined before development.

https://www.theregister.com/2024/06/05/agile_failure_rates/

Specification-Driven Development

CoreStory analysis of waterfall vs. Agile failure rates and the role of specification in reducing iteration cost.

https://corestory.ai/post/specification-driven-development-agile-methodology-reducing-iteration-cost-in-ai-assisted-software-engineering

Industry Analysis

The Outrageous Cost of Skipping TDD & Code Reviews

Eric Elliott's analysis of exponential cost curves for late-stage fixes in software development.

https://medium.com/javascript-scene/the-outrageous-cost-of-skipping-tdd-code-reviews-57887064c412

LinkedIn Commentary

LLMs Get Lost Study Analysis

Anthony Alcaraz's summary of research showing 39% performance drop in multi-turn LLM tasks without structured context.

https://www.linkedin.com/posts/anthony-alcaraz-b80763155_without-external-knowledge-graphs-to-provide-activity-7329485920779304960-qw5t

LeverageAI / Scott Farrell

Practitioner frameworks and interpretive analysis developed through enterprise AI transformation consulting. These articles inform the conceptual frameworks presented throughout this ebook but are not cited inline to maintain narrative flow.

Context Engineering: Why Building AI Agents Feels Like Programming on a VIC-20 Again

Source for tiered memory architecture concepts and context window management strategies discussed in Chapter 8.

https://leverageai.com.au/context-engineering-why-building-ai-agents-feels-like-programming-on-a-vic-20-again/

Breaking the 1-Hour Barrier

METR study analysis showing 19% slower performance when skipping specification work—key evidence for spec-first development in Chapter 9.

https://leverageai.com.au/breaking-the-1-hour-barrier/

Stop Nursing Your AI Outputs

Source for the "nuke and regenerate" concept—why regeneration beats patching when working with AI, referenced in Chapter 9.

https://leverageai.com.au/stop-nursing-your-ai-outputs-nuke-them-and-regenerate/

The Proposal Compiler

Foundational article for treating frameworks as source code and proposals as compiled artifacts, core to Chapter 11.

https://leverageai.com.au/the-proposal-compiler/

Worldview Recursive Compression

Source for kernel improvement flywheel concept—how frameworks compound through use, referenced in Chapter 11.

https://leverageai.com.au/worldview-recursive-compression-how-to-better-encompass-your-worldview-with-ai/

Note on Research Methodology

Sources were compiled between 2024 and 2025 and verified for accessibility at the time of publication. Primary research draws from peer-reviewed studies and established industry analysts. Statistics are cited with original sources where available.

Some links may require subscription access or registration. Where paywalled content is cited, the specific claim or statistic referenced is noted in the chapter text.