A Blueprint for Future Software Teams
A practical guide for compounding learning with AI
Here’s a claim that might make you uncomfortable: your code is ephemeral. Delete it. Regenerate it tomorrow. The design document is the asset now.
I’ve been building software for two decades. The last year fundamentally changed how I think about what we’re actually producing. It’s not the code. The code is compiled output. What matters is the thinking that produces it—and whether that thinking compounds across your team over time.
This is a blueprint for how software teams learn when the AI model itself doesn’t learn from your conversations.
The Frozen Model Problem
Here’s what most people don’t fully internalise: AI doesn’t learn like people think it learns.
The models take six months to bake in an AI lab. They cost millions of dollars to train. They can’t learn from the chats you had last quarter. When you close the session, it’s gone.
So the question becomes: if the model is frozen, how does your team actually learn?
The answer: the learning happens in the scaffolding around the model, not in the model itself.
“The gradient updates are happening in your repo, not in the GPU.”
The Three-Layer Canon
Build meta-context that persists across sessions. Instead of updating individual prompts, update shared knowledge files that get injected into every AI conversation.
These files become “soft weights”—text-based conditioning that acts like fine-tuning you can read and version control.
Layer 1: Personal
`learned.md` — Your personal notes on what keeps going wrong, stamped with the current date and model versions. Low-pressure, experimental.
Layer 2: Team
`coding.md`, `infrastructure.md` — Shared conventions. “We use AWS, not GCP.” “Our API patterns look like this.” Things everyone working on this codebase should follow.
Layer 3: Organisation
`security.md`, `architecture.md` — Tim from security writes his brain into these files once. Now every developer’s AI session inherits Tim’s security thinking. No meeting required.
Knowledge flows upward. You notice something in your personal `learned.md` that applies to the team. You PR it. Seniors notice team patterns that should be org-wide. They promote them. The system learns without meetings.
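To make the mechanics concrete, here’s a minimal sketch of how the layers might be assembled into a session’s context. The directory layout and file names are illustrative assumptions, not a prescribed structure; the point is that every AI conversation starts by injecting the same canon, broadest scope first.

```python
from pathlib import Path

# Illustrative layout; adjust the paths to your own repo.
CANON_LAYERS = [
    Path("canon/org"),       # Layer 3: security.md, architecture.md
    Path("canon/team"),      # Layer 2: coding.md, infrastructure.md
    Path("canon/personal"),  # Layer 1: learned.md (one per developer)
]

def build_context() -> str:
    """Concatenate every canon file, broadest layer first, so each
    AI session starts from the same shared conditioning."""
    sections = []
    for layer in CANON_LAYERS:
        for path in sorted(layer.glob("*.md")):
            sections.append(f"## {path.name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

Broadest first means the narrower layers read as refinements: your personal `learned.md` can sharpen a team convention without anyone touching the org files.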
Design Documents as Gospel
When AI is not allowed to write code (plan mode), it thinks harder. The world-model side stays online: it’s not burning cycles on syntax, it’s focused on coherence.
This maps to healthy senior/junior dynamics:
- Bad pattern: Junior disappears for 3 days, comes back with a 2,000-line PR, and you realise they missed the point.
- Good pattern: Junior brings you a design doc first. You catch problems in 5 minutes, not 3 days.
The same discipline applies to AI. Separate thinking from typing:
- Gather context (screenshots, requirements, bug reports)
- Feed context + canon files into AI in plan mode
- Get a design document—review this first
- Only then: implement per design
At this point, coding is compilation. The design document is the source of truth.
If the code has bugs, you don’t nurse it forever. You figure out what the design doc missed, update it, and regenerate. The code is ephemeral. The design captures the learning.
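As a sketch of what separating thinking from typing can look like in practice, the two phases become two differently constrained prompts. The `llm()` function below is a placeholder for whatever model call your stack actually uses, and the prompt wording is only illustrative:

```python
def llm(prompt: str) -> str:
    """Placeholder for your actual model call (API, CLI, IDE agent)."""
    raise NotImplementedError

def plan(context: str, task: str) -> str:
    # Phase 1, plan mode: the model is explicitly forbidden from writing code.
    return llm(
        f"{context}\n\nTask: {task}\n\n"
        "Produce a design document only. Do NOT write code. "
        "Cover the approach, interfaces, edge cases, and open questions."
    )

def implement(context: str, design: str) -> str:
    # Phase 2: runs only after a human has reviewed and approved the design.
    return llm(
        f"{context}\n\nApproved design:\n{design}\n\n"
        "Implement exactly per this design. Flag any deviation you must make."
    )
```

The review happens between the two calls. That gap is where the 5-minute catch replaces the 3-day detour.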
Definition of Done v2.0
If your team adopts this pattern, what actually changes in how you work?
The new Definition of Done:
- Design document created and reviewed
- Code implemented per design
- AI conformance check: “Does code match the design?”
- Tests pass
- Learnings extracted: “What went wrong? What surprised us?”
- Canon updated: Relevant learnings PRed into the appropriate level
That last part is critical: every piece of work leaves a fossil in the team’s brain.
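The conformance check is the unfamiliar item on that list, and it’s cheap to automate. A sketch, reusing the hypothetical `llm()` placeholder from the earlier example:

```python
def conformance_check(design: str, diff: str) -> str:
    # Ask the model to diff intent against implementation.
    # llm() is the same placeholder model call as in the earlier sketch.
    return llm(
        f"Design document:\n{design}\n\n"
        f"Code changes:\n{diff}\n\n"
        "Does the code match the design? List every deviation, every "
        "missing requirement, and anything implemented that was never designed."
    )
```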
The Learning Extraction Ritual
At the end of each ticket, developers answer three questions:
- What were the sticking points? Where did the design fail to anticipate reality?
- What surprised you? Edge cases, integration issues you didn’t expect?
- What should we never do again?
These answers get triaged:
- Personal learning only → Add to your `learned.md`
- Applies to the team → PR into `coding.md`
- Org-wide principle → Senior promotes it to org-level canon
You’re not just refactoring code. You’re refactoring knowledge upward into more general, more reusable form.
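In file terms, the triage can be as small as an append to the right layer. A minimal sketch, assuming the illustrative layout from earlier (in practice, team and org entries should land via PR, not a direct write):

```python
from pathlib import Path

# Illustrative destinations for the three triage outcomes.
DESTINATIONS = {
    "personal": Path("canon/personal/learned.md"),
    "team":     Path("canon/team/coding.md"),
    "org":      Path("canon/org/architecture.md"),
}

def capture_learning(scope: str, ticket: str, lesson: str) -> None:
    """Append an end-of-ticket learning to the matching canon layer."""
    path = DESTINATIONS[scope]
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a") as f:
        f.write(f"\n- [{ticket}] {lesson}")

# Hypothetical usage; the ticket ID and lesson are made up.
capture_learning("team", "TICKET-123", "Retry S3 uploads; transient 503s are common.")
```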
Architect Review Shifts
For tech leads and architects, this changes what you review:
| Old Model | New Model |
|---|---|
| Review every line of code | Review design documents |
| Catch bugs in PRs | Catch conceptual mistakes in design |
| Meeting-heavy knowledge transfer | Canon files transfer knowledge passively |
| Bottleneck on senior time | Seniors shape canon; AI enforces it |
The architect’s job shifts from code reviewer to canon curator. You’re shaping the team’s shared thinking infrastructure.
The Model Upgrade Flywheel
When a new model drops—say, Claude Opus 4.5—what happens depends on whether you’ve built this scaffolding.
Without scaffolding: You start from zero. The new model is smarter, but it doesn’t know your patterns.
With scaffolding: You plug the smarter model into your existing canon. All your accumulated knowledge gets executed by a more capable engine.
“Better model + better scaffolding + better judgment = exponential improvement, not linear.”
I experienced this when Opus 4.5 dropped. I had months of infrastructure baked in. My output quality jumped immediately—not because I did anything differently, but because the same scaffolding was run by a smarter engine.
Better outputs mean I’m reading better stuff. I become more critical. I capture finer-grained learnings. Which feeds back into the canon. Which makes the next upgrade even more powerful.
This is the flywheel. Every model release becomes a phase change, not just an upgrade.
Cross-Pollination Without Meetings
Traditional pattern:
- Security expert: “Follow these guidelines.”
- Everyone else: “That’s Tim’s job. I’ll half-listen and forget.”
- Result: Security doc rots. Reality drifts.
New pattern:
- Tim writes his security brain into `security.md`
- Every developer’s AI session automatically applies Tim’s rules
- No meeting. No training. No forgetting.
Tim’s thinking gets replayed thousands of times through the AI. Same for performance experts, ops experts, anyone with specialist knowledge.
This solves the oldest problem in software teams: how do you share what’s in senior engineers’ heads without burning all their time in meetings?
Getting Started This Week
You don’t need to implement everything to start capturing value:
- Create one shared file: `coding.md` in your team’s repo
- Seed it: Add 5-10 conventions your team follows but hasn’t written down
- Require it: When using AI for design/implementation, include this file as context
- Update it: When something goes wrong that the file would have prevented, add it
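If it helps to see a starting point, here’s an illustrative seed; every convention below except the AWS one is invented, so swap in your team’s real habits:

```python
from pathlib import Path

# Purely illustrative seed; replace these with your team's real conventions.
SEED = """\
# coding.md (team canon)

- We use AWS, not GCP.
- Public APIs return errors as structured JSON, never bare strings.
- New services get a runbook before they get traffic.
- Feature flags are removed within two sprints of full rollout.
- Database migrations stay backwards-compatible for one release.
"""

path = Path("canon/team/coding.md")
path.parent.mkdir(parents=True, exist_ok=True)
if not path.exists():  # never clobber an existing canon file
    path.write_text(SEED)
```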
One file. One week. Watch what happens.
After a month, you’ll have dozens of learnings captured. After a quarter, you’ll wonder how you worked without it. And when the next model upgrade drops, you’ll feel the compounding firsthand.
The code is ephemeral. The design documents are gospel. The canon is how your team learns.
Build the scaffolding now. Harvest the compounding later.
What’s in your team’s shared canon? I’d love to hear what patterns you’re capturing.