I built a regulated-industry AI platform in seven days. One person.
This week I shipped tokenised PII, encrypted vaults, nightly compliance scanning, cross-platform social media intelligence, a seven-agent security council, and full marketing across two verticals.
I tell you what to build, what to kill, and why your first AI idea is almost certainly wrong.
The proof is live on the internet right now.
Built This Week
A regulated-industry AI platform shipped in seven days. Not a prototype. Not a strategy deck. A working system for compensation firms with runtime security, content operations, and compliance infrastructure baked in.
Senior Management Layer for Compensation Firms
A complete management layer for compensation firms, built on OpenClaw, a platform I shipped first for a dental vertical, then ported to this second vertical in a day.
- Regulation scanner built in 4 hours, then wired into the broader operating stack.
- Tokenisation proxy and encrypted storage so sensitive data is never casually exposed to agents.
- Nightly advisory council and compliance scanning to catch drift before it becomes risk.
- Incremental content fingerprinting and three-stage injection defence, not prompt hope.
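A minimal sketch of the tokenisation idea, assuming a simple in-memory vault (`TokenVault` is illustrative; the real proxy, encrypted storage and key management are not shown):

```python
import secrets

class TokenVault:
    """Maps PII to opaque tokens so agents only ever see placeholders."""

    def __init__(self):
        self._forward = {}   # PII value -> token
        self._reverse = {}   # token -> PII value

    def tokenise(self, value: str) -> str:
        # Reuse the existing token so the same value always maps consistently.
        if value in self._forward:
            return self._forward[value]
        token = f"tok_{secrets.token_hex(8)}"
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenise(self, token: str) -> str:
        return self._reverse[token]

vault = TokenVault()
prompt = f"Draft a settlement letter for {vault.tokenise('Jane Citizen')}."
# The agent works with the token; the original value is only restored
# on the trusted side of the proxy, after the model call returns.
```

Because detokenisation happens outside the agent's reach, a prompt leak or injection exposes only opaque tokens, never the underlying PII.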
The frameworks below explain how.
What AI Is Actually For
Building just collapsed to near-zero. Anyone can ship production software in a day.
- The new bottleneck is knowing what to build, not whether a team can type fast enough to ship it.
- Most AI projects fail before the model matters because the idea was weak, the lane was wrong, or the system could never survive governance.
The frameworks below are how I decide.
They're the difference between the roughly 80% of AI projects that fail and the ones that ship.
What the Research Says (Not Just My Opinion)
If this sounds strong, it's because the evidence has moved.
Independent research from McKinsey, BCG, the Australian Institute of Company Directors and Harvard Governance is now saying the same thing:
- McKinsey: AI value only shows up when the CEO leads the transformation and the C-suite builds its own "AI muscle", rather than delegating it to IT.1,2
- McKinsey's latest State of AI research shows that only a small minority of organisations are true "AI high performers"—and they win by redesigning workflows and business models, not by adding tools to existing processes.3
- BCG: CEOs must be at the centre of data and AI conversations, leading the big strategic calls, not just approving tools or dashboards.4,5
- AICD and governance bodies: boards now have explicit expectations to oversee AI risk and governance; they cannot outsource their duty of care to the CIO or a vendor.6,7
My practice is simply the mid-market, implementation-ready version of this: making sure your AI portfolio, capital allocation and governance match what the evidence says works—not what the loudest vendor is selling.
1 McKinsey – Building the AI muscle (2025) · 2 McKinsey – Economic potential of generative AI (2023) · 3 McKinsey – State of AI 2025 · 4 BCG – CEOs must lead data conversations (2025) · 5 BCG – As AI changes work (2025) · 6 AICD/HTI – Director's Guide to AI Governance (2024) · 7 Harvard Law – The Artificially Intelligent Boardroom (2025)
Board & C-Suite AI Briefing Partner
AI is now a board-level skill, not an IT hobby.
Boards and CEOs are being told they are personally accountable for AI risk, governance and capital allocation. They can't delegate understanding of AI's impact to a CIO who is also still learning, or to developers who are focused on pilots rather than enterprise risk.
You can safely treat your ERP as plumbing.
You can't treat AI that way. You need your own board-ready view of the strategic use of AI, and of how it's changing profit, risk and defensibility in your business.
That's the gap I fill.
My work with leadership teams typically looks like:
- Private working sessions with the CEO and C-suite to separate signal from noise in AI—what matters in your industry, what doesn't, and where the real risks sit
- Long-form conversations about how AI should change your market position, not just your process diagrams
- Regular check-ins during pilots and deployments so executive decisions on kill / fix / double-down are grounded in evidence, not fear of the first visible error
The goal isn't to turn you into prompt engineers. It's to make sure your strategy, capital allocation and risk posture reflect what AI is actually doing to your industry, rather than whatever the last vendor deck claimed.
Why leadership teams work with me
Translator between business, IT and AI
I speak fluent boardroom and fluent terminal, and I bridge the two so your AI strategy doesn't get lost in translation.
- IT: AWS Solutions Architect, Salesforce Certified Application & System Architect, TOGAF-certified Enterprise Architect
- Business: Master of Management (Innovation), two decades of delivery across insurance, manufacturing and services
- AI: 26 articles + 15 ebooks on AI deployment and governance in 2025 [insights] [linkedin]
Built Before the Market Caught Up
Prediction track record matters. I build the thing early, publish the architecture, and then watch the market rename it six months later.
- SiloOS — shipped before AWS AgentCore; security-first agent containment before the hyperscalers caught up [article]
- Discovery Accelerator — visible reasoning system for decision quality and rejected alternatives, before most teams had even named the discovery problem [article]
- ask CLI — shipped before Claude Code; terminal-first plan/act loops before coding agents became fashionable [article]
I also called the SaaS decline, the shift to spec-driven development, and the death of the customer-facing chatbot as a starting project.
Board-side thinking partner and communicator
I spend much of my time in the room with CEOs, CFOs and boards—framing AI in the language of capital allocation, risk appetite and competitive moat. Briefings are designed to survive the next board meeting, not just the next sprint.
Steward of moat and capital, not just a project helper
I'm not here to help you finish a sprint. I'm here to help you decide which experiments to kill, which to double-down on, and how to turn AI spend into long-term defensibility. The goal isn't a faster chatbot—it's a compounding asset your competitors can't copy.
Why Most AI Projects Fail: The Four Traps
Industry research documents a 40–90% AI project failure rate. Most failures are not model failures. They come from bad selection, bad instrumentation, bad maturity matching, and teams building things that never should have been built.
The First Idea Trap
Taking the first obvious AI idea without pressure-testing whether it is actually the highest-leverage move.
Example:
Operations sees "automate customer intake = save 2,200 hours/month." Revenue sees "$300K expansion revenue at risk from lost upsell calls." HR sees "attrition risk from removing meaningful work." Net result: lose $300K in revenue to save roughly $200K in labour.
Impact: $30–40B in wasted AI investment. 95% of pilots fail because companies solve the wrong problem by seeing it through only one dimension.
The One-Error Death Spiral
Shipping AI without baseline metrics or observability. The first visible error happens, and nobody can prove the system is still outperforming humans.
Example:
Agent makes 15 mistakes out of 1,000 tasks (98.5% success). Executive asks "How often?" No observability = no data. Project cancelled despite possibly outperforming humans at their 3.8% error rate.
Impact: "One error = kill it" dynamic destroys projects that might be succeeding.
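The example's arithmetic is easy to instrument once outcomes are logged. A sketch using a standard Wilson score upper bound (the 15-in-1,000 and 3.8% figures come from the example above; everything else is illustrative):

```python
import math

def wilson_upper(errors: int, tasks: int, z: float = 1.96) -> float:
    """95% upper confidence bound on the true error rate (Wilson score)."""
    p = errors / tasks
    denom = 1 + z * z / tasks
    centre = p + z * z / (2 * tasks)
    margin = z * math.sqrt(p * (1 - p) / tasks + z * z / (4 * tasks ** 2))
    return (centre + margin) / denom

# 15 errors in 1,000 tasks: observed 1.5%, pessimistic bound ~2.5%.
# Either way the agent sits under the 3.8% human error rate.
print(wilson_upper(15, 1000) < 0.038)  # True
```

With an outcome log and a measured human baseline, "How often?" has an answer, and the kill decision becomes arithmetic instead of fear.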
Maturity Mismatch
Treating AI deployment as SaaS procurement instead of software development. Jumping to R3-R4 automation when the organisation is only ready for R0-R1.
Example:
Going straight to "AI handles customer refunds automatically" when the organisation lacks prompt version control, regression tests, or observability.
Impact: 80%+ failure rate, $50K–$300K wasted, team concludes "AI doesn't work for us."
The Building Trap
Spending months building something that should never have existed. When shipping is close to free, building the wrong thing becomes the most expensive mistake you can make.
Example:
Team proudly ships a customer-facing chatbot because it looked like the fastest win. The real leverage was internal claims triage, nightly advisory packs, and governed decision support. They built the boss fight first and burned six months.
Impact: Large build velocity applied to low-value ideas just accelerates waste.
The AI Investment Steward Model
Portfolio management + governance frameworks + ownership transfer = compounding returns
Capital Allocation Discipline
Treat AI the way a CFO treats capital allocation—clear investment thesis, quarterly reviews, kill/fix/double-down decisions, explicit risk appetite. No random experiments, systematic evaluation.
Advisory and stewardship typically represent 10–30% of your AI budget so that the other 70–90% compounds instead of leaking into orphaned pilots
Automated Controls, Not Committees
Governance lives in the system, not in a PDF. PII redaction, observability and decision logs are built into AI workflows so that every call is traceable, auditable and explainable. Safety and compliance are enforced by architecture and automation, not standing meetings and slide decks.
The underlying tools can change over time—the point is a governed pattern your teams can apply everywhere
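A sketch of the pattern, with a hypothetical `call_model` standing in for whatever provider sits behind the gateway; the redaction rule and log shape are illustrative, not the production design:

```python
import re, time

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
decision_log = []  # append-only; in production this would be durable storage

def redact(text: str) -> str:
    """Strip obvious PII before anything leaves the gateway."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for the real provider call.
    return f"response to: {prompt}"

def governed_call(prompt: str, user: str) -> str:
    clean = redact(prompt)            # the control runs in-path, on every call
    response = call_model(clean)
    decision_log.append({             # every call traceable and auditable
        "ts": time.time(), "user": user,
        "prompt": clean, "response": response,
    })
    return response

governed_call("Email bob@example.com about the claim", user="ops-1")
```

Because redaction and logging live inside `governed_call`, no committee has to remember to apply them.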
Build Capability, Not Dependency
The stack is composable and open—documented patterns, reference implementations and a trained internal team. After 12–18 months, most clients reduce to quarterly or board-level reviews because the organisation can run AI as part of normal operations.
You own the code, frameworks and operating model. No vendor lock-in, and no permanent dependence on me to make basic decisions
The Operating System Behind the Seven-Day Build
These aren't blog posts. They're the operating system behind the seven-day build.
Each one solves a specific failure mode I've seen kill projects.
The Lane Doctrine
Your safe AI project is usually the boss fight. The Lane Doctrine tells you to deploy AI where physics is on your side: batch, reviewable, governable work before real-time theatre.
Diagnostic: Are you picking the easiest-looking project, or the one the system can actually survive?
Compliance Cosplay
Governance that cannot block an unauthorised decision is theatre. Runtime authority, proof, and in-path enforcement matter more than policy PDFs and committee rituals.
Diagnostic: Can your system stop the bad decision now, or only explain it afterwards?
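A sketch of the difference, using a hypothetical refund action and an illustrative limit: the policy check runs inside the execution path, so it can block the action rather than merely record it.

```python
class PolicyViolation(Exception):
    pass

REFUND_LIMIT = 500.00  # illustrative threshold, not a real policy

def enforce(action: str, amount: float) -> None:
    """Runs in the execution path; raising here stops the action entirely."""
    if action == "refund" and amount > REFUND_LIMIT:
        raise PolicyViolation(f"refund of {amount} exceeds limit {REFUND_LIMIT}")

def execute(action: str, amount: float) -> str:
    enforce(action, amount)          # blocks *before* the side effect happens
    return f"{action} of {amount} executed"

execute("refund", 120.0)             # allowed
try:
    execute("refund", 9_000.0)       # blocked in-path, not explained afterwards
except PolicyViolation:
    pass
```

A policy PDF can only describe this rule; the gate above is the rule, and an unauthorised refund never executes.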
Stop Automating, Start Replacing
AI is not here to grease bad processes. It is here to replace workflows that should never have existed in their current form.
Diagnostic: Why do these fourteen steps exist at all?
The Cognition Supply Chain
Your AI knows more about blue whales than your business because your context architecture is broken. This framework fixes retrieval, exploration, compression, and compounding cognition.
Diagnostic: Does your AI have structured access to your thinking, or just your files?
Discovery Accelerators
Multi-agent reasoning architecture for showing rejected alternatives, defending decisions, and making the discovery process visible enough for boards and regulators.
Real-World Projects
Strategic architecture and innovative thinking driving business transformation
Covermore Travel Insurance
Challenge: Amadeus travel systems dominated the distribution channel, limiting direct relationships with key partners.
Strategic Solution: Designed and delivered complete disintermediation architecture—removing dependency on Amadeus while maintaining continuity.
Breakthrough Impact: Enabled enduring partnership with Flight Centre that became foundation for successful IPO.
Key Insight: Strategic architecture isn't just technical—it's about removing barriers to high-value relationships. Systematic thinking about dependencies and alternatives creates breakthrough opportunities.
Dynaquest
Challenge: HubSpot subscription costs escalating while core workflows (outbound sales, training, onboarding) needed custom automation.
Innovative Solution: AI-first architecture replacing conventional SaaS. Purpose-built workflows with systematic governance from day one.
Breakthrough Impact: Owned infrastructure, no vendor lock-in, custom workflows tuned to exact business needs. Systematic approach vs. feature accumulation.
Key Insight: Challenging conventional SaaS wisdom with systematic AI-first thinking. When you control the architecture, you control the evolution. Governance isn't bureaucracy—it's ownership.
"Strategic breakthroughs come from systematic thinking about dependencies, alternatives, and ownership—not from following vendor roadmaps."
Systematic Governance in 90 Days
Fast proof or fast kill—no 6-month POCs that die quietly
Audit & Clarity
Weeks 1–4
What: AI Portfolio Review—audit all vendors, tools and projects, plus 2–3 working sessions with the CEO and C-suite to align on where AI should and should not play in the strategy
Output: Kill/Fix/Double-Down decisions, AI waste identified ($50K–$200K typical)
Deliverables: 90-day roadmap, Readiness Scorecard, board-ready summary
Govern & Pilot
Weeks 5–8
What: Design and implement a Company AI Gateway plus a 10-day pilot on a single high-value workflow, with weekly executive check-ins to review results, surprises and kill/scale decisions
Output: Shadow AI brought under governance, 15–40% improvement measured against baselines
Deliverables: Observability dashboard, evaluation framework, kill/scale criteria agreed with leadership
Scale & Transfer
Weeks 9–12
What: Move successful pilots into production, train internal teams and hand over ownership, including an executive debrief on what you've learned about AI in your own context and how that changes the next 12–24 months of capital allocation
Output: Internal AI capability with clear metrics and governance
Deliverables: You own the code, frameworks and operating model. Ongoing advisory available where it makes economic sense.
Investment context
Most clients are already investing $150K–$1M+ annually in AI. Governance and portfolio stewardship typically represent 10–30% of that spend and pay for themselves by eliminating wasted investments in the first 90 days.
How We Work Together
Start with assessment. Then scope the right engagement based on your readiness, AI spend, and strategic priorities.
Assessment
10-minute Readiness Scorecard. Know your score, gaps, pathway.
FREE
Discovery
30-minute call. Discuss situation, challenges, current AI spend.
FREE
Audit
2–3 weeks. Deep review of all AI investments, plus dedicated working sessions with the CEO and C-suite.
$15K–$25K
Engage
Custom scope based on readiness + priorities.
CUSTOM
Typical Engagements
AI Portfolio Review & C-Suite Briefing
Audit all AI spend, vendors, tools and pilots. Classify: Kill / Fix / Double-down. Includes 2–3 executive working sessions to align on risk, ROI and strategic direction. Deliver 90-day roadmap + waste report ($50K–$200K typically identified).
Ongoing AI Investment Stewardship
Portfolio stewardship, quarterly reviews, governance oversight, vendor evaluation, board presentations and ongoing CEO / C-suite working sessions. Scope varies: retained advisory or project-based support.
Governed Implementation Support
Hands-on support from an architect who's also your strategic advisor: design the Company AI Gateway, run the 10-day pilot, move into production with governance baked in. The goal is to build durable capability, not a dependency.
Investment Depends On
Your Current AI Spend
Are you investing $150K, $500K, or $1M+ annually? Scope scales accordingly.
Readiness Score
A score of 0–10 needs foundation-building; 17+ is ready for systematic deployment. Different pathways, different investments.
Strategic Priorities
Governance urgency? Shadow AI risk? Failed pilots to salvage? Board pressure? Each shapes the engagement.
Context for Investment
Industry data shows 72% of AI projects fail. If you're spending $500K on AI, that's ~$360K wasted annually at the documented failure rate.
Systematic governance both prevents that waste and improves the spend that's working. A portfolio review typically identifies $50K–$200K in eliminable spending—often paying for itself in eliminated waste alone.
Or schedule a discovery call to discuss your situation
Is This For You?
This IS for you if:
- 25–500 employees, typically $20M–$500M revenue
- Currently investing $150K–$1M+ annually in AI (tools, pilots, vendors)
- Have executive or board-level sponsorship and budget authority
- Tried AI with underwhelming/failed results
- Board/competitors pressuring for AI ROI
- Staff using shadow AI (ChatGPT with company data)
- Value prudent investment over hype-chasing
- Willing to kill failed projects (no sunk-cost fallacy)
- Want ownership and flexibility (not vendor lock-in)
- Australian company or significant AU operations
This is NOT for you if:
- <25 employees or <$150K annual AI spend
- >500 employees with dedicated AI teams
- Pure startup <2 years old (too early for governance)
- Want "AI strategy deck" without implementation
- Seeking cheapest vendor (my work typically sits in the $180K–$300K/year band when fully engaged)
- Not yet investing in AI (come back when ready)
- Expect AI to "solve everything" (we're skeptics)
- Can't commit to killing failed projects
Not Ready to Commit? Start Here.
Are You Ready for AI?
Take the 10-minute Readiness Scorecard (32 points across 8 dimensions). Instant results: where you are, what's missing, what to fix first.
Take Free Assessment
DIY AI Governance
Step-by-step implementation plan: Audit → Gateway → Pilot → Scale. Includes checklists, tool recommendations, success criteria.
Download Free Template
See How It Works
5 documented case studies: Professional services, manufacturing, healthcare, SaaS, financial services. ROI breakdowns, timelines, lessons learned.
View Case Studies
Discuss your AI challenges, explore fit, no obligation