Enterprise AI Governance Series

The AI Readiness Staircase

Who Owns What in Enterprise AI Preparedness

Four layers. Four owners. Most organisations are stuck on the first step, thinking they are done.

85% of enterprises plan agentic AI. Only 21% have the governance to support it.

After reading this ebook, you will be able to:

  • See AI readiness as a four-layer staircase rather than a single checkbox
  • Map each layer to its rightful owner, with a bounded remit for each
  • Diagnose which step your organisation is actually on
  • Make a low-risk first move without triggering organisational antibodies

Scott Farrell

LeverageAI — leverageai.com.au

April 2026

Part I: The False Confidence Trap

Your AI Readiness Assessment Is Probably Missing Three Layers

Why most organisations think they're AI-ready when they're standing on the ground floor of a four-storey building.

Eighty-five per cent of enterprises plan to deploy agentic AI within two years. Only twenty-one per cent have the governance frameworks to support it [1]. That is a four-to-one ambition-to-readiness gap — and it is not closing.

This is not an awareness gap. Every executive in every boardroom knows AI matters. This is not a budget gap — global enterprise AI spending will hit $665 billion in 2026 [2]. And it is not a talent gap, though plenty of consultants would like to sell you that story.

This is an architecture gap. The gap is not belief. It is machinery.

85%

of enterprises plan agentic AI deployments

21%

have governance frameworks ready

Source: Deloitte, State of AI in the Enterprise 2026

The Incumbent Mental Model

Ask most enterprises what "AI readiness" means. You will hear a familiar checklist:

Data quality programme

Some APIs exposed

Governance committee formed

Leadership enthusiasm

A few pilots running

That checklist feels complete. It is not. This is what we call Layer 0.5 — application readiness. Necessary. Not sufficient. The front gate of a four-storey building.

"Most 'AI readiness' assessments in the market are toy-grade. They stop at culture, data, and maybe APIs. That is not readiness. That is the welcome mat."

Why the False Confidence Persists

The market's readiness frameworks reinforce it. Gartner, McKinsey, Deloitte — their assessments typically evaluate culture, data, talent, leadership, and strategy. These things matter. But they stop before the hard architectural layers.

None of them ask: "Can your system technically prevent an unauthorised AI decision from executing right now?"

None of them ask: "Can you produce a decision-time evidence chain without forensic reconstruction?"

13%

of companies fully ready for AI — down from 14% the year before

98%

report increased urgency to deliver on AI

Source: Cisco AI Readiness Index, 2024

Cisco's 2024 AI Readiness Index captures the paradox cleanly: global readiness has actually declined — only 13% of companies are fully ready, down from 14% a year earlier [3]. Meanwhile, 98% of leaders report increased urgency. 85% believe they have less than 18 months to act.

Urgency rising. Readiness falling. That is the hallmark of organisations confusing desire with architecture.

Even among organisations that feel "strategically ready" (40%), only 30% report governance readiness, 20% talent readiness, and sub-50% infrastructure readiness [4].

The Cost of Layer Confusion

Ninety-five per cent of enterprise AI pilots fail to deliver business value, with most never scaling beyond the experimental phase [5]. That is a staggering number, but the more interesting statistic is hiding underneath it.

Eighty per cent of AI pilots succeed technically but fail to reach production [6]. Read that again. The technology works. The organisation cannot absorb it.

95%

of AI pilots fail to deliver business value

MIT / Argano

80%

succeed technically yet never reach production

AIM Research / Deloitte

AI projects fail at twice the rate of non-AI IT

RAND Corporation

73%

of deployments fail to achieve projected ROI

AI Governance Today

The failure layer is not technology. It is not Layer 1. The failures cluster in Layers 2 through 4: runtime safety, decision-time authority, and proof infrastructure — layers that most readiness assessments do not even acknowledge exist.

The RAND Corporation found that AI projects fail at twice the rate of non-AI IT projects, driven primarily by governance and organisational issues rather than technical capability limitations [7]. The biggest issues are misaligned goals, weak data infrastructure, unclear ownership, and lack of cross-functional support — all of which keep companies stuck perpetually in pilot mode.

The AI-Allergic Enterprise

Consider the archetype. A large regulated insurer. The whole organisation is allergic to AI. Banned access. It is like a dirty word.

They run waterfall. Senior executives try to kick off projects; business analysts specify everything upfront. A very old-world organisation. When you run that way, it is very hard for AI to get a look-in. Every initiative gets thwarted by complicated systems, layers of governance, and endless issues. So nothing happens.

This organisation believes its caution equals readiness. It does not. Caution without architecture is well-dressed paralysis.

The portfolio steward cannot answer "are we AI-ready?" because no shared model exists for what "ready" actually means across the full stack. They are not alone. Only 14% of organisations say they are fully ready for AI deployment [8].

The AI Readiness Staircase

The building has four floors. Most enterprises are standing in the lobby calling it the penthouse. Here is the full model — defined in depth in Chapter 2, introduced here so you can see the shape of what you are missing.

4
Proof-Carrying Receipts

Can every consequential decision produce a portable evidence bundle with signed authority, signed data, and signed execution graph?

3
Decision-Time Authority

Can actions be made structurally impossible without valid authority — not merely logged after the fact?

2
Runtime-Safe Agent Execution

Can untrusted agents be isolated, mediated, permissioned, and observed safely?

1
Application AI-Fit

Can systems be read, acted on, wrapped, and governed at all? This is Layer 0.5 — where most stop.

Each layer depends on the one below it. You cannot skip floors.

Most organisations are somewhere in Stage 1 — and many have not even completed it. They have policies, PowerPoints, and a nice committee biscuit plate. They do not have the machinery.

This staircase is not another maturity model that measures where you are and awards a comforting score. It tells you who owns what and what breaks when layers are confused. The ownership mapping is the novel contribution, not the layer count.

"Policies and committees are necessary but not sufficient. The question is whether governance has teeth — runtime enforcement, not just documented intention."

The Capital Urgency

This is not a "wait and see" situation. The infrastructure is being built whether you are ready or not.

~$700B

in AI infrastructure investment by hyperscalers in 2026 alone

Amazon: $200B projected capex

Google: $175–185B

Meta: $115–135B

Stargate: $500B over four years

Sources: CNBC / Axios, February 2026

Hyperscalers are committing roughly $700 billion to AI infrastructure in 2026 [9]. Amazon alone is projecting $200 billion in capital expenditure, up from $131 billion in 2025. The Stargate project targets $500 billion in AI infrastructure by 2029, with an initial $100 billion deployment already underway [10].

The infrastructure is being built. Enterprises that fail to map readiness across all four layers will find their programmes stranded against infrastructure they cannot safely use. Vendor renewals and platform investments made without an AI-fitness lens become stranded spend.

Regulatory expectations are hardening. The EU AI Act reaches full enforcement in August 2026 [11]. APRA CPS 230 compliance deadlines are live [12]. "We didn't know" stops being an excuse — and quickly.

What This Ebook Covers

This ebook takes you from believing AI readiness is a single checkbox to understanding it as a four-layer staircase with distinct owners — and shows you how to take the first step without triggering organisational antibodies.

Chapter 2
The Four-Stage Staircase

What each layer requires and why the ordering is a dependency chain, not a suggestion.

Chapter 3
Four Layers, Four Owners

The ownership mapping that prevents the "everyone owns it so nobody owns it" deadlock.

Chapter 4
When Layers Are Confused

Four failure modes with real evidence — and the meta-pattern behind enterprise AI collapse.

Chapter 5
Which Step Are You On?

Per-layer self-assessment questions that test machinery, not feelings.

Chapter 6
The Portfolio Steward's First Move

What to do Monday morning — one bounded, non-threatening project that produces useful output regardless.

References
Sources & Citations

All external research, consulting reports, and industry data referenced throughout.

Key Takeaway

  • AI readiness is not a binary. It is a staircase with four architectural layers.
  • Most organisations are on Step 1 and calling it the top.
  • The gap between ambition (85%) and governance (21%) is not awareness — it is architecture.
  • Application readiness is necessary but not sufficient — it is Layer 0.5, not the finish line.
  • The real question: can your enterprise bind authority, evidence, and execution together at decision time?
Part I: The False Confidence Trap

The Four-Stage Staircase

A sequential architectural dependency — not four checkboxes, but a load-bearing stack where each layer requires the one below it.

A portfolio steward in a regulated insurer is asked at the steering board: "What is our AI readiness posture?"

He can talk about data quality. He can mention a few pilots. He can reference the governance committee. But he has no structured model. No shared vocabulary. No way to say "we are at Stage X with these gaps and here is who owns closing them."

This chapter gives him — and every enterprise leader — that model.

1

Application AI-Fit

The question it answers:

Can your systems participate in AI-mediated work at all?

This is what most organisations call "AI readiness." We call it Layer 0.5 — the front gate, not the building. AI readiness is not just an organisational capability issue. It is also an application estate problem. Can your software stack actually be used by AI in a sane way?

What application AI-fit means

An application's fitness for AI-mediated operations can be assessed across eight dimensions. The detailed audit methodology belongs in a separate treatment — here, the dimensions set the frame:

Access Posture

APIs, events, webhooks, CLI, connectors? Or browser-only?

Model Legibility

Objects, entities, actions externally accessible? Or hidden behind screen logic?

Actionability

Can AI safely perform actions with scoped permissions? Or read-only at best?

Documentation Quality

Machine-readable schemas, examples, edge cases?

Integration Friendliness

MCP/connectors/tool interfaces? Or "dentistry without anaesthetic"?

Governance Compatibility

Can interactions be logged, constrained, authorised, audited?

Replacement Exposure

If closed and awkward, how easy to wrap, sidestep, or replace?

Strategic Criticality

How central to future AI-enabled workflows?
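
Teams often find it easier to act on these dimensions once they are captured as a structured record rather than prose. Below is a minimal Python sketch, assuming a simple 0–3 score per dimension; the class and method names are illustrative, not a standard.

    from dataclasses import dataclass, asdict

    @dataclass
    class AIFitAssessment:
        """One application, scored 0 (absent) to 3 (strong) per dimension."""
        access_posture: int
        model_legibility: int
        actionability: int
        documentation_quality: int
        integration_friendliness: int
        governance_compatibility: int
        replacement_exposure: int      # how easy to wrap, sidestep, or replace
        strategic_criticality: int     # centrality to future AI workflows

        def weakest_dimensions(self) -> list:
            """The gaps to address first: dimensions at the lowest score."""
            scores = asdict(self)
            floor = min(scores.values())
            return [name for name, score in scores.items() if score == floor]

An estate-wide audit then becomes a list of these records, sortable by weakest dimension or by strategic criticality.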

The architecture rule hiding underneath

The question is not "is the UI modern?" The question is: where do truth and enforcement live? If truth lives in the backend, good news. If truth lives in the JavaScript front end, things get grubby. A supposedly "modern" SPA-heavy architecture can actually be worse for AI than an older, more server-centric forms-and-data design.

Four strategic options for weak-fit applications

U

Use it directly

Nice API, clear model, good docs, manageable permissions

W

Wrap it

Build adapter/façade/connector around it so agents can interact cleanly

C

Contain it

Keep it in the stack; minimise its role in future AI workflows

R

Replace it

Accept switching cost because long-term AI friction will cost more than migration pain
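
As a sketch only, the four options can be expressed as a triage rule over the AIFitAssessment scores from the earlier sketch. The thresholds below are illustrative assumptions; real decisions would also weigh cost, risk, and vendor roadmap.

    def recommend_strategy(fit: AIFitAssessment) -> str:
        """Map AI-fit scores to Use / Wrap / Contain / Replace."""
        contract = min(fit.access_posture, fit.actionability,
                       fit.governance_compatibility)
        if contract >= 2:
            return "Use"       # a trustworthy, machine-usable contract exists
        if fit.access_posture >= 1:
            return "Wrap"      # enough surface to build a facade around
        if fit.strategic_criticality <= 1:
            return "Contain"   # closed but peripheral: minimise its AI role
        return "Replace"       # closed and central: friction outgrows migration pain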

Bad application AI-fit is not just an integration inconvenience. It is a portfolio risk. If a critical application is not AI-ready, every AI initiative touching it becomes slower, more expensive, less reliable, and harder to govern. Gartner predicts that through 2026, organisations will abandon 60% of AI projects unsupported by AI-ready data [11].

"AI does not care whether your UI is pretty. It cares whether your system has a trustworthy, governable, machine-usable contract."
2

Runtime-Safe Agent Execution

The question it answers:

Can untrusted agents be isolated, mediated, permissioned, and observed safely?

This is the containment layer. The shift from "manners to physics." Stop trying to make AI trustworthy. Build systems where trustworthiness is irrelevant.

What runtime safety means

Zero trust for AI: the agent never earns your trust. The architecture renders trust unnecessary. Containment by design:

Scoped Permissions

Agent can only do what its key allows — refund:$500, email:send

Stateless Execution

Each invocation starts fresh. No memory accumulation. No data leakage across tasks.

Tokenisation

Agent never sees real PII — only placeholders like [NAME_1], [EMAIL_1]. A trusted proxy hydrates real values on output.

Blast Radius Management

One agent's failure cannot contaminate other systems. The walls are structural, not advisory.
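
A minimal sketch of two of these walls in Python: capability-scoped agent keys and PII tokenisation. The names here (AgentKey, tokenise, hydrate) are illustrative assumptions, not a reference to any particular runtime.

    import re
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class AgentKey:
        """Capability key issued per agent, per task."""
        agent_id: str
        actions: frozenset                            # e.g. {"refund", "email:send"}
        limits: dict = field(default_factory=dict)    # e.g. {"refund": 500}

        def allows(self, action: str, amount: float = 0.0) -> bool:
            if action not in self.actions:
                return False                          # not granted: structurally denied
            return amount <= self.limits.get(action, float("inf"))

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def tokenise(text: str) -> tuple:
        """Replace PII with placeholders before the agent ever sees the text."""
        vault = {}
        def swap(match):
            token = f"[EMAIL_{len(vault) + 1}]"
            vault[token] = match.group(0)
            return token
        return EMAIL.sub(swap, text), vault

    def hydrate(text: str, vault: dict) -> str:
        """The trusted proxy, not the agent, restores real values on output."""
        for token, real in vault.items():
            text = text.replace(token, real)
        return text

Here AgentKey("refund-bot", frozenset({"refund"}), {"refund": 500}).allows("refund", 750.0) returns False. The wall is structural, not advisory.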

Why this layer is now urgent

Eighty per cent of organisations have already encountered unintended actions by their AI agents: 39% report agents accessing unauthorised systems, 32% downloading sensitive data, 33% sharing restricted information. Twenty-three per cent report agents tricked into revealing access credentials [12].

Forty per cent of enterprise workflows will involve autonomous agents by 2026 [13]. Gartner predicts over 40% of agentic AI projects will be cancelled by end of 2027 due to escalating costs, unclear value, or inadequate risk controls [14]. Microsoft released the Agent Governance Toolkit in April 2026 — the first toolkit to address all ten OWASP agentic AI risks with deterministic, sub-millisecond policy enforcement [15].

The market is moving. Enterprises without runtime containment are exposed.

3

Decision-Time Authority

The question it answers:

Can actions be made structurally impossible without valid authority — not merely logged after the fact?

This is the In-Path Authority pattern. Governance that runs with the decision, not after it. Authority is potential; policy evaluation is activation. Delegated authority on its own is too abstract — a mandate in principle. Policy-as-code turns that mandate into a machine-verifiable test at runtime.
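
To make "machine-verifiable test at runtime" concrete, here is a hedged Python sketch. The tier model and names below are assumptions for illustration, not a standard.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Verdict(Enum):
        ALLOW = "allow"
        REQUIRE_HUMAN = "require_human"
        DENY = "deny"

    @dataclass(frozen=True)
    class Mandate:
        """Delegated authority: who may authorise which action, to which risk tier."""
        principal: str
        action: str
        max_risk_tier: int          # e.g. 1 = routine, 3 = consequential

    def evaluate(mandate: Optional[Mandate], action: str, risk_tier: int) -> Verdict:
        """Runs in the execution path, before the action, never after it."""
        if mandate is None or mandate.action != action:
            return Verdict.DENY              # no valid authority: structurally impossible
        if risk_tier > mandate.max_risk_tier:
            return Verdict.REQUIRE_HUMAN     # beyond delegated scope: escalate
        return Verdict.ALLOW

The enforcement point sits in the execution path: the action runs only on ALLOW, or after a human approves an escalation.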

The three governance layers

The Governance Stack — covered in depth elsewhere — provides the internal architecture for this layer and the one above it. In brief:

Data Governance

What is true enough to use? (Most orgs have this)

AI Governance

What is the model allowed to do? (Being built)

Authority Infrastructure

Who may act, right now, and where is the proof? (Missing in most)

The authority layer nobody built. That is the gap. And it is measurable: organisations using tiered authorisation models experience 76% fewer agent safety incidents than those using binary autonomous/non-autonomous authorisation [16].

Only 34% of organisations with governance policies use any technology to actually enforce them [17]. That enforcement gap is where compliant-on-paper programmes break down in practice.

"Governance without staging is policy without enforcement. Policy documents do not prevent shadow AI deployments. Architecture does."
4

Proof-Carrying Receipts

The question it answers:

Can every consequential decision produce a portable evidence bundle with signed authority, signed data, and signed execution graph?

This is the pinnacle. The Decision Attestation Package. Not many applications, and frankly not many enterprises, stack up to this today. But it is the destination.

What proof-carrying means

Three cryptographic commitments that travel with the decision, not in a separate filing cabinet:

Signature 1
Signed Authority

Who was permitted to act — provably, not reconstructed from memory

Signature 2
Signed Data

What evidence was observed — immutably, not editable after the fact

Signature 3
Signed Graph

What deterministic machinery produced the outcome — reproducibly

These are not audit requirements bolted on after deployment. They are structural outputs of a well-designed decision system. If your AI system cannot produce these three signatures, it is not governed. It is monitored. Monitoring tells you what happened. Governance proves what was authorised.
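
A minimal sketch of the bundle's shape in Python. For brevity it uses stdlib HMAC as a stand-in; a production system would use asymmetric signatures (for example Ed25519) so verifiers never hold signing secrets. All names are illustrative.

    import hashlib
    import hmac
    import json

    def sign(key: bytes, payload: dict) -> str:
        """Stand-in for an asymmetric signature over a canonical payload."""
        blob = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(key, blob, hashlib.sha256).hexdigest()

    def attestation_package(authority: dict, evidence: dict,
                            graph: dict, keys: dict) -> dict:
        """Decision Attestation Package: three commitments that travel with the decision."""
        return {
            "authority": authority,          # who was permitted to act
            "evidence": evidence,            # what was observed at decision time
            "execution_graph": graph,        # what machinery produced the outcome
            "signatures": {
                "authority": sign(keys["authority"], authority),
                "data": sign(keys["data"], evidence),
                "graph": sign(keys["graph"], graph),
            },
        }

Verification recomputes each signature from the bundle's own contents; if anything was edited after the fact, the signatures no longer match.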

Where most organisations are today: logs only — mutable, deniable, unreliable. Level 0: No Provenance.

Why this matters now

Regulatory Timeline
Aug 2026: EU AI Act full enforcement — high-risk AI rules active
Jul 2026: APRA CPS 230 — pre-existing contracts must comply [18]
Feb 2026: NIST NCCoE — agent identity and authorisation concept paper
Now: Cyber insurers writing AI exclusions; SOC 2 auditors asking about AI change management [19]

Gartner names Digital Provenance as a Top Strategic Technology Trend for 2026 [18]. The EU AI Act reaches full enforcement in August 2026, requiring documented risk management, robust data governance, detailed technical documentation, automatic logging, human oversight, and safeguards for accuracy and robustness [19]. Penalties reach EUR 15 million or 3% of worldwide turnover.

NIST's February 2026 concept paper explicitly establishes that agent identity and authorisation require new infrastructure, not just policy [20]. In the 2026 compliance environment, screenshots and declarations are no longer sufficient — only operational evidence counts [21].

"Signed authority, data provenance, and receipt infrastructure — that's the pinnacle of AI readiness. But not many applications or infrastructure stacks are going to measure up to that yet."

The Sequential Dependency

The staircase is not four independent checkboxes. It is a dependency chain.

4

You cannot produce proof-carrying receipts without decision-time authority infrastructure to sign against.

3

You cannot enforce decision-time authority without runtime containment that makes enforcement structural rather than aspirational.

2

You cannot deploy runtime containment if your applications are opaque, GUI-bound fossils that can only be accessed by humans clicking through menus.

1

Application fitness is the foundation. Everything else is built on top of it. Skipping it creates the false confidence trap.

The staircase rewards climbing in order. Skipping layers creates the false confidence trap described in Chapter 1. And knowing which layer you are on is only useful if you know who is responsible for climbing. That is where we go next.

Key Takeaway

  • The staircase is four layers: Application Fitness → Runtime Safety → Decision-Time Authority → Proof-Carrying Receipts
  • Each layer depends on the one below it — this is a dependency chain, not a buffet
  • Most enterprises are somewhere in Stage 1 and have not started Stage 2 [22]
  • The full staircase is architecturally demanding — but Stage 1 is a legitimate, non-threatening starting point
  • The goal is not to be at Stage 4 tomorrow. The goal is to know which stage you are on.
Part II: The Ownership Map

Four Layers, Four Owners: Who Does What

The real unlock is not the layer count — it is knowing who owns what. "Everyone owns AI readiness" is how nobody owns it.

Fifty-eight per cent of leaders identify disconnected governance systems as the primary obstacle preventing them from scaling AI responsibly [22]. Not technology. Not budget. Not talent. Disconnected governance.

McKinsey's State of AI research is blunter: high performers are three times more likely to have senior leaders demonstrating ownership of and commitment to AI initiatives [23]. Three times. Not a marginal edge. A structural advantage.

The pattern is clear. Organisations with clear, senior ownership of AI initiatives get 3× better outcomes. Organisations with distributed, vague ownership get governance theatre. The unlock is not more layers. It is knowing who owns each layer.

3×

better outcomes when senior leaders own AI initiatives

58%

cite disconnected governance as the primary scaling obstacle

Sources: McKinsey 2025; Dataversity 2026

The Common Ownership Failures

Before we map what should happen, look at what actually happens in most enterprises today. Four patterns, all fatal:

The Portfolio Steward Thinks He Owns It All

He is told "you own AI readiness." That is too much. He owns application fitness. He does not own runtime containment, authority infrastructure, or receipt architecture. Asking him to own the whole mountain is politically daft and organisationally messy.

Governance Thinks They Own It All

Risk and compliance write policies, form a committee, publish principles. But their scope is Layers 3–4, not the application estate. They cannot fix a GUI-bound Salesforce instance.

Enterprise Architecture Thinks It Is Someone Else's Problem

EA focuses on future-state diagrams and reference architecture. But the runtime containment patterns, tool mediation, and transition roadmaps are exactly their territory.

Projects Invent Their Own Governance

Every project creates its own little AI governance snowflake. Team A uses Slack approvals. Team B uses email sign-offs. Team C has no approval at all. Museum of inconsistent nonsense.

"The portfolio steward does not have to own AI governance. He only has to make sure portfolio decisions are not made blind to it."

The Ownership Mapping

This is the centrepiece of the staircase model. Each layer has a primary owner with a bounded remit and a clear executive line. Later chapters reference this table — it is the canonical assignment.

1

Layer 1 — Application Fitness

Primary Owner: Portfolio Steward / CIO

What they own:
  • Assess application estate AI-fit across the eight dimensions
  • Identify systems that are poor candidates for AI-mediated operations
  • Flag applications that need wrappers, safer interfaces, or containment
  • Feed AI-fit into upgrade, renewal, replacement, and legacy decisions
  • Prevent new projects from deepening dependency on structurally poor-fit systems
  • Gate investment decisions with AI-readiness awareness

Influences but does not own: Enterprise reference architecture for agentic AI; runtime control-plane patterns; authority infrastructure

"My responsibility is to ensure the application estate does not become the avoidable blocker to safe future AI adoption."

2

Layer 2 — Runtime Safety

Primary Owner: Enterprise Architect / Security

What they own:
  • Target-state design for AI-mediated operating model
  • Cross-estate patterns: what does an AI-mediated future require from applications, integration, security, and decision architecture?
  • Runtime containment architecture: identity, isolation, tool mediation, controlled access paths
  • Transition roadmaps from current estate to governed agent operations
  • Reference architectures for governed agents
  • Deciding where wrappers, façade layers, policy enforcement points, or replacement pathways are needed

Influences but does not own: Application fitness assessment (Layer 1 — they consume it); authority model and policy controls (Layer 3 — they architect for it)

"We design the future-state architecture that makes governed AI operations structurally possible."

3

Layer 3 — Decision-Time Authority

Primary Owner: Governance / Risk / CISO

What they own:
  • Delegated authority policy — who may authorise what
  • Approval scopes and risk tiers
  • Human oversight requirements
  • Model risk controls and data admissibility standards
  • Incident handling
  • Runtime enforcement: governance that runs with the decision, not after it [24]

Influences but does not own: Architecture design (they specify requirements; EA designs patterns); application fitness (they set governance compatibility requirements)

"We ensure that AI decisions cannot execute without valid, provable authority."

4

Layer 4 — Proof-Carrying Receipts

Primary Owner: Governance / Risk (shared with EA)

What they own:
  • Receipt standards — what constitutes sufficient proof
  • Evidence-chain requirements
  • Attestation architecture design (shared with EA)
  • Non-repudiation standards and regulatory evidence requirements
  • Decision Attestation Package specifications

Influences but does not own: Implementation of the attestation system (projects and EA implement)

"Every consequential AI decision must produce portable evidence that it was authorised, evidenced, and correctly executed."

The Own vs Influence Distinction

This is the critical design principle. Every role has a bounded ownership zone and a wider influence zone. The distinction prevents both paralysis and overreach.

Role | Owns | Influences
Portfolio Steward | Application fitness, investment gating | Governance design, architecture direction
Enterprise Architect | Runtime architecture, future-state design | Application fitness findings, governance requirements
Governance / Risk | Authority model, policy, proof standards | Architecture patterns, application requirements
Projects | Implementation against patterns | Nothing — they consume

The portfolio steward owns application fitness but influences governance design. Governance owns authority infrastructure but consumes application readiness findings. EA owns future-state architecture but consumes governance requirements. Projects consume everything. They do not author doctrine.

"Own vs influence. The portfolio steward OWNS application fitness but INFLUENCES governance design. Governance OWNS authority infrastructure but CONSUMES application readiness findings. That distinction is the difference between clarity and chaos."

Why This Split Works Politically

Each owner has a bounded, defensible remit. Nobody is asked to boil the ocean. The portfolio steward is not suddenly "the AI czar." His territory is narrow and clean: application estate and investment gating. Governance does not need to understand application architecture — they set requirements; EA designs patterns. EA does not need to own policy — they architect for it. Projects do not carry the weight of inventing governance — they comply with it.

This prevents the two deadly patterns:

"Everyone owns it"

Nobody owns it. Nothing moves. Every decision requires a committee. Every committee requires another meeting. Accountability dissolves into collective shrugging.

"One person owns it all"

Politically daft, organisationally messy, and they will burn out. No single executive has the span of control across application architecture, runtime security, governance policy, and evidence infrastructure.

The Egg-on-Face Scenario

The portfolio steward is not mainly afraid of abstract AI theory. He is afraid of a very concrete governance and delivery embarrassment.

The Scenario

He approves spend. Signs off direction. Carries legacy risk. Lets a project proceed. Renews or extends a platform.

Later the organisation realises that application is a poor fit for the next operating model.

Then everyone stares at each other like a committee of damp parrots.

That is the "egg-on-face" scenario. Not because he was foolish, but because the assessment criteria were incomplete. He did everything right by the rules of the old game. The game changed.

AI-readiness is now another reasonable assessment dimension alongside security, supportability, lifecycle status, and project impact. Not exotic. Not evangelical. Just the next version of a very old enterprise architecture question: can this system participate in the operating model we are moving toward?

"No one wants to be the bloke who okays a big project on an application, then AI comes along and he has to say 'oh yeah, but that application can't do AI.' That's the egg-on-face scenario."

The Clean Political Message

For the Portfolio Steward

"Being ready for something doesn't mean you're asking for it. Preparedness is not advocacy. It is stewardship."

For Enterprise Architecture

"AI readiness is not just an organisational capability issue. It is also an application architecture issue — and that makes it a legitimate enterprise architecture concern."

For Governance / Risk

"Policies and committees are necessary but not sufficient. The question is whether governance has teeth — runtime enforcement, not just documented intention."

For the Board

"An organisation is not AI-ready because its apps are accessible. It is AI-ready when applications, runtime controls, and proof-carrying governance all line up."

How the Layers Connect in Practice

1

Portfolio Steward assesses application fitness

Findings feed down ↓

2

Enterprise Architect designs future-state architecture informed by fitness findings

Patterns satisfy ↓

3

Governance / Risk specifies what authority and proof must look like

Requirements flow to ↓

P

Projects implement against all of the above. Each layer is distinct, but the information flow is continuous.

Key Takeaway

  • Clear ownership prevents the "everyone owns it so nobody owns it" deadlock
  • The staircase has four layers and four primary owners — each with bounded remit
  • Own vs influence is the critical distinction — bounded remit prevents political paralysis
  • The portfolio steward owns application fitness, not the whole AI governance mountain
  • Projects consume doctrine. They do not author it.
  • Senior ownership is the single biggest differentiator: 3× better outcomes (McKinsey) [25]
Part II: The Ownership Map

What Happens When Layers Are Confused

Layer confusion is not a theoretical risk. It is the primary mechanism behind enterprise AI failure. Each missing layer creates a specific, predictable failure mode.

This is not a risk workshop scenario. These are field reports from enterprises that skipped Layers 2 through 4.

Unintended AI Agent Actions — Already Happening

80%

of organisations have encountered unintended agent actions

39%

report agents accessing unauthorised systems

32%

report agents downloading sensitive data

23%

report agents tricked into revealing credentials

Ninety-six per cent of IT professionals consider AI agents a growing security risk. Sixty-six per cent believe the threat is immediate. Yet 82% are already using AI agents, and 98% plan to expand use next year [24].

Failure Mode 1

Layer 1 Mistaken for Total Readiness

The organisation completes an application readiness assessment. Systems have APIs. Data is accessible. MCP connectors are built. They declare "AI-ready" and proceed to deploy agentic workflows.

The first agent crosses a permission boundary. Accesses data outside its scope. Makes a decision without authority. Produces an output that cannot be audited. The pilot is paused. Confidence collapses. The executive says: "AI isn't ready for us."

But AI was ready. The enterprise was not — specifically, Layers 2, 3, and 4 were absent.

Why this happens

Layer 1 is the most visible and tangible layer. It produces artefacts: API inventories, compatibility scores, integration plans. Layers 2–4 are invisible until they fail. You do not notice the absence of runtime containment until an agent goes rogue. Market readiness frameworks reinforce this — they measure what is easy to measure and stop.

The cost

Companies abandoning projects consistently cite costs, privacy concerns, and governance gaps as primary reasons [25]. Forty-two per cent of organisations have abandoned most of their AI projects [26].

Failure Mode 2

Governance Without Machinery — Compliance Cosplay

The organisation has all the governance artefacts:

AI usage policies
Responsible AI principles
Governance charter
Committee meetings
Risk register updated

None of these run at decision time. None can technically prevent an action from executing. Three-quarters of organisations have AI policies, but none run at decision time [27].

The Compliance Cosplay Diagnostic

Three questions that reveal whether governance is infrastructure or theatre:

1

Can the system prevent unauthorised execution?

Not "detect" after the fact. Actually prevent at decision time.

2

Can you prove authority from system evidence, not reconstructed logs?

Evidence generated at decision time, not pieced together by investigators.

3

Can you produce the evidence chain without investigation?

If a regulator called today, could you pull it in minutes — not months?

If the answer to any of these is "no," the organisation has compliance cosplay, not governance. Only 34% of organisations with governance policies use any technology to actually enforce them [27]. That enforcement gap is where compliant-on-paper programmes break down in practice.

Governance-as-Documentation
  • ✗ Policies exist — in a SharePoint folder
  • ✗ Reviews happen quarterly — after the damage
  • ✗ Dashboards show metrics — historical, not preventive
  • ✗ Logs exist somewhere — mutable and deniable

Governance-as-Infrastructure
  • ✓ Authority evaluated at runtime, every decision
  • ✓ Unauthorised actions structurally impossible
  • ✓ Evidence produced automatically at decision time
  • ✓ Proof is immutable, portable, audit-ready

Architecture-first governance outperforms policy-first in every industrial environment measured. Organisations deploying AI governance platforms are 3.4× more likely to achieve high governance effectiveness [28]. Gartner predicts 60% of organisations will fail to realise AI value by 2027 due to incohesive governance [29].

"Committees don't enforce anything. Runtime authority does. Hope is not governance. It's the absence of governance."

Failure Mode 3

Projects Inventing Governance Ad Hoc

No enterprise-wide authority model exists. Layer 3 is absent. Each AI project fills the vacuum with whatever seems reasonable locally.

The Museum of Inconsistent Nonsense

Team A uses Slack approvals.

Team B uses email sign-offs.

Team C has no approval at all.

Team D built a custom dashboard that nobody checks.

Result: different approval workflows, different risk classifications, different evidence trails, different definitions of "authorised." No auditor can make sense of it.

Projects are under delivery pressure. They need to ship. Enterprise governance frameworks are either absent or too slow and vague to apply. Projects fill the vacuum with whatever seems reasonable — and the result is fragmented, inconsistent, unauditable governance.

RAND Corporation identified the pattern precisely: misaligned goals, weak data infrastructure, unclear ownership, and lack of cross-functional support keep companies stuck perpetually in pilot mode [30]. Every project-level governance snowflake increases the cost of future consolidation and audit.

The doctrine was never authored. It was improvised twelve times.

Failure Mode 4

The Regulatory Cliff Without Layer 4

An organisation deploys AI in a regulated context — insurance, finance, healthcare. A regulator asks: "Show me the decision-time evidence chain for this AI-mediated action."

The organisation has: logs (mutable), dashboards (post-hoc), committee minutes (not connected to specific decisions). The organisation does not have: signed authority, signed data, signed execution graph — portable proof that this specific decision was authorised, evidenced, and correctly executed at the moment of action. "We'll investigate and get back to you" is not an adequate answer in the 2026 compliance environment.

The Regulatory Pressure Is Real and Hardening

Aug 2026

EU AI Act — Full Enforcement

Rules for high-risk AI require documented risk management, robust data governance, automatic logging, human oversight, and accuracy safeguards. Penalties up to €15M or 3% of worldwide turnover [31].

Jul 2026

APRA CPS 230 — Full Compliance

Pre-existing service provider contracts must comply. Complex AI supply chains — cloud providers, data vendors — become points of failure [32].

Feb 2026

NIST NCCoE — Agent Identity Concept Paper

Explicitly establishes that agent identity and authorisation require new infrastructure, not just policy [33].

Now

Insurance and Audit Pressure

Cyber insurers are writing AI exclusions. SOC 2 auditors are asking about AI change management today [34].

In the 2026 compliance environment, screenshots and declarations are no longer sufficient — only operational evidence counts [35]. The organisation that cannot produce decision-time evidence is not just non-compliant. It is uninsurable and unauditable.

"Under the EU AI Act, organisations may need to produce decision-time evidence chains on demand. 'We'll investigate and get back to you' may not be sufficient."

The Meta-Pattern

Each failure mode has the same root cause: layer confusion. The organisation invested in one layer, declared victory, and discovered too late that other layers were absent.

Layer 1 alone → Agents with access but no containment (most dangerous)
Layers 1 + 2 only → Contained agents making unauthorised decisions
Layers 1 + 2 + 3 only → Authorised decisions that cannot be proven

Myth vs Reality

Myth

"We have APIs and a governance committee. We're AI-ready."

Reality

You are at Layer 0.5 with governance theatre. Ready for AI demos, not AI production.

Myth

"Our AI policies cover this."

Reality

Policies that do not run at decision time are wallpaper, not governance.

Myth

"We'll add governance later, once the pilot proves value."

Reality

Governance retrofitted after deployment is archaeology, not architecture. Build it in or pay 10× to bolt it on.

Key Takeaway

  • Layer confusion is the primary mechanism behind enterprise AI failure
  • Each missing layer creates a specific, predictable failure mode
  • 80% of organisations have already experienced unintended agent actions (SailPoint)
  • Compliance cosplay — policies without enforcement — is the most common governance failure
  • The regulatory environment is hardening: EU AI Act (August 2026), APRA CPS 230, NIST AI RMF
  • The fix is not more policies. It is mapping the staircase, assigning owners, and building machinery.
Part III: Climbing the Staircase

Which Step Are You On?

A self-assessment that tests machinery, not feelings. Per-layer readiness questions that find gaps — not award scores.

Cisco's 2024 AI Readiness Index captures a paradox that should unsettle every board: global readiness has actually declined — only 13% of companies are fully ready, down from 14% a year earlier. Meanwhile 98% of leaders report increased urgency and 85% believe they have less than 18 months to act [36].

The largest decline was in infrastructure readiness. Fifty-four per cent say their infrastructure cannot scale for rising workloads [37].

Urgency rising. Readiness falling. This is the hallmark of organisations confusing desire with architecture. They want to move but do not know where to stand. The self-assessment in this chapter is not another culture survey. It is an architectural diagnostic.

13%

fully ready — down from 14%

54%

say infrastructure cannot scale

85%

believe they have <18 months to act

Source: Cisco AI Readiness Index, 2024

How to Use This Assessment

This is structured as per-layer questions. It is not a scoring model — it is a gap-finding instrument. The goal is not a number. The goal is clarity on:

Which layers are genuinely addressed?

Which layers are assumed but not real?

Where is the next real gap?

Who should own closing that gap? (See Chapter 3)

Organisations should assess per-layer, not globally. "We're 60% AI-ready" is meaningless. "We're solid on Layer 1, absent on Layer 2, and have theatre in Layer 3" is actionable.
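
In practice that means recording a status per layer rather than one blended number. A trivial Python sketch, with illustrative labels:

    from enum import Enum

    class Status(Enum):
        NOT_STARTED = "not started"
        PARTIAL = "partial"
        SOLID = "solid"

    posture = {    # per-layer, never a single blended percentage
        "L1 application fitness": Status.PARTIAL,
        "L2 runtime safety": Status.NOT_STARTED,
        "L3 decision-time authority": Status.PARTIAL,    # often policies without enforcement
        "L4 proof-carrying receipts": Status.NOT_STARTED,
    }

    gaps = [layer for layer, status in posture.items() if status is not Status.SOLID]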

1

Application Fitness

Assess with: Portfolio Steward / CIO

Do you have an inventory of major applications that would participate in AI-mediated workflows?

For each critical system, do you know whether it has a usable API — or only human interfaces?

Do you know where business logic lives — in the backend (good) or the UI (problematic)?

Is documentation machine-readable — schemas, examples, edge cases?

Do MCP connectors or equivalent integration paths exist?

Can interactions be logged, constrained, and audited?

Have you classified applications as Use / Wrap / Contain / Replace candidates?

When making upgrade, renewal, or vendor decisions, does AI operability feature as an assessment criterion?

Red Flags
  • "We don't have an API inventory" — you do not know what agents can reach
  • "Business logic is mostly in the UI" — agents cannot interact safely
  • "We haven't looked at this" — you are at Layer 0, not Layer 0.5
  • "We assessed data readiness, isn't that enough?" — classic Layer 1 false confidence

Where most organisations are: many have done some data readiness work. Few have assessed application architecture for AI operability. The application estate is the blind spot.

2

Runtime Safety

Assess with: Enterprise Architect / Security

Can you run an untrusted AI agent in a constrained execution environment today?

Do you have identity and access management for AI agents, separate from human IAM?

Are agent permissions scoped by task — or does the agent inherit a human user's full access?

Is agent execution stateless — or can an agent accumulate data across invocations?

Do you have blast radius containment — if an agent goes wrong, how far does the damage spread?

Is sensitive data tokenised or redacted before agents see it?

Can you observe what an agent is doing in real time — tool calls, data access, actions taken?

Do you have a kill switch — can you halt agent execution immediately if something goes wrong?

Red Flags
  • "Our agents use service accounts with broad access" — no blast radius containment
  • "We trust the model to behave" — manners, not physics
  • "We'd notice if something went wrong" — reactive, not preventive
  • "We're waiting for our cloud vendor to solve this" — hope-as-strategy

Where most organisations are: almost nowhere. This is the layer most enterprises have not started. The infrastructure exists (SiloOS-style patterns, AWS AgentCore, Microsoft Agent Governance Toolkit [38]) but adoption is extremely early.

3

Decision-Time Authority

Assess with: Governance / Risk / CISO

Can your system technically prevent an unauthorised AI decision from executing right now? Not "detect after the fact" — actually prevent at decision time.

Is authority evaluated at runtime — or only documented in a policy PDF?

Do you have a delegated authority model for AI — who can authorise what, at what scope, under what conditions?

Are approval tiers defined — which AI actions need human approval, which can proceed autonomously, which are forbidden?

Can you prove — from system-generated evidence, not reconstructed logs — who had authority before an action executed?

If an agent makes a decision, can you trace back: who authorised this agent, what policy was evaluated, and what evidence was observed?

Red Flags
  • "We have an AI policy document" — documentation, not enforcement
  • "The governance committee reviews quarterly" — too late — governance that runs after the decision is archaeology
  • "We trust the team to follow the policy" — manners again
  • "We're working on it" — it does not exist yet

Where most organisations are: policies exist. Runtime enforcement does not. Only 34% use technology to enforce them [39]. The other 66% have governance theatre.

4

Proof-Carrying Receipts

Assess with: Governance / Risk (shared with EA)

Can every consequential AI decision produce a portable evidence bundle at the moment of action?

Does that bundle include signed authority (who), signed data (what evidence), and signed execution graph (how)?

Are your decision records mutable or immutable? Can someone edit the logs after the fact?

If a regulator asked "show me the evidence chain for decision X," could you produce it on demand, without forensic reconstruction?

Do your systems produce audit-ready output by design, or would an audit require investigators to piece together evidence from scattered logs?

Red Flags
  • "We have logs" — mutable, deniable, unreliable (Level 0: No Provenance)
  • "We'd need to investigate" — you cannot produce evidence on demand
  • "Our auditors haven't asked for this yet" — they will. EU AI Act, August 2026.40
  • "We'll add this later" — governance retrofitted after deployment costs 10×

Where most organisations are: logs only. Mutable, deniable, unreliable. Almost no enterprise has proof-carrying receipt infrastructure for AI decisions today. This is the pinnacle — the destination, not the starting point.

Putting It Together: Where Are You Really?

Layer | Status | Next Step
Layer 1: Application Fitness | Not started / Partial / Solid | Assign to portfolio steward; run application audit
Layer 2: Runtime Safety | Not started / Partial / Solid | Assign to EA/Security; design containment architecture
Layer 3: Decision-Time Authority | Not started / Partial / Solid | Assign to Governance/Risk; build runtime authority
Layer 4: Proof-Carrying Receipts | Not started / Partial / Solid | Assign to Governance + EA; design attestation architecture

Most regulated enterprises will find: Layer 1 partial, Layer 2 not started, Layer 3 documentation-only (theatre), Layer 4 absent. That is normal. That is where most organisations actually are.

What You Do NOT Need to Do

You do not need to reach Layer 4 before starting Layer 1 work

You do not need a perfect score at any layer before moving to the next

You do not need to solve everything at once

You need to know where you are, who owns what, and what your next gap is

"Being ready for something doesn't mean you're asking for it. The staircase gives permission to be at any step. Stage 1 is a legitimate starting point. The danger is calling it the destination."

Three Personas, Three Starting Points

1
The Cautious Portfolio Steward

Start with Layer 1. Audit application fitness. No AI deployed, no risk triggered. Pure stewardship.

"You're not even turning on AI. You're just assessing which applications are ready for it."

2
The Active AI Deployer

Start with Layer 2–3. Your pilots are live. Are they contained? Are they authorised? Can you prove it?

"If you can't answer the runtime safety questions, your pilots are running without a safety net."

4
The Compliance Officer

Start with Layer 4. What can you prove on demand? Work backwards to find the gaps.

"What evidence would you need to produce? Can you? If not, which layers are missing?"

Key Takeaway

  • Self-assessment is per-layer, not global. "60% AI-ready" is meaningless; "Layer 1 solid, Layer 2 absent" is actionable.
  • Most regulated enterprises: Layer 1 partial, Layer 2 not started, Layer 3 theatre, Layer 4 absent. That is normal.
  • The staircase gives permission to be at any step. The danger is calling Step 1 the destination.
  • Use the ownership mapping (Chapter 3) to assign each gap to the right owner.
  • Start where the pain or urgency is highest — application audit, runtime gaps, or regulatory evidence requirements.
Part III: Climbing the Staircase

The Portfolio Steward's First Move

What to do Monday morning. One bounded project. One non-threatening first step that produces useful output regardless of how fast the organisation moves on AI.

The previous chapters defined the staircase, mapped the owners, showed the failure modes, and helped you find your step. This chapter answers the simplest and most urgent question: what does the portfolio steward actually do first?

Not a transformation programme. Not a governance overhaul. Not an AI strategy deck. One conversation. One bounded project. One non-threatening first move.

The portfolio steward already has projects running, applications running, legacy risk to manage, vendor contracts to renew, and executives asking uncomfortable questions about AI. He does not need another framework to admire. He needs a move he can make this quarter.

The Political Framing: Preparedness, Not Adoption

What He Should NOT Say

"We need to do AI now."

Triggers governance antibodies immediately

"We're falling behind."

Creates defensiveness, even if true

"AI is inevitable, so we need to act."

Sounds like futurist sermonising

"Let's add AI into this project."

Governance shuts it down immediately

What He SHOULD Say

"I'm not advocating for uncontrolled AI rollout."

Validates current conservatism

"I'm making sure the organisation understands its readiness posture."

Sounds like stewardship

"A low-risk step is to assess our application estate for AI readiness."

Bounded, defensible, governable

"We're not turning on AI. We're assessing which applications are ready."

Hard to attack without sounding irresponsible

The winning frame is not AI adoption. It is AI preparedness. "Adoption" sounds like pressure, ideology, premature commitment. "Preparedness" sounds like stewardship, prudence, sensible executive hygiene. Very different beast.

That is a grown-up sentence. Hard to attack without sounding oddly committed to strategic sleepwalking.

"Being ready for something doesn't mean you're asking for it. Preparedness is not advocacy. It is stewardship."

The First Project: Application AI-Readiness Assessment

Low-Risk

No production AI. No customer impact. No model decisions. No scary blast radius.

Governance-Compatible

It is an assessment activity, not an operational deployment. Harder to swat down.

Portfolio-Relevant

Sits naturally with architecture, application stewardship, and future investment decisions.

Useful Regardless

Even without AI rollout, you learn which systems are modern, brittle, or over-closed.

This must survive the question: "Why do this if we aren't ready for AI yet?"

Answer: "Because readiness itself is now part of responsible architecture and portfolio management."

What the Audit Looks Like

Score major systems across the eight dimensions defined in Chapter 2: access posture, model legibility, actionability, documentation quality, integration friendliness, governance compatibility, replacement exposure, and strategic criticality. This is not an "AI strategy document." It is an inventory and assessment of major applications against a small set of criteria. That sounds like a real piece of work rather than strategy confetti.

Level 0

Opaque

No usable API, poor documentation, GUI-bound, hard to automate safely

Level 1

Accessible with Effort

Some integration points, patchy model exposure, weak docs, limited action support

Level 2

AI-Compatible

Good APIs, reasonable schemas, documented actions, manageable permissions, auditable use

Level 3

AI-Ready

Strong APIs/events, good documentation, fine-grained controls, suitable for supervised agent workflows
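
One way to keep the levels mechanical rather than impressionistic, as a hedged sketch; the criteria below are illustrative assumptions drawn from the level descriptions above.

    def fitness_level(has_usable_api: bool, machine_readable_docs: bool,
                      fine_grained_controls: bool, auditable: bool) -> int:
        """Map audit observations onto the Level 0-3 ladder."""
        if not has_usable_api:
            return 0       # Opaque: GUI-bound, hard to automate safely
        if not machine_readable_docs:
            return 1       # Accessible with effort: patchy exposure, weak docs
        if fine_grained_controls and auditable:
            return 3       # AI-ready: suitable for supervised agent workflows
        return 2           # AI-compatible: good APIs, manageable permissions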

AI Readiness as Another Application Assessment Lens

The portfolio steward already assesses applications alongside the usual dimensions: business fit, technical fit, cost, vendor risk, security, supportability, integration, lifecycle status. Now add: AI operability.

This is not a new exotic function. It is the next version of a very old architecture question: Can this system participate in the operating model we are moving toward? That question used to be about web, mobile, cloud, APIs, event-driven architecture, zero trust. Now it is also about AI-mediated work.

Pick the label that sounds least like marketing confetti in that organisation: AI operability, AI integration readiness, agent usability, machine legibility, governance compatibility. The label matters less than the substance — and the substance is just another column in the application portfolio assessment that EA teams have been doing for decades.

The Stranded Investment Argument

The portfolio steward's real fear is not abstract AI theory. It is concrete portfolio embarrassment. He approves spend. Signs off direction. Carries legacy risk. Lets a project proceed. Renews or extends a platform. Later the organisation realises that application is a poor fit for the next operating model.

Then everyone stares at each other like a committee of damp parrots.

"No one wants to be the bloke who okays a big project on an application, then AI comes along and he has to say 'oh yeah, but that application can't do AI.' That's the egg-on-face scenario."

The argument in his language:

"I'm not trying to predict every AI use case. I'm trying to avoid making portfolio decisions today that create obvious future constraints tomorrow."

"We already assess applications for lifecycle, security, and delivery risk. It may now be prudent to include AI-readiness as part of overall application fitness."

"This is less about forecasting AI adoption and more about making sure our current portfolio decisions don't unintentionally lock us into avoidable limitations."

The Capital Context
~$700B

hyperscaler AI infrastructure spend in 2026

2×

typical switching costs relative to initial investment

3–5 yr

contracts that constrain strategic options

Sources: CNBC/Axios 2026; Contus 2026

The infrastructure is being built. Hyperscalers are committing roughly $700 billion to AI infrastructure in 2026 [38]. Enterprises that fail to map readiness will find their programmes stranded against infrastructure they cannot safely use. Switching costs are typically 2× the initial investment, and 3–5 year contracts constrain strategic options [39].

The application AI-readiness review is not only about today's AI use. It is also an anti-lock-in mechanism against signing tomorrow's wrong contracts under yesterday's assumptions.

The Replacement Economics Shift

For years, organisations tolerated mediocre point solutions because replacement economics were ugly. Even if the software was annoying, the cost and risk of rewriting kept everyone trapped. AI changes that equation. Not always. Not magically. Not for every system. But the cost curve for building or rebuilding certain classes of application has clearly shifted downward [41].

Sweet spot for AI-assisted replacement: well-understood workflows, CRUD-heavy systems, internal line-of-business apps, heavily form-based processes, straightforward case/task/record systems, and commodity point solutions with lots of configuration but limited true differentiation.

The key insight: the legacy system is often not just technical debt. It is also a partially expressed requirements document. Messy, implicit, annoying — but still a spec. If AI lowers the cost of turning observed behaviour, screens, workflows, and business rules into new software, replacement becomes more imaginable earlier.

Part of what makes replacement viable now is that AI can write the software — ridiculously cheaply, 90–95% cheaper in some cases, if you do it the right way. Caveat: this works best for bounded internal apps with limited integrations and available subject matter experts. Core systems with undocumented tentacles and brittle batch logic are harder. That caveat makes the argument stronger, not weaker — it turns prophecy into selection logic.

The Non-Threatening Conversation Structure

Step 1: Validate Current Caution

"The organisation's caution on AI is understandable and probably appropriate given the regulatory and operational environment."

Do not lead with "you're behind." True or not, it creates defensiveness.

Step 2: Differentiate Caution from Preparedness

"There's a difference between being cautious and being unprepared. Not deploying AI yet does not remove the need to prepare the estate."

"Caution without architecture is well-dressed paralysis."

Step 3: Offer a Low-Risk Preparatory Step

"An application AI-readiness assessment gives the portfolio function a measured, non-threatening way to build readiness."

"We're not even turning on AI. We're just assessing which applications are ready for it."

The identity you are offering him is not "the executive who sponsored a weird AI thing." It is "the executive who prepared the terrain before everyone else started making noisy demands." He can talk about AI readiness without being evangelical. He is not saying "let's go AI" or "hurry up." He is making a prudent move to assess and understand where the organisation stands.

What the Audit Enables

Once the application readiness assessment exists, the portfolio steward has:

A fact base instead of opinion when AI opportunities emerge

Better identification of future blockers before they become emergencies

More informed modernisation priorities — some applications may now be strategically weaker than old criteria detected

Earlier identification of legacy candidates — not old in years, but old in architectural shape

Better project gating: "Before this project leans heavily on that platform, we should review whether it's still fit"

Better vendor conversations: AI-readiness as another dimension alongside lifecycle, security, and delivery risk

"My responsibility is to ensure the application estate does not become the avoidable blocker to safe future AI adoption."

What Happens After Layer 1

The readiness findings feed into EA's future-state design (Layer 2 — see the ownership mapping in Chapter 3). Governance and risk can use the findings to scope authority requirements (Layer 3). The staircase continues. But the portfolio steward has done his part: he has made sure the ground floor is assessed and the portfolio is not flying blind.

He does not need to own the whole mountain. He just needs to make sure portfolio decisions are not made blind to AI readiness.

The Insurance Enterprise Application

Return to the archetype from Chapter 1: the regulated insurer. The whole organisation is allergic to AI. Access is banned; the word itself is nearly taboo. They run waterfall and old-world project governance. Yet the insurer is already insourcing and rewriting large point solutions that had been outsourced. They already understand that applications can be replaced.

The framing for insurance specifically: "As AI becomes more capable, the organisations that benefit will not necessarily be the boldest first movers. They will be the ones whose architecture, controls, and application estate allow them to adopt safely when the time is right."

That is very insurance-coded. Sensible, calm, anti-chaos. Readiness will likely matter before deployment. Once business demand arrives, the organisation will not want to discover at that point that key systems are inaccessible, poorly governed, or structurally incompatible with AI-mediated workflows.

Key Takeaway

  • The first move is an application AI-readiness assessment — not an AI deployment, not a strategy deck
  • Frame as preparedness, not adoption: "Being ready doesn't mean you're asking for it"
  • The audit is low-risk, governance-compatible, and useful even if AI adoption stays slow
  • AI readiness is a legitimate application assessment lens alongside cost, security, supportability, and lifecycle
  • The stranded investment argument is the strongest political angle: avoid committing spend to AI-incompatible platforms
  • Replacement economics have shifted — AI lowers the cost of escaping software, not just making it
  • The portfolio steward does not own the whole mountain. He owns making sure the ground floor is assessed.

Ready to Map Your Staircase?

Take the free 10-minute AI Readiness Assessment to understand where your organisation stands across all four layers.

Scott Farrell — LeverageAI
leverageai.com.au · scott@leverageai.com.au

References & Sources

The evidence base behind every claim — primary research, industry analysis, and technical specifications

Research Methodology

This ebook draws on primary research from standards bodies, independent research firms, enterprise technology vendors, and consulting firms. Statistics cited throughout have been cross-referenced against primary sources.

Frameworks and interpretive analysis developed by Scott Farrell / LeverageAI are listed separately below — these represent the practitioner lens through which external research is interpreted, and are not cited inline to avoid self-promotional appearance.

Primary Research & Standards Bodies

Deloitte — State of AI in the Enterprise 2026 [1]

85% plan agentic AI, only 21% governance ready

https://www.deloitte.com/us/en/about/press-room/state-of-ai-report-2026.html

Cisco — Cisco 2024 AI Readiness Index [3]

Global AI readiness declined, 13% fully ready, 98% report increased urgency

https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2024/m11/cisco-2024-ai-readiness-index-urgency-rises-readiness-falls.html

Deloitte — State of AI in the Enterprise 2026 [4]

Multi-dimension readiness gaps: strategy 40%, governance 30%, talent 20%

https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html

Argano — Overcoming the AI Pilot Trap [5]

95% enterprise AI pilots fail to deliver business value

https://argano.com/insights/articles/overcoming-the-ai-pilot-trap.html

AIM Research Councils — AI Insights in 2025: Scale is the Strategy [6]

80% of AI pilots fail to reach production despite 70% succeeding technically

https://councils.aimmediahouse.com/ai-insights-in-2025-shows-scale-is-the-strategy/

RAND Corporation — Root Causes of Failure for Artificial Intelligence Projects [7]

AI projects fail at 2x rate of non-AI IT, driven by governance not tech

https://www.rand.org/pubs/research_reports/RRA2680-1.html

Gartner — Lack of AI-Ready Data Puts AI Projects at Risk [11]

60% of AI projects abandoned without AI-ready data

https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk

SailPoint — AI Agents: The New Attack Surface [12]

80% of orgs experienced unintended AI agent actions

https://www.sailpoint.com/identity-library/ai-agents-attack-surface

Gartner — Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027 [14]

Agentic AI project cancellation prediction

https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027

Strata — What is Agentic AI Security? A Guide for 2026 [16]

Tiered authorization: 76% fewer agent safety incidents

https://www.strata.io/blog/agentic-identity/8-strategies-for-ai-agent-security-in-2025/

Gartner — Top Strategic Technology Trends for 2026: Digital Provenance [18]

Digital provenance as strategic technology trend

https://www.gartner.com/en/documents/7031598

NIST NCCoE — Accelerating the Adoption of Software and AI Agent Identity and Authorization [20]

Agent identity and authorization require new infrastructure

https://www.nccoe.nist.gov/sites/default/files/2026-02/accelerating-the-adoption-of-software-and-ai-agent-identity-and-authorization-concept-paper.pdf

Fortune — MIT Report: 95% of generative AI pilots failing [26]

42% of organizations abandoned most AI projects

https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

Gartner — AI Governance Failure Prediction [29]

60% will fail to realise AI value by 2027 due to incohesive governance

https://www.rapidionline.com/blog/data-integration-trends-markets

Cisco — Cisco AI Readiness Index 2024 PDF [37]

Infrastructure readiness largest decline, 54% can't scale

https://www.cisco.com/c/dam/m/en_us/solutions/ai/readiness-index/2024-m11/documents/cisco-ai-readiness-index.pdf

Industry Analysis & Vendor Research

AI Governance Today — The $665 Billion AI Spending Crisis [2]

Enterprise AI spending and ROI failure rates

https://www.aigovernancetoday.com/news/enterprise-ai-spending-crisis-2026

Sweep.io — Enterprise AI Readiness in 2026: What It Really Takes [8]

Only 14% of organizations fully ready for AI deployment

https://www.sweep.io/blog/executive-brief-what-it-means-to-be-ai-ready-in-2026

CNBC — Tech AI spending approaches $700 billion in 2026 [9]

Hyperscaler AI capex: Amazon $200B, Google $175-185B, Meta $115-135B

https://www.cnbc.com/2026/02/06/google-microsoft-meta-amazon-ai-cash.html

Wikipedia — Stargate LLC [10]

$500B AI infrastructure investment, $100B initial deployment

https://en.wikipedia.org/wiki/Stargate_LLC

Forbes — 40% Of Workflows Will Run On Agentic AI [13]

40% of enterprise workflows will involve autonomous agents by 2026

https://www.forbes.com/sites/digital-assets/2026/02/13/40-of-workflows-will-run-on-agentic-ai-wheres-the-identity/

Microsoft — Introducing the Agent Governance Toolkit [15]

First toolkit addressing all 10 OWASP agentic AI risks

https://opensource.microsoft.com/blog/2026/04/02/introducing-the-agent-security-for-ai-agents/

Dataversity — AI Governance in 2026: Is Your Organization Ready? [17]

34% use technology to enforce governance policies

https://www.dataversity.net/articles/ai-governance-in-2026-is-your-organization-ready/

Kiteworks — AI Regulation 2026 Business Compliance Guide [19]

Cyber insurers writing AI exclusions; SOC 2 auditors asking about AI change management

https://www.kiteworks.com/cybersecurity-risk-management/ai-regulation-2026-business-compliance-guide/

Exceeds AI — AI Governance Risk Management: NIST Framework Guide 2026 [21]

Only operational evidence counts in 2026 compliance

https://blog.exceeds.ai/ai-governance-risk-management/

Scott Farrell — Compliance Cosplay [24]

Governance that runs with the decision changes system behaviour; governance after the decision is already too late

https://leverageai.com.au/wp-content/media/Compliance_Cosplay.html

HarrisonAIX — The AI Execution Gap: Why 80% of Pilots Die [25]

Companies cite costs, privacy, governance gaps for abandonment

https://harrisonaix.com/blog/ai-execution-gap-enterprise/

CSA + Google Cloud — The State of AI Security and Governance [27]

75% of organisations have AI usage policies, responsible AI principles, governance charters — none running at decision time

https://cloud.google.com/resources/content/csa-the-state-of-ai-security-and-governance

Kai Waehner — Enterprise Agentic AI Landscape 2026 [28]

Architecture-first governance 3.4x more effective

https://www.kai-waehner.de/blog/2026/04/06/enterprise-agentic-ai-landscape-2026-trust-flexibility-and-vendor-lock-in/

Contus — Build vs Buy AI Solution in 2026 [39]

Switching costs 2x initial investment, 3-5 year constraints

https://www.contus.com/blog/build-vs-buy-ai/

McKinsey — AI for IT modernization: faster, cheaper, and better [41]

AI-driven IT modernization now costs less than half traditional approaches

https://www.mckinsey.com/capabilities/quantumblack/our-insights/ai-for-it-modernization-faster-cheaper-and-better

Regulatory Frameworks & Compliance

LegalNodes — EU AI Act Compliance Guide 2026 [11]

EU AI Act full enforcement August 2026, high-risk AI requirements including risk management, documentation, logging, and human oversight

https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks

Clayton Utz / APRA — APRA 2025-26 Corporate Plan [12]

APRA CPS 230 took effect July 2025, pre-existing contracts require compliance by July 2026, AI supply chain risk in scope

https://www.claytonutz.com/insights/2025/august/apras-2025-26-corporate-plan-key-implications-for-financial-services

LeverageAI / Scott Farrell — Practitioner Frameworks

The interpretive frameworks, architectural patterns, and practitioner analysis in this ebook were developed through enterprise AI transformation consulting. The articles below are the underlying thinking behind those frameworks. They are listed here for transparency and further exploration — not cited inline, as this is the author's own analytical voice.

Scott Farrell — The AI Readiness Staircase

Layer 0.5 concept — application readiness as necessary but not sufficient

https://leverageai.com.au/blog-posts/

Scott Farrell — SiloOS

Runtime containment architecture — from manners to physics

https://leverageai.com.au/siloos-the-agent-operating-system-for-ai-you-cant-trust/

Scott Farrell — Getting Enterprise AI-Ready: Governance as Code, Not Committees

In-Path Authority pattern and governance-as-code

https://leverageai.com.au/getting-enterprise-ai-ready-governance-as-code-not-committees/

Scott Farrell — The Governance Stack

Three governance layers: data, AI, authority infrastructure

https://leverageai.com.au/the-governance-stack-data-truth-model-risk-and-the-authority-layer-nobody-built/

Scott Farrell — AI Governance Means Signing the Authority, the Data, and the Graph

Decision Attestation Package — proof-carrying receipts

https://leverageai.com.au/ai-governance-means-signing-the-authority-the-data-and-the-graph/

Major Consulting Firms

McKinsey — The State of AI 2025 [23]

High performers 3x more likely to have senior AI ownership

https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

About This Reference List

Compiled April 2026. All URLs verified at time of compilation. Regulatory documents and standards specifications are subject to revision — check primary sources for the most current versions.

Some links to academic papers and vendor research may require free registration. Government and standards body publications are freely accessible.