AI Governance Means Signing the Authority, the Data, and the Graph
The missing object in AI governance is not better logs. It is a Decision Attestation Package: a signed, portable bundle that proves what was authorised, what was observed, and what machinery produced the outcome.
Scott Farrell | LeverageAI | March 2026
TL;DR
- Governance that exists outside the decision can always be separated from it. Logs record narratives. Signed attestations prove commitments.
- AI governance requires three cryptographic commitments: signed authority (who was allowed to decide), signed data (what was observed), and signed graph (the policy evaluation and deterministic code that produced the outcome).
- Supply-chain attestation infrastructure (SLSA, in-toto, Sigstore) is mature enough to serve as the structural template; the adaptation to business decisions is conceptual, not technological.
Your AI Made a Decision Last Tuesday
Can you prove it was authorised?
Not explain. Not reconstruct from logs. Not assemble a narrative from database entries, Slack messages, and committee meeting minutes.
Prove.
If you paused, you have company. Only 28% of organisations can reliably trace agent actions back to a human sponsor. 80% cannot tell in real time what their agents are doing or who is responsible.1
Four independent authorities converged on the same structural gap in the same twelve-month window. NIST explicitly asked: “How do we ensure non-repudiation for agent actions and binding back to human authorization?”2 Gartner named Digital Provenance a 2026 Top Strategic Technology Trend, predicting billion-dollar sanction risks for those who fail to invest.3 The EU AI Act hits full enforcement in August 2026 with penalties reaching 15M euros or 3% of worldwide turnover.4 And McKinsey called 2026 “the year of tackling the foundational enablers required to build, govern, and operate agentic workflows.”5
None of them are coordinating. They all arrived independently at the same hole in the floor.
This article names the missing object and gives it a structure.
The Problem: Governance That Lives Outside the Decision
Most organisations deploying AI into consequential workflows (claims processing, credit decisions, compliance checks) govern those decisions the same way they’ve always governed everything: separately.
Policies live in SharePoint. Dashboards live in Grafana. Authority delegations live in meeting minutes. Audit trails live in log databases. The decision itself lives somewhere else entirely.
And every one of those governance artefacts can be separated from the decision it claims to govern.
“Audit logs record what happened – but can be modified after the fact. Signed commits prove what happened, with cryptographic binding to the author and timestamp.”
– SLSA Provenance / Legit Security6
ISACA put it plainly in 2026: “Every answer produced by an AI system should be traceable to an authenticated request, authorized data, enforced controls and auditable evidence.”7 Notice the word traceable. Not “reconstructable.” Not “probably somewhere in the logs.” Traceable, with a path you can follow from outcome to authority without forensic archaeology.
Here is the current state of AI governance at most mid-market organisations:
| What They Have | What It Actually Proves |
|---|---|
| System logs | What happened (mutable, modifiable, deletable) |
| Policy documents | What should happen (aspirational, not enforced) |
| Governance committee | That meetings occurred (not that authority was validated at decision time) |
| Dashboards | Summary statistics (not per-decision accountability) |
| Model cards | Model properties (not decision-level proof of authority) |
Compare this with what a regulator, auditor, or insurer will actually ask:
“For decision #47291, the one the customer is disputing: can you prove the AI had authority to act, that the data it observed was authentic, that the policy in force at the time was evaluated, and that the deterministic code that executed the outcome was the code you intended?”
That question requires a different kind of answer. Not a narrative reconstructed from logs. A receipt.
Three Signatures, Not One
AI governance is not one signature on one document. It is three cryptographic commitments, each proving something different.
1. Signed Authority
What it proves: Who was allowed to decide, under what delegation, within what scope, at what risk tier.
Most organisations assume authority exists because someone approved a project in a committee meeting. But authority that is not validated at runtime is not authority; it is a laminated promise card floating through the system.
The Authority Attestation records: the delegating principal, the role and mandate applied, scope limits, risk thresholds, expiry conditions, jurisdiction, and whether human approval was obtained for this specific decision class.
2. Signed Data
What it proves: What the system actually observed at the moment of decision – not what was available, but what was consumed.
The Observation Attestation records: cryptographic hashes of input documents, pointers to immutable evidence stores, feature-set digests, data classification labels, and the timestamp of observation. This is not a description. It is a cryptographic commitment: “These exact bytes entered this decision.”
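The commitment the Observation Attestation makes can be sketched in a few lines. This is a minimal illustration, not a standard: every field name here is invented, and a production system would anchor the digests in an immutable evidence store.

```python
# Sketch of an Observation Envelope: a cryptographic commitment to the exact
# bytes a decision consumed. All field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def build_observation_envelope(documents: dict[str, bytes],
                               classification: str) -> dict:
    """Hash each input document and record when it was observed."""
    doc_digests = {
        name: hashlib.sha256(data).hexdigest()
        for name, data in documents.items()
    }
    return {
        "section": "observation_envelope",
        "document_digests": doc_digests,
        # One digest over the whole evidence set, order-independent
        "feature_set_digest": hashlib.sha256(
            json.dumps(sorted(doc_digests.values())).encode()
        ).hexdigest(),
        "data_classification": classification,
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

envelope = build_observation_envelope(
    {"claim_form.pdf": b"...claim bytes...", "policy.json": b"{}"},
    classification="confidential",
)
```

Any later tampering with an input document changes its digest, so the envelope either matches the evidence store or it provably does not.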
3. Signed Graph
What it proves: What reasoning machinery produced the outcome – the AI judgements, the policy evaluation, and the deterministic code that assembled everything into a binding action.
This is the distinctive signature. Signing authority and data are at least conceptually familiar. Signing the graph is the architectural move.
“The graph” captures:
- AI micro-judgements: what each model node proposed, with what confidence, against what evidence
- Policy-as-code evaluation: what rules the IPA (In-Path Authority) checked at runtime and what it decided
- Deterministic assembly: the traditional code that consumed AI proposals and policy outcomes to produce the binding result
- SDLC provenance: that the code which ran was the code that was tested, reviewed, and deployed through governance
McKinsey’s framing captures why the graph matters: “Agency isn’t a feature – it’s a transfer of decision rights.”8 When a system acts, the question is not whether the model was accurate. The question is who is accountable when the system acts, and the graph is where that accountability lives.
The Decision Attestation Package
These three signatures are not separate documents filed in three different systems. They are layers of a single portable object: the Decision Attestation Package.
A governed AI decision is not complete when the model produces a judgement. It is complete when the judgement, authority, policy evaluation, and execution outcome are bound into one signed attestation package.
Seven sections. One bundle. Born at decision time.
| Section | What It Records | Who Signs |
|---|---|---|
| 1. Subject | Decision/case/workflow ID, step name, classification | System metadata |
| 2. Observation Envelope | Hashes of input documents, pointers to immutable evidence, feature-set digest, data classification labels, time of observation | Evidence bundler / IPA |
| 3. Judgement Envelope | Model identity + version, inference config, task type, structured output, confidence/uncertainty, failure flags | Execution runtime |
| 4. Authority Envelope | Delegating principal, role/mandate, scope limits, risk thresholds, expiry, jurisdiction, human approval status | Authority registry / IPA |
| 5. Policy Decision Record | Policy set evaluated, policy version, inputs to policy engine, result (ALLOW/PAUSE/DENY), reason codes | Policy engine |
| 6. Deterministic Assembly / Outcome | What code ran, which inputs consumed, which upstream attestations validated, final result produced | Deterministic execution environment |
| 7. Signature + Timestamp + Transparency Proof | Package-level signature, timestamp, transparency log anchor | Signing infrastructure |
The governing principle: every consequential step emits a signed artefact. Only deterministic, policy-aware machinery may convert those artefacts into action. The final action emits its own signed artefact that links back to all upstream ones.
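The seven-section bundle described above can be sketched as a single signed object. This is a toy illustration under loud assumptions: HMAC-SHA256 stands in for real signing infrastructure (the article points to Sigstore-style keyless certificates), and every section payload and field name is invented.

```python
# Sketch of assembling a Decision Attestation Package and signing it.
# HMAC with a demo key is a stand-in for production asymmetric signing.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key"  # placeholder only; never hard-code real keys

def sign_package(sections: dict) -> dict:
    """Bind the sections into one bundle with a package-level signature."""
    canonical = json.dumps(sections, sort_keys=True).encode()
    return {
        **sections,
        "signature": {
            "package_signature": hmac.new(
                SIGNING_KEY, canonical, hashlib.sha256).hexdigest(),
            "signed_at": datetime.now(timezone.utc).isoformat(),
            "transparency_log_anchor": None,  # filled by log infrastructure
        },
    }

package = sign_package({
    "subject": {"decision_id": "47291", "step": "claim_approval"},
    "observation_envelope": {"feature_set_digest": "abc123"},
    "judgement_envelope": {"model": "model-x@1.4", "confidence": 0.93},
    "authority_envelope": {"principal": "ops-director", "risk_tier": 2},
    "policy_decision_record": {"result": "ALLOW", "policy_version": "7.2"},
    "outcome": {"action": "approve_claim"},
})
```

The point of the canonical JSON serialisation is that any verifier can recompute the digest from the sections and compare it to the recorded signature.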
Authority Is Potential. Policy Evaluation Is Activation.
Most governance architectures treat authority and policy as separate concerns. Authority is delegated in a meeting. Policy is written in a document. Neither is evaluated against the other at the moment the decision executes.
This is the gap. Authority without runtime policy evaluation is potential energy that never converts to kinetic energy. It exists on paper but is never validated against the actual decision being made.
The IPA (the In-Path Authority layer) closes this gap. It is a deterministic enforcement layer that sits between AI proposals and execution. It does not reason or infer. It evaluates conditions against policy rules.9
What the IPA contributes to the Decision Attestation Package:
- Checks delegated authority against the specific action being proposed
- Evaluates policy-as-code rules at runtime: not aspirational policy, but executable policy
- Emits a signed Policy Decision Attestation recording what was checked and what was decided
- Gates execution: ALLOW / PAUSE / DENY with machine-readable reason codes
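The gating behaviour above is deterministic condition-checking, not inference. A minimal sketch, with invented thresholds and reason codes, might look like this:

```python
# Minimal IPA-style gate: pure condition checks against policy rules,
# emitting machine-readable reason codes. Thresholds and codes are
# illustrative assumptions, not a real rule set.
def evaluate_gate(proposal: dict, authority: dict) -> dict:
    reasons = []
    if proposal["action"] not in authority["permitted_actions"]:
        reasons.append("AUTH_SCOPE_EXCEEDED")
    if proposal["amount"] > authority["risk_threshold"]:
        reasons.append("RISK_THRESHOLD_EXCEEDED")
    if proposal["confidence"] < 0.80:
        reasons.append("LOW_CONFIDENCE")

    if not reasons:
        result = "ALLOW"
    elif reasons == ["LOW_CONFIDENCE"]:
        result = "PAUSE"  # route to human review rather than hard-deny
    else:
        result = "DENY"
    return {"result": result, "reason_codes": reasons}
```

Because the gate is a pure function of the proposal and the authority record, its output can itself be signed and replayed later, which is what makes the Policy Decision Attestation possible.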
This is not slow. Advanced platforms implement governance checks in parallel with inference requests at sub-millisecond latency.10 AWS Bedrock AgentCore enforces Cedar policies deterministically outside the LLM reasoning loop: “the gateway enforces it at runtime before the action executes.”11
The latency objection is a 2023 excuse applied to 2026 infrastructure.
Without the Policy Decision Attestation in the package, you can prove what was decided but not whether it was permitted. That distinction is the difference between a receipt and a narrative.
The Supply-Chain Template
This is not a greenfield design exercise. The software supply-chain world solved this structural problem years ago, and the tooling is now mature enough to adapt.
SLSA (Supply-Chain Levels for Software Artifacts) defines provenance as “the verifiable information about software artifacts describing where, when and how something was produced.”6 The provenance model records the builder’s actions, the recipe executed, external inputs, environmental parameters, and all materials consumed. SLSA 1.2 was released in December 2025, expanding beyond the build phase.12
in-toto provides the attestation format: a fixed, lightweight Statement that binds metadata to an artefact using context-specific Predicate schemas.13 The four-layer structure (Predicate, Statement, Envelope, Bundle) is domain-agnostic. Critically, in-toto explicitly supports custom predicates for any use case; the community review process allows proposing new predicate types.13
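To make the custom-predicate idea concrete, here is what an in-toto Statement carrying a decision attestation might look like. The outer fields (`_type`, `subject`, `predicateType`, `predicate`) follow the in-toto attestation spec; the predicate type URI and predicate body are hypothetical, since no decision-attestation predicate has been standardised.

```python
# An in-toto v1 Statement wrapping a hypothetical decision-attestation
# predicate. The Statement shape follows the in-toto spec; the
# predicateType URI and predicate contents are invented for illustration.
import hashlib
import json

def decision_statement(decision_id: str, package: dict) -> dict:
    package_bytes = json.dumps(package, sort_keys=True).encode()
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{
            "name": f"decision/{decision_id}",
            "digest": {"sha256": hashlib.sha256(package_bytes).hexdigest()},
        }],
        # Hypothetical predicate type; a real one would go through the
        # in-toto community review process the article mentions.
        "predicateType": "https://example.com/DecisionAttestation/v0.1",
        "predicate": package,
    }
```

The subject digest binds the Statement to one exact package, so the same verification tooling that checks build provenance could, in principle, check decision provenance.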
Sigstore makes signing practical through keyless, ephemeral certificates. It is now integrated with NPM, PyPI, Maven, GitHub, Homebrew, Kubernetes, and NVIDIA’s NGC model catalogue.14,15 NVIDIA has been cryptographically signing all models in the NGC Catalog since March 2025.15 The model-transparency project hit v1.0 in 2025.16
The adaptation to business decisions is conceptual, not technological. Translate SLSA into decision systems and you get: not just “what software was built,” but “what decision was built, from what evidence, under what authority, through what policy gate, with what outcome.”
| SLSA Provenance Concept | Decision Attestation Equivalent |
|---|---|
| Builder (who built it) | Delegating principal + AI model identity |
| Recipe (how it was built) | Decision DAG structure + policy rules evaluated |
| Materials (what went in) | Observation Envelope: evidence hashes, data classification |
| Build environment | Execution runtime: model version, inference config, policy engine version |
| Output artefact | Outcome Attestation: what was decided, what action was taken |
| Signed provenance | Package-level signature + transparency log anchor |
SLSA defines four maturity levels, from no provenance (Level 0) through signed, tamper-evident provenance (Level 2) to full build isolation with non-falsifiable attestations (Level 3).6
Most organisations deploying AI are at Level 0. They have logs: mutable, modifiable, deletable. The Decision Attestation Package targets Level 2 as the practical starting point for consequential decisions.
Deterministic Code Needs Governance Too
AI governance conversations focus almost exclusively on the AI. What about the code that acts on the AI’s output?
The accountable event in most regulated systems is not “the AI thought something.” It is “the system acted.” The claim was approved. The credit was extended. The compliance flag was cleared. Those actions are executed by deterministic code, and that code needs its own governance proof.
But the proof it needs is different.
AI nodes need epistemic proof: was the reasoning sound? Were the right evidence inputs consumed? Did the model operate within its validated domain?
Deterministic code needs control proof: was the action authorised? Was it bounded? Was it executed under the right policy version? Did the code that ran match the code that was tested and reviewed?
The Outcome Attestation (the deterministic node’s signed receipt) records which inputs it accepted, which upstream attestations it validated, what policy version it applied, whether the caller had mandate, and what result it produced.
That second receipt is not redundant. It is the binding explanation of control. The AI receipt says, “Here is what was inferred.” The deterministic receipt says, “Here is why this became an actual business outcome.”
For auditors and regulators, the Outcome Attestation is often the most important artefact in the package. It is the point where the system crosses from analysis into action.
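The control proof described in this section can be sketched as a deterministic step that refuses to act unless the policy gate allowed it, then emits a receipt linking back to its inputs. Hash references stand in for full signature verification, and all names are illustrative.

```python
# Sketch of the deterministic assembly step: validate the upstream policy
# decision, act, and emit an Outcome Attestation bound to its inputs.
# Digest linking stands in for signature checks; field names are invented.
import hashlib
import json
from datetime import datetime, timezone

def att_digest(att: dict) -> str:
    return hashlib.sha256(json.dumps(att, sort_keys=True).encode()).hexdigest()

def execute_outcome(judgement: dict, policy_record: dict) -> dict:
    # Control proof, step 1: refuse to act without an explicit ALLOW.
    if policy_record["result"] != "ALLOW":
        raise PermissionError("policy gate did not allow execution")
    action = "approve_claim" if judgement["proposal"] == "approve" else "refer"
    return {
        "section": "outcome_attestation",
        "action_taken": action,
        # Control proof, step 2: bind the receipt to exactly the
        # attestations that were consumed.
        "upstream_attestations": {
            "judgement": att_digest(judgement),
            "policy_decision": att_digest(policy_record),
        },
        "policy_version": policy_record["policy_version"],
        "executed_at": datetime.now(timezone.utc).isoformat(),
    }
```

The useful property is the failure mode: if the policy record says anything other than ALLOW, the action simply cannot be produced, so an Outcome Attestation existing at all is evidence the gate passed.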
Layered Receipts: How the Stack Works
The Decision Attestation Package is not a monolith. It is a stack of signed receipts, each produced by a different layer of the decision pipeline.
At the bottom, an AI micro-judgement node emits a receipt:
- Subject: what narrow question was answered
- Evidence set: hashes and references to the observed records
- Model identity: version, prompt-policy profile, inference config
- Output: structured decision object with confidence, uncertainty fields, failure flags
- Signature from the execution environment + timestamp
The IPA layer emits its receipt:
- Authority claim consumed and validated
- Policy set and version evaluated
- Inputs presented to the policy engine
- Gate outcome: ALLOW / PAUSE / DENY + reason codes
- Signature from the policy engine + timestamp
The deterministic assembly layer emits the final receipt:
- Upstream attestations consumed (AI judgement + policy decision)
- Signatures validated from upstream layers
- Policy version applied, rule set executed
- Outcome produced, action taken
- Signature from the deterministic execution environment + timestamp
These three receipts, plus the observation envelope and the package-level signature, form the Decision Attestation Package. Each layer signs its own output. The deterministic code brings them together.
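Verifying the stack means walking the chain of receipts and checking that each layer references exactly the layer below it. A minimal sketch, using hash links in place of full signature verification (all field names are assumptions):

```python
# Sketch of verifying layered receipts: each receipt carries a digest of
# the receipt it consumed, so the final action provably chains back to
# the observed evidence. Hash-linking stands in for signature checks.
import hashlib
import json

def receipt_digest(receipt: dict) -> str:
    return hashlib.sha256(json.dumps(receipt, sort_keys=True).encode()).hexdigest()

def verify_chain(observation: dict, judgement: dict,
                 policy: dict, outcome: dict) -> bool:
    """True only if every layer links to the exact receipt below it."""
    return (
        judgement["observation_ref"] == receipt_digest(observation)
        and policy["judgement_ref"] == receipt_digest(judgement)
        and outcome["policy_ref"] == receipt_digest(policy)
    )
```

Tampering with any layer, even the bottom one, breaks the chain at the next link up, which is what makes the package a receipt rather than a narrative.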
This is decision-control provenance: a provenance type that does not yet have a formal standard. Data provenance covers where data came from. Model provenance covers what model produced an output. Decision-control provenance covers who was allowed to act, what policy validated that authority, and what deterministic code executed the outcome. It is the missing layer.
What Existing Audit Approaches Are Missing
Governance frameworks already exist. SOC 2, SR 11-7, ISO 42001 all define what organisations should prove about their AI systems. The gap is not what but how: specifically, how to make proof inseparable from the decision.
SOC 2 auditors increasingly flag AI-driven actions as accountability gaps. When an AI agent takes a privileged action without a documented human request, auditors treat it as a control failure, because SOC 2 expects privileged actions to be attributable to an accountable individual, not a generic system account.17
SR 11-7 (the Federal Reserve’s model risk management guidance) requires documentation detailed enough that unfamiliar parties can understand the model’s operation; yet only 44% of banks properly validate their AI tools.18,19
ISO 42001 audits cluster on auditability failures: AI decisions that cannot be reconstructed or justified to auditors.20
Each of these frameworks defines outcomes: what governance should demonstrate. None specifies a runtime evidence format that makes governance inseparable from the decision itself. They define the destination but not the vehicle.
The Decision Attestation Package is that vehicle. It does not replace SOC 2, SR 11-7, or ISO 42001. It provides the runtime evidence format those frameworks implicitly assume but never specify.
The Convergence Window
The urgency is not theoretical.
EU AI Act full enforcement begins August 2026, five months from now. Article 12 requires high-risk AI systems to automatically log events enabling traceability, with logs that are tamper-resistant and operate automatically.21 First enforcement actions are expected in the 10-30M euro range.4
Gartner predicts that by 2029, organisations that failed to adequately invest in digital provenance capabilities will face sanction risks running into billions of dollars.3
Cyber insurance carriers are introducing “AI Security Riders”: coverage conditioned on documented evidence of adversarial red-teaming, model-level risk assessments, and specialised safeguards.22 Governance that cannot demonstrate runtime enforcement may void coverage when it matters most.
80% of organisations have already encountered risky behaviour from AI agents.8 McKinsey’s assessment: “In the agentic organization, governance cannot remain a periodic, paper-heavy exercise. As agents operate continuously, governance must become real time, data driven, and embedded.”23
The infrastructure to solve this exists. Supply-chain attestation is production-grade. Policy-as-code enforcement runs at sub-millisecond latency. What is missing is the output format: the Decision Attestation Package that makes governance a co-product of the decision, not a parallel activity.
Start With the Outcome Attestation
You do not need all seven sections on day one.
You need the architecture that makes all seven possible. And the place to start is where the risk lives: the Outcome Attestation, the signed receipt from the deterministic code that actually caused something to happen.
That receipt answers the regulator’s question: “What did the system do, under what authority, with what policy in force?”
Work backwards from there. Add the Policy Decision Record (what rules were evaluated). Add the Authority Envelope (who delegated). Add the Observation Envelope (what data was consumed). Add the Judgement Envelope (what the AI proposed).
Each layer you add makes the package more complete, the audit faster, and the governance more structurally sound.
The decision that carries its own proof does not need a committee to reconstruct what happened. It does not need a forensic investigation. It does not need hope that the logs are sufficient.
It has a receipt.
Next Steps
This article defines the output format β what your AI decision pipeline should produce. The architecture that generates these attestation packages draws on four pillars of runtime governance (Compliance Cosplay), proof-carrying decision decomposition (Stop Asking AI Why It Decided), and supply-chain signing patterns applied to agents (OpenClaw Has a Provenance Problem).
If you want to understand where your governance gaps are and what it would take to produce Decision Attestation Packages for your most consequential AI workflows: take the readiness assessment or discuss your situation directly.
References
- [1] LeverageAI / Strata Research. “Agent Provenance Stack.” – “Only 28% can reliably trace agent actions back to a human sponsor; 80% cannot tell in real time what agents are doing or who is responsible.” leverageai.com.au/wp-content/media/Agent_Provenance_Stack.html
- [2] NIST NCCoE. “Accelerating the Adoption of Software and AI Agent Identity and Authorization – Concept Paper.” February 2026. – “How can we ensure that agents log their actions and intent in a tamper-proof and verifiable manner? How do we ensure non-repudiation for agent actions and binding back to human authorization?” nccoe.nist.gov/sites/default/files/2026-02/accelerating-the-adoption-of-software-and-ai-agent-identity-and-authorization-concept-paper.pdf
- [3] Gartner. “Top Strategic Technology Trends for 2026: Digital Provenance.” – “By 2029, those who failed to adequately invest in digital provenance capabilities will face sanction risks potentially running into billions of dollars.” gartner.com/en/documents/7031598
- [4] Kiteworks. “AI Regulation 2026 Business Compliance Guide.” – “EU AI Act enforcement begins August 2026. Penalties reach 15M euros or 3% of worldwide annual turnover.” kiteworks.com/cybersecurity-risk-management/ai-regulation-2026-business-compliance-guide/
- [5] McKinsey. “AI’s Next Act.” – “2026 will be the year of tackling the foundational enablers required to build, govern, and operate agentic workflows.” mckinsey.com/uk/our-insights/uk-insights/ais-next-act-mckinsey-ai-leaders-on-the-year-ahead
- [6] Legit Security / SLSA. “Deep Dive Into SLSA Provenance and Software Attestation.” – “SLSA defines provenance as ‘the verifiable information about software artifacts describing where, when and how something was produced.’” legitsecurity.com/blog/slsa-provenance-blog-series-part-2-deeper-dive-into-slsa-provenance
- [7] ISACA. “AI Answers Are Becoming Business Decisions.” Volume 3, 2026. – “Every answer produced by an AI system should be traceable to an authenticated request, authorized data, enforced controls and auditable evidence.” isaca.org/resources/news-and-trends/newsletters/atisaca/2026/volume-3/ai-answers-are-becoming-business-decisions-most-organizations-arent-governing-them-that-way
- [8] McKinsey. “Trust in the Age of Agents.” – “Agency isn’t a feature – it’s a transfer of decision rights.” / “80 percent of organizations have encountered risky behavior from AI agents.” mckinsey.com/capabilities/risk-and-resilience/our-insights/trust-in-the-age-of-agents
- [9] LeverageAI. “Compliance Cosplay: Why AI Governance Without Runtime Authority Is Theatre.” – “The IPA is model-agnostic, non-autonomous, authority-aware, and irreversibility-sensitive.” leverageai.com.au/compliance-cosplay-why-ai-governance-without-runtime-authority-is-theatre/
- [10] Ethyca. “Governing Enterprise Data & AI with Policy-as-Code.” – “Advanced platforms implement governance checks in parallel with inference requests, ensuring sub-millisecond latency impact.” ethyca.com/news/how-to-govern-data-and-ai-with-a-policy-as-code-approach
- [11] Refactored.pro. “AWS re:Invent 2025: Bedrock AgentCore – The Trust Layer for Enterprise AI.” – “The policy enforcement is deterministic, not probabilistic. The gateway enforces it at runtime before the action executes.” refactored.pro/blog/2025/12/4/aws-reinvent-2025-bedrock-agentcorethe-deterministic-guardrails-that-make-autonomous-ai-safe-for-the-enterprise
- [12] SLSA Blog. “Supply Chain Robots, Electric Sheep, and SLSA.” December 2025. – “The Linux Foundation released the latest major update to its supply chain security standard, SLSA 1.2.” slsa.dev/blog/2025/12/supply-chain-robots-slsa
- [13] in-toto Attestation Framework. GitHub Specification. – “The framework defines a fixed, lightweight Statement that communicates information about the execution of a software supply chain. Predicate proposals are reviewed to ensure they are of high quality, useful to different organizations.” github.com/in-toto/attestation/blob/main/spec/README.md
- [14] Chainguard Academy. “An Introduction to Cosign.” – “Sigstore is now integrated with NPM, PyPI, Maven, GitHub, brew, Kubernetes, and more.” edu.chainguard.dev/open-source/sigstore/cosign/an-introduction-to-cosign/
- [15] NVIDIA Technical Blog. “Bringing Verifiable Trust to AI Models: Model Signing in NGC.” – “NVIDIA has been signing all NVIDIA-published models in the NGC Catalog with the OpenSSF Model Signing (OMS) specification since March 2025.” developer.nvidia.com/blog/bringing-verifiable-trust-to-ai-models-model-signing-in-ngc
- [16] Sigstore Blog. “Taming the Wild West of ML – Practical Model Signing with Sigstore.” – “Model Transparency v1.0, a community-driven library and CLI that supports multiple signing and verification methods for AI models.” blog.sigstore.dev/model-transparency-v1.0/
- [17] Goteleport. “How AI Agents Impact SOC 2 Trust Services Criteria.” – “Auditors will often treat ‘no human request’ as a major accountability gap, because SOC 2 expects privileged actions to be attributable to an accountable individual.” goteleport.com/blog/ai-agents-soc-2/
- [18] ValidMind. “How Model Risk Management Teams Comply with SR 11-7.” – “SR 11-7 requires documentation detailed enough that unfamiliar parties can understand the model’s operation.” validmind.com/blog/sr-11-7-model-risk-management-compliance/
- [19] LeverageAI. “Compliance Cosplay.” – “Only 44% of banks properly validate their AI tools – leaving 56% with massive blind spots.” leverageai.com.au/compliance-cosplay-why-ai-governance-without-runtime-authority-is-theatre/
- [20] Cloud Security Alliance. “ISO 42001 – Lessons Learned from Auditing and Implementing the Framework.” 2025. – “Common gaps include auditability failures where AI decisions cannot be reconstructed or justified to auditors.” cloudsecurityalliance.org/blog/2025/05/08/iso-42001-lessons-learned-from-auditing-and-implementing-the-framework
- [21] EU AI Act. “Article 12 – Record-Keeping.” – “High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system.” artificialintelligenceact.eu/article/12/
- [22] Delinea. “Cyber Insurance Coverage Requirements for 2026.” – “Insurers have begun introducing ‘AI Security Riders’ that require documented evidence of adversarial red-teaming, model-level risk assessments, and specialized safeguards.” delinea.com/blog/cyber-insurance-coverage-requirements-for-2026
- [23] McKinsey. “Accountability by Design in the Agentic Organization.” – “In the agentic organization, governance cannot remain a periodic, paper-heavy exercise. As agents operate continuously, governance must become real time, data driven, and embedded.” mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-blog/accountability-by-design-in-the-agentic-organization
Scott Farrell is an AI governance and solutions architect at LeverageAI, helping Australian mid-market leadership teams turn AI experiments into governed portfolios. He writes about production AI systems, decision architecture, and the governance infrastructure nobody built. Connect on LinkedIn or email scott@leverageai.com.au.