Evidence Governance — How AIEP Compares
Every AI assistant that retrieves information from the web faces the same fundamental problem: it can show you citations, but it cannot prove them.
AIEP solves this at the protocol level. This page shows what that means in practice — what dimensions exist, and what AIEP provides versus what standard AI assistants provide.
The 10 dimensions of evidence governance
Verifiable AI evidence can be assessed across 10 independent dimensions. Each one matters in a different regulated context — legal admissibility, financial audit, clinical safety, regulatory compliance.
| Dimension | What it means |
|---|---|
| Source Attribution (SA) | Does the system identify which sources it used? |
| Cryptographic Verifiability (CV) | Are sources hash-bound to the response — tamper-evident? |
| Evidence Chain (EC) | Is there a sequential, reproducible chain from genesis through each artefact to the response? |
| Source Integrity (SI) | Does the system inspect the network path of sources — detecting VPN, proxy, no-TLS, geo-restricted, and private-IP sources? |
| Dissent Signals (DS) | When confidence is insufficient, is a governed uncertainty record persisted — not just a verbal hedge? |
| RBAC / Tenancy (RB) | Is there fine-grained role-based access control with per-route permission gates? |
| Audit Trail (AT) | Is each response cryptographically committed with a stable, reproducible hash independently verifiable by third parties? |
| Open Protocol (OP) | Is the evidence methodology documented as an open standard — not proprietary? |
| Hardware Governance (HG) | Is there TPM/secure-enclave substrate attestation binding the governance layer to hardware? |
| Canonical Schema (CS) | Is there deterministic JSON serialisation ensuring hash reproducibility across machines, deployments, and time? |
Standard AI assistants — what they provide
The current generation of general-purpose AI assistants, regardless of provider, converges on the same structural pattern:
| Dimension | Standard AI assistants |
|---|---|
| SA | Partial — URLs or document names are shown, but no structured evidence object or artefact IDs |
| CV | Absent — No cryptographic binding between cited sources and the response. No response hash. |
| EC | Absent — No tamper-evident evidence chain. Deep Research modes show reasoning steps in the UI — but visibility is not verifiability. |
| SI | Absent — No network-path inspection. All HTTPS sources treated equally regardless of origin, relay, or integrity. |
| DS | Absent — Uncertainty may appear as verbal hedging. No structured, hashable dissent record. |
| RB | Partial — Workspace or org-level access controls exist. No per-endpoint RBAC permission model. |
| AT | Absent — Activity logs record that a response occurred. They do not record what sources were cryptographically verified. |
| OP | Absent — No open evidence protocol. Methodology is proprietary and not independently auditable. |
| HG | Absent — No hardware attestation. |
| CS | Absent — No canonical schema. JSON outputs are not deterministic across implementations. |
Typical total: 1.0–2.0 out of 10.0, on a uniform scoring framework where Full = 1.0, Partial = 0.5, and Absent = 0.0.
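Under that framework, a column's total is a simple weighted sum. A minimal sketch, where the dimension codes and ratings are taken from the table above and the `total_score` helper is purely illustrative:

```python
# Uniform scoring: Full = 1.0, Partial = 0.5, Absent = 0.0.
WEIGHTS = {"Full": 1.0, "Partial": 0.5, "Absent": 0.0}

# Ratings for a typical general-purpose assistant, per the table above.
standard_assistant = {
    "SA": "Partial", "CV": "Absent", "EC": "Absent", "SI": "Absent",
    "DS": "Absent",  "RB": "Partial", "AT": "Absent", "OP": "Absent",
    "HG": "Absent",  "CS": "Absent",
}

def total_score(ratings: dict[str, str]) -> float:
    """Sum per-dimension weights into a 0.0-10.0 total."""
    return sum(WEIGHTS[r] for r in ratings.values())

print(total_score(standard_assistant))  # two Partials -> 1.0
```

With two Partial ratings (SA, RB) and eight Absent, the column lands at the bottom of the 1.0–2.0 range quoted above.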
“Deep Research” modes (multi-step web retrieval, synthesis across sources, numbered citations) are an important UX improvement. They do not close this gap. Making retrieval steps visible is not the same as making them verifiable. A numbered citation list and a tamper-evident cryptographic chain are categorically different artefacts.
AIEP (Piea) — what the architecture provides
| Dimension | AIEP / Piea | Mechanism |
|---|---|---|
| SA | Full | Every response includes evidence_rail[] — source URLs, excerpts, ContentHash, confidence tier, integrity flags |
| CV | Full | response_commitment() = SHA-256 over answer + artefact IDs + timestamp. R8 primitive, GENOME-locked. |
| EC | Full | P37 sequential evidence_commitment() chain from genesis artefact through each retrieval to the response hash |
| SI | Full | P124 inspectSourceIntegrity() — VPN, proxy, no-TLS, private IP, HTTP 451, Via headers. Flag demotes source tier. |
| DS | Full | P126 negative_proof_hash() — dissent record stored in KV, streamed over SSE, surfaced as a first-class ledger artefact |
| RB | Full | requirePermission() middleware — 4 roles, 3 permission classes, per-route enforcement |
| AT | Full | Stable response_id + response_hash per response. verify/:hash endpoint for independent third-party verification. |
| OP | Partial | P60–P63 mirror standard is published. Core protocol specifications published as open-source prior art following Gate 1 confirmation (April 2026). |
| HG | Partial | Silicon governance layer. aiep-node emulator published as open-source reference implementation. See /.well-known/aiep/hardware/node-attestation.json |
| CS | Full | GENOME R1 canonical_json() — deterministic key sort, NFC unicode, no scientific notation. Locked at v1.0.0. |
AIEP total: 9.5 out of 10.0 — a lead of +6.0 points over the nearest specialist competitor and +8.5 over the general-purpose AI assistant cluster.
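The CV and AT rows describe a response commitment as a SHA-256 hash over the answer, its artefact IDs, and a timestamp. A minimal sketch of what such a commitment could look like (the field names, ordering, and serialisation choices here are illustrative assumptions, not the R8 or GENOME specification):

```python
import hashlib
import json

def response_commitment(answer: str, artefact_ids: list[str], timestamp: str) -> str:
    """Hash the answer together with its evidence artefacts and a timestamp.

    Deterministic serialisation (sorted keys, fixed separators) is what lets a
    third party reproduce the hash from the same inputs.
    """
    payload = json.dumps(
        {"answer": answer, "artefact_ids": artefact_ids, "timestamp": timestamp},
        sort_keys=True,
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# The same inputs always reproduce the same commitment:
h1 = response_commitment("42", ["a1", "a2"], "2025-01-01T00:00:00Z")
h2 = response_commitment("42", ["a1", "a2"], "2025-01-01T00:00:00Z")
assert h1 == h2 and len(h1) == 64
```

Reproducibility is the point of the `verify/:hash` pattern described in the AT row: anyone holding the inputs can recompute the digest and compare it against the published value.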
Why this gap is unbridgeable without the patent-protected architecture
The five dimensions where the gap is widest — CV, EC, SI, DS, CS — are not missing from standard AI assistants because of engineering oversight. They are missing because implementing them requires:
- A deterministic canonical serialisation standard (GENOME R1) — without this, hashes cannot be reproduced across machines, and CV/EC/CS cannot exist
- A sequential commitment chain (P37) — without a genesis-anchored evidence chain, individual source hashes are disconnected; they cannot be used to prove a tamper-evident sequence
- A network-path integrity inspection protocol (P124) — without inspecting the actual network path of a fetch, a system cannot know whether a source was served via VPN, relay, or manipulated proxy
- A governed dissent substrate (P126) — without a first-class dissent artefact type and a persistence layer, uncertainty is a verbal policy claim, not a cryptographic record
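The first two requirements (deterministic serialisation and a genesis-anchored commitment chain) can be sketched in a few lines. This illustrates the general hash-chain technique, not the GENOME R1 or P37 specification:

```python
import hashlib

def commit(prev_hash: str, artefact: bytes) -> str:
    """Chain step: each commitment binds the previous hash to the next artefact."""
    return hashlib.sha256(prev_hash.encode("utf-8") + artefact).hexdigest()

GENESIS = hashlib.sha256(b"genesis").hexdigest()

# Build a chain over three retrieved artefacts ending in the response.
artefacts = [b"source-1 excerpt", b"source-2 excerpt", b"final response"]
chain = [GENESIS]
for a in artefacts:
    chain.append(commit(chain[-1], a))

# Tampering with an earlier artefact breaks every later commitment:
tampered = [GENESIS]
for a in [b"source-1 EDITED", b"source-2 excerpt", b"final response"]:
    tampered.append(commit(tampered[-1], a))
assert all(t != c for t, c in zip(tampered[1:], chain[1:]))
```

Because each step binds the previous hash, editing any earlier artefact changes every subsequent commitment — which is what makes the sequence tamper-evident rather than merely visible.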
These are not features that can be bolted onto a standard LLM retrieval pipeline. They require the architecture to be built from the ground up around evidence commitment — which is exactly what AIEP specifies and Piea demonstrates.
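The network-path inspection requirement can likewise be sketched. The checks and flag names below are illustrative of the kinds of signals P124 describes (no-TLS transport, private-IP origins, proxy relays announced via the `Via` header), not its actual implementation:

```python
import ipaddress
from urllib.parse import urlparse

def inspect_source_integrity(url: str, headers: dict[str, str]) -> list[str]:
    """Return integrity flags for a fetched source (illustrative checks only)."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("no-tls")           # plaintext transport
    host = parsed.hostname or ""
    try:
        if ipaddress.ip_address(host).is_private:
            flags.append("private-ip")   # not publicly reachable or verifiable
    except ValueError:
        pass                             # a hostname, not a literal IP address
    if "via" in {k.lower() for k in headers}:
        flags.append("proxy-relay")      # response passed through an intermediary
    return flags

flags = inspect_source_integrity("http://10.0.0.5/report", {"Via": "1.1 cache-3"})
```

In the architecture described above, any such flag would demote the source's confidence tier rather than letting it pass through unexamined.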
The regulatory consequence
For most enterprise use — internal productivity, code generation, content drafting — the gap does not matter.
For regulated use — legal proceedings, financial audit, clinical decision support, regulatory submissions, government procurement — it is dispositive:
| Use case | Why standard AI assistants are insufficient | What a court or regulator requires |
|---|---|---|
| Legal evidence chain | Flat citation list cannot prove source integrity or non-tampering | Tamper-evident hash-bound chain (EC + CV) |
| Financial audit | Activity log records that an output was produced, not what verified sources produced it | Cryptographic AT + verifiable CV |
| Clinical protocol | Source may have been served via proxy or be outdated; no detection mechanism | SI inspection + temporal validity |
| Regulatory submission | “We used AI” is not an evidence standard | Open, auditable protocol (OP) with reproducible hashes (CS) |
| Court-ordered disclosure | eDiscovery logs confirm interaction occurred; cannot prove source provenance | Full EC chain with genesis hash |
AIEP provides all five. No current general-purpose AI assistant provides any of them.
See also: /piea · /source-integrity · /verifiable-citations · /audit · /certification · /why-now