Evidence Governance — How AIEP Compares

Every AI assistant that retrieves information from the web faces the same fundamental problem: it can show you citations, but it cannot prove them.

AIEP solves this at the protocol level. This page shows what that means in practice — what dimensions exist, and what AIEP provides versus what standard AI assistants provide.


The 10 dimensions of evidence governance

Verifiable AI evidence can be assessed across 10 independent dimensions. Each one matters in a different regulated context — legal admissibility, financial audit, clinical safety, regulatory compliance.

| Dimension | What it means |
| --- | --- |
| Source Attribution (SA) | Does the system identify which sources it used? |
| Cryptographic Verifiability (CV) | Are sources hash-bound to the response — tamper-evident? |
| Evidence Chain (EC) | Is there a sequential, reproducible chain from genesis through each artefact to the response? |
| Source Integrity (SI) | Does the system inspect the network path of sources — detecting VPN, proxy, no-TLS, geo-restricted, and private-IP sources? |
| Dissent Signals (DS) | When confidence is insufficient, is a governed uncertainty record persisted — not just a verbal hedge? |
| RBAC / Tenancy (RB) | Is there fine-grained role-based access control with per-route permission gates? |
| Audit Trail (AT) | Is each response cryptographically committed with a stable, reproducible hash independently verifiable by third parties? |
| Open Protocol (OP) | Is the evidence methodology documented as an open standard — not proprietary? |
| Hardware Governance (HG) | Is there TPM/secure-enclave substrate attestation binding the governance layer to hardware? |
| Canonical Schema (CS) | Is there deterministic JSON serialisation ensuring hash reproducibility across machines, deployments, and time? |
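The Canonical Schema dimension underpins several of the others: a hash is only reproducible if every party serialises the same evidence object to the same bytes. A minimal sketch of the idea, assuming sorted keys, NFC-normalised strings, and compact separators (illustrative only — not the GENOME R1 canonical_json() specification):

```python
import hashlib
import json
import unicodedata

def canonical_json(obj) -> bytes:
    """Serialise to a byte-stable form: sorted keys, NFC unicode,
    no insignificant whitespace. A sketch of the CS idea only."""
    def normalise(v):
        if isinstance(v, str):
            return unicodedata.normalize("NFC", v)
        if isinstance(v, dict):
            return {normalise(k): normalise(x) for k, x in v.items()}
        if isinstance(v, list):
            return [normalise(x) for x in v]
        return v
    return json.dumps(normalise(obj), sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False).encode("utf-8")

def content_hash(obj) -> str:
    return hashlib.sha256(canonical_json(obj)).hexdigest()

# Key order no longer affects the hash, so two machines agree:
a = content_hash({"url": "https://example.com", "tier": 1})
b = content_hash({"tier": 1, "url": "https://example.com"})
assert a == b
```

Without a rule like this, two implementations emitting semantically identical JSON produce different bytes, different hashes, and therefore unverifiable commitments.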

Standard AI assistants — what they provide

The current generation of general-purpose AI assistants — regardless of provider — converges on the same structural pattern:

| Dimension | Standard AI assistants |
| --- | --- |
| SA | Partial — URLs or document names are shown, but no structured evidence object or artefact IDs |
| CV | Absent — no cryptographic binding between cited sources and the response. No response hash. |
| EC | Absent — no tamper-evident evidence chain. Deep Research modes show reasoning steps in the UI, but visibility is not verifiability. |
| SI | Absent — no network-path inspection. All HTTPS sources are treated equally regardless of origin, relay, or integrity. |
| DS | Absent — uncertainty may appear as verbal hedging. No structured, hashable dissent record. |
| RB | Partial — workspace or org-level access controls exist. No per-endpoint RBAC permission model. |
| AT | Absent — activity logs record that a response occurred. They do not record what sources were cryptographically verified. |
| OP | Absent — no open evidence protocol. Methodology is proprietary and not independently auditable. |
| HG | Absent — no hardware attestation. |
| CS | Absent — no canonical schema. JSON outputs are not deterministic across implementations. |

Typical total: 1.0–2.0 out of 10.0 — on a uniform scoring framework where Full = 1.0, Partial = 0.5, Absent = 0.0.

“Deep Research” modes (multi-step web retrieval, synthesis across sources, numbered citations) are an important UX improvement. They do not close this gap. Making retrieval steps visible is not the same as making them verifiable. A numbered citation list and a tamper-evident cryptographic chain are categorically different artefacts.
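The difference can be made concrete. In a genesis-anchored hash chain, each commitment binds the previous one, so an independent verifier who replays the same artefacts either reproduces the final hash or detects tampering. A minimal sketch, assuming a simple SHA-256 link function (the construction is illustrative — not the P37 evidence_commitment() format):

```python
import hashlib

def commit(prev_hash: str, artefact: bytes) -> str:
    """Each link binds the previous commitment to the next artefact,
    so altering any earlier artefact changes every later hash."""
    return hashlib.sha256(prev_hash.encode("utf-8") + artefact).hexdigest()

genesis = hashlib.sha256(b"genesis").hexdigest()
chain = [genesis]
for artefact in [b"source-1", b"source-2", b"final-answer"]:
    chain.append(commit(chain[-1], artefact))

# An independent verifier replays the same artefacts and
# reproduces the head of the chain:
replay = genesis
for artefact in [b"source-1", b"source-2", b"final-answer"]:
    replay = commit(replay, artefact)
assert replay == chain[-1]

# Tamper with one source and the head no longer matches:
tampered = genesis
for artefact in [b"source-1", b"source-X", b"final-answer"]:
    tampered = commit(tampered, artefact)
assert tampered != chain[-1]
```

A numbered citation list offers no equivalent of this replay check: nothing binds citation 3 to the bytes that were actually retrieved.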


AIEP (Piea) — what the architecture provides

| Dimension | AIEP / Piea | Mechanism |
| --- | --- | --- |
| SA | Full | Every response includes evidence_rail[] — source URLs, excerpts, ContentHash, confidence tier, integrity flags |
| CV | Full | response_commitment() = SHA-256 over answer + artefact IDs + timestamp. R8 primitive, GENOME-locked. |
| EC | Full | P37 sequential evidence_commitment() chain from genesis artefact through each retrieval to the response hash |
| SI | Full | P124 inspectSourceIntegrity() — VPN, proxy, no-TLS, private IP, HTTP 451, Via headers. A flag demotes the source tier. |
| DS | Full | P126 negative_proof_hash() — dissent record stored in KV, streamed over SSE, surfaced as a first-class ledger artefact |
| RB | Full | requirePermission() middleware — 4 roles, 3 permission classes, per-route enforcement |
| AT | Full | Stable response_id + response_hash per response. verify/:hash endpoint for independent third-party verification. |
| OP | Partial | P60–P63 mirror standard is published. Core protocol specifications published as open-source prior art following Gate 1 confirmation (April 2026). |
| HG | Partial | Silicon governance layer. aiep-node emulator published as open-source reference implementation. See /.well-known/aiep/hardware/node-attestation.json |
| CS | Full | GENOME R1 canonical_json() — deterministic key sort, NFC unicode, no scientific notation. Locked at v1.0.0. |

AIEP total: 9.5 out of 10.0 — a lead of +6.0 points over the nearest specialist competitor, +8.5 over the general-purpose AI assistant cluster.
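To illustrate the CV mechanism described above — SHA-256 over answer, artefact IDs, and timestamp — here is a sketch of an order-independent response commitment. The field layout and encoding are assumptions for illustration, not the R8 wire format:

```python
import hashlib

def response_commitment(answer: str, artefact_ids: list[str],
                        timestamp: str) -> str:
    """Hash binding the answer text to the exact artefacts used.

    Sketch only: sorting the artefact IDs makes the commitment
    independent of retrieval order; the real encoding may differ.
    """
    payload = "\n".join([answer, *sorted(artefact_ids), timestamp])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

h = response_commitment(
    "The statute was amended in 2019.",
    ["art_7f3a", "art_91bc"],            # hypothetical artefact IDs
    "2026-04-01T12:00:00Z",
)
# A third party holding the same answer, artefact IDs, and timestamp
# reproduces h exactly; change any input and the hash diverges.
```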


Why this gap is unbridgeable without the patent-protected architecture

The five dimensions where the gap is widest — CV, EC, SI, DS, CS — are not missing from standard AI assistants because of engineering oversight. They are missing because implementing them requires:

  1. A deterministic canonical serialisation standard (GENOME R1) — without this, hashes cannot be reproduced across machines, and CV/EC/CS cannot exist
  2. A sequential commitment chain (P37) — without a genesis-anchored evidence chain, individual source hashes are disconnected; they cannot be used to prove a tamper-evident sequence
  3. A network-path integrity inspection protocol (P124) — without inspecting the actual network path of a fetch, a system cannot know whether a source was served via VPN, relay, or manipulated proxy
  4. A governed dissent substrate (P126) — without a first-class dissent artefact type and a persistence layer, uncertainty is a verbal policy claim, not a cryptographic record

These are not features that can be bolted onto a standard LLM retrieval pipeline. They require the architecture to be built from the ground up around evidence commitment — which is exactly what AIEP specifies and Piea demonstrates.
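As an illustration of point 3, a network-path inspection might look like the following sketch. The function name echoes inspectSourceIntegrity(), but the flag names and checks are assumptions, not the P124 flag set:

```python
import ipaddress
from urllib.parse import urlparse

def inspect_source_integrity(url: str, resolved_ip: str,
                             status: int, headers: dict) -> list[str]:
    """Return integrity flags for a fetched source.

    Illustrative only. In a tiered model, any flag here would
    demote the source's confidence tier rather than silently
    treating it like a clean HTTPS origin.
    """
    flags = []
    if urlparse(url).scheme != "https":
        flags.append("NO_TLS")            # plaintext transport
    if ipaddress.ip_address(resolved_ip).is_private:
        flags.append("PRIVATE_IP")        # RFC 1918 / loopback origin
    if "via" in {k.lower() for k in headers}:
        flags.append("RELAY_VIA_HEADER")  # response passed through a relay
    if status == 451:
        flags.append("HTTP_451")          # unavailable for legal reasons
    return flags

flags = inspect_source_integrity(
    "http://10.0.0.5/report", "10.0.0.5", 200, {"Via": "1.1 proxy"})
# flags == ["NO_TLS", "PRIVATE_IP", "RELAY_VIA_HEADER"]
```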


The regulatory consequence

For most enterprise use — internal productivity, code generation, content drafting — the gap does not matter.

For regulated use — legal proceedings, financial audit, clinical decision support, regulatory submissions, government procurement — it is dispositive:

| Use case | Why standard AI assistants are insufficient | What a court or regulator requires |
| --- | --- | --- |
| Legal evidence chain | Flat citation list cannot prove source integrity or non-tampering | Tamper-evident hash-bound chain (EC + CV) |
| Financial audit | Activity log records that an output was produced, not what verified sources produced it | Cryptographic AT + verifiable CV |
| Clinical protocol | Source may have been served via proxy or be outdated; no detection mechanism | SI inspection + temporal validity |
| Regulatory submission | “We used AI” is not an evidence standard | Open, auditable protocol (OP) with reproducible hashes (CS) |
| Court-ordered disclosure | eDiscovery logs confirm an interaction occurred; cannot prove source provenance | Full EC chain with genesis hash |

AIEP provides all five. No current general-purpose AI assistant provides any of them.


See also: /piea · /source-integrity · /verifiable-citations · /audit · /certification · /why-now