Architecture
AIEP is a deterministic evidence protocol for governed AI reasoning. It defines how an AI substrate admits evidence, makes decisions, and produces verifiable records — with every step cryptographically bound, immutable, and independently replayable.
Most AI systems reason by prediction. AIEP reasons by evidence.
The seven-layer stack
| Layer | Name | Patents | Description |
|---|---|---|---|
| 7 | AGI Cognitive Architecture | P200–P259 | Phase 2 cognitive substrate (patent applications filed — GB2608060.6 · GB2608061.4 · GB2608062.2 · GB2608063.0 · GB2608064.8 · GB2608066.3) |
| 6 | AGI & Protocol Extensions | P80–P132 | GoalVector, recall, compliance, swarm, PIEA surface, dissent |
| 5 | Cognitive Continuity | P95–P103 | Cross-session patterns, swarm consensus, and moral substrate |
| 4 | Constitutional Stack | P22, P37–P94 | GoalVector, recall, compliance certification, chip governance |
| 3b | Admissibility Gate | P02, P03, P04, P66, P67 | Plausibility matrix, probability certification, quantum-aligned scoring |
| 3 | Evidence Ecosystem | P10–P30 | Normalisation, stitching, presentation, audit |
| 2 | Web Surface | P60–P70 | Machine mirror pages, well-known manifests, site index |
| 1 | Core Protocol | GB2519711.2 | Constitutional substrate, canonical primitives |
Every layer builds on Layer 1. The frozen kernel — R1 through R8 canonical primitives — is the trust root. Every downstream component carries a cryptographic commitment to the kernel version in force at build time (GENOME_LOCKFILE.json).
Three properties no prediction-based system can guarantee
Deterministic replayability. Given the same evidence set and schema version, any AIEP node produces the same canonical output. Cross-node equivalence is cryptographically verifiable. A third party can replay the evidence and confirm they reach the same conclusion — without access to model weights, without trusting any intermediary.
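The replay property can be sketched minimally. The record fields and `canonical_hash` below are illustrative stand-ins, not the AIEP canonical schema:

```python
import hashlib
import json

def canonical_hash(record):
    """Deterministic hash: sorted keys, no whitespace, UTF-8, SHA-256."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return "sha256:" + hashlib.sha256(payload).hexdigest()

def replay(evidence, schema_version):
    """Replay a conclusion from an evidence set. Any node holding the same
    evidence and schema version derives the same hash, without model
    weights and without trusting an intermediary."""
    conclusion = {
        "schema_version": schema_version,
        "evidence_hashes": sorted(canonical_hash(e) for e in evidence),
    }
    return canonical_hash(conclusion)

node_a = replay([{"claim": "A"}, {"claim": "B"}], "v3.0.0")
node_b = replay([{"claim": "B"}, {"claim": "A"}], "v3.0.0")  # different order
assert node_a == node_b  # cross-node equivalence is bit-for-bit
```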
Provable completeness. The substrate records not only what evidence it has, but cryptographic proof of what it lacks. A time period with no evidence gets a committed NegativeProofRecord. Absence is proven, not merely asserted.
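A minimal sketch of proving absence, assuming daily evidence coverage; the record shape is illustrative, not the canonical NegativeProofRecord form:

```python
from datetime import date, timedelta

def negative_proofs(evidence_dates, start, end):
    """For every day in [start, end] with no evidence, emit a record
    asserting absence; in AIEP each such record would be committed
    cryptographically like any other artefact."""
    covered = set(evidence_dates)
    day, records = start, []
    while day <= end:
        if day not in covered:
            records.append({"type": "NegativeProofRecord",
                            "period": day.isoformat()})
        day += timedelta(days=1)
    return records

gaps = negative_proofs([date(2025, 1, 1), date(2025, 1, 3)],
                       date(2025, 1, 1), date(2025, 1, 3))
assert [g["period"] for g in gaps] == ["2025-01-02"]
```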
Governance-bound operation. Every output is a function of architecture, capability, and policy — formally. The Flexibility Contract (AIEP-FC-001) binds these relationships. What operators may configure, and what they cannot, is defined in the contract, not in documentation.
Layer 1 — Core Protocol
The constitutional substrate. Defines the instruction–evidence link, the canonical record structure, and the primitives shared across all layers.
Core patent: GB2519711.2 (filed 20 November 2025, UK Intellectual Property Office).
The canonical schema aiep.canonical.schema.v3.0.0.json covers every valid record type across all 137+ specifications. Locked under AIEP-NMR-001: any change requires a clean rewrite as a new major version. Any two AIEP-governed systems validating against this schema can exchange and verify each other’s records without a shared implementation.
Layer 2 — Web Surface
Makes AIEP practical on the open web through machine-readable discovery.
A compliant publisher exposes:
- /.well-known/aiep/index.json — site map
- /.well-known/aiep/metadata.json — publisher declaration
- /.well-known/aiep/schemas/ — per-record validation schemas
- /.well-known/aiep/innovation-ledger/ — concept provenance
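A minimal sketch of laying out this tree on disk. The manifest field names below are assumptions for illustration, not the normative AIEP manifests (the aiep-well-known library builds and validates the real thing):

```python
import json
import tempfile
from pathlib import Path

def build_well_known(root, publisher):
    """Create the four publisher endpoints under .well-known/aiep/."""
    base = root / ".well-known" / "aiep"
    (base / "schemas").mkdir(parents=True, exist_ok=True)
    (base / "innovation-ledger").mkdir(exist_ok=True)
    (base / "metadata.json").write_text(json.dumps({"publisher": publisher}))
    (base / "index.json").write_text(json.dumps({
        "publisher": publisher,
        "endpoints": ["metadata.json", "schemas/", "innovation-ledger/"],
    }))

root = Path(tempfile.mkdtemp())
build_well_known(root, "example.org")
index = json.loads((root / ".well-known" / "aiep" / "index.json").read_text())
assert index["publisher"] == "example.org"
```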
The following Layer 2 repositories are published under Apache 2.0 and available immediately:
| Repository | Specifications | What it does |
|---|---|---|
| aiep-mirror | P60, P61, P62, P63 | Builds and validates the .well-known/ machine-readable web surface |
| aiep-hub-validator | P60, P61, P62, P63 | TypeScript endpoint validator — 15 checks, SSRF guard, zero runtime dependencies |
| aiep-well-known | P60, P61, P62, P63 | Python library + CLI for building and validating complete .well-known/ trees |
Layer 3 — Evidence Ecosystem
Deterministic evidence normalisation and temporal gap detection.
| Repository | Specifications | What it does |
|---|---|---|
| aiep-normaliser | P10, P17 | Deterministic evidence normalisation — version-bound canonical forms, fail-closed rejection |
| aiep-divergence-detector | P16 | Temporal evidence gap detection — cryptographic proof of absence as well as presence |
Canonical normalisation (P10): UTF-8 encoding + sorted keys + no whitespace + minimal number representation → SHA-256 → sha256:<hex>. Every node that normalises an identical artefact produces an identical hash.
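This pipeline can be sketched in a few lines of Python, with the standard library's JSON serialiser standing in for the minimal-number-representation rule:

```python
import hashlib
import json

def normalise(raw):
    """P10-style canonicalisation sketch: parse, re-serialise with sorted
    keys and no whitespace, UTF-8 encode, then SHA-256."""
    canonical = json.dumps(json.loads(raw), sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two renderings of the same artefact normalise to the identical hash.
a = '{"b": 2, "a": 1}'
b = '{ "a" : 1,\n  "b" : 2 }'
assert normalise(a) == normalise(b)
```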
Layer 3b — Admissibility Gate
Sits between the Evidence Ecosystem and the Constitutional Stack. Evidence that is normalised and hash-verified is not automatically execution-eligible. The admissibility gate applies two independent, fail-closed checks before any artefact may be used by a reasoning operation:
Plausibility Matrix (P03 / GB2519799.7): The PlausibilityScore for a claim-type is retrieved from a versioned safety registry and verified with a Merkle proof of inclusion. Below the lower threshold: guaranteed non-execution. Intermediate band: mandatory human arbitration. Above threshold: proceeds. Claims that fail enter the dissent archive with a full audit record. When the registry is updated by authorised authorities, previously non-executable claims may become executable — this is the mechanical pathway for recall.
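The three-band decision described above can be sketched as a fail-closed gate. The threshold values here are illustrative; in AIEP they come from the versioned, Merkle-proven safety registry:

```python
from enum import Enum

class Verdict(Enum):
    NON_EXECUTABLE = "non_executable"    # below lower threshold: guaranteed non-execution
    HUMAN_ARBITRATION = "arbitration"    # intermediate band: mandatory human review
    EXECUTABLE = "executable"            # above upper threshold: proceeds

def admissibility(score, lower=0.2, upper=0.8):
    """Map a PlausibilityScore onto the three bands, failing closed."""
    if score < lower:
        return Verdict.NON_EXECUTABLE
    if score < upper:
        return Verdict.HUMAN_ARBITRATION
    return Verdict.EXECUTABLE

assert admissibility(0.1) is Verdict.NON_EXECUTABLE
assert admissibility(0.5) is Verdict.HUMAN_ARBITRATION
assert admissibility(0.9) is Verdict.EXECUTABLE
```

A registry update that raises a claim-type's score above the lower threshold is what moves an archived claim back toward executability, which is the recall pathway the paragraph describes.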
Probability Certification Engine (P04 / GB2519801.1): For each DivergenceGraph node, a certified tail-risk bound is derived (Failure probability ≤ ε at confidence C) and a cryptographic commitment computed over the canonical serialisation. The deterministic arbitration state machine evaluates the bound. Failure, validation error, or non-identical recomputation: non-executable state. No override.
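A sketch of that arbitration state machine, assuming illustrative field names: the node is executable only if a valid bound at or below ε is present and the commitment over the canonical serialisation recomputes identically; any validation error yields the non-executable state, with no override.

```python
import hashlib
import json

def evaluate_node(node, epsilon):
    """Deterministic, fail-closed evaluation of one DivergenceGraph node."""
    try:
        bound = float(node["failure_bound"])
    except (KeyError, TypeError, ValueError):
        return {"state": "non_executable"}   # validation error: fail closed
    canonical = json.dumps(node, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    commitment = hashlib.sha256(canonical).hexdigest()
    # Recompute the commitment from a round-tripped copy; any non-identical
    # recomputation forces the non-executable state.
    recomputed = hashlib.sha256(
        json.dumps(json.loads(canonical), sort_keys=True,
                   separators=(",", ":")).encode("utf-8")).hexdigest()
    if recomputed != commitment or bound > epsilon:
        return {"state": "non_executable"}
    return {"state": "executable", "commitment": commitment, "bound": bound}

assert evaluate_node({"failure_bound": 0.001}, epsilon=0.01)["state"] == "executable"
assert evaluate_node({"failure_bound": 0.05}, epsilon=0.01)["state"] == "non_executable"
assert evaluate_node({}, epsilon=0.01)["state"] == "non_executable"
```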
Quantum Alignment Layer (P02 / GB2519798.9): Where quantum hardware is available, the Quantum Alignment Layer runs quantum and classical scoring in parallel, canonicalises both results, computes a deterministic deviation metric, and commits the result only if equivalence is confirmed. Classical simulation is always the fallback. The committed result is bit-identical across all distributed nodes regardless of quantum hardware variance.
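A minimal sketch of the parallel-scoring idea. The scorers, precision, and tolerance are illustrative; committing the canonicalised classical score is what keeps the result bit-identical across nodes, with the quantum path serving only to confirm equivalence:

```python
def aligned_score(claim, classical_scorer, quantum_scorer=None, tol=1e-9):
    """Run classical scoring; where quantum hardware exists, score in
    parallel, canonicalise both results to fixed precision, and compute
    a deterministic deviation metric."""
    classical = round(classical_scorer(claim), 12)   # canonical form
    if quantum_scorer is None:
        # Classical simulation is always the fallback.
        return {"score": classical, "quantum_confirmed": False}
    quantum = round(quantum_scorer(claim), 12)
    deviation = abs(quantum - classical)
    return {"score": classical, "quantum_confirmed": deviation <= tol}

r = aligned_score("x", classical_scorer=lambda c: 0.5,
                  quantum_scorer=lambda c: 0.5 + 1e-13)
assert r == {"score": 0.5, "quantum_confirmed": True}
```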
See: Plausibility Matrix · Probability Engine · Quantum Alignment
Layer 4 — Constitutional Stack
GoalVector stabilisation, compliance certification, chip governance. Implements the governed reasoning substrate: deterministic memory, divergence control, hierarchical planning, adaptive governance, and self-model capability calibration.
See /goal-generation for a full explanation of how AIEP structures goal derivation, commitment, drift detection, and hardware-enforced activation. See /recall for how archived divergent branches are deterministically reconstructed and re-evaluated when knowledge changes.
Layer 4 repositories are under patent application and available under NDA for evaluated integration partners.
Layer 5 — Cognitive Continuity
Cross-session patterns, swarm consensus, and hardware-enforced anonymisation. Enables multi-node AIEP deployments where nodes reach coordinator-free consensus, each contribution anonymised at the hardware enclave level.
Layer 5 repositories are under patent application. See /licensing for access tiers.
The Thesis Layer — Governing Intelligence at the Hardware Boundary
Layers 1–5 solve the problem that every AI system faces today: how to reason from evidence, prove what you relied on, and operate within constitutional constraints.
Layers 6–7 ask a harder question: what happens when the system is capable enough to reason about its own governance?
The AIEP hardware governance thesis addresses a structural limitation that no software-layer governance framework can resolve — and proposes an architecture that does not degrade as AI capability increases. The security property is physical, not computational. It does not depend on the governed system being less capable than the governance mechanism.
The thesis is structured as a working dissent engine with eleven open research goals that constitute the AIEP Foundation’s founding research agenda — spanning instrumental convergence detection, specification adequacy at ASI-level capability, and the network effects of global hardware governance adoption.
Three documents form a closed loop: the hardware thesis enables the commercial framework; the commercial framework funds the institution; the institution resolves the research goals and feeds them back into the specification process.
Piea — the full stack in production
Piea is a production AI assistant that implements the complete AIEP Piea Surface (P116–P128). It is not a reference implementation of a subset. It is a running system demonstrating every layer of the AIEP architecture from GENOME R1–R8 up through the governance UI.
| AIEP Layer | What Piea implements |
|---|---|
| GENOME kernel | R1–R8 canonical primitives — every artefact normalised and hash-bound |
| Evidence substrate | Live retrieval → EvidenceRef[] → response_hash commitment |
| Source integrity | P124 VPN/proxy/no-TLS inspection — flagged sources demote confidence tier |
| Artefact cache | P125 — KV-backed evidence artefact caching with chain IDs |
| Dissent signal | P126 — governed uncertainty record when evidence is insufficient |
| Reasoning chain | P127 — 5-step chain streamed over SSE, terminal step hash-anchored, persisted |
| Semantic branches | P128 — ambiguous queries answered with both valid interpretations + shared evidence |
| Multimodal ingestion | P119 — PDF, DOCX, plain text → canonical evidence artefact |
| Governed output | P120 — signed Markdown audit pack with full evidence chain |
| Substrate continuity | P116 — Durable Objects maintain reasoning state across channel changes |
Piea proves that an AI system built on AIEP cannot hallucinate in the structural sense: every response requires an evidence commitment before generation; an empty evidence set produces a dissent record, not a fabricated answer.
GENOME SDK
The AIEP GENOME SDK (kernel v1.2.0) is the reference implementation and production SaaS foundation. It contains:
- the frozen kernel (R1–R8 canonical primitives)
- the Flexibility Contract (AIEP-FC-001)
- the governance layer
Available to Tier 1 and above licensees and evaluated hardware partners under NDA. Contact: [email protected]
Knowledge states
AIEP supports a living knowledge system:
| State | Meaning |
|---|---|
| Consensus | Relied upon with confidence; admitted evidence supports the conclusion |
| Outlier | Preserved without elevation to consensus; contradicts prevailing evidence but is not discarded |
| Radical outlier | Archived for future recall; may become relevant when new evidence arrives |
Governance
AIEP is open. Open use is always permitted.
Governance exists only to preserve trust in:
- certification claims (“AIEP Certified”)
- access to NDA-gated development materials
- evidential logs for restricted downloads
Cognitive Architecture — Evidence-Governed Reasoning Pipeline
The cognitive architecture describes how AIEP systems produce auditable outputs from user queries. Unlike prediction-based systems, every reasoning step is governed, traceable, and replayable.
Reasoning Pipeline
A typical AIEP reasoning process follows this sequence:

1. User Query
2. Query Classification
3. Evidence Retrieval
4. Evidence Trust Evaluation
5. Reasoning Execution
6. Dissent Detection
7. Semantic Branch Detection
8. Response Generation
9. Compliance Certificate Generation
Each stage produces auditable artefacts. A reasoning chain is replayable: given the same evidence set and schema version, any AIEP node produces the same output.
Cognitive Layers
| Layer | Responsibilities | Key Protocols |
|---|---|---|
| Evidence Layer | Discovery, hashing, mirror generation, indexing | P10, P14, P16, P133, P134, P142 |
| Reasoning Layer | Structured reasoning chains, semantic branch detection, dissent signals | P126, P127, P128 |
| Knowledge Layer | Persistent knowledge substrate, temporal reassembly, jurisdictional segmentation | P200, P209, P210 (filed GB2608060.6 · GB2608061.4 · GB2608062.2) |
| Research Layer | Autonomous hypothesis generation and evidence gathering | P504+ |
| Governance Layer | Meta-reasoning evaluation, trust score adjustment, compliance verification | P89, P92, P93, P228 |
| Infrastructure Layer | Sovereign knowledge nodes, federated evidence exchange, mirror network resilience | P133, P140, P142 |
Evidence Preservation Model
Evidence sources are preserved through four mechanisms that ensure reasoning outputs remain reproducible even if the original source disappears:
- Content hashing — each source document is bound to its SHA-256 hash at retrieval time
- Mirror generation — autonomous mirror nodes replicate and preserve evidence assets (P134)
- Evidence indexing — a distributed index enables cross-node discovery and retrieval (P133)
- Version tracking — evidence artefacts carry version identifiers; drift is cryptographically detectable
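The first mechanism, content hashing, is the anchor for the other three. A minimal sketch (the sample texts are invented for illustration):

```python
import hashlib

def bind(content):
    """Bind a retrieved source document to its SHA-256 hash."""
    return "sha256:" + hashlib.sha256(content).hexdigest()

retrieved = b"Providers shall put a quality management system in place."
bound = bind(retrieved)

# Later: if the live source has drifted, the mismatch is cryptographically
# detectable, and a mirror copy that still matches the bound hash keeps the
# original reasoning output reproducible.
drifted = b"Providers should consider a quality management system."
assert bind(drifted) != bound
assert bind(retrieved) == bound
```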
Persistent Knowledge Substrate
AIEP systems maintain a persistent world model encoding entities, relationships, regulatory frameworks, and historical changes. The world model evolves as new evidence is ingested, but historical states are preserved and addressable.
Dissent and Semantic Branches
When evidence is insufficient or ambiguous, AIEP does not fabricate confidence:
- Dissent signal (P126) — a governed uncertainty record is generated and returned alongside any response
- Semantic branches (P128) — ambiguous queries produce both valid interpretations with shared evidence, not a single forced answer
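The fail-closed response policy can be sketched as follows; the record shapes are illustrative, not the P126/P128 canonical forms:

```python
def respond(query, evidence, interpretations):
    """Empty evidence yields a dissent record, never a fabricated answer;
    multiple valid interpretations yield parallel branches that share the
    same evidence set."""
    if not evidence:
        return {"type": "DissentSignal", "query": query,
                "reason": "insufficient_evidence"}
    if len(interpretations) > 1:
        return {"type": "SemanticBranches",
                "branches": [{"interpretation": i, "evidence": evidence}
                             for i in interpretations]}
    return {"type": "Response", "interpretation": interpretations[0],
            "evidence": evidence}

assert respond("q", [], ["a"])["type"] == "DissentSignal"
assert respond("q", ["e1"], ["a", "b"])["type"] == "SemanticBranches"
assert respond("q", ["e1"], ["a"])["type"] == "Response"
```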
Compliance Certificates
Every AIEP output can carry a compliance certificate (P92) containing:
- response hash
- reasoning chain hash
- evidence hashes
- model identifiers
- jurisdiction scope
Certificates allow regulators and auditors to independently verify that any AI output was produced in accordance with governance rules — without access to model weights.
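A sketch of issuing and checking such a certificate, with assumed field names mirroring the list above (not the normative P92 schema):

```python
import hashlib
import json

def h(obj):
    """Canonical SHA-256 commitment over any JSON-serialisable artefact."""
    return "sha256:" + hashlib.sha256(
        json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    ).hexdigest()

def issue_certificate(response, chain, evidence, model_id, jurisdiction):
    return {
        "response_hash": h(response),
        "reasoning_chain_hash": h(chain),
        "evidence_hashes": [h(e) for e in evidence],
        "model_identifiers": [model_id],
        "jurisdiction_scope": jurisdiction,
    }

cert = issue_certificate({"text": "answer"}, ["step1", "step2"],
                         [{"doc": 1}], "model-x", "EU")
# An auditor re-derives each hash from the artefacts alone -- no model
# weights required.
assert cert["response_hash"] == h({"text": "answer"})
assert cert["evidence_hashes"] == [h({"doc": 1})]
```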
See /cognitive-architecture for the full cognitive architecture specification.
Hardware attestation layer (P09, P104)
Software governance is auditable. Hardware governance is unfalsifiable.
AIEP’s filed patent P09 (GB2519711.2) and P104 define a governance chip attestation protocol: cryptographic attestation of the hardware substrate on which AI reasoning executes. This provides a hardware root of trust — proof that the AI system ran on unmodified, approved hardware infrastructure.
This capability is distinct from all current AI systems. No competitor — general assistant or specialist AI — has a filed patent for hardware-level AI governance attestation.
Relevance to regulatory timelines (2026–2027):
- EU AI Act Article 17 (quality management, high-risk AI): hardware attestation enables independently verifiable substrate integrity
- NIS2 Directive: substrate attestation provides assurance for AI systems operating in essential services
- US DoD Zero Trust Architecture: hardware root of trust is a stated requirement for AI components
The implementation is planned for 2027, after core PCT filings complete. The patents are filed. When regulators begin mandating substrate verification, AIEP will be the only vendor with a prior art position.
Founding documents
For the architecture’s intellectual context and the motivating argument for hardware-level governance:
- Hardware Governance Thesis — the case for why software-only AI governance is structurally insufficient, and what hardware enforcement enables that policy cannot
- AI is the OS — the architectural claim: AI is not an application running on an OS — AI is the operating system, and AIEP is its governance kernel
- Genesis — the founding observations that led to AIEP
See also: /piea · /spec · /protocol · /licensing · /patents