Architecture

AIEP is a deterministic evidence protocol for governed AI reasoning. It defines how an AI substrate admits evidence, makes decisions, and produces verifiable records — with every step cryptographically bound, immutable, and independently replayable.

Most AI systems reason by prediction. AIEP reasons by evidence.


The seven-layer stack

Layer | Name | Patents | Description
7 | AGI Cognitive Architecture | P200–P259 | Phase 2 cognitive substrate (patent applications filed — GB2608060.6 · GB2608061.4 · GB2608062.2 · GB2608063.0 · GB2608064.8 · GB2608066.3)
6 | AGI & Protocol Extensions | P80–P132 | GoalVector, recall, compliance, swarm, PIEA surface, dissent
5 | Cognitive Continuity | P95–P103 | Cross-session patterns, swarm consensus, and moral substrate
4 | Constitutional Stack | P22, P37–P94 | GoalVector, recall, compliance certification, chip governance
3b | Admissibility Gate | P02, P03, P04, P66, P67 | Plausibility matrix, probability certification, quantum-aligned scoring
3 | Evidence Ecosystem | P10–P30 | Normalisation, stitching, presentation, audit
2 | Web Surface | P60–P70 | Machine mirror pages, well-known manifests, site index
1 | Core Protocol | GB2519711.2 | Constitutional substrate, canonical primitives

Every layer builds on Layer 1. The frozen kernel — R1 through R8 canonical primitives — is the trust root. Every downstream component carries a cryptographic commitment to the kernel version in force at build time (GENOME_LOCKFILE.json).


Three properties no prediction-based system can guarantee

Deterministic replayability. Given the same evidence set and schema version, any AIEP node produces the same canonical output. Cross-node equivalence is cryptographically verifiable. A third party can replay the evidence and confirm they reach the same conclusion — without access to model weights, without trusting any intermediary.

Provable completeness. The substrate records not only what evidence it has, but cryptographic proof of what it lacks. A time period with no evidence gets a committed NegativeProofRecord. Absence is proven, not merely asserted.
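
The shape of such a committed proof-of-absence can be sketched as follows; the field names and serialisation here are illustrative, not the canonical AIEP schema:

```python
import hashlib
import json

def negative_proof_record(stream_id, window_start, window_end):
    """Build a committed proof-of-absence record for an empty time window.

    Illustrative field names only; the canonical NegativeProofRecord
    schema is defined by the AIEP specifications, not by this sketch.
    """
    record = {
        "type": "NegativeProofRecord",
        "stream_id": stream_id,
        "window": [window_start, window_end],
        "evidence_count": 0,
    }
    # Canonical serialisation (sorted keys, no whitespace), then SHA-256,
    # so the commitment is identical on every node that observes the gap.
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    record["commitment"] = "sha256:" + hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return record
```

Because the commitment is derived before it is attached, two independent nodes observing the same empty window produce the same record.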

Governance-bound operation. Every output is a function of architecture, capability, and policy — formally. The Flexibility Contract (AIEP-FC-001) binds these relationships. What operators may configure, and what they cannot, is defined in the contract, not in documentation.


Layer 1 — Core Protocol

The constitutional substrate. Defines the instruction–evidence link, the canonical record structure, and the primitives shared across all layers.

Core patent: GB2519711.2 (filed 20 November 2025, UK Intellectual Property Office).

The canonical schema aiep.canonical.schema.v3.0.0.json covers every valid record type across all 137+ specifications. Locked under AIEP-NMR-001: any change requires a clean rewrite as a new major version. Any two AIEP-governed systems validating against this schema can exchange and verify each other’s records without a shared implementation.


Layer 2 — Web Surface

Makes AIEP practical on the open web through machine-readable discovery.

A compliant publisher exposes:

  • /.well-known/aiep/index.json — site map
  • /.well-known/aiep/metadata.json — publisher declaration
  • /.well-known/aiep/schemas/ — per-record validation schemas
  • /.well-known/aiep/innovation-ledger/ — concept provenance
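
Resolving these endpoints against a publisher origin is straightforward; a minimal sketch (the function name is mine, not part of the specification):

```python
from urllib.parse import urljoin

# The four endpoints listed above, relative to a publisher's origin.
AIEP_WELL_KNOWN = (
    ".well-known/aiep/index.json",
    ".well-known/aiep/metadata.json",
    ".well-known/aiep/schemas/",
    ".well-known/aiep/innovation-ledger/",
)

def discovery_urls(origin):
    """Resolve the AIEP well-known surface for a publisher origin."""
    base = origin if origin.endswith("/") else origin + "/"
    return [urljoin(base, path) for path in AIEP_WELL_KNOWN]
```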

The following Layer 2 repositories are published under Apache 2.0 and available immediately:

Repository | Specifications | What it does
aiep-mirror | P60, P61, P62, P63 | Builds and validates the .well-known/ machine-readable web surface
aiep-hub-validator | P60, P61, P62, P63 | TypeScript endpoint validator — 15 checks, SSRF guard, zero runtime dependencies
aiep-well-known | P60, P61, P62, P63 | Python library + CLI for building and validating complete .well-known/ trees

Layer 3 — Evidence Ecosystem

Deterministic evidence normalisation and temporal gap detection.

Repository | Specifications | What it does
aiep-normaliser | P10, P17 | Deterministic evidence normalisation — version-bound canonical forms, fail-closed rejection
aiep-divergence-detector | P16 | Temporal evidence gap detection — cryptographic proof of absence as well as presence

Canonical normalisation (P10): UTF-8 encoding + sorted keys + no whitespace + minimal number representation → SHA-256 → sha256:<hex>. Every node that normalises an identical artefact produces an identical hash.
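
A minimal Python sketch of that rule (Python's json module approximates, but does not guarantee, the P10 minimal-number-representation requirement; a conformant implementation must follow the P10 number rules exactly):

```python
import hashlib
import json

def canonical_hash(artefact):
    """Sketch of P10 canonical normalisation as described above:
    UTF-8 encoding, sorted keys, no whitespace, then SHA-256."""
    canonical = json.dumps(
        artefact,
        sort_keys=True,          # deterministic key order
        separators=(",", ":"),   # no whitespace
        ensure_ascii=False,      # keep UTF-8 content as-is
    )
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return f"sha256:{digest}"
```

The key-ordering rule is what makes the hash node-independent: two artefacts that differ only in key order normalise to the same bytes.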


Layer 3b — Admissibility Gate

Sits between the Evidence Ecosystem and the Constitutional Stack. Evidence that is normalised and hash-verified is not automatically execution-eligible. The admissibility gate applies two independent, fail-closed checks before any artefact may be used by a reasoning operation:

Plausibility Matrix (P03 / GB2519799.7): The PlausibilityScore for a claim-type is retrieved from a versioned safety registry and verified with a Merkle proof of inclusion. Below the lower threshold: guaranteed non-execution. Intermediate band: mandatory human arbitration. Above threshold: proceeds. Claims that fail enter the dissent archive with a full audit record. When the registry is updated by authorised authorities, previously non-executable claims may become executable — this is the mechanical pathway for recall.
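
The three-band arbitration can be sketched as follows, with illustrative threshold values; in the real gate the thresholds come from the versioned safety registry and are verified with a Merkle inclusion proof:

```python
def plausibility_decision(score, lower=0.2, upper=0.8):
    """Three-band, fail-closed arbitration sketch.

    Threshold values here are illustrative only; conformant values are
    retrieved from the versioned safety registry, not hard-coded.
    """
    if score < lower:
        return "NON_EXECUTABLE"      # guaranteed non-execution; archived as dissent
    if score < upper:
        return "HUMAN_ARBITRATION"   # intermediate band: mandatory human review
    return "EXECUTABLE"
```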

Probability Certification Engine (P04 / GB2519801.1): For each DivergenceGraph node, a certified tail-risk bound is derived (Failure probability ≤ ε at confidence C) and a cryptographic commitment computed over the canonical serialisation. The deterministic arbitration state machine evaluates the bound. Failure, validation error, or non-identical recomputation: non-executable state. No override.
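
The fail-closed shape of that evaluation can be sketched as follows; all names are illustrative, and `recompute` stands in for an independent node's recomputation of the commitment:

```python
import hashlib
import json

def certify_bound(node, epsilon, recompute):
    """Fail-closed sketch of the P04 arbitration step: the tail-risk bound
    must hold AND an independent recomputation must be bit-identical.

    Illustrative names only; `recompute` is a caller-supplied function
    standing in for an independent node's recomputation.
    """
    canonical = json.dumps(node, sort_keys=True, separators=(",", ":"))
    commitment = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    if node["failure_probability"] > epsilon:
        return "NON_EXECUTABLE"            # bound violated
    if recompute(node) != commitment:
        return "NON_EXECUTABLE"            # non-identical recomputation: no override
    return "EXECUTABLE"
```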

Quantum Alignment Layer (P02 / GB2519798.9): Where quantum hardware is available, the Quantum Alignment Layer runs quantum and classical scoring in parallel, canonicalises both results, computes a deterministic deviation metric, and commits the result only if equivalence is confirmed. Classical simulation is always the fallback. The committed result is bit-identical across all distributed nodes regardless of quantum hardware variance.
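
A sketch of the equivalence gate, with an assumed deviation metric (absolute difference) and an illustrative tolerance:

```python
def commit_aligned_score(quantum_score, classical_score, tolerance=1e-9):
    """Sketch of the quantum/classical equivalence gate described above.

    The deviation metric and tolerance are illustrative assumptions. The
    classical canonical value is what gets committed, so the committed
    result is bit-identical across nodes regardless of hardware variance.
    """
    deviation = abs(quantum_score - classical_score)
    if deviation <= tolerance:
        return {"committed": True, "score": classical_score}
    # Equivalence not confirmed: fall back to classical simulation, no commit.
    return {"committed": False, "score": classical_score}
```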

See: Plausibility Matrix · Probability Engine · Quantum Alignment


Layer 4 — Constitutional Stack

GoalVector stabilisation, compliance certification, chip governance. Implements the governed reasoning substrate: deterministic memory, divergence control, hierarchical planning, adaptive governance, and self-model capability calibration.

See /goal-generation for a full explanation of how AIEP structures goal derivation, commitment, drift detection, and hardware-enforced activation. See /recall for how archived divergent branches are deterministically reconstructed and re-evaluated when knowledge changes.

Layer 4 repositories are under patent application and available under NDA for evaluated integration partners.


Layer 5 — Cognitive Continuity

Cross-session patterns, swarm consensus, and hardware-enforced anonymisation. Enables multi-node AIEP deployments where nodes reach coordinator-free consensus, each contribution anonymised at the hardware enclave level.

Layer 5 repositories are under patent application. See /licensing for access tiers.


The Thesis Layer — Governing Intelligence at the Hardware Boundary

Layers 1–5 solve the problem that every AI system faces today: how to reason from evidence, prove what you relied on, and operate within constitutional constraints.

Layers 6–7 ask a harder question: what happens when the system is capable enough to reason about its own governance?

The AIEP hardware governance thesis addresses a structural limitation that no software-layer governance framework can resolve — and proposes an architecture that does not degrade as AI capability increases. The security property is physical, not computational. It does not depend on the governed system being less capable than the governance mechanism.

The thesis is structured as a working dissent engine with eleven open research goals that constitute the AIEP Foundation’s founding research agenda — spanning instrumental convergence detection, specification adequacy at ASI-level capability, and the network effects of global hardware governance adoption.

Three documents form a closed loop: the hardware thesis enables the commercial framework; the commercial framework funds the institution; the institution resolves the research goals and feeds them back into the specification process.

Read the thesis →


Piea — the full stack in production

Piea is a production AI assistant that implements the complete AIEP Piea Surface (P116–P128). It is not a reference implementation of a subset. It is a running system demonstrating every layer of the AIEP architecture from GENOME R1–R8 up through the governance UI.

AIEP Layer | What Piea implements
GENOME kernel | R1–R8 canonical primitives — every artefact normalised and hash-bound
Evidence substrate | Live retrieval → EvidenceRef[] → response_hash commitment
Source integrity | P124 VPN/proxy/no-TLS inspection — flagged sources demote confidence tier
Artefact cache | P125 — KV-backed evidence artefact caching with chain IDs
Dissent signal | P126 — governed uncertainty record when evidence is insufficient
Reasoning chain | P127 — 5-step chain streamed over SSE, terminal step hash-anchored, persisted
Semantic branches | P128 — ambiguous queries answered with both valid interpretations + shared evidence
Multimodal ingestion | P119 — PDF, DOCX, plain text → canonical evidence artefact
Governed output | P120 — signed Markdown audit pack with full evidence chain
Substrate continuity | P116 — Durable Objects maintain reasoning state across channel changes

Piea proves that an AI system built on AIEP cannot hallucinate in the structural sense: every response requires an evidence commitment before generation; an empty evidence set produces a dissent record, not a fabricated answer.
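
That structural guarantee reduces to a simple invariant, sketched here with illustrative record shapes (the real P126 dissent record and response format are defined by the specifications):

```python
def governed_response(query, evidence_refs, generate):
    """Structural no-hallucination sketch: generation requires a non-empty
    evidence commitment; an empty evidence set yields a dissent record
    instead of a fabricated answer.

    `generate` and the record shapes are illustrative assumptions.
    """
    if not evidence_refs:
        return {"type": "DissentSignal", "query": query,
                "reason": "insufficient evidence"}
    return {"type": "Response", "query": query,
            "evidence": evidence_refs,
            "answer": generate(query, evidence_refs)}
```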

Full Piea specification →


GENOME SDK

The AIEP GENOME SDK (kernel v1.2.0) is the reference implementation and production SaaS foundation. It contains:

  • the frozen kernel (R1–R8 canonical primitives)
  • the Flexibility Contract (AIEP-FC-001)
  • the governance layer

Available to Tier 1 and above licensees and evaluated hardware partners under NDA. Contact: [email protected]


Knowledge states

AIEP supports a living knowledge system:

State | Meaning
Consensus | Relied upon with confidence; admitted evidence supports the conclusion
Outlier | Preserved without elevation to consensus; contradicts prevailing evidence but is not discarded
Radical outlier | Archived for future recall; may become relevant when new evidence arrives

Governance

AIEP is open. Open use is always permitted.

Governance exists only to preserve trust in:

  • certification claims (“AIEP Certified”)
  • access to NDA-gated development materials
  • evidential logs for restricted downloads


Cognitive Architecture — Evidence-Governed Reasoning Pipeline

The cognitive architecture describes how AIEP systems produce auditable outputs from user queries. Unlike in a prediction-based system, every reasoning step is governed, traceable, and replayable.

Reasoning Pipeline

A typical AIEP reasoning process follows this sequence:

  1. User Query
  2. Query Classification
  3. Evidence Retrieval
  4. Evidence Trust Evaluation
  5. Reasoning Execution
  6. Dissent Detection
  7. Semantic Branch Detection
  8. Response Generation
  9. Compliance Certificate Generation

Each stage produces auditable artefacts. A reasoning chain is replayable: given the same evidence set and schema version, any AIEP node produces the same output.
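
The chaining of stage commitments can be sketched as follows; the stage names and the chaining rule are illustrative, not the normative P127 construction:

```python
import hashlib
import json

STAGES = (
    "query_classification", "evidence_retrieval", "evidence_trust_evaluation",
    "reasoning_execution", "dissent_detection", "semantic_branch_detection",
    "response_generation", "compliance_certificate_generation",
)

def run_pipeline(query, stage_fns):
    """Replayable pipeline sketch: every stage's output is hash-committed,
    and each commitment chains the previous one, so the whole chain can be
    re-derived from the same inputs. Stage functions are caller-supplied."""
    chain, state, prev = [], {"query": query}, ""
    for name in STAGES:
        state = stage_fns[name](state)
        payload = json.dumps(state, sort_keys=True, separators=(",", ":"))
        prev = hashlib.sha256((prev + payload).encode("utf-8")).hexdigest()
        chain.append({"stage": name, "commitment": "sha256:" + prev})
    return state, chain
```

Replaying with the same query and stage functions reproduces the same commitment chain, which is the replayability property stated above.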

Cognitive Layers

Layer | Responsibilities | Key Protocols
Evidence Layer | Discovery, hashing, mirror generation, indexing | P10, P14, P16, P133, P134, P142
Reasoning Layer | Structured reasoning chains, semantic branch detection, dissent signals | P126, P127, P128
Knowledge Layer | Persistent knowledge substrate, temporal reassembly, jurisdictional segmentation | P200, P209, P210 (filed GB2608060.6 · GB2608061.4 · GB2608062.2)
Research Layer | Autonomous hypothesis generation and evidence gathering | P504+
Governance Layer | Meta-reasoning evaluation, trust score adjustment, compliance verification | P89, P92, P93, P228
Infrastructure Layer | Sovereign knowledge nodes, federated evidence exchange, mirror network resilience | P133, P140, P142

Evidence Preservation Model

Evidence sources are preserved through four mechanisms that ensure reasoning outputs remain reproducible even if the original source disappears:

  1. Content hashing — each source document is bound to its SHA-256 hash at retrieval time
  2. Mirror generation — autonomous mirror nodes replicate and preserve evidence assets (P134)
  3. Evidence indexing — a distributed index enables cross-node discovery and retrieval (P133)
  4. Version tracking — evidence artefacts carry version identifiers; drift is cryptographically detectable
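
Mechanisms 1 and 4 combine into a single check, sketched here:

```python
import hashlib

def drift_detected(artefact_bytes, committed_hash):
    """Content hashing plus version tracking in one check: an artefact is
    bound to its SHA-256 commitment at retrieval time, and any later
    divergence from that commitment is cryptographically detectable."""
    current = "sha256:" + hashlib.sha256(artefact_bytes).hexdigest()
    return current != committed_hash
```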

Persistent Knowledge Substrate

AIEP systems maintain a persistent world model encoding entities, relationships, regulatory frameworks, and historical changes. The world model evolves as new evidence is ingested, but historical states are preserved and addressable.

Dissent and Semantic Branches

When evidence is insufficient or ambiguous, AIEP does not fabricate confidence:

  • Dissent signal (P126) — a governed uncertainty record is generated and returned alongside any response
  • Semantic branches (P128) — ambiguous queries produce both valid interpretations with shared evidence, not a single forced answer

Compliance Certificates

Every AIEP output can carry a compliance certificate (P92) containing:

  • response hash
  • reasoning chain hash
  • evidence hashes
  • model identifiers
  • jurisdiction scope

Certificates allow regulators and auditors to independently verify that any AI output was produced in accordance with governance rules — without access to model weights.
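
A certificate of that shape can be sketched as a hash-committed value object; the field names are illustrative, not the P92 schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ComplianceCertificate:
    """The five fields listed above, with illustrative names."""
    response_hash: str
    reasoning_chain_hash: str
    evidence_hashes: tuple
    model_id: str
    jurisdiction: str

    def commitment(self):
        # Canonical serialisation of the certificate itself, so an auditor
        # can verify its integrity without access to model weights.
        payload = json.dumps(asdict(self), sort_keys=True, separators=(",", ":"))
        return "sha256:" + hashlib.sha256(payload.encode("utf-8")).hexdigest()
```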

See /cognitive-architecture for the full cognitive architecture specification.


Hardware attestation layer (P09, P104)

Software governance is auditable. Hardware governance is unforgeable.

AIEP’s filed patents P09 (GB2519711.2) and P104 define a governance chip attestation protocol: cryptographic attestation of the hardware substrate on which AI reasoning executes. This provides a hardware root of trust — proof that the AI system ran on unmodified, approved hardware infrastructure.

This capability is distinct from all current AI systems. No competitor — general assistant or specialist AI — has a filed patent for hardware-level AI governance attestation.

Relevance to regulatory timelines (2026–2027):

  • EU AI Act Article 17 (quality management, high-risk AI): hardware attestation enables independently verifiable substrate integrity
  • NIS2 Directive: substrate attestation provides assurance for AI systems operating in essential services
  • US DoD Zero Trust Architecture: hardware root of trust is a stated requirement for AI components

The implementation is planned for 2027, after core PCT filings complete. The patents are filed. When regulators begin mandating substrate verification, AIEP will be the only vendor with a prior art position.


Founding documents

For the architecture’s intellectual context and the motivating argument for hardware-level governance:

  • Hardware Governance Thesis — the case for why software-only AI governance is structurally insufficient, and what hardware enforcement enables that policy cannot
  • AI is the OS — the architectural claim: AI is not an application running on an OS — AI is the operating system, and AIEP is its governance kernel
  • Genesis — the founding observations that led to AIEP

See also: /piea · /spec · /protocol · /licensing · /patents