How the stack works together

AIEP is not a collection of independent components. Each layer is designed to compose with the layers above and below it. An evidence normalisation decision made in Layer 3 affects the hash that a Layer 4 GoalVector commitment is bound to. A divergence detected in Layer 3 feeds into the Layer 4 dissent fork mechanism. A Layer 5 swarm consensus record references the Layer 3 evidence weights that produced it.

This page traces how those connections work — from a publisher putting an instruction on the web to a regulator receiving a jurisdiction-specific compliance package.


The full stack

Layer | Name | What it does | Key patents
--- | --- | --- | ---
Layer 1 | Core protocol | Defines the instruction object, DivergenceGraph, and canonical primitives | GB2519711.2
Layer 2 | Web surface | Publishes machine-readable artefacts at /.well-known/aiep/ | P60–P63
Layer 3 | Evidence ecosystem | Normalises, stores, chains, and validates evidence artefacts | P10, P14, P16, P17, P37
Layer 3b | Admissibility gate | Plausibility matrix, probability certification, and quantum-aligned scoring | P02, P03, P04, P66, P67
Layer 4 | Constitutional stack | Governs reasoning: goals, divergence, recall, compliance, governance chips | P22, P42, P50, P72, P80, P83, P88, P89, P92, P93, P94
Layer 5 | Swarm and continuity | Multi-node consensus, cross-session continuity, sub-swarm formation | P90–P103, GB2519803.7

Tracing a complete flow

Step 1: Publication (Layer 1 + Layer 2)

A publisher constructs an AIEP instruction object (Layer 1). The object contains:

  • intention, parameters, context, safetyEnvelope, protocolVersion
  • An embedded DivergenceGraph — the primary interpretation plus 2–8 alternatives, generated using operators D1–D9

The object is canonically serialised (R1), hashed (R2), signed with provenance (R3), and schema-pinned (R4).
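
The R1–R4 pipeline can be sketched in a few lines. This is a minimal illustration, not the protocol's actual canonicalisation rule: JSON with sorted keys stands in for R1, and the field names (`payload`, `hash`, `schema`) are hypothetical. The R3 provenance signature is indicated only as a comment, since the signing scheme is not specified here.

```python
import hashlib
import json

def canonical_serialise(obj: dict) -> bytes:
    # R1: deterministic serialisation -- sorted keys, no whitespace variance.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")

def publish(instruction: dict, schema_version: str) -> dict:
    body = canonical_serialise(instruction)
    return {
        "payload": body.decode("utf-8"),
        "hash": hashlib.sha256(body).hexdigest(),  # R2: content hash
        "schema": schema_version,                  # R4: schema pin
        # R3 (provenance) would sign `hash` with the publisher's key.
    }
```

The property that matters downstream: serialising the same object twice yields the same bytes, so the same hash, on any machine.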

It is published at /.well-known/aiep/ (Layer 2) — discoverable by any AI agent, validator, or audit system that knows the well-known convention.

What this creates: A hash-bound, signed, schema-validated instruction with all alternatives embedded and traceable.


Step 2: Ingestion and normalisation (Layer 3)

An agent or evidence system retrieves the published artefact and ingests new evidence that relates to it.

The normalisation engine (P10) converts all incoming evidence to canonical form. It:

  1. Detects the input type
  2. Selects the version-bound NormalisationProfile
  3. Produces a CanonicalForm
  4. Computes CanonicalHash = SHA-256(CanonicalForm || ProfileVersionId)
  5. Generates a NormalisationManifest — binding the transformation steps, profile version, and hash
  6. Appends the canonical artefact and manifest to the Evidence Ledger

If normalisation cannot be performed deterministically, the input is rejected fail-closed — a rejection record is appended. The Evidence Ledger never contains an artefact whose hash cannot be independently reproduced.
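
A compact sketch of steps 1–6, under loud assumptions: whitespace collapsing stands in for a real version-bound NormalisationProfile, and the dictionary returned is a stand-in for the NormalisationManifest. Only the hash rule is taken directly from the text.

```python
import hashlib

class NormalisationError(Exception):
    """Raised fail-closed when no deterministic transformation exists."""

def normalise(raw: str, profile_version: str) -> dict:
    # Steps 1-3: detect type, select profile, produce a CanonicalForm.
    # (Whitespace collapsing is an illustrative stand-in profile.)
    canonical_form = " ".join(raw.split())
    if not canonical_form:
        raise NormalisationError("rejected fail-closed: not deterministically normalisable")
    # Step 4: CanonicalHash = SHA-256(CanonicalForm || ProfileVersionId)
    canonical_hash = hashlib.sha256(
        canonical_form.encode() + profile_version.encode()
    ).hexdigest()
    # Step 5: the manifest binds transformation, profile version, and hash.
    return {
        "canonical_form": canonical_form,
        "profile_version": profile_version,
        "canonical_hash": canonical_hash,
    }
```

Because the manifest records the profile version alongside the hash, any third party can recompute SHA-256(CanonicalForm || ProfileVersionId) and confirm it independently — the reproducibility property the Evidence Ledger relies on.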

What this creates: A canonical, hash-verified evidence artefact in the Evidence Ledger with a reproducible transformation record.


Step 3: Gap and divergence detection (Layer 3)

As evidence accumulates, two checks run continuously:

Temporal gap detection (P16): The TimelineIndex is evaluated against the GapRuleProfile. If a gap exceeds the permitted threshold, a GapArtefact is appended. Any downstream operation requiring a complete timeline is gated until the gap is resolved or acknowledged.

Divergence detection (P37): When two artefacts make contradictory claims about the same subject, a DivergenceRecord is generated — naming both artefacts, the contradicted fields, and the contradiction type. Both artefacts are preserved in the dissent archive. Neither is discarded.
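
The shape of a DivergenceRecord can be illustrated with a toy comparator. The artefact structure (`id`, `subject`, `claims`) and the single contradiction type are assumptions for illustration; P37's actual typing of contradictions is richer than shown here.

```python
def detect_divergence(a: dict, b: dict):
    # Compare two artefacts' claims about the same subject.
    # Neither artefact is modified or discarded.
    if a["subject"] != b["subject"]:
        return None
    contradicted = [field for field in a["claims"]
                    if field in b["claims"] and a["claims"][field] != b["claims"][field]]
    if not contradicted:
        return None
    return {
        "artefacts": [a["id"], b["id"]],
        "contradicted_fields": contradicted,
        "contradiction_type": "direct",  # illustrative; P37 defines the real typology
    }
```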

What this creates: A complete, honest record of what the evidence agrees on, where it disagrees, and where it is missing.


Step 3b: Admissibility gate — plausibility and probability certification (Layer 3b)

Before any evidence artefact or proposed reasoning path can reach the constitutional stack, it must pass through two independent pre-execution gates that run in parallel:

Plausibility gate (P03 / GB2519799.7): The execution controller retrieves the PlausibilityScore for the artefact’s declared claim_type from the versioned safety registry. The registry entry is verified using its version identifier and a Merkle proof of inclusion. If verification fails: non-execution, fail-closed. The PlausibilityScore is incorporated as a deterministic coefficient in the safetyViolation computation. Below the lower threshold: guaranteed non-execution. In the intermediate band: mandatory human arbitration. Above the upper threshold: proceeds.

A claim that fails here is not discarded — it enters the dissent archive with its DivergenceRecord. If the PlausibilityScore is later updated in the registry (via signed assessments from authorised authorities, deterministically aggregated under threshold quorum signatures), archived claims of that type become eligible for recall.

Probability certification gate (P04 / GB2519801.1): For each DivergenceGraph node, the Probability Certification Module generates a failure-probability distribution. A certified tail-risk bound is derived: Failure probability ≤ ε at confidence C. A cryptographic commitment is computed over the canonical serialisation of all estimation artefacts. The deterministic arbitration state machine evaluates the bound — if it exceeds threshold, validation fails, or recomputation is not bit-identical: non-execution, fail-closed. Nodes that pass proceed to canonical scoring unchanged.
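
A minimal sketch of the arbitration step, assuming an illustrative ε and a JSON-canonical commitment over the estimation artefacts (the real module's serialisation and validation rules are not specified here):

```python
import hashlib
import json

EPSILON = 1e-3  # illustrative certified tail-risk threshold

def certify(node_id: str, tail_bound: float, confidence: float, artefacts: dict) -> dict:
    # Cryptographic commitment over the canonical serialisation of
    # all estimation artefacts (JSON-with-sorted-keys stands in here).
    commitment = hashlib.sha256(
        json.dumps(artefacts, sort_keys=True).encode()
    ).hexdigest()
    # Deterministic arbitration: bound above threshold => non-execution.
    if tail_bound > EPSILON:
        return {"node": node_id, "status": "non-execution", "commitment": commitment}
    return {"node": node_id, "status": "certified", "bound": tail_bound,
            "confidence": confidence, "commitment": commitment}
```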

For computationally demanding probability estimation, quantum amplitude estimation (P02 / GB2519798.9) accelerates the computation. The Quantum Alignment Layer runs the canonical scoring function on a quantum processor in parallel with a canonical classical simulation. Both results are canonicalised to fixed-length representations and a deterministic deviation metric is computed. If the quantum result is valid, timely, and within the deviation threshold, it is committed. Otherwise, the classical simulation result is committed. The committed result is bit-identical across all distributed nodes regardless of quantum execution variance.
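
The commit rule reduces to a small deterministic function. Fixed-precision rounding stands in for the fixed-length canonicalisation, and the deviation threshold is illustrative:

```python
DEVIATION_THRESHOLD = 1e-6  # illustrative

def _canon(x: float) -> float:
    # Stand-in for canonicalisation to a fixed-length representation.
    return round(x, 9)

def commit_score(quantum, classical: float, timely: bool) -> float:
    # Commit the quantum result only if it is present, timely, and
    # within the deviation threshold of the classical simulation.
    if (quantum is not None and timely
            and abs(_canon(quantum) - _canon(classical)) <= DEVIATION_THRESHOLD):
        return _canon(quantum)
    return _canon(classical)  # fallback: classical simulation result committed
```

Because every node applies the same rule to the same canonicalised inputs, the committed value is bit-identical across nodes whether or not their quantum runs agreed.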

What this creates: An evidence substrate that has been filtered by two independent, cryptographically verifiable, fail-closed gates before any goal commitment or reasoning operation touches it. An artefact that passes both gates has: (a) a registry-verified plausibility classification, (b) a cryptographically committed failure-probability bound, and (c) a deterministic, independently reproducible audit trail covering both assessments.


Step 4: Goal commitment (Layer 4)

Before executing a reasoning operation, the agent must commit to a GoalVector (P50).

The GoalVector is a structured commitment to the agent’s current goal state — not a natural language description. It contains:

  • The goal identifier
  • The evidence binding (the Evidence Ledger artefacts that justify this goal being active)
  • A commitment hash
  • The drift threshold

The GoalVector is appended to the Reasoning Ledger. The agent cannot silently change its goal — any deviation beyond the drift threshold requires a signed re-commitment record, with evidence justifying the change.

What this creates: An auditable goal history — for any completed task, a complete record of what goal was active at every step and what triggered any changes.


Step 5: Invariant-gated execution (Layer 4)

Before any reasoning state entry can execute, the GENOME substrate (P80) evaluates four invariants:

  1. Referential integrity — every Evidence Ledger reference in the reasoning state resolves to a valid, hash-verified artefact
  2. Activation exclusivity — no conflicting goal state is currently active
  3. Schema compatibility — the reasoning state’s schema version is compatible with the evidence it references
  4. Frontier compliance — the operation does not exceed declared computational bounds

All four must be satisfied. If any fails, execution is suppressed fail-closed and a suppression record is appended.
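
The all-or-nothing gate can be sketched directly from the four invariants. The state fields and the simplified checks (e.g. schema membership for compatibility) are illustrative stand-ins for GENOME's actual evaluation:

```python
def gate(state: dict, ledger: dict, active_goals: set):
    checks = {
        # 1. Every evidence reference resolves in the (hash-verified) ledger.
        "referential_integrity": all(ref in ledger for ref in state["evidence_refs"]),
        # 2. No goal other than this state's goal is active.
        "activation_exclusivity": not (active_goals - {state["goal"]}),
        # 3. Schema version compatible with referenced evidence.
        "schema_compatibility": state["schema"] in state["compatible_schemas"],
        # 4. Operation stays within declared computational bounds.
        "frontier_compliance": state["cost"] <= state["cost_bound"],
    }
    if all(checks.values()):
        return True, None
    # Fail-closed: suppress and record which invariants failed.
    failed = [name for name, ok in checks.items() if not ok]
    return False, {"suppressed": True, "failed_invariants": failed}
```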

What this creates: A substrate where reasoning cannot proceed on inconsistent, incomplete, or out-of-scope evidence.


Step 6: Dissent fork, recall, and re-entry (Layer 4)

If the evidence presents a sustained divergence — magnitude above threshold, duration above threshold — the automatic dissent fork mechanism (P83) generates a bounded reasoning fork:

  • The primary branch continues with the consensus interpretation
  • The dissent branch is archived with its full evidence chain and RecallScope
  • A RecallScopeHash commits the recall configuration cryptographically
  • A termination record bounds the fork’s expansion

When new evidence arrives, the anticipatory branch surfacing mechanism (P94) monitors for partial convergence toward archived branches — detecting when new artefacts are directionally consistent with archived positions even before the threshold is fully crossed. A recall candidate notification is surfaced to retrieval agents.

When recall is triggered, the deterministic recall and context reconstruction engine (P22) takes over:

  1. Admissible artefacts are retrieved from the versioned ContextRegistry
  2. Admissibility is verified against invariant constraints — not inferred
  3. A canonical ordering rule produces a canonical context sequence
  4. ContextReconstructionHash = H(canonical_context_sequence) — same inputs, same hash, on every distributed node
  5. If any required artefact is missing, inadmissible, or cannot be deterministically ordered → execution denied, fail-closed
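
The determinism claim in step 4 — same inputs, same hash, on every node — follows from the canonical ordering rule, as this sketch shows. Ordering by artefact hash is an assumed stand-in for the protocol's actual ordering rule:

```python
import hashlib

def reconstruct_context(required_ids: list, registry: dict) -> str:
    # Fail-closed: every required artefact must resolve and be admissible.
    artefacts = []
    for art_id in required_ids:
        entry = registry.get(art_id)
        if entry is None or not entry["admissible"]:
            raise PermissionError("execution denied: fail-closed")
        artefacts.append(entry)
    # Canonical ordering rule (illustrative): sort by artefact hash, so the
    # order in which ids were requested cannot affect the result.
    ordered = sorted(artefacts, key=lambda e: e["hash"])
    sequence = "".join(e["hash"] for e in ordered).encode()
    # ContextReconstructionHash = H(canonical_context_sequence)
    return hashlib.sha256(sequence).hexdigest()
```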

The recalled branch then re-enters the stack at Layer 3b — it must pass the plausibility and probability gates again under current registry state before proceeding. If the PlausibilityScore has been updated (via authorised registry process) to reflect changed knowledge, the previously non-executable claim may now proceed.

What this creates: A closed loop between knowledge preservation and knowledge reactivation — every archived minority position is recoverable, every recovery is deterministic and independently verifiable, and nothing bypasses the admissibility gates.


Step 7: Compliance certification (Layer 4)

At the moment the reasoning output is produced, the compliance certification engine (P92) automatically evaluates the substrate state against the RegulatoryFrameworkRegistry.

For each applicable regulatory framework, it evaluates:

  • Evidence Ledger append-only integrity
  • Evidence hash chain completeness (output back to all source artefacts)
  • Deterministic replay path availability
  • Schema version binding of all operations

A ComplianceCertificate is generated, bound by hash to the specific output, reasoning chain, and evidence substrate state. If any required property is not satisfied, output is suppressed and a ComplianceFailureRecord is appended.
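
A minimal sketch of the certify-or-suppress decision. The check names and the JSON binding are illustrative; the load-bearing ideas from the text are that all checks must pass and that the certificate is hash-bound to the specific output and chain:

```python
import hashlib
import json

def certify_output(output_hash: str, chain_hashes: list, framework_checks: dict) -> dict:
    if not all(framework_checks.values()):
        # Output suppressed; the failure record names the unmet properties.
        failed = [name for name, ok in framework_checks.items() if not ok]
        return {"record": "ComplianceFailureRecord", "failed": failed}
    # Certificate bound by hash to output, reasoning chain, and check set.
    binding = json.dumps(
        {"output": output_hash, "chain": chain_hashes,
         "checks": sorted(framework_checks)},
        sort_keys=True,
    ).encode()
    return {"record": "ComplianceCertificate",
            "binding_hash": hashlib.sha256(binding).hexdigest()}
```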

On demand, a jurisdiction-specific JurisdictionCompliancePackage (P93) is generated — formatted for the named regulatory authority, with a PackageIntegrityHash enabling independent verification.

What this creates: Output that arrives with proof of its own compliance, independently verifiable without access to the substrate internals.


Step 8: Swarm consensus and cross-session continuity (Layer 5)

In multi-node deployments, the swarm consensus mechanism (P90) aggregates the evidence-weighted reasoning of all nodes:

  • Each node computes its LocalDominanceHash from its EvidenceWeightVector
  • Nodes exchange hashes within their consensus scope
  • The GlobalDominanceState is computed deterministically
  • A ConsensusRecord is appended when a branch reaches the dominance threshold
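
The coordinator-free property rests on every node computing the same aggregate from the same exchanged inputs. A toy version, with weight-summing as an assumed stand-in for P90's actual dominance computation:

```python
import hashlib
import json

def local_dominance_hash(weight_vector: dict) -> str:
    # Deterministic hash of a node's EvidenceWeightVector.
    return hashlib.sha256(
        json.dumps(weight_vector, sort_keys=True).encode()
    ).hexdigest()

def global_dominance(node_vectors: list, threshold: float) -> dict:
    # Deterministic aggregation: sum per-branch weights across nodes.
    totals = {}
    for vec in node_vectors:
        for branch, weight in vec.items():
            totals[branch] = totals.get(branch, 0.0) + weight
    grand = sum(totals.values())
    # Branches whose weight share reaches the dominance threshold.
    dominant = [b for b, w in sorted(totals.items()) if w / grand >= threshold]
    return {"totals": totals, "consensus": dominant}
```

Any node running this over the same set of exchanged vectors reaches the same GlobalDominanceState — no coordinator needed to break ties, because there is nothing node-local in the computation.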

Sub-swarms form for specific tasks (P97), operate under bounded governance scope, and dissolve cleanly — all state merged into the parent ledger.

Cross-session continuity (P95–P98) ensures that when a session ends and a new one begins, the governance context — GoalVector history, cognitive pattern fingerprint, certification status — is recovered from the ledger. No governance state is lost on restart.

What this creates: A scalable, coordinator-free network of governed nodes that maintains continuous governance context across session boundaries.


The dependency graph

The layers are not just stacked — they are dependent:

Layer 5 (Swarm) depends on:
  └── Layer 4 (Constitutional stack) — GoalVector, recall, compliance certification
        └── Layer 3b (Admissibility gate) — plausibility score, probability bound, quantum alignment
              └── Layer 3 (Evidence ecosystem) — normalised, hash-bound artefacts
                    └── Layer 2 (Web surface) — published at /.well-known/aiep/
                          └── Layer 1 (Core protocol) — canonical instruction objects + DivergenceGraph

The admissibility gate (Layer 3b) is what separates evidence that exists from evidence that is execution-eligible. Evidence can be normalised, hash-verified, and stored in the Evidence Ledger (Layer 3) and still be non-executable because it has not passed the plausibility or probability gates. The gate does not modify artefacts — it classifies them as admissible or non-admissible, and routes the non-admissible ones to the dissent archive with a full audit record.

The hash at Layer 1 propagates upward. A reasoning operation in Layer 4 references specific Evidence Ledger entries by their Layer 3 hashes. A ComplianceCertificate in Layer 4 is bound to the Evidence Ledger state that was produced by Layer 3 normalisation of artefacts published in Layer 2 under the Layer 1 schema.

This is not loose coupling. The entire stack is a single cryptographically anchored chain from source instruction to regulator-ready compliance package.


What each patent does in the chain

Patent | Layer | Role in the chain
--- | --- | ---
GB2519711.2 | 1 | Defines the canonical instruction object and DivergenceGraph
P10 | 3 | Normalises heterogeneous evidence to canonical form with verifiable manifests
P16 | 3 | Detects temporal gaps in evidence sets and gates downstream operations
P17 | 3 | Cross-jurisdiction normalisation — multiple formats to canonical form
P37 | 3 | Detects evidential contradictions and generates typed DivergenceRecords
P02 | 3b | Quantum Alignment Layer — deterministic commitment of quantum-computed scores
P03 | 3b | Plausibility Matrix — registry-bound, Merkle-verified PlausibilityScore gate
P04 | 3b | Probability Certification Engine — fail-closed tail-risk bound verification
P66 | 3b | Probability Metadata Envelope — public declaration format for agents
P67 | 3b | Plausibility Constraint Declaration Format — public outcome format for agents
P22 | 4 | Deterministic recall context reconstruction — bit-identical across distributed nodes
P42 | 4 | Controlled multi-transactional recall loop
P50 | 4 | GoalVector commitment and drift detection
P94 | 4 | Anticipatory branch surfacing — detects recall candidates before threshold crossing
P72 | 4 | Hierarchical goal decomposition with auditable arbitration records
P80 | 4 | Dual-ledger memory substrate with invariant-gated execution
P83 | 4 | Automatic dissent fork generation with bounded frontier control
P88 | 4 | Constitutional goal drift detection — detects when goal state deviates from constitutional constraints
P89 | 4 | Hardware-enforced goal activation — governance chip witness
P92 | 4 | Automated regulatory compliance certification at output production
P93 | 4 | Jurisdiction-specific compliance package generation
P90 | 5 | Evidence-weighted distributed consensus without central coordination
P95 | 5 | Cross-session cognitive pattern accumulation
P97 | 5 | Sub-swarm formation and dissolution
P99 | 5 | Secure governance chip substrate migration

What is novel

Prior AI systems have logging. Some have structured output formats. Some have audit trails. AIEP’s claim is not that these individual components are new in isolation — it is that no prior system binds all of them into a single cryptographically anchored chain where:

  • Every input is deterministically normalised before it enters the chain
  • Every artefact passes a registry-verified plausibility gate and a cryptographically committed probability certification gate before execution
  • Quantum computation can be integrated without breaking deterministic equivalence across distributed nodes
  • Every reasoning operation is invariant-gated before it executes
  • Every output arrives with an automatically generated, independently verifiable compliance certificate
  • Every divergence is preserved, not discarded
  • Archived minority positions are recoverable through a deterministic, fail-closed recall mechanism that requires no bypass of any gate
  • What was once non-executable because of current knowledge becomes executable again — deterministically and verifiably — when knowledge changes
  • Every goal change is a signed record, not a silent state transition
  • The entire chain is reproducible from stored ledger entries without access to the executing system

That combination — particularly the automatic compliance certification bound cryptographically to each specific output — is the regulatory governance invention.
