Use cases
AIEP is a general-purpose protocol. The pattern is always the same — instructions linked to evidence, published in a structured, machine-readable form at /.well-known/aiep/ — but the domains it applies to are wide.
These are real uses, not projections.
Jump to: 1. Compliance · 2. AI retrieval · 3. Construction · 4. Research · 5. Healthcare · 6. Enterprise SaaS · 7. Journalism · 8. Legal · 9. Training data · 10. Government · 11. Hardware AI · 12. Swarm · 13. Cross-session · 14. Financial services · 15. AGI safety
1. Compliance and regulatory evidence
Who: Legal, risk, and compliance teams inside enterprises and regulated organisations.
The problem: Compliance is asserted (“we are GDPR compliant”) but rarely provable by machine. Auditors check PDFs. Evidence chains break the moment a person leaves the business.
With AIEP: The organisation publishes structured compliance artefacts — policy versions, approval records, evidence links — under a signed Mirror endpoint. An auditor or regulator queries the endpoint and receives cryptographically anchored, timestamped records. No PDF. No ambiguity.
2. AI knowledge retrieval
Who: AI system builders, application developers, enterprise AI deployments.
The problem: AI systems rely on training data that becomes stale. They hallucinate missing knowledge. When asked about a specific organisation, policy, or technical specification, they have no reliable way to retrieve current, authoritative information.
With AIEP: The AI queries /.well-known/aiep/ at the source. It retrieves current artefacts, confirms the hash, and answers against verified, up-to-date knowledge — not a training-time snapshot.
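The retrieve-and-verify step can be sketched in a few lines. This is a minimal illustration, not the protocol's actual wire format: the field names `content` and `sha256`, and canonicalisation via sorted-key JSON, are assumptions made for the example (the response is simulated rather than fetched from a live `/.well-known/aiep/` endpoint).

```python
import hashlib
import json

def verify_artefact(artefact: dict) -> bool:
    """Check that an artefact's content still matches its published hash.

    Assumes (hypothetically) that the artefact carries its content plus a
    `sha256` field computed over the canonical JSON of that content.
    """
    canonical = json.dumps(artefact["content"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest() == artefact["sha256"]

# Simulated response from a GET against the publisher's AIEP endpoint.
content = {"policy": "data-retention", "version": "2.1", "retention_days": 90}
artefact = {
    "content": content,
    "sha256": hashlib.sha256(
        json.dumps(content, sort_keys=True).encode("utf-8")
    ).hexdigest(),
}

print(verify_artefact(artefact))           # True: content matches the hash
artefact["content"]["retention_days"] = 9999
print(verify_artefact(artefact))           # False: tampering is detectable
```

The point of the sketch is the failure mode: any change to the retrieved content after publication breaks the hash binding, so the AI answers only against content it has just verified.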
3. Construction and engineering instructions
Who: Architects, engineers, contractors, project managers.
The problem: Instructions in construction projects are voluminous, frequently revised, and almost never formally linked to the evidence that justifies them. Disputes arise because the instruction is on record but the reason is not.
With AIEP: Instructions are published as structured artefacts. Each instruction references the evidence that generated it — survey data, test results, design calculations. Decisions become traceable end-to-end. This is the sector that motivated AIEP’s creation.
4. Research and academic publication
Who: Researchers, institutions, journals, replication teams.
The problem: Research findings are published in PDFs. Datasets, methodology, and raw evidence are stored separately — or not stored at all. Replication is difficult. Retracted findings continue to circulate.
With AIEP: A research publisher exposes findings as AIEP artefacts, each with links to underlying datasets, methodology records, and version history. Replication teams can retrieve and verify the full evidence chain. Retraction is a signed update to the artefact, visible to any retrieval agent.
5. Healthcare guidance and clinical protocols
Who: Healthcare providers, NHS trusts, clinical decision support systems.
The problem: Clinical guidance is updated continuously. Systems relying on printed or downloaded guidance may be working from stale versions. AI clinical support tools face the same problem at greater scale.
With AIEP: Guidance bodies publish current clinical protocols as versioned, hash-bound artefacts. A clinical system queries the endpoint to confirm it has the current version before acting. Evidence that underpinned a guideline — trials, meta-analyses — is linked, not just cited.
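The currency check a clinical system would run before acting can be sketched as a simple manifest comparison. The manifest shape (`current_version`, `sha256`) and the protocol identifier are invented for this example; a real endpoint's schema may differ.

```python
def is_current(local_version: str, manifest: dict, protocol_id: str) -> bool:
    """Compare a locally cached protocol version against the live manifest."""
    return manifest.get(protocol_id, {}).get("current_version") == local_version

# Hypothetical manifest retrieved from the guidance body's AIEP endpoint.
manifest = {"sepsis-pathway": {"current_version": "v7", "sha256": "9c1f..."}}

print(is_current("v7", manifest, "sepsis-pathway"))  # True: safe to act
print(is_current("v6", manifest, "sepsis-pathway"))  # False: stale, refresh first
```

A stale result is a refusal condition, not a warning: the system refetches the versioned artefact before relying on it.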
6. Enterprise SaaS and product documentation
Who: SaaS companies, API product teams, developer tool vendors.
The problem: Documentation drifts from product reality. AI coding assistants trained on outdated docs generate incorrect code. Users and integrators have no way to know which version of the docs corresponds to which version of the product.
With AIEP: Product documentation is published as versioned AIEP artefacts, each bound to the product version it describes. AI tools query the live endpoint for current docs. Outdated versions are preserved in the artefact history — not deleted.
7. Journalism and verified sourcing
Who: News organisations, fact-checking bodies, media platforms.
The problem: Claims made in published journalism are attributed to sources, but those sources are rarely retrievable by machine. Fact-checkers work from screenshots and URLs that drift or disappear.

With AIEP: A news publisher exposes structured source records — the evidence behind specific claims — as signed AIEP artefacts. A fact-checking tool queries the endpoint and receives the original source record, not just a link. The record is hash-bound: it cannot be silently altered after publication.
8. Legal and contractual records
Who: Law firms, conveyancers, contract managers, dispute resolution bodies.
The problem: Legal records are produced, stored, and retrieved in silos. When a dispute arises, reconstructing the full evidence chain is manual, expensive, and incomplete.
With AIEP: Contracts and supporting records — instructions, amendments, correspondence — are published as linked AIEP artefacts. A retrieval agent can reconstruct the full instruction-to-evidence chain at any point in time, signed and timestamped.
9. Training data curation
Who: AI model developers, dataset publishers, foundation model research teams.
The problem: Training data composition is opaque. Models trained on unverified or low-quality sources inherit their flaws. There is no standard way to declare the provenance, date, or evidence basis of a training document.
With AIEP: Documents published as AIEP artefacts carry provenance metadata — issuer, date, schema, hash — that training pipelines can inspect and filter on. A model trained on AIEP-structured data knows which sources were verified, when they were current, and who issued them.
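A training pipeline's admission filter over that provenance metadata might look like the sketch below. The trust list, cutoff date, and record fields are all assumptions for illustration; they stand in for whatever policy a given pipeline enforces.

```python
from datetime import date

# Hypothetical provenance records as an AIEP-aware pipeline might see them.
documents = [
    {"id": "doc-1", "issuer": "nhs.uk", "issued": date(2024, 6, 1), "sha256": "ab12..."},
    {"id": "doc-2", "issuer": "unknown.example", "issued": date(2019, 1, 1), "sha256": "cd34..."},
    {"id": "doc-3", "issuer": "gov.uk", "issued": date(2023, 11, 5), "sha256": None},
]

TRUSTED_ISSUERS = {"nhs.uk", "gov.uk"}
CUTOFF = date(2022, 1, 1)

def admissible(doc: dict) -> bool:
    """Admit only documents from trusted issuers, recent enough, and hash-bound."""
    return (
        doc["issuer"] in TRUSTED_ISSUERS
        and doc["issued"] >= CUTOFF
        and doc["sha256"] is not None
    )

training_set = [d["id"] for d in documents if admissible(d)]
print(training_set)  # ['doc-1']
```

Each rejection is explainable: doc-2 fails on issuer and date, doc-3 on the missing hash binding.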
10. Government and public sector disclosure
Who: Central and local government, public bodies, regulators.
The problem: Government publishes vast amounts of information, but it is not structured for machine consumption. Freedom of information requests trigger manual processes. Public accountability is limited to what humans can read and retrieve.
With AIEP: Public bodies expose structured disclosure records — spending data, planning decisions, policy records — as AIEP artefacts. Citizens, journalists, and AI tools query the endpoint directly. Records are hash-bound, so any alteration is detectable.
11. Hardware-governed AI deployment
Who: AI system integrators deploying regulated or safety-critical AI at the chip or device level.
The problem: Software-level governance can be disabled, patched out, or circumvented by a privileged process. For AI deployed in safety-critical systems — medical devices, aviation, autonomous vehicles — there is no assurance that governance constraints survive a software update or a compromised OS.
With AIEP: The governance chip architecture (P89) embeds GoalVector commitments and safety envelope constraints at the hardware attestation layer. An AI system queries the chip to confirm that its current goal set was issued and sealed by an authorised governance authority. Software cannot override a hardware-attested constraint without triggering a visible governance violation. Regulators receive hardware attestation records as part of the compliance package.
12. Swarm coordination for autonomous systems
Who: Operators of autonomous systems — drone fleets, distributed robotics, autonomous logistics — where multiple agents must coordinate without a central controller.
The problem: Multi-agent systems converge on locally optimal solutions that are globally inconsistent. Agents with different evidence sets reach contradictory conclusions and act on them simultaneously. There is no shared ground truth and no mechanism to detect when consensus has broken down.
With AIEP: The swarm architecture (GB2519803.7, P90, P95-P103) establishes evidence-weighted consensus without requiring a central coordinator. Each agent publishes a LocalDominanceHash computed over its evidence set. The GlobalDominanceState emerges from evidence-weighted aggregation across agents. Agents with weak evidence yield to agents with strong evidence. If the swarm cannot reach consensus, a DivergenceRecord is emitted and the contested decision is escalated rather than executed. Every consensus record is auditable after the fact.
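The aggregation step can be sketched with toy stand-ins. The names LocalDominanceHash, GlobalDominanceState, and DivergenceRecord come from the architecture above, but their internals are not specified here, so everything below (the hash construction, the weight scale, the 0.66 escalation threshold) is an invented illustration of the evidence-weighted voting idea.

```python
import hashlib
from collections import defaultdict

def local_dominance_hash(evidence: list[str]) -> str:
    """Toy stand-in: hash an agent's sorted evidence set for publication."""
    return hashlib.sha256("|".join(sorted(evidence)).encode()).hexdigest()[:12]

# Each agent proposes a decision, weighted by the strength of its evidence.
proposals = [
    {"agent": "drone-1", "decision": "route-A", "weight": 0.9,
     "ldh": local_dominance_hash(["lidar-scan-44", "gps-fix-12"])},
    {"agent": "drone-2", "decision": "route-A", "weight": 0.7,
     "ldh": local_dominance_hash(["lidar-scan-45"])},
    {"agent": "drone-3", "decision": "route-B", "weight": 0.2,
     "ldh": local_dominance_hash(["stale-map-tile"])},
]

def aggregate(proposals: list, threshold: float = 0.66) -> dict:
    """Evidence-weighted vote; escalate via a divergence record below threshold."""
    totals = defaultdict(float)
    for p in proposals:
        totals[p["decision"]] += p["weight"]
    winner, score = max(totals.items(), key=lambda kv: kv[1])
    share = score / sum(totals.values())
    if share < threshold:
        return {"divergence_record": {"contested": dict(totals)}}
    return {"global_dominance_state": winner, "support": round(share, 2)}

print(aggregate(proposals))  # {'global_dominance_state': 'route-A', 'support': 0.89}
```

Note the escalation path: when no decision clears the threshold, the function returns a divergence record rather than a winner, mirroring "escalated rather than executed" above.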
13. Cross-session AI continuity
Who: Operators of AI agents that must maintain governance state across restarts, updates, or context window limits — personal AI assistants operating over months, enterprise AI processes running 24/7, autonomous research agents.
The problem: Current AI systems are stateless across sessions. Governance commitments made in session 1 are invisible in session 2. An AI that committed to a GoalVector, a safety envelope, or an evidence interpretation in a prior session has no mechanism to honour those commitments in a new session. This creates invisible governance drift.
With AIEP: The cross-session continuity architecture (P95-P103) persists governance state — GoalVectors, active DivergenceRecords, evidence weights — in a tamper-evident ledger between sessions. On restart, the agent reconstructs its prior commitment state from the ledger before executing any new instructions. If the reconstructed state is inconsistent with the current context, a DivergenceRecord is emitted before execution continues. Governance state survives restarts, model updates, and context window resets.
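One common way to make a ledger tamper-evident is a hash chain, and a minimal sketch of restart-time verification under that assumption is below. Nothing here is the architecture's actual storage format; the record shapes and the `genesis` seed are invented for the example.

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Each entry's hash covers the previous hash, linking the chain."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    ledger.append({"record": record, "hash": chain_hash(prev, record)})

def verify(ledger: list) -> bool:
    """On restart, replay the chain; any altered record breaks every later hash."""
    prev = "genesis"
    for entry in ledger:
        if chain_hash(prev, entry["record"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append(ledger, {"type": "GoalVector", "goal": "minimise-risk", "session": 1})
append(ledger, {"type": "DivergenceRecord", "issue": "conflicting-survey", "session": 1})

print(verify(ledger))                        # True: prior commitments intact
ledger[0]["record"]["goal"] = "maximise-speed"
print(verify(ledger))                        # False: drift detected before execution
```

The restart sequence follows directly: verify the chain first, reconstruct commitment state from the records, and only then accept new instructions.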
14. Financial services and algorithmic compliance
Who: Banks, insurers, investment managers, fintech operators, and regulators enforcing MiFID II, Consumer Duty, SR 11-7, and emerging EU AI Act obligations.
The problem: Algorithmic decision-making in financial services is under increasing regulatory scrutiny. Firms must demonstrate that AI-driven decisions — credit scoring, trade routing, product recommendation — are explainable, traceable, and consistent with the evidence at the moment the decision was made. Current audit trails are reconstructed after the fact from logs, not generated at decision time.
With AIEP: Each algorithmic decision is published as a signed AIEP artefact, timestamped and bound to the evidence set that was active at execution time. A regulator or internal audit function queries the endpoint and retrieves the complete decision record — the evidence that was in scope, the GoalVector constraints that were active, and the hash of the model version deployed. The record cannot be altered after the fact without detection. Retrospective evidence-matching is eliminated.
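The issue-at-decision-time side can be sketched as well. For a self-contained example this uses a shared-key HMAC from the standard library as a stand-in for the protocol's signing scheme (a real deployment would presumably use asymmetric signatures); all field names are assumptions.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key"  # stand-in only; not how production keys are handled

def issue_decision_record(decision: str, evidence_ids: list, model_hash: str) -> dict:
    """Build a signed, timestamped decision artefact at execution time."""
    body = {
        "decision": decision,
        "evidence_in_scope": sorted(evidence_ids),
        "model_version_sha256": model_hash,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_record(record: dict) -> bool:
    canonical = json.dumps(record["body"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

rec = issue_decision_record("decline-credit", ["ev-101", "ev-107"], "f3a9...")
print(verify_record(rec))       # True: record is intact
rec["body"]["decision"] = "approve-credit"
print(verify_record(rec))       # False: retrospective edits are detectable
```

Because the evidence set and model hash are captured inside the signed body, the record answers the regulator's question directly: what was in scope at the moment the decision executed.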
15. AGI safety evaluation and oversight
Who: AI safety researchers, red-team operators, frontier AI developers, national AI safety institutes, and international bodies establishing AGI oversight frameworks.
The problem: As AI systems approach and exceed human-level capability in discrete domains, safety evaluation becomes a persistent credentialling problem rather than a one-time test. A system that passes an evaluation today may behave differently under distribution shift, capability uplift, or novel goal combinations. There is no standard mechanism to publish safety evaluations in a form that is machine-verifiable, timestamped, and linked to the exact model version and evidence set evaluated.
With AIEP: Safety evaluations, red-team outcomes, and capability assessments are published as structured AIEP artefacts bound to the model version hash. The GoalVector commitment architecture (P200, P209, P210) ensures that goal-state commitments made during evaluation are persistent and detectable if altered. A governance authority — national or international — queries the endpoint to retrieve the current safety posture of a deployed system, confirm the evaluation is current, and detect any post-evaluation capability drift. The federation-layer consensus engine (P264) supports multi-evaluator agreement across independent safety bodies. Every safety claim is a machine-verifiable artefact, not a PDF.
The pattern
Every use case above follows the same structure:
| Element | What it provides |
|---|---|
| Structured artefact | Machine-readable, schema-validated content |
| Signed issuer | Cryptographic proof of who published it |
| Hash binding | Proof the content has not changed since publication |
| Versioned history | Evidence of what was claimed and when |
| Canonical endpoint | A stable, queryable location at /.well-known/aiep/ |
If your domain involves instructions, evidence, decisions, or verifiable claims — AIEP applies.
Read the protocol → · See the architecture → · Start building →