AIEP for Regulated Industries

General AI assistants are designed for breadth. AIEP is designed for verifiability. In four sectors — financial services, legal, healthcare, and government — the question is not just “is the answer accurate?” It is “can you prove it?”


The common requirement

Across regulated industries, AI use in high-stakes workflows faces a common precondition: the system must be able to produce an audit record that establishes, in machine-verifiable form, which evidence was used to generate an output, that the evidence was retrieved cleanly, and that the record has not been altered since it was made.

No major consumer AI assistant — ChatGPT, Claude, Gemini, Perplexity, Copilot — meets this requirement. Their citations are presentational. They are not tamper-evident records.

AIEP’s architecture produces these records automatically. Every response generates:

  • A response_commitment hash — SHA-256 over answer text + artefact IDs + timestamp
  • An evidence artefact chain — P37 sequential chain linking every source to a genesis record
  • A ComplianceCertificate — bound by hash to the evidence chain and reasoning state
  • A dissent record — if evidence is insufficient, a negative proof hash is created and persisted
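
The response_commitment construction above can be sketched as follows. The field names, JSON serialisation, and field ordering here are illustrative assumptions for the sketch, not the AIEP wire format:

```python
import hashlib
import json

def response_commitment(answer_text: str, artefact_ids: list[str], timestamp: str) -> str:
    """Sketch of a response_commitment hash: SHA-256 over the answer text,
    the artefact IDs, and the generation timestamp. Canonical JSON with
    sorted keys and sorted artefact IDs is an assumption made so the
    commitment is deterministic regardless of input ordering."""
    payload = json.dumps(
        {"answer": answer_text, "artefacts": sorted(artefact_ids), "ts": timestamp},
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

commitment = response_commitment(
    "Rates rose 25bp.", ["art-001", "art-002"], "2025-01-15T10:30:00Z"
)
# Any change to the answer text, artefact set, or timestamp produces a
# different hash, which is what makes the stored commitment tamper-evident.
```

Because the artefact IDs are sorted before hashing, two commitments over the same evidence set match even if the IDs were supplied in a different order.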

Financial services

Regulatory context: FCA PS22/9 (Consumer Duty), MiFID II Article 25 suitability documentation, FCA Handbook SYSC 10A (recording obligations), SEC Rule 17a-4 (records retention), FINRA books-and-records rules (e.g. Rule 4511).

The problem: When an AI system supports research, advice, or client reporting, regulators and courts may request evidence of what information the system used and whether that information was accurate at the time. A chat log of the AI’s output is not this evidence.

What AIEP provides:

  • Per-query artefact records with source URLs, content hashes, and retrieval timestamps
  • Tamper-evident chain linking every source to a stable genesis record for the session
  • Source integrity flags that identify proxy-routed, VPN-masked, or legally restricted sources
  • ComplianceCertificate automatically generated at output time with stable hash identifier
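
A minimal sketch of what a per-query artefact record and its chain link might look like. The schema here is an assumption for illustration; AIEP's actual record format may differ:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ArtefactRecord:
    # Hypothetical per-query artefact record; field names are illustrative.
    artefact_id: str
    source_url: str
    content_hash: str   # SHA-256 of the fetched content
    retrieved_at: str   # ISO 8601 retrieval timestamp
    prev_hash: str      # hash of the previous record, linking back to genesis

    def record_hash(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

genesis = ArtefactRecord("genesis", "", "", "2025-01-15T10:00:00Z", "0" * 64)
first = ArtefactRecord(
    "art-001",
    "https://example.com/filing.pdf",
    hashlib.sha256(b"fetched content").hexdigest(),
    "2025-01-15T10:01:00Z",
    genesis.record_hash(),
)
# Each record commits to its predecessor via prev_hash, so altering any
# earlier record changes every subsequent record_hash.
```

That predecessor link is what turns a flat log of sources into a tamper-evident chain suitable for a post-trade review.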

This produces the source-verifiable record that a post-trade review, FCA supervisory visit, or SEC examination would require.


Legal services

Regulatory context: Solicitors Regulation Authority (SRA) Record Keeping Rules, Bar Standards Board Handbook, court disclosure obligations (CPR Part 31), duty of candour.

The problem: AI-assisted legal research, document analysis, and case preparation require demonstrable reliability. If an AI system cites a case and the citation is wrong, the consequences range from adverse costs orders to professional conduct investigations. If the AI is used to prepare submissions for court, the source materials must be verifiable.

What AIEP provides:

  • artefact_id per cited source — a stable identifier that can appear in a disclosure schedule
  • Content hash at retrieval time — allows verification that the cited document matches what was fetched
  • Chain linking all sources for a research query into a single tamper-evident record
  • Dissent signal when sources conflict or are insufficient — flagged, not silently omitted
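
The content-hash check described above reduces to recomputing a hash over the document as disclosed and comparing it to the hash recorded at retrieval time. A sketch, assuming SHA-256 as elsewhere on this page (the document bytes are invented for illustration):

```python
import hashlib

def verify_cited_document(disclosed_bytes: bytes, recorded_hash: str) -> bool:
    """Return True if the disclosed document matches the content hash
    recorded in the artefact record when the source was fetched."""
    return hashlib.sha256(disclosed_bytes).hexdigest() == recorded_hash

fetched = b"judgment text as fetched at research time ..."
recorded = hashlib.sha256(fetched).hexdigest()  # stored in the artefact record

unmodified_ok = verify_cited_document(fetched, recorded)            # True
edited_ok = verify_cited_document(fetched + b" edited", recorded)   # False
```

This is the check opposing counsel, or the court, can run independently: it requires only the disclosed document and the hash from the disclosure schedule, not access to AIEP itself.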

Harvey (legal AI) competes on domain accuracy through fine-tuning. AIEP competes on provenance — you can prove what Piea fetched and when. Harvey cannot.


Healthcare and life sciences

Regulatory context: EU MDR 2017/745, FDA 21 CFR Part 11 (electronic records), MHRA guidance on AI as a medical device, ICH E6 GCP guidelines (clinical trial documentation).

The problem: AI systems used in clinical documentation, adverse event monitoring, or regulatory submission preparation must produce records that are attributable, legible, contemporaneous, original, and accurate (ALCOA). An AI that provides a summary without a verifiable source chain fails the “original” and “accurate” ALCOA criteria.

What AIEP provides:

  • Attributable: artefact_id identifies the source and the session
  • Legible: source URL and extract in artefact record
  • Contemporaneous: retrieved_at timestamp committed at fetch time
  • Original: content hash captures the source state at retrieval
  • Accurate: P03/P04 admissibility gate applies plausibility and probability checks before artefact enters evidence chain
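
The ALCOA mapping above can be illustrated with a single artefact record. Field names and values are assumptions made for the sketch, not AIEP's schema:

```python
import hashlib
from datetime import datetime, timezone

source_text = b"Adverse event report, batch 42, observed 2025-01-17 ..."

artefact = {
    "artefact_id": "art-2025-0117-003",                       # Attributable: source + session
    "source_url": "https://example.org/ae-report",            # Legible: where it came from
    "extract": source_text.decode("utf-8")[:80],              # Legible: human-readable extract
    "retrieved_at": datetime.now(timezone.utc).isoformat(),   # Contemporaneous: fetch time
    "content_hash": hashlib.sha256(source_text).hexdigest(),  # Original: state at retrieval
    "admissible": True,  # Accurate: set only after the P03/P04 gate passes
}
```

Each ALCOA criterion maps to a concrete, machine-checkable field rather than a policy statement, which is what an inspector reviewing clinical documentation can actually verify.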

Government and public sector

Regulatory context: EU AI Act (high-risk categories: law enforcement, justice, migration, essential services), UK Government’s AI Framework, US Executive Order 14110 (AI safety and security), FedRAMP.

The problem: Government AI deployments in high-risk categories under the EU AI Act require technical documentation, traceability, logging, and human oversight mechanisms. US federal deployments require FedRAMP authorisation and, under EO 14110, safety and reliability standards.

What AIEP provides:

  • Traceability: every source and reasoning step recorded with stable hash identifiers
  • Technical documentation: the evidence chain constitutes machine-readable technical documentation of the output’s derivation
  • Human oversight: dissent signals and degraded-confidence artefacts surface cases requiring human review
  • Data sovereignty: Cloudflare Workers deployment under US/EU law (see Data Sovereignty)
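
Traceability of this kind is checkable mechanically: an auditor can walk the chain from the latest record back to genesis, recomputing each link. A sketch under the same illustrative schema assumptions used elsewhere on this page:

```python
import hashlib
import json

def link_hash(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def verify_chain(records: list[dict]) -> bool:
    """Check that each record's prev_hash commits to the preceding record.
    records[0] is the genesis record; the schema is illustrative."""
    for prev, curr in zip(records, records[1:]):
        if curr["prev_hash"] != link_hash(prev):
            return False
    return True

genesis = {"artefact_id": "genesis", "prev_hash": "0" * 64}
r1 = {"artefact_id": "art-001", "prev_hash": link_hash(genesis)}
r2 = {"artefact_id": "art-002", "prev_hash": link_hash(r1)}

intact = verify_chain([genesis, r1, r2])        # True for an untouched chain
genesis["artefact_id"] = "tampered"
tampered = verify_chain([genesis, r1, r2])      # False: the link to r1 breaks
```

The verification needs only the records themselves, so oversight bodies can audit a deployment's evidence chains without trusting the system that produced them.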

DeepSeek’s PRC data law obligations make it structurally disqualified for EU government and US federal procurement. AIEP’s Cloudflare Workers deployment is governed by US/EU law with no equivalent state access obligation.


Hardware attestation (coming 2027)

P09 (filed GB2519711.2) and P104 define a governance chip attestation protocol: cryptographic attestation of the hardware substrate on which AI reasoning executes. This capability — hardware root of trust for AI outputs — is relevant to:

  • EU AI Act Article 17 (quality management for high-risk AI)
  • NIS2 Directive (security of network and information systems)
  • US DoD Zero Trust Architecture requirements

Implementation is scheduled for 2027, after the core protocol filings complete. AIEP will be the only vendor with a filed patent on AI hardware attestation when regulators begin mandating substrate verification.


Talk to us

If you are evaluating AIEP for a regulated workflow, the Contact page is the right starting point. We can provide:

  • A technical briefing on the evidence architecture
  • A mapping of AIEP’s artefact records to your specific regulatory framework
  • Access to the Piea development environment for proof-of-concept evaluation

See also: Compliance · Regulatory Governance · Data Sovereignty · Verifiable Citations · Audit