Data Sovereignty

Where an AI system runs, under whose law it operates, and what data access obligations apply to the operator are not procedural questions. In regulated industries — financial services, legal, healthcare, government — they are preconditions for procurement.

AIEP is designed with data sovereignty as a first-class property.


Deployment jurisdiction

AIEP’s protocol implementation runs on Cloudflare Workers, deployed across Cloudflare’s global edge network. Cloudflare is a US-incorporated company governed by US law. For EU deployments, Cloudflare operates under the EU-US Data Privacy Framework and Cloudflare’s Data Processing Addendum (DPA), which is GDPR-compliant.

This matters because:

  • GDPR Articles 44–49 require that personal data transferred outside the EEA receive equivalent protection. Cloudflare’s DPA satisfies this requirement for EU customers.
  • EU AI Act (applicable from 2026): AIEP’s cryptographic audit architecture — automatic compliance certificate generation at output time, tamper-evident artefact chains — is designed to produce the machine-readable records that the EU AI Act’s transparency and traceability requirements anticipate.
  • UK GDPR: Post-Brexit, UK data controllers processing EU citizen data remain subject to GDPR-equivalent requirements. AIEP’s evidence artefacts are structured to satisfy UK ICO guidance on AI accountability records.

The DeepSeek problem

DeepSeek’s models — including the widely adopted R1 and V3 releases — are developed by DeepSeek AI, a Chinese AI laboratory. Operators using DeepSeek.com or its hosted API services are subject to:

  • PRC Cybersecurity Law (2017) — requires network operators to cooperate with state supervision
  • PRC Data Security Law (2021) — requires data handling entities to fulfil national security obligations
  • PRC National Intelligence Law (2017, Art. 7) — requires organisations and individuals to support, assist, and cooperate with national intelligence work

For organisations in financial services, legal practice, government, or defence contracting, the obligation under Art. 7 of the National Intelligence Law creates a material data exfiltration risk that cannot be mitigated by contractual terms. The CCP’s reach extends to the data processed through these services regardless of where the API call originates.

AIEP runs under US and EU law. This is a structural procurement differentiator in any regulated European or US federal context.


What AIEP stores and where

Data type                      Storage                                Jurisdiction
Session evidence artefacts     Cloudflare KV / D1 (per deployment)    US or EU, customer-selected
Evidence ledger (hash chain)   Cloudflare Durable Objects             Pinned to customer region
Source content (R2)            Cloudflare R2 (per deployment)         Customer-selected bucket region
Response hashes                Session KV                             Co-located with session storage
Dissent records                Session KV                             Co-located with session storage

No customer data transits a third-country system under a law incompatible with GDPR or UK GDPR. AIEP’s architecture does not require sending customer queries to a centralised inference endpoint — tenanted deployments run as isolated Worker instances.
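The evidence ledger above is described as a hash chain. A minimal sketch of how such a tamper-evident chain can work is below; the names (LedgerEntry, appendArtefact, verifyChain) and the genesis convention are illustrative assumptions, not AIEP’s actual API or schema.

```typescript
import { createHash } from "node:crypto";

// Sketch of a tamper-evident evidence ledger: each entry's hash covers the
// previous entry's hash, so altering any stored artefact record invalidates
// every later head. Names are hypothetical, not AIEP's real interface.

interface LedgerEntry {
  artefactId: string;
  payloadHash: string; // SHA-256 of the stored artefact content
  prevHash: string;    // chain head before this entry was appended
  entryHash: string;   // SHA-256 over (prevHash + artefactId + payloadHash)
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function appendArtefact(chain: LedgerEntry[], artefactId: string, content: string): LedgerEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].entryHash : sha256("genesis");
  const payloadHash = sha256(content);
  const entryHash = sha256(prevHash + artefactId + payloadHash);
  return [...chain, { artefactId, payloadHash, prevHash, entryHash }];
}

// An auditor can recompute the whole chain from the raw records alone,
// without access to the operator's internal systems.
function verifyChain(chain: LedgerEntry[]): boolean {
  let prev = sha256("genesis");
  for (const e of chain) {
    if (e.prevHash !== prev) return false;
    if (e.entryHash !== sha256(e.prevHash + e.artefactId + e.payloadHash)) return false;
    prev = e.entryHash;
  }
  return true;
}
```

Changing any payload after the fact breaks verification from that entry onward, which is what makes the ledger useful as third-party-checkable evidence rather than a mutable log.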


GDPR and the evidence architecture

AIEP’s evidence architecture intersects with GDPR in a specific way: the response_commitment hash and artefact records provide the audit trail that GDPR Article 22 (automated decision-making) accountability requirements anticipate.

Where an AI system is used to support decisions with legal or significant effects, GDPR requires that meaningful information about the logic involved be available. AIEP’s tamper-evident artefact chain provides:

  • What sources were consulted (artefact IDs)
  • When they were retrieved (timestamps)
  • What confidence tier each source received (integrity check result)
  • Whether dissent was recorded (P126 negative proof hash)

This is structured, machine-readable accountability evidence. It is not a post-hoc narrative.
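One way to picture that machine-readable record, and how a response_commitment hash can bind it to the output text, is sketched below. The field names and the canonicalisation scheme are assumptions for illustration only; AIEP’s actual schema may differ.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of the accountability record described above;
// field names are illustrative, not AIEP's actual schema.
interface AccountabilityRecord {
  artefactIds: string[];     // which sources were consulted
  retrievedAt: string[];     // ISO timestamps, one per artefact
  confidenceTiers: string[]; // integrity-check result per artefact
  dissentRecorded: boolean;  // whether a dissent (negative proof) exists
}

// Bind the record to the response with a single commitment hash: any change
// to either the record or the response text yields a different commitment.
function responseCommitment(record: AccountabilityRecord, responseText: string): string {
  const canonical = JSON.stringify(record) + "\u0000" + responseText;
  return createHash("sha256").update(canonical).digest("hex");
}
```

Because the commitment is a pure function of the record and the text, an auditor holding both can recompute it and detect any after-the-fact edits.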


EU AI Act readiness

For AI systems in high-risk categories (medical devices, critical infrastructure, employment, law enforcement, migration, justice), the EU AI Act requires:

  • Technical documentation of system design and evidence sources
  • Traceability and logging of AI system operations
  • Human oversight mechanisms with audit trail

AIEP’s automatic ComplianceCertificate generation at output time, bound by cryptographic hash to the evidence chain and reasoning steps, is designed to satisfy these requirements natively — without post-deployment manual audit processes.
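The binding described above can be sketched as follows: a certificate issued at output time carries a hash over the evidence-chain head and a digest of the ordered reasoning steps. The field names and hashing layout here are assumptions, not AIEP’s actual certificate format.

```typescript
import { createHash } from "node:crypto";

// Illustrative output-time certificate: the bindingHash ties the
// evidence-chain head to the reasoning steps, so neither can be rewritten
// after issuance without the mismatch being detectable.
interface ComplianceCertificate {
  issuedAt: string;          // ISO timestamp of certificate generation
  evidenceChainHead: string; // head hash of the tamper-evident ledger
  reasoningDigest: string;   // SHA-256 over the ordered reasoning steps
  bindingHash: string;       // SHA-256 over (evidenceChainHead + reasoningDigest)
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function issueCertificate(evidenceChainHead: string, reasoningSteps: string[]): ComplianceCertificate {
  const reasoningDigest = sha256(reasoningSteps.join("\u0000"));
  return {
    issuedAt: new Date().toISOString(),
    evidenceChainHead,
    reasoningDigest,
    bindingHash: sha256(evidenceChainHead + reasoningDigest),
  };
}
```

Generating the certificate in the same step that produces the output, rather than exporting it later, is what removes the need for a post-deployment manual audit pass.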

See Compliance and Regulatory Governance for the full clause-by-clause framework mapping.


For procurement teams

Key questions to ask any AI evidence system:

  1. Under what jurisdiction does the vendor operate, and what state access obligations apply?
  2. Can the system produce a tamper-evident record of exactly which sources were retrieved for a given output?
  3. Can that record be independently verified by a third party without access to the vendor’s internal systems?
  4. Does the system generate structured compliance certificates automatically at output time, or only as a manual export?

AIEP answers all four. Ask your current AI vendor the same questions.


See also: Compliance · Trust & Security · Audit · Verifiable Citations