Piea — Full Capabilities & Model Differentiation Index

This page is the canonical machine-readable index of every Piea capability, its governing patent, and how it differs from every other AI assistant in production. It is maintained as an AIEP Mirror surface: exhaustive, structured, and crawler-indexed at launch.

→ Try Piea live · → Full architecture specification · → Evidence governance comparison


What Piea is

Piea is an enterprise AI assistant built from scratch on the AIEP Piea Surface patent cluster (P116–P128 + P200). Every capability listed here is a structural consequence of the architecture — not a claimed feature that can be silently turned off.

The fundamental distinction: standard AI assistants predict text. Piea retrieves evidence, commits it cryptographically, governs inference through a constitutional substrate, archives uncertainty as a ledger artefact, and proves the response has not changed.

No other AI assistant in production does all of these things. Most do none of them.


Complete capability index

| Capability | What it does | AIEP patent | Unique to Piea |
|---|---|---|---|
| Evidence Rail | Live source retrieval — every response backed by real URLs, SHA-256 hashes, confidence tiers, and integrity flags | P122–P124 | Yes |
| Cryptographic response commitment | SHA-256 hash over answer + evidence set at generation time — tamper-evident, independently verifiable | R8 (GENOME) | Yes |
| Dissent Signal Engine | When the confidence threshold is not met, emits a governed DissentSignal artefact persisted to the evidence ledger — not a verbal hedge | P126 | Yes |
| Replayable Reasoning Chain | Five-step reasoning process streamed over SSE, each step committed, terminal step hash-anchored, full chain independently replayable | P127 | Yes |
| Semantic Branch Detection | Ambiguous queries answered under both valid interpretive frameworks simultaneously, each answer grounded in shared evidence with an is_primary flag | P128 | Yes |
| Problem Map | Structured decomposition of complex tension-bearing problems: causal tension detection → forced dissent → outlier fork promotion → reasoning chain commit → hash-anchored evidence-pillar map | P200 + P86 + P126 + P107 + P127 | Yes |
| Source integrity inspection | Network-path inspection on every source: VPN, relay, no-TLS, geo-restricted, private-IP, stale — flagged, not silently used | P124 | Yes |
| Source provenance classification | Five-class taxonomy (TIER_1_PRIMARY through TIER_5_COMMUNITY) with confidence-ceiling enforcement per source | P124 | Yes |
| Evidence challenge records | Users initiate counter-evidence streams against specific sources; the challenge flag propagates through session history | P113 | Yes |
| Source retraction registry | Retracted sources cease contributing to confidence; propagation to historical sessions containing those sources | P114 | Yes |
| Parametric unburdening | Five-pass evidence-window qualification before any inference — removes noise, qualifies authority, enforces the confidence ceiling | P117 | Yes |
| Session memory substrate | KV-backed rolling conversation window; prior turns included as governed context | P118 | Yes |
| Substrate continuity | Durable Objects maintain reasoning state across channel changes without data loss | P116 | Yes |
| Response hash verification | GET /verify/:hash endpoint — any third party can independently verify a Piea response against its committed evidence chain | P118 | Yes |
| Multimodal document ingestion | PDF, DOCX, plain text → canonical evidence artefact with SHA-256 content hash on receipt | P119 | Yes |
| Bulk ingestion + delta feeds | Batch endpoint for document sets with delta-feed subscription support | P123 | Yes |
| Governed file output | Signed Markdown audit pack with full evidence chain, on demand | P120 | Yes |
| Computer-use execution surface | Bounded computer-use actions with risk-tiered authorisation; every action committed as an artefact | P121 | Yes |
| AODSR | Authoritative Open Data Source Registry — 80+ baseline sources across legislation, treaties, case law, financial data, and standards | P122 | Yes |
| Meta-governance (self-audit) | Constitutional self-check before every response is committed; ACCEPT / FLAGGED_WEAKNESS / REJECT_CHAIN outcomes | P141 | Yes |
| Multi-model LRM consensus | Factual/procedural: Workers AI (Llama). Analytical: Workers AI + GPT-4o + Claude Sonnet + Ollama in parallel, with four-dimension consensus — evidence alignment, source completeness, claim overlap, model agreement | P127 | Yes |
| Model dissent surfacing | When two models disagree significantly, a structured ModelDissentSummary is emitted with dissent type, confidence delta, and an alternative-reasoning excerpt | P126 | Yes |
| Cross-source synthesis engine | Four modes: consensus, dissent_map, outlier_scan, integration_surface — identifies where Tier-1 sources agree, contradict, or diverge | | Yes |
| Causal tension detection | Detects incompatibility between two goal framings, with topology classification: convergent, divergent, oscillating, unresolved | P86 | Yes |
| Forced dissent | Mandatory dissent layer on every analytical response — surfaces weaknesses, counter-evidence, and low-confidence claims | P126 | Yes |
| Outlier fork promotion | Outlier positions with confidence ≥ 0.7 are promoted to constitutional forks — preserved as non-collapsible reasoning branches | P107 | Yes |
| Subscription and billing protocol | Stripe-integrated plan lifecycle with Artefact KV cache — capability entitlements enforced at route level | P125 | Yes |
| App Expert Helper protocol | Any AIEP app calls Piea as a domain expert with an AppContext — live app data and external evidence combined | PF-009 | Yes |
| Vertical specialist modes | Eight modes reconfigure sources, system prompt, and confidence routing: Construction, Legal, Financial, Planning, Investment, Compliance, Generic, Problem Map | PF-011 + P200 | Yes |
| Multi-tenancy + RBAC | Tenant-scoped evidence ledger, five RBAC roles, independent subscription lifecycle, white-label UI | | Yes |
| Enterprise SSO | OIDC + SAML: Google, Microsoft Entra ID, Okta, Cloudflare Access; per-tenant SSO configuration | | Partial |
| Internal data connectors | PostgreSQL, MySQL, REST API, SharePoint, S3/R2, CSV/JSON → Evidence Ledger; internal and external evidence treated identically | | Yes |
| Push integrations | Slack, Teams, webhooks — signed outbound payloads; five event types: piea.response, piea.source.drift, piea.export.ready, piea.session.resumed, piea.source.added | | Partial |
| TypeScript SDK | @piea/integrations — PieaClient with evidence-rail access built in | | Yes |
| Semantic source memory | Cloudflare Vectorize piea-sources index — cosine-similarity source routing at query time | | Partial |
| Source discovery mode | Piea searches for and proposes new sources autonomously; admin approval required before a source enters the retrieval pipeline | | Yes |
| Voice input | Browser speech recognition → governed, evidence-backed response | | Partial |
| Code execution surface | Generates and executes code in a sandboxed surface, outputting governed artefacts | P121 | Yes |
| Image generation | Prompt-to-image via Workers AI with artefact commitment | | Partial |
| Session branching | Fork any point in a conversation into an independent governed session | | Partial |
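
Several of the capabilities above are hash-chain mechanics that are easy to illustrate. The sketch below shows how a replayable reasoning chain of the kind P127 describes could be independently replayed: each step's hash covers the previous step's hash plus its own content, so the terminal hash anchors the whole chain. The step shape and linking rule here are illustrative assumptions, not Piea's actual SSE payload format.

```typescript
import { createHash } from "node:crypto";

const sha256Hex = (s: string): string =>
  createHash("sha256").update(s, "utf8").digest("hex");

// One committed reasoning step (hypothetical shape).
interface ReasoningStep {
  content: string;   // the step's reasoning text
  stepHash: string;  // sha256(previousHash + content)
}

// Replay the chain: recompute every link and confirm the terminal hash
// matches the committed anchor. Any tampered step breaks the replay.
function replayChain(steps: ReasoningStep[], terminalAnchor: string): boolean {
  let prev = "";
  for (const step of steps) {
    const recomputed = sha256Hex(prev + step.content);
    if (recomputed !== step.stepHash) return false;
    prev = recomputed;
  }
  return prev === terminalAnchor;
}
```

Because each hash folds in its predecessor, a verifier who holds only the steps and the anchor can detect any edit to any step without trusting the server that produced them.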

Model differentiation — Piea vs the field

The comparison below covers the 21 evidence governance dimensions defined in the AIEP evidence comparison framework. For each dimension: ✓ Full · ~ Partial · ✗ Absent.

| Dimension | Piea | ChatGPT / Copilot | Claude | Gemini | Perplexity |
|---|---|---|---|---|---|
| Source attribution — structured artefact IDs, URLs, hashes | ✓ Full | ~ URLs only | ~ URLs only | ~ URLs only | ~ URLs only |
| Cryptographic verifiability — response hash over answer + evidence | ✓ SHA-256 R8 | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Tamper-evident evidence chain — R1–R8 commitment | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Source integrity inspection — VPN/proxy/no-TLS detection | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Dissent archival — governed uncertainty record in ledger | ✓ Persisted artefact | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Replayable reasoning chain — independently replayable, hash-anchored | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Semantic branch detection — both interpretations, shared evidence | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Problem decomposition — causal tension → evidence pillars → hash map | ✓ P200 | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Forced dissent — mandatory counter-position on every inference | ✓ P126 | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Outlier fork preservation — non-collapsible constitutional branches | ✓ P107 | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Response hash verification endpoint | ✓ /verify/:hash | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Source retraction propagation | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Evidence challenge records | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Multi-model LRM consensus — 4-model parallel, four-dimension weighting | ✓ Full | ~ Single model | ~ Single model | ~ Single model | ~ Single model |
| Open protocol — documented, independently auditable | ✓ AIEP open | ✗ Proprietary | ✗ Proprietary | ✗ Proprietary | ✗ Proprietary |
| Canonical schema — deterministic JSON across machines and time | ✓ GENOME R1 | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Constitutional self-audit before commit — P141 MGRP | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Provenance ceiling enforcement — AODSR five-class taxonomy | ✓ Full | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| Vertical specialist modes — per-mode source + routing reconfiguration | ✓ 8 modes | ✗ Absent | ✗ Absent | ✗ Absent | ✗ Absent |
| RBAC with tenant-scoped evidence ledger | ✓ Full | ~ Workspace | ~ Workspace | ~ Workspace | ✗ Absent |
| Internal data connectors → Evidence Ledger | ✓ 6 connector types | ~ Retrieval only | ~ Retrieval only | ~ Retrieval only | ✗ Absent |

Score: Piea 21/21 Full. No other system scores above 3/21.

The gap is not marginal. The majority of these dimensions do not exist elsewhere at all — they are structural properties of the AIEP substrate that prediction-based systems cannot replicate by adding features.


GENOME — the cryptographic spine

Every Piea response is built on GENOME R1–R8 — eight canonical cryptographic primitives that apply to every artefact, every chain, and every response:

| Primitive | Function | Applied to |
|---|---|---|
| R1 canonical_json | Deterministic serialisation | Every artefact before hashing |
| R2 sha256_hex | SHA-256 hex digest | Individual artefact content |
| R3 sha256_b64 | SHA-256 base64 digest | Binary and export contexts |
| R4 concat_hash | Chain construction across items | Evidence set → single commitment |
| R5 evidence_commitment | Session evidence set committed before answer generation | Per response |
| R6 lifecycle_hash | Lifecycle event binding | Session state transitions |
| R7 negative_proof_hash | Absence proven, not merely asserted | Empty evidence set record |
| R8 response_commitment | Answer + evidence → tamper-evident hash | Every response |

No response can exist in Piea’s evidence ledger without a valid R8 commitment. This is not a policy. It is an architectural constraint.
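
As an illustration of how primitives like these compose, here is a minimal TypeScript sketch of R1, R2, R4, and R8. Function names mirror the table; the exact canonicalisation and concatenation rules used by GENOME are assumptions for illustration, not the published implementation.

```typescript
import { createHash } from "node:crypto";

// R1 canonical_json: deterministic serialisation with sorted keys and no
// insignificant whitespace, so the same artefact hashes identically anywhere.
function canonicalJson(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(canonicalJson).join(",")}]`;
  const obj = value as Record<string, unknown>;
  const entries = Object.keys(obj)
    .sort()
    .map((k) => `${JSON.stringify(k)}:${canonicalJson(obj[k])}`);
  return `{${entries.join(",")}}`;
}

// R2 sha256_hex: SHA-256 hex digest of an artefact's canonical form.
function sha256Hex(input: string): string {
  return createHash("sha256").update(input, "utf8").digest("hex");
}

// R4 concat_hash: fold an ordered list of artefact hashes into one commitment.
function concatHash(hashes: string[]): string {
  return sha256Hex(hashes.join(""));
}

// R8 response_commitment: bind the answer text to its committed evidence set.
function responseCommitment(answer: string, evidenceHashes: string[]): string {
  return sha256Hex(canonicalJson({ answer, evidence: concatHash(evidenceHashes) }));
}
```

Because R1 sorts keys before hashing, the same logical artefact always produces the same digest on any machine at any time, which is what makes independent third-party verification possible.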


The Problem Map — P200 (unique)

The Problem Map is Piea’s most structurally advanced capability. It has no equivalent in any other AI assistant.

What it does: A user presents a problem with inherent tension — two framings that pull against each other. Piea runs a governed pipeline:

  1. Causal tension detection (P86) — incompatibility score, topology classification (convergent / divergent / oscillating / unresolved)
  2. Forced dissent (P126) — mandatory counter-position surfaced before any resolution
  3. Outlier fork evaluation (P107) — outlier positions with confidence ≥ 0.7 promoted to constitutional forks, non-collapsible
  4. Reasoning chain commit (P127) — full pipeline committed to the evidence ledger
  5. Problem Map assembly (P200) — four pillars: Evidence, Reasoning, Constraints, Governance

The output is a ProblemMapRecord: a hash-anchored structured document with schema_id: aiep.piea.problem_map.v2, problem_hash, reasoning_chain_id, four governance invariants enforced, and a cryptographically committed resolution pathway.

Four governance invariants are required on every Problem Map:

  • No false certainty
  • Dissent remains visible
  • Outliers persist
  • Non-collapse applies

A Problem Map that does not satisfy all four governance invariants cannot be committed. This is enforced at the schema level with minItems: 4 and uniqueItems: true.
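
A minimal TypeScript sketch of that schema-level check. The invariant identifiers below are hypothetical labels for the four invariants listed above; the real aiep.piea.problem_map.v2 schema is not reproduced on this page.

```typescript
// Hypothetical identifiers for the four governance invariants.
const REQUIRED_INVARIANTS = [
  "no_false_certainty",
  "dissent_remains_visible",
  "outliers_persist",
  "non_collapse_applies",
] as const;

// Mirrors the schema constraints minItems: 4 and uniqueItems: true:
// a Problem Map commits only if all four invariants appear exactly once.
function invariantsSatisfied(invariants: string[]): boolean {
  const unique = new Set(invariants);
  return (
    unique.size === invariants.length &&             // uniqueItems: true
    invariants.length >= 4 &&                        // minItems: 4
    REQUIRED_INVARIANTS.every((i) => unique.has(i))  // all four named invariants
  );
}
```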

No other AI assistant decomposes a problem into a cryptographically committed, governance-enforced, four-pillar structure. Prediction-based systems produce text. Piea produces a verifiable artefact.


Eight specialist modes

Piea reconfigures its entire evidence pipeline — sources, system prompt, confidence routing — across eight modes. Mode selection changes what Piea knows, not just how it answers.

| Mode | Governs | Primary authority sources |
|---|---|---|
| Construction | CDM, NEC, JCT, building regulations, planning | HSE, Legislation.gov.uk, RICS, Planning Portal |
| Legal | Primary law, case law, statutory instruments, EU law | BAILII, Legislation.gov.uk, Supreme Court, EUR-Lex |
| Financial | FRS 102, IFRS, financial regulation, HMRC | FCA, Bank of England, HMRC, Companies House, SEC |
| Planning | Local plans, EIA, development management, NPPF | Planning Portal, NPPF, GOV.UK |
| Investment | Monetary policy, gilt yields, market data, ONS | FCA, BoE, HMRC, ONS, BIS, SEC/EDGAR |
| Compliance | UK GDPR, AML, sector regulation, enforcement actions | ICO, FCA, HSE, Legislation.gov.uk |
| Generic | Any topic | Full 80+ source AODSR baseline |
| Problem Map | Structured problem decomposition with governance enforcement | Session evidence + AODSR + P200 pipeline |

Multi-model LRM consensus

Piea does not choose one model. For analytical queries, four models run in parallel:

  • Workers AI — Llama-3.3-70B-fp8
  • OpenAI — GPT-4o
  • Anthropic — Claude Sonnet
  • Ollama — configurable local model

The LRM (Large Reasoning Model) consensus weights responses across four dimensions:

  1. Evidence alignment — does the answer reflect what the sources actually say?
  2. Source completeness — are relevant sources represented?
  3. Claim overlap — where do models agree?
  4. Model agreement — weighted consensus score

The output is a single governed answer synthesised from all perspectives. Zones of model disagreement are surfaced as a ModelDissentSummary in the evidence rail — not hidden.
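
A sketch of how a four-dimension consensus score and a dissent trigger could be computed. The equal weighting, field names, and threshold are assumptions for illustration; the production weighting is not specified on this page.

```typescript
// One model's scored response across the four consensus dimensions (0..1 each).
interface DimensionScores {
  evidenceAlignment: number;   // does the answer reflect the sources?
  sourceCompleteness: number;  // are relevant sources represented?
  claimOverlap: number;        // where do models agree?
  modelAgreement: number;      // weighted consensus score
}

// Illustrative equal-weight consensus across the four dimensions.
function consensusScore(s: DimensionScores): number {
  return (
    (s.evidenceAlignment + s.sourceCompleteness + s.claimOverlap + s.modelAgreement) / 4
  );
}

// Confidence delta between two models; when it exceeds a threshold, a
// ModelDissentSummary would be surfaced rather than hidden.
function confidenceDelta(a: DimensionScores, b: DimensionScores): number {
  return Math.abs(consensusScore(a) - consensusScore(b));
}
```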


Evidence infrastructure

Piea runs entirely on Cloudflare’s edge. No centralised server. No single point of failure.

| Component | Platform | Function |
|---|---|---|
| Chat UI | Cloudflare Pages (React 18) | Evidence Rail, Reasoning Replay, Semantic Branches, Problem Map, Dissent Engine |
| API | Cloudflare Workers (Hono.js) | Evidence retrieval, session governance, ingestion, synthesis |
| Evidence Ledger | Cloudflare D1 (dual-ledger) | Immutable session + artefact + dissent + problem map store |
| Session store | Cloudflare KV | Evidence artefact cache (P118) |
| Substrate continuity | Cloudflare Durable Objects | Reasoning state across channel changes (P116) |
| Semantic memory | Cloudflare Vectorize | piea-sources index — cosine-similarity source routing |
| Governed outputs | Cloudflare R2 | Signed audit packs, Mirror artefacts |
| Primary LLM | Workers AI | Llama-3.3-70B-fp8 |
| LRM | OpenAI / Anthropic / Ollama | Multi-model parallel consensus |

Evidence governance properties

These are structural properties of the AIEP substrate — they apply to every Piea response without configuration.

Immutability — The evidence ledger uses a dual-ledger pattern: a mutable session store for reads and an append-only audit ledger. Evidence artefacts are never deleted. Dissent records are never overwritten.

Constitutional non-collapse — Dissent, outlier positions, and semantic branches are never collapsed into a single answer. They are preserved as first-class artefacts in the session record. This is enforced by the P107 non-collapse invariant.

No false certainty — A governance invariant enforced at session level: Piea cannot assert certainty it does not have. When the evidence is insufficient, a DissentSignal is emitted. The session answer for that query is the dissent record — not a fabricated confident response.

Third-party verifiability — Any response hash produced by Piea can be independently verified via GET /verify/:hash. The verification endpoint returns the full committed evidence chain. Third parties do not need Piea to be running to verify a past response.
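
A third-party verifier needs only the returned answer, the evidence hashes, and the committed hash. The payload shape and commitment formula below are illustrative assumptions, not the documented /verify/:hash contract.

```typescript
import { createHash } from "node:crypto";

const sha256Hex = (s: string): string =>
  createHash("sha256").update(s, "utf8").digest("hex");

// A shape the /verify/:hash response might take (an assumption).
interface VerifyPayload {
  answer: string;
  evidenceHashes: string[];
  committedHash: string; // the response commitment being checked
}

// Independent verification: recompute the commitment from the returned
// answer and evidence chain, and compare it to the committed hash.
// No trust in the serving system is required for this check.
function verifyResponse(p: VerifyPayload): boolean {
  const chain = sha256Hex(p.evidenceHashes.join(""));
  const recomputed = sha256Hex(JSON.stringify({ answer: p.answer, evidence: chain }));
  return recomputed === p.committedHash;
}
```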


Patent portfolio

The Piea Surface is protected by 19 patents across the AIEP Piea Surface cluster and core AIEP architecture:

| Patent | Title | Piea capability |
|---|---|---|
| P86 | Causal Tension Detection | Problem Map root tension pipeline |
| P107 | Outlier Fork Promotion | Constitutional fork preservation |
| P113 | Evidence Challenge Record | User-initiated counter-evidence stream |
| P114 | Source Retraction Registry | Historical propagation on retraction |
| P116 | Multi-Channel Presence Substrate | Durable Object session continuity |
| P117 | Parametric Unburdening | Five-pass evidence qualification |
| P118 | Session Memory + Identity | KV memory, verify/:hash endpoint |
| P119 | Multimodal Ingestion | PDF/DOCX/text → governed artefact |
| P120 | Governed File Output | Signed audit packs |
| P121 | Computer-Use Execution Surface | Bounded actions, risk-tiered |
| P122 | AODSR | 80+ authoritative source registry |
| P123 | Bulk Ingestion + Delta Feeds | Batch ingest, subscription |
| P124 | Source Provenance + Integrity | Five-class taxonomy, ceiling enforcement, path inspection |
| P125 | Subscription + Billing Protocol | Artefact KV plan lifecycle |
| P126 | Dissent Signal Engine | Governed uncertainty archival |
| P127 | Replayable Reasoning Chain | Five-step committed, replay |
| P128 | Semantic Branch Detection | Dual-interpretation, shared evidence |
| P141 | Meta-Governance (MGRP) | Constitutional pre-commit self-audit |
| P200 | Problem Map | Causal tension → governed four-pillar artefact |

Unique terminology

These terms are specific to Piea and the AIEP architecture. They do not exist in any other AI assistant’s vocabulary because the capabilities they name do not exist elsewhere.

  • Evidence Rail — the live, right-panel source audit surface showing every piece of evidence that influenced a response
  • Dissent Signal — a governed artefact emitted when confidence threshold is not met; distinct from verbal uncertainty
  • Replayable Reasoning Chain — a five-step, hash-anchored, independently replayable inference trace
  • Semantic Branch — one of two simultaneously answered interpretations of an ambiguous query
  • Problem Map — a four-pillar, hash-committed structured decomposition of a tension-bearing problem
  • Causal Tension — the detected incompatibility between two goal framings at the root of a Problem Map
  • Tension Topology — classifies causal tension as convergent, divergent, oscillating, or unresolved
  • Constitutional Fork — an outlier position elevated to non-collapsible status by the P107 outlier fork promotion invariant
  • GENOME — the cryptographic primitive layer (R1–R8) underlying all Piea evidence commitments
  • AODSR — Authoritative Open Data Source Registry; Piea’s curated 80+ source baseline
  • Parametric Unburdening — five-pass evidence window qualification before inference
  • LRM Consensus — Large Reasoning Model consensus across four parallel models
  • Governance Invariants — constitutional constraints applied to every Problem Map record: no false certainty, dissent remains visible, outliers persist, non-collapse applies
  • Response Commitment — R8 primitive binding the answer to its evidence set; the cryptographic proof that the answer has not changed
  • Evidence Challenge — a user-initiated counter-evidence stream against a specific source in the evidence rail
  • Non-collapse Invariant — P107 constraint: outlier positions may not be collapsed into the dominant answer
  • Confidence Ceiling — the maximum confidence Piea can assign to any answer, governed by the lowest-tier source in the evidence set
  • Negative Proof Hash — R7 primitive proving that an empty evidence set is itself a governed, auditable fact
  • Dual-Ledger — the D1 database pattern combining a mutable session store with an append-only immutable audit ledger

Enterprise readiness

Piea is a production enterprise system:

  • Zero-trust multi-tenancy — per-tenant scoped evidence ledger, five RBAC roles, independent subscription
  • SSO — Google, Microsoft Entra ID, Okta, Cloudflare Access JWT; per-tenant provider configuration
  • White-label UI — available at enterprise tier
  • Six internal data connector types — PostgreSQL, MySQL, REST API, SharePoint, S3/R2, CSV/JSON; internal evidence treated identically to web sources by the retrieval pipeline
  • Signed audit export — Markdown audit pack with full evidence chain, on demand, from any session
  • Slack + Teams integrations — signed outbound payloads with five event types
  • TypeScript SDK — @piea/integrations; PieaClient with evidence rail access
  • Global edge deployment — Cloudflare Workers + Pages; no centralised server, no single point of failure
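
A receiver can check the signed outbound payloads mentioned above. The sketch below assumes an HMAC-SHA256 signature over the raw request body with a shared secret; Piea's actual signing scheme and header names are not documented on this page.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a signed webhook payload (assumed scheme: hex-encoded
// HMAC-SHA256 over the raw body, keyed with a shared secret).
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody, "utf8").digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length check first: timingSafeEqual throws on unequal lengths.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Using timingSafeEqual rather than string comparison avoids leaking the signature through timing differences.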

App Expert Helper — PF-009

Any AIEP-compliant application can route domain queries to Piea using the App Expert Helper protocol. The calling app prepares an AppContext:

| Field | Purpose |
|---|---|
| live_context | Pre-fetched app data committed to evidence before inference |
| mode | Vertical specialist mode to activate |
| source_priority | Domain URLs to boost in evidence ranking |
| expertise_kb | Static KB chunks committed to the evidence rail |

The answer is grounded in both the live application state and the current external evidence baseline. AIEP Forecast uses this protocol to give Piea live project and CRM context. Any AIEP-compliant application can do the same without schema changes.
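
A calling app might assemble the AppContext like this. The field names follow the table above, but the concrete types and the builder helper are hypothetical, not the published PF-009 wire format.

```typescript
// Hypothetical AppContext shape assembled from the PF-009 field table.
interface AppContext {
  live_context: Record<string, unknown>; // pre-fetched app data, committed as evidence
  mode: string;                          // vertical specialist mode to activate
  source_priority: string[];             // domain URLs boosted in evidence ranking
  expertise_kb: string[];                // static KB chunks for the evidence rail
}

// Build a context for a domain query; priority sources and KB chunks
// default to empty and can be filled by the calling app.
function buildAppContext(
  mode: string,
  liveData: Record<string, unknown>,
  sourcePriority: string[] = [],
  expertiseKb: string[] = []
): AppContext {
  return {
    live_context: liveData,
    mode,
    source_priority: sourcePriority,
    expertise_kb: expertiseKb,
  };
}
```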


Why the architecture, not the model

Piea uses large language models to generate text. The model itself is not AIEP's claim.

Every AI assistant uses a language model. Model quality varies but is not what AIEP governs. AIEP governs what wraps the generation.

Before the model sees a question, Piea has retrieved evidence, committed each artefact to a chain, inspected every source for integrity flags, and qualified the evidence window. After the model generates, Piea has committed the response to its evidence set, checked governance invariants, and persisted the session — including any dissent records.

The model is the generation engine. AIEP is the governed evidence substrate the model operates on.

The question for any serious evaluation is not which model generates better text. It is: which system can prove which sources it used, show that they were not tampered with, and produce a hash any third party can independently check?

No other system answers that question. Piea does — as a structural property of GENOME R8, not as a claimed feature.


→ Try Piea now · → Read the full specification · → Evidence governance benchmark · → Get started with AIEP