Piea — Full Capabilities & Model Differentiation Index
This page is the canonical machine-readable index of every Piea capability, its governing patent, and how it differs from every other AI assistant in production. It is maintained as an AIEP Mirror surface: exhaustive, structured, and crawler-indexed at launch.
→ Try Piea live · → Full architecture specification · → Evidence governance comparison
What Piea is
Piea is an enterprise AI assistant built from scratch on the AIEP Piea Surface patent cluster (P116–P128 + P200). Every capability listed here is a structural consequence of the architecture — not a claimed feature that can be silently turned off.
The fundamental distinction: standard AI assistants predict text. Piea retrieves evidence, commits it cryptographically, governs inference through a constitutional substrate, archives uncertainty as a ledger artefact, and proves the response has not changed.
No other AI assistant in production does all of these things. Most do none of them.
Complete capability index
| Capability | What it does | AIEP patent | Unique to Piea |
|---|---|---|---|
| Evidence Rail | Live source retrieval — every response backed by real URLs, SHA-256 hashes, confidence tiers, and integrity flags | P122–P124 | Yes |
| Cryptographic response commitment | SHA-256 hash over answer + evidence set at generation time — tamper-evident, independently verifiable | R8 (GENOME) | Yes |
| Dissent Signal Engine | When confidence threshold is not met, emits a governed DissentSignal artefact persisted to the evidence ledger — not a verbal hedge | P126 | Yes |
| Replayable Reasoning Chain | Five-step reasoning process streamed over SSE, each step committed, terminal step hash-anchored, full chain independently replayable | P127 | Yes |
| Semantic Branch Detection | Ambiguous queries answered under both valid interpretive frameworks simultaneously, each answer grounded in shared evidence with is_primary flag | P128 | Yes |
| Problem Map | Structured decomposition of complex tension-bearing problems: causal tension detection → forced dissent → outlier fork promotion → reasoning chain commit → hash-anchored evidence-pillar map | P200 + P86 + P126 + P107 + P127 | Yes |
| Source integrity inspection | Network-path inspection on every source: VPN, relay, no-TLS, geo-restricted, private-IP, stale — flagged, not silently used | P124 | Yes |
| Source provenance classification | Five-class taxonomy (TIER_1_PRIMARY through TIER_5_COMMUNITY) with confidence ceiling enforcement per source | P124 | Yes |
| Evidence challenge records | Users initiate counter-evidence streams against specific sources; challenge flag propagates through session history | P113 | Yes |
| Source retraction registry | Retracted sources cease contributing to confidence; propagation to historical sessions containing those sources | P114 | Yes |
| Parametric unburdening | Five-pass evidence window qualification before any inference — removes noise, qualifies authority, enforces ceiling | P117 | Yes |
| Session memory substrate | KV-backed rolling conversation window; prior turns included as governed context | P118 | Yes |
| Substrate continuity | Durable Objects maintain reasoning state across channel changes without data loss | P116 | Yes |
| Response hash verification | GET /verify/:hash endpoint — any third party can independently verify a Piea response against its committed evidence chain | P118 | Yes |
| Multimodal document ingestion | PDF, DOCX, plain text → canonical evidence artefact with SHA-256 content hash on receipt | P119 | Yes |
| Bulk ingestion + delta feeds | Batch endpoint for document sets with delta feed subscription support | P123 | Yes |
| Governed file output | Signed Markdown audit pack with full evidence chain, on demand | P120 | Yes |
| Computer-use execution surface | Bounded computer-use actions with risk-tiered authorisation; every action committed as an artefact | P121 | Yes |
| AODSR | Authoritative Open Data Source Registry — 80+ baseline sources across legislation, treaties, case law, financial data, and standards | P122 | Yes |
| Meta-governance (self-audit) | Constitutional self-check before every response is committed; ACCEPT / FLAGGED_WEAKNESS / REJECT_CHAIN outcomes | P141 | Yes |
| Multi-model LRM consensus | Factual/procedural: Workers AI (Llama). Analytical: Workers AI + GPT-4o + Claude Sonnet + Ollama in parallel, four-dimension consensus — evidence alignment, source completeness, claim overlap, model agreement | P127 | Yes |
| Model dissent surfacing | When two models disagree significantly, a structured ModelDissentSummary is emitted with dissent type, confidence delta, and alternative reasoning excerpt | P126 | Yes |
| Cross-source synthesis engine | Four modes: consensus, dissent_map, outlier_scan, integration_surface — identifies where Tier-1 sources agree, contradict, or diverge | — | Yes |
| Causal tension detection | Detects incompatibility between two goal framings with topology classification: convergent, divergent, oscillating, unresolved | P86 | Yes |
| Forced dissent | Mandatory dissent layer on every analytical response — surfaces weaknesses, counter-evidence, and low-confidence claims | P126 | Yes |
| Outlier fork promotion | Outlier positions with confidence ≥ 0.7 are promoted to constitutional forks — preserved as non-collapsible reasoning branches | P107 | Yes |
| Subscription and billing protocol | Stripe-integrated plan lifecycle with Artefact KV cache — capability entitlements enforced at route level | P125 | Yes |
| App Expert Helper protocol | Any AIEP app calls Piea as a domain expert with AppContext — live app data + external evidence combined | PF-009 | Yes |
| Vertical specialist modes | Eight modes reconfigure sources, system prompt, and confidence routing: Construction, Legal, Financial, Planning, Investment, Compliance, Generic, Problem Map | PF-011 + P200 | Yes |
| Multi-tenancy + RBAC | Tenant-scoped evidence ledger, five RBAC roles, independent subscription lifecycle, white-label UI | — | Yes |
| Enterprise SSO | OIDC + SAML: Google, Microsoft Entra ID, Okta, Cloudflare Access; per-tenant SSO configuration | — | Partial |
| Internal data connectors | PostgreSQL, MySQL, REST API, SharePoint, S3/R2, CSV/JSON → Evidence Ledger; internal and external evidence treated identically | — | Yes |
| Push integrations | Slack, Teams, webhooks — signed outbound payloads; five event types: piea.response, piea.source.drift, piea.export.ready, piea.session.resumed, piea.source.added | — | Partial |
| TypeScript SDK | @piea/integrations — PieaClient with evidence rail access built in | — | Yes |
| Semantic source memory | Cloudflare Vectorize piea-sources index — cosine-similarity source routing at query time | — | Partial |
| Source discovery mode | Piea searches for and proposes new sources autonomously; admin approval before entering retrieval pipeline | — | Yes |
| Voice input | Browser speech recognition → governed evidence-backed response | — | Partial |
| Code execution surface | Piea can generate and execute code in a sandboxed surface, outputting governed artefacts | P121 | Yes |
| Image generation | Prompt-to-image via Workers AI with artefact commitment | — | Partial |
| Session branching | Fork any point in a conversation into an independent governed session | — | Partial |
Model differentiation — Piea vs the field
The comparison below covers the 21 evidence governance dimensions defined in the AIEP evidence comparison framework. For each dimension: ✓ Full · ~ Partial · ✗ Absent.
| Dimension | Piea | ChatGPT / Copilot | Claude | Gemini | Perplexity |
|---|---|---|---|---|---|
| Source attribution — structured artefact IDs, URLs, hashes | ✓ Full | ~ URLs only | ~ URLs only | ~ URLs only | ~ URLs only |
| Cryptographic verifiability — response hash over answer + evidence | ✓ SHA-256 R8 | ✗ | ✗ | ✗ | ✗ |
| Tamper-evident evidence chain — R1–R8 commitment | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Source integrity inspection — VPN/proxy/no-TLS detection | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Dissent archival — governed uncertainty record in ledger | ✓ Persisted artefact | ✗ | ✗ | ✗ | ✗ |
| Replayable reasoning chain — independently replayable, hash-anchored | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Semantic branch detection — both interpretations, shared evidence | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Problem decomposition — causal tension → evidence pillars → hash map | ✓ P200 | ✗ | ✗ | ✗ | ✗ |
| Forced dissent — mandatory counter-position on every inference | ✓ P126 | ✗ | ✗ | ✗ | ✗ |
| Outlier fork preservation — non-collapsible constitutional branches | ✓ P107 | ✗ | ✗ | ✗ | ✗ |
| Response hash verification endpoint | ✓ /verify/:hash | ✗ | ✗ | ✗ | ✗ |
| Source retraction propagation | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Evidence challenge records | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Multi-model LRM consensus — 4-model parallel, four-dimension weighting | ✓ Full | ~ Single model | ~ Single model | ~ Single model | ~ Single model |
| Open protocol — documented, independently auditable | ✓ AIEP open | ✗ Proprietary | ✗ Proprietary | ✗ Proprietary | ✗ Proprietary |
| Canonical schema — deterministic JSON across machines and time | ✓ GENOME R1 | ✗ | ✗ | ✗ | ✗ |
| Constitutional self-audit before commit — P141 MGRP | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Provenance ceiling enforcement — AODSR five-class taxonomy | ✓ Full | ✗ | ✗ | ✗ | ✗ |
| Vertical specialist modes — per-mode source + routing reconfiguration | ✓ 8 modes | ✗ | ✗ | ✗ | ✗ |
| RBAC with tenant-scoped evidence ledger | ✓ Full | ~ Workspace | ~ Workspace | ~ Workspace | ✗ |
| Internal data connectors → Evidence Ledger | ✓ 6 connector types | ~ Retrieval only | ~ Retrieval only | ~ Retrieval only | ✗ |
Score: Piea 21/21 Full. No other system scores above 3/21.
The gap is not marginal. The majority of these dimensions do not exist elsewhere at all — they are structural properties of the AIEP substrate that prediction-based systems cannot replicate by adding features.
GENOME — the cryptographic spine
Every Piea response is built on GENOME R1–R8 — eight canonical cryptographic primitives that apply to every artefact, every chain, and every response:
| Primitive | Function | Applied to |
|---|---|---|
| R1 canonical_json | Deterministic serialisation | Every artefact before hashing |
| R2 sha256_hex | SHA-256 hex digest | Individual artefact content |
| R3 sha256_b64 | SHA-256 base64 digest | Binary and export contexts |
| R4 concat_hash | Chain construction across items | Evidence set → single commitment |
| R5 evidence_commitment | Session evidence set committed before answer generation | Per response |
| R6 lifecycle_hash | Lifecycle event binding | Session state transitions |
| R7 negative_proof_hash | Absence proven, not merely asserted | Empty evidence set record |
| R8 response_commitment | Answer + evidence → tamper-evident hash | Every response |
No response can exist in Piea’s evidence ledger without a valid R8 commitment. This is not a policy. It is an architectural constraint.
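The relationship between the primitives can be illustrated with a minimal sketch. This is not the AIEP implementation, which is not reproduced on this page: the function bodies, the concatenation scheme in R4/R8, and the artefact fields are assumptions made for illustration; only the primitive names and the tier labels TIER_1_PRIMARY and TIER_5_COMMUNITY come from the tables above.

```python
import hashlib
import json

def canonical_json(artefact: dict) -> str:
    """R1: deterministic serialisation -- sorted keys, fixed separators."""
    return json.dumps(artefact, sort_keys=True, separators=(",", ":"))

def sha256_hex(data: str) -> str:
    """R2: SHA-256 hex digest of a canonicalised artefact."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def concat_hash(hashes: list[str]) -> str:
    """R4: chain construction -- hash over the concatenated item hashes."""
    return sha256_hex("".join(hashes))

def evidence_commitment(artefacts: list[dict]) -> str:
    """R5: commit the full evidence set before answer generation."""
    return concat_hash([sha256_hex(canonical_json(a)) for a in artefacts])

def response_commitment(answer: str, evidence_hash: str) -> str:
    """R8: bind the answer text to its committed evidence set."""
    return sha256_hex(answer + evidence_hash)

# Illustrative use: two evidence artefacts, one answer.
evidence = [
    {"url": "https://example.gov/doc", "tier": "TIER_1_PRIMARY"},
    {"url": "https://example.org/forum", "tier": "TIER_5_COMMUNITY"},
]
ev_hash = evidence_commitment(evidence)
r8 = response_commitment("The regulation applies from 2015.", ev_hash)
print(r8)  # a 64-character hex digest a third party can recompute
```

Because R1 fixes serialisation, the same artefacts always produce the same commitment, which is what makes independent recomputation possible.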
The Problem Map — P200 (unique)
The Problem Map is Piea’s most structurally advanced capability. It has no equivalent in any other AI assistant.
What it does: A user presents a problem with inherent tension — two framings that pull against each other. Piea runs a governed pipeline:
- Causal tension detection (P86) — incompatibility score, topology classification (convergent / divergent / oscillating / unresolved)
- Forced dissent (P126) — mandatory counter-position surfaced before any resolution
- Outlier fork evaluation (P107) — outlier positions with confidence ≥ 0.7 promoted to constitutional forks, non-collapsible
- Reasoning chain commit (P127) — full pipeline committed to the evidence ledger
- Problem Map assembly (P200) — four pillars: Evidence, Reasoning, Constraints, Governance
The output is a ProblemMapRecord: a hash-anchored structured document with schema_id: aiep.piea.problem_map.v2, problem_hash, reasoning_chain_id, four governance invariants enforced, and a cryptographically committed resolution pathway.
Four governance invariants are required on every Problem Map:
- No false certainty
- Dissent remains visible
- Outliers persist
- Non-collapse applies
A Problem Map that does not satisfy all four governance invariants cannot be committed. This is enforced at the schema level with minItems: 4 and uniqueItems: true.
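The commit gate implied by that schema constraint can be sketched as a plain membership check. The invariant identifiers and the governance_invariants field name below are assumptions for illustration; only the four invariant meanings and the minItems/uniqueItems constraint come from the text above.

```python
# Hypothetical identifiers for the four documented invariants.
REQUIRED_INVARIANTS = {
    "no_false_certainty",
    "dissent_remains_visible",
    "outliers_persist",
    "non_collapse_applies",
}

def can_commit(problem_map: dict) -> bool:
    """Mirror of the schema gate: at least four invariants (minItems: 4),
    no duplicates (uniqueItems: true), and all four required ones present."""
    invariants = problem_map.get("governance_invariants", [])
    return (
        len(invariants) >= 4
        and len(set(invariants)) == len(invariants)
        and REQUIRED_INVARIANTS.issubset(invariants)
    )

valid_map = {"governance_invariants": sorted(REQUIRED_INVARIANTS)}
print(can_commit(valid_map))  # True
```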
No other AI assistant decomposes a problem into a cryptographically committed, governance-enforced, four-pillar structure. Prediction-based systems produce text. Piea produces a verifiable artefact.
Eight specialist modes
Piea reconfigures its entire evidence pipeline — sources, system prompt, confidence routing — across eight modes. Mode selection changes what Piea knows, not just how it answers.
| Mode | Governs | Primary authority sources |
|---|---|---|
| Construction | CDM, NEC, JCT, building regulations, planning | HSE, Legislation.gov.uk, RICS, Planning Portal |
| Legal | Primary law, case law, statutory instruments, EU law | BAILII, Legislation.gov.uk, Supreme Court, EUR-Lex |
| Financial | FRS 102, IFRS, financial regulation, HMRC | FCA, Bank of England, HMRC, Companies House, SEC |
| Planning | Local plans, EIA, development management, NPPF | Planning Portal, NPPF, GOV.UK |
| Investment | Monetary policy, gilt yields, market data, ONS | FCA, BoE, HMRC, ONS, BIS, SEC/EDGAR |
| Compliance | UK GDPR, AML, sector regulation, enforcement actions | ICO, FCA, HSE, Legislation.gov.uk |
| Generic | Any topic | Full 80+ source AODSR baseline |
| Problem Map | Structured problem decomposition with governance enforcement | Session evidence + AODSR + P200 pipeline |
Multi-model LRM consensus
Piea does not choose one model. For analytical queries, four models run in parallel:
- Workers AI — Llama-3.3-70B-fp8
- OpenAI — GPT-4o
- Anthropic — Claude Sonnet
- Ollama — configurable local model
The LRM (Large Reasoning Model) consensus weights responses across four dimensions:
- Evidence alignment — does the answer reflect what the sources actually say?
- Source completeness — are relevant sources represented?
- Claim overlap — where do models agree?
- Model agreement — weighted consensus score
The output is a single governed answer synthesised from all perspectives. Zones of model disagreement are surfaced as a ModelDissentSummary in the evidence rail — not hidden.
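A four-dimension weighting of this kind can be sketched as a weighted mean over per-model scores. The weights, field names, and aggregation below are illustrative assumptions; only the four dimension names come from the list above.

```python
from dataclasses import dataclass

@dataclass
class ModelScore:
    model: str
    evidence_alignment: float   # does the answer reflect the sources? (0..1)
    source_completeness: float  # are relevant sources represented? (0..1)
    claim_overlap: float        # agreement on individual claims (0..1)
    model_agreement: float      # weighted cross-model consensus (0..1)

def consensus_score(
    scores: list[ModelScore],
    weights: tuple = (0.3, 0.2, 0.25, 0.25),  # illustrative, not published
) -> float:
    """Weighted mean across the four dimensions, averaged over models."""
    per_model = [
        weights[0] * s.evidence_alignment
        + weights[1] * s.source_completeness
        + weights[2] * s.claim_overlap
        + weights[3] * s.model_agreement
        for s in scores
    ]
    return sum(per_model) / len(per_model)
```

In a sketch like this, a low consensus score would be the trigger for emitting a ModelDissentSummary rather than a single confident answer.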
Evidence infrastructure
Piea runs entirely on Cloudflare’s edge. No centralised server. No single point of failure.
| Component | Platform | Function |
|---|---|---|
| Chat UI | Cloudflare Pages (React 18) | Evidence Rail, Reasoning Replay, Semantic Branches, Problem Map, Dissent Engine |
| API | Cloudflare Workers (Hono.js) | Evidence retrieval, session governance, ingestion, synthesis |
| Evidence Ledger | Cloudflare D1 (dual-ledger) | Immutable session + artefact + dissent + problem map store |
| Session store | Cloudflare KV | Evidence artefact cache (P118) |
| Substrate continuity | Cloudflare Durable Objects | Reasoning state across channel changes (P116) |
| Semantic memory | Cloudflare Vectorize | piea-sources index — cosine-similarity source routing |
| Governed outputs | Cloudflare R2 | Signed audit packs, Mirror artefacts |
| Primary LLM | Workers AI | Llama-3.3-70B-fp8 |
| LRM | OpenAI / Anthropic / Ollama | Multi-model parallel consensus |
Evidence governance properties
These are structural properties of the AIEP substrate — they apply to every Piea response without configuration.
Immutability — The evidence ledger uses a dual-ledger pattern: a mutable session store for reads and an append-only audit ledger. Evidence artefacts are never deleted. Dissent records are never overwritten.
Constitutional non-collapse — Dissent, outlier positions, and semantic branches are never collapsed into a single answer. They are preserved as first-class artefacts in the session record. This is enforced by the P107 non-collapse invariant.
No false certainty — A governance invariant enforced at session level: Piea cannot assert certainty it does not have. When the evidence is insufficient, a DissentSignal is emitted. The session answer for that query is the dissent record — not a fabricated confident response.
Third-party verifiability — Any response hash produced by Piea can be independently verified via GET /verify/:hash. The verification endpoint returns the full committed evidence chain. Third parties do not need Piea to be running to verify a past response.
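The recomputation step a third party would perform can be sketched as follows. The payload field names (answer, evidence_hash, response_hash) and the concatenation scheme are assumptions; the actual /verify/:hash wire format may differ.

```python
import hashlib

def verify_response(record: dict) -> bool:
    """Recompute the R8 commitment from a verification payload and compare
    it against the committed hash. Field names are assumptions."""
    recomputed = hashlib.sha256(
        (record["answer"] + record["evidence_hash"]).encode("utf-8")
    ).hexdigest()
    return recomputed == record["response_hash"]

# Hypothetical verification payload for an untampered response.
record = {
    "answer": "The regulation applies from 2015.",
    "evidence_hash": hashlib.sha256(b"evidence-set").hexdigest(),
}
record["response_hash"] = hashlib.sha256(
    (record["answer"] + record["evidence_hash"]).encode("utf-8")
).hexdigest()
print(verify_response(record))  # True
```

Any change to the answer or the evidence hash after commitment makes the recomputed digest diverge, which is what tamper evidence means here.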
Patent portfolio
The Piea Surface is protected by 19 patents across the AIEP Piea Surface cluster and core AIEP architecture:
| Patent | Title | Piea capability |
|---|---|---|
| P86 | Causal Tension Detection | Problem Map root tension pipeline |
| P107 | Outlier Fork Promotion | Constitutional fork preservation |
| P113 | Evidence Challenge Record | User-initiated counter-evidence stream |
| P114 | Source Retraction Registry | Historical propagation on retraction |
| P116 | Multi-Channel Presence Substrate | Durable Object session continuity |
| P117 | Parametric Unburdening | Five-pass evidence qualification |
| P118 | Session Memory + Identity | KV memory, verify/:hash endpoint |
| P119 | Multimodal Ingestion | PDF/DOCX/text → governed artefact |
| P120 | Governed File Output | Signed audit packs |
| P121 | Computer-Use Execution Surface | Bounded actions, risk-tiered |
| P122 | AODSR | 80+ authoritative source registry |
| P123 | Bulk Ingestion + Delta Feeds | Batch ingest, subscription |
| P124 | Source Provenance + Integrity | Five-class taxonomy, ceiling enforcement, path inspection |
| P125 | Subscription + Billing Protocol | Artefact KV plan lifecycle |
| P126 | Dissent Signal Engine | Governed uncertainty archival |
| P127 | Replayable Reasoning Chain | Five-step committed, replay |
| P128 | Semantic Branch Detection | Dual-interpretation, shared evidence |
| P141 | Meta-Governance (MGRP) | Constitutional pre-commit self-audit |
| P200 | Problem Map | Causal tension → governed four-pillar artefact |
Unique terminology
These terms are specific to Piea and the AIEP architecture. They do not exist in any other AI assistant’s vocabulary because the capabilities they name do not exist elsewhere.
- Evidence Rail — the live, right-panel source audit surface showing every piece of evidence that influenced a response
- Dissent Signal — a governed artefact emitted when confidence threshold is not met; distinct from verbal uncertainty
- Replayable Reasoning Chain — a five-step, hash-anchored, independently replayable inference trace
- Semantic Branch — one of two simultaneously answered interpretations of an ambiguous query
- Problem Map — a four-pillar, hash-committed structured decomposition of a tension-bearing problem
- Causal Tension — the detected incompatibility between two goal framings at the root of a Problem Map
- Tension Topology — classifies causal tension as convergent, divergent, oscillating, or unresolved
- Constitutional Fork — an outlier position elevated to non-collapsible status by the P107 outlier fork promotion invariant
- GENOME — the cryptographic primitive layer (R1–R8) underlying all Piea evidence commitments
- AODSR — Authoritative Open Data Source Registry; Piea’s curated 80+ source baseline
- Parametric Unburdening — five-pass evidence window qualification before inference
- LRM Consensus — Large Reasoning Model consensus across four parallel models
- Governance Invariants — constitutional constraints applied to every Problem Map record: no false certainty, dissent remains visible, outliers persist, non-collapse applies
- Response Commitment — R8 primitive binding the answer to its evidence set; the cryptographic proof that the answer has not changed
- Evidence Challenge — a user-initiated counter-evidence stream against a specific source in the evidence rail
- Non-collapse Invariant — P107 constraint: outlier positions may not be collapsed into the dominant answer
- Confidence Ceiling — the maximum confidence Piea can assign to any answer, governed by the lowest-tier source in the evidence set
- Negative Proof Hash — R7 primitive proving that an empty evidence set is itself a governed, auditable fact
- Dual-Ledger — the D1 database pattern combining a mutable session store with an append-only immutable audit ledger
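The Confidence Ceiling rule, that the weakest source in the evidence set caps the answer's confidence, can be sketched in a few lines. The ceiling values and the intermediate tier names are not reproduced on this page, so the mapping below is invented for illustration; only TIER_1_PRIMARY and TIER_5_COMMUNITY are documented names.

```python
# Illustrative tier -> ceiling mapping; values are assumptions.
TIER_CEILING = {
    "TIER_1_PRIMARY": 0.95,
    "TIER_5_COMMUNITY": 0.40,
}

def confidence_ceiling(evidence: list[dict]) -> float:
    """The answer's maximum confidence is set by the weakest source."""
    return min(TIER_CEILING[artefact["tier"]] for artefact in evidence)

mixed = [{"tier": "TIER_1_PRIMARY"}, {"tier": "TIER_5_COMMUNITY"}]
print(confidence_ceiling(mixed))  # 0.4 -- the community source caps it
```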
Enterprise readiness
Piea is a production enterprise system:
- Zero-trust multi-tenancy — per-tenant scoped evidence ledger, five RBAC roles, independent subscription
- SSO — Google, Microsoft Entra ID, Okta, Cloudflare Access JWT; per-tenant provider configuration
- White-label UI — available at enterprise tier
- Six internal data connector types — PostgreSQL, MySQL, REST API, SharePoint, S3/R2, CSV/JSON; internal evidence treated identically to web sources by the retrieval pipeline
- Signed audit export — Markdown audit pack with full evidence chain, on demand, from any session
- Slack + Teams integrations — signed outbound payloads with five event types
- TypeScript SDK —
@piea/integrations;PieaClientwith evidence rail access - Global edge deployment — Cloudflare Workers + Pages; no centralised server, no single point of failure
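A receiving service would verify a signed outbound payload before trusting it. The hex-encoded HMAC-SHA256 scheme, the per-tenant secret, and the payload fields below are assumptions made for the sketch; only the event type piea.response comes from the documented list.

```python
import hashlib
import hmac
import json

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Check a payload signature the way a Slack/Teams/webhook receiver
    might; constant-time compare avoids timing leaks."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Hypothetical payload using one of the five documented event types.
body = json.dumps({"event": "piea.response", "session": "sess-123"}).encode()
secret = b"per-tenant-shared-secret"  # illustrative value
signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_webhook(secret, body, signature))  # True
```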
App Expert Helper — PF-009
Any AIEP-compliant application can route domain queries to Piea using the App Expert Helper protocol. The calling app prepares an AppContext:
| Field | Purpose |
|---|---|
| live_context | Pre-fetched app data committed to evidence before inference |
| mode | Vertical specialist mode to activate |
| source_priority | Domain URLs to boost in evidence ranking |
| expertise_kb | Static KB chunks committed to the evidence rail |
The answer is grounded in both the live application state and the current external evidence baseline. AIEP Forecast uses this protocol to give Piea live project and CRM context. Any AIEP-compliant application can do the same without schema changes.
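An AppContext assembled by a calling app might look like the following. All field values here are invented for illustration; only the four field names and the Construction mode name come from this page.

```python
# Hypothetical AppContext a calling app could assemble before routing
# a domain query to Piea via the App Expert Helper protocol.
app_context = {
    "live_context": {                 # pre-fetched app data, committed as evidence
        "project_id": "proj-42",
        "stage": "tender",
    },
    "mode": "Construction",           # vertical specialist mode to activate
    "source_priority": [              # domain URLs to boost in evidence ranking
        "https://www.hse.gov.uk",
    ],
    "expertise_kb": [                 # static KB chunks for the evidence rail
        {"id": "kb-001", "text": "Illustrative domain knowledge chunk."},
    ],
}
print(sorted(app_context))
```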
Why the architecture, not the model
Piea uses large language models to generate text. Generation itself, however, is not AIEP's claim.
Every AI assistant uses a language model. Model quality varies, but it is not what AIEP governs. AIEP governs what wraps the generation.
Before the model sees a question, Piea has retrieved evidence, committed each artefact to a chain, inspected every source for integrity flags, and qualified the evidence window. After the model generates, Piea has committed the response to its evidence set, checked governance invariants, and persisted the session — including any dissent records.
The model is the generation engine. AIEP is the governed evidence substrate the model operates on.
The question for any serious evaluation is not which model generates better text. It is: which system can prove what sources it used, that they were not tampered with, and produce a hash any third party can independently check?
No other system answers that question. Piea does — as a structural property of GENOME R8, not as a claimed feature.
→ Try Piea now · → Read the full specification · → Evidence governance benchmark · → Get started with AIEP