# AIEP

**Verifiable AI answers, backed by real evidence**

AIEP produces answers using real sources, validates them, and binds every output to cryptographic proof you can check.
## Try it in 60 seconds

```bash
pip install aiep-genome-sdk
```

```python
from aiep.genome import verify

result = verify("Is inflation rising?")
print(result)
```

Output:

```json
{
  "answer": "...",
  "sources": ["https://...", "https://..."],
  "hash": "0xabc123...",
  "confidence": "verified"
}
```
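The `hash` field is what lets a third party check the output. As a minimal sketch of the idea (the actual AIEP binding scheme is not documented here; the canonical-JSON-plus-SHA-256 construction below is an assumption), anyone holding the answer and its sources can recompute the digest and compare:

```python
import hashlib
import json

def bind(answer: str, sources: list[str]) -> str:
    """Recompute a digest over an answer and its sources.

    Hypothetical scheme: canonical JSON (sorted keys, fixed separators)
    hashed with SHA-256. The real AIEP binding may differ.
    """
    payload = json.dumps(
        {"answer": answer, "sources": sorted(sources)},
        sort_keys=True, separators=(",", ":"),
    )
    return "0x" + hashlib.sha256(payload.encode("utf-8")).hexdigest()

record = {
    "answer": "Inflation rose 0.3% month-on-month.",
    "sources": ["https://example.org/cpi-report"],
}

# Recomputing over the same answer and sources yields the same digest,
# so tampering with either is detectable.
digest = bind(record["answer"], record["sources"])
assert digest == bind(record["answer"], record["sources"])
assert digest != bind("A different answer.", record["sources"])
```

Because the digest is a pure function of the answer and its sources, the same inputs always reproduce the same hash, which is the "deterministic & replayable" property in the comparison below.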
## See the difference
| Standard AI | AIEP |
|---|---|
| Generates answers | Produces evidence-bound outputs |
| No guaranteed sources | Real, retrievable sources |
| No audit trail | Cryptographic proof |
| Non-deterministic | Deterministic & replayable |
## Why AIEP exists
Most AI systems generate answers by prediction.
- They can be confident but wrong
- They rarely provide verifiable sources
- There is no reliable audit trail
AIEP changes that by requiring evidence before output.
## What AIEP is
AIEP (Architected Instruction & Evidence Protocol) is a protocol for producing evidence-bound outputs.
It is not a model.
It defines how systems:
- retrieve data
- validate sources
- bind outputs to evidence
- enforce deterministic behaviour
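The four responsibilities above can be sketched as a pipeline. Everything below is illustrative (the function names, the validation rule, and the digest construction are assumptions, not part of the AIEP specification):

```python
import hashlib
import json

def retrieve(query: str) -> list[str]:
    # Stand-in for real retrieval: candidate source URLs for the query.
    return ["https://example.org/cpi-report", "https://dead.example/404"]

def validate(sources: list[str]) -> list[str]:
    # Stand-in for validation: keep only sources known to resolve to
    # real, retrievable artefacts.
    resolvable = {"https://example.org/cpi-report"}
    return [s for s in sources if s in resolvable]

def bind(answer: str, sources: list[str]) -> str:
    # Deterministic digest over the answer and its validated sources.
    payload = json.dumps({"answer": answer, "sources": sorted(sources)},
                         sort_keys=True, separators=(",", ":"))
    return "0x" + hashlib.sha256(payload.encode("utf-8")).hexdigest()

def govern(query: str, answer: str) -> dict:
    sources = validate(retrieve(query))
    if not sources:
        # Deterministic behaviour is enforced by refusing rather than
        # guessing when no admissible evidence survives validation.
        return {"answer": None, "sources": [], "hash": None}
    return {"answer": answer, "sources": sources,
            "hash": bind(answer, sources)}

output = govern("Is inflation rising?", "Inflation rose 0.3% month-on-month.")
```

Each stage is a pure function of its inputs, so replaying the pipeline on the same query and sources reproduces the same output and hash.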
## AIEP inference
AIEP does not simply attach sources to answers.
It governs how evidence is retrieved, validated, weighted, and used to produce outputs.
Each response is:
- evidence-bound
- confidence-scored
- reproducible
- fail-closed when admissible evidence is missing
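"Fail-closed" means the system refuses rather than guesses when admissible evidence is missing. A minimal sketch of that contract (the tier names `verified` and `refused` are assumptions; the real AIEP tiering may be richer):

```python
def respond(query: str, evidence: list[str]) -> dict:
    """Return an evidence-bound response, or refuse outright.

    Two hypothetical confidence tiers: "verified" when admissible
    evidence is present, "refused" when it is not.
    """
    if not evidence:
        # Fail closed: no admissible evidence, no generated answer.
        return {"answer": None, "confidence": "refused", "sources": []}
    return {
        "answer": f"Answer to {query!r}, grounded in {len(evidence)} source(s)",
        "confidence": "verified",
        "sources": evidence,
    }

print(respond("Is inflation rising?", [])["confidence"])  # refused
```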
## AIEP guarantees
Every governed response:
- includes an Evidence Rail
- references real source artefacts
- carries a confidence tier
- is deterministic and replayable
- is fail-closed if evidence is insufficient
- is machine-discoverable via `/.well-known/aiep/`
## Core components
- Hub — developer gateway
- Piea — AIEP reasoning engine
- Mirror — machine-readable web layer
- GENOME SDK — integration toolkit
## What you can build
- AI systems that prove their outputs
- Applications with audit-grade reasoning
- Workflows requiring regulatory alignment
- Systems that do not silently hallucinate
## Machine-readable by design

```
/.well-known/aiep/
/.well-known/aiep/schema
```
These allow:
- AI agents to discover capabilities
- Systems to validate evidence structures
- Cross-platform interoperability
Direct endpoints: `/.well-known/aiep/index.json` · `/.well-known/aiep/schema` · `/.well-known/aiep/metadata.json`
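The discovery step an agent performs can be sketched as follows (the index document shown is hypothetical; on a live deployment, `/.well-known/aiep/schema` defines the actual shape):

```python
import json
from urllib.parse import urljoin

def discovery_urls(origin: str) -> dict[str, str]:
    """Resolve the AIEP well-known endpoints against a site origin."""
    base = urljoin(origin, "/.well-known/aiep/")
    return {
        "index": urljoin(base, "index.json"),
        "schema": urljoin(base, "schema"),
        "metadata": urljoin(base, "metadata.json"),
    }

urls = discovery_urls("https://example.org")
print(urls["index"])  # https://example.org/.well-known/aiep/index.json

# A hypothetical index document an agent might fetch from that URL:
index = json.loads('{"protocol": "aiep", "capabilities": ["verify"]}')
assert "verify" in index["capabilities"]
```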
## Start building
Explore Hub → View GitHub → Get Started →