# GPT vs AIEP
Same question. Two different systems.
This page shows what changes when you apply AIEP evidence governance — using the same question, side by side.
## The question
“Is inflation in the UK currently rising?”
## Standard AI response
> Yes, UK inflation is currently rising. The Consumer Prices Index (CPI)
> showed an increase in recent months, driven by energy prices and food costs.
> Analysts expect this trend to continue through the next quarter.
What you get:
- An answer
- No sources
- No hash
- No audit trail
- No way to verify whether any of this is current, accurate, or reproducible
What happens if this is wrong: Nothing. There is no record that it was ever produced.
## AIEP-governed response
```python
from aiep.genome import verify

result = verify("Is inflation in the UK currently rising?")
print(result)
```
```json
{
  "answer": "UK CPI rose 3.4% in the 12 months to February 2026, up from 3.1% in January.",
  "sources": [
    {
      "url": "https://www.ons.gov.uk/economy/inflationandpriceindices/bulletins/consumerpriceinflation/february2026",
      "retrieved": "2026-04-12T09:14:00Z",
      "content_hash": "sha256:4f2a1e8c9b0d3f7a6c8e2d5b1a9f4c3e8b7d2a1f6e9c4b0d3a7f5e8c2b1d4a0",
      "tls_verified": true,
      "integrity": "PASS"
    }
  ],
  "hash": "sha256:7da3d0cf50986a44d34dfd66e46d54b26d6685d508dfdada80f79153c855d7e8",
  "confidence": "verified",
  "evidence_rail": "AIEP-EVIDENCE-RAIL-001",
  "replayable": true,
  "fail_closed": false
}
```
What you get:
- A sourced, dated answer drawn from a real ONS publication
- Cryptographic hash of the source at retrieval time
- TLS integrity confirmation
- A hash of the response itself — reproducible by any third party
- A confidence tier: `verified`
- A replayable evidence trail
What happens if this is wrong: The hash mismatch is detectable. The session is persisted. The exact source is pinned and checkable.
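That mismatch detection does not require the SDK. The sketch below shows the idea: recompute a SHA-256 digest over the response payload and compare it to the pinned value. The canonicalisation used here (sorted keys, compact separators) is an assumption for illustration; AIEP's actual hashing scheme is not specified on this page.

```python
import hashlib
import json


def response_hash(payload: dict) -> str:
    """Digest over a canonical JSON form of the payload.

    NOTE: sorted keys + compact separators is an assumed canonicalisation,
    not necessarily the scheme AIEP uses.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def matches(payload: dict, pinned: str) -> bool:
    """True only if recomputing the digest reproduces the pinned hash."""
    return response_hash(payload) == pinned


# Any edit to the payload changes the digest, so tampering is detectable.
original = {"answer": "UK CPI rose 3.4% in the 12 months to February 2026."}
pinned = response_hash(original)
tampered = {"answer": "UK CPI fell 3.4% in the 12 months to February 2026."}
assert matches(original, pinned)
assert not matches(tampered, pinned)
```

Because the digest is deterministic, any third party holding the same payload can reproduce it independently.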
## The difference
| | Standard AI | AIEP |
|---|---|---|
| Answer | ✅ | ✅ |
| Real sources | ❌ | ✅ |
| Source hash | ❌ | ✅ |
| TLS integrity check | ❌ | ✅ |
| Response hash | ❌ | ✅ |
| Confidence tier | ❌ | ✅ |
| Replayable | ❌ | ✅ |
| Audit trail | ❌ | ✅ |
| Fail-closed if evidence missing | ❌ | ✅ |
## Why this matters
A standard AI assistant produces text. It may be right. It may be confidently wrong. There is no way to tell from the output alone.
An AIEP-governed system produces evidence-bound outputs. Every answer is tied to a real artefact, committed to a hash, and independently verifiable. If evidence is insufficient, the system fails closed — it does not guess.
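The fail-closed behaviour can be sketched in a few lines. The function and error names below are hypothetical (only the `"integrity": "PASS"` field appears in the sample response above); this illustrates the pattern, not the GENOME SDK's internals:

```python
class EvidenceError(Exception):
    """Raised instead of returning an answer the evidence cannot support."""


def governed_answer(question: str, answer: str, sources: list[dict]) -> dict:
    # Fail closed: no sources, or any failed integrity check, means no answer.
    if not sources:
        raise EvidenceError(f"no evidence retrieved for: {question!r}")
    if any(s.get("integrity") != "PASS" for s in sources):
        raise EvidenceError(f"evidence failed integrity check for: {question!r}")
    return {"answer": answer, "sources": sources, "confidence": "verified"}
```

The key design choice is that the missing-evidence path raises rather than returning a best guess, so an unsupported answer can never reach the caller silently.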
This is the difference between a system that generates and a system that proves.
## Try it
```bash
pip install aiep-genome-sdk
```
→ Full quickstart
→ GENOME SDK on GitHub
→ What is AIEP?