P256 — AIEP — Provenance-Bound Knowledge Cache
Applicant: Neil Grassby
Classification: Patent Application — Confidential
Priority: Claims priority from GB2519711.2 filed 20 November 2025
Architecture Layer: AIEP Phase 2 Knowledge Retrieval Layer
Framework Context
[0001] This specification operates within an AIEP environment as defined in GB2519711.2 and GB2519798.9. The present specification defines a high-speed knowledge cache that stores frequently accessed knowledge items with their complete provenance chains, enabling fast retrieval of both the knowledge content and its evidential basis without requiring a full ledger traversal on each access.
Field of the Invention
[0002] The present invention relates to provenance-bound knowledge caching in evidence-bound AI knowledge management, providing fast knowledge access whilst preserving each item's chain of evidence.
Background
[0003] Evidence-bound reasoning requires that every knowledge item accessed during a reasoning session carry its provenance chain — the chain of evidence artefacts from which it was derived. Reconstructing provenance chains from the storage layer on each access is expensive. A purpose-built cache that stores knowledge items together with their provenance chains accelerates evidence-bound reasoning without sacrificing provenance integrity.
Summary of the Invention
[0004] The invention provides a Provenance-Bound Knowledge Cache (PBKC) that: maintains a bounded in-memory cache of knowledge items from the LTM (P208), CWSG (P200), and abstraction library, each stored alongside a compressed provenance chain; evicts least-recently-used items with provenance compaction on eviction; validates cache entry freshness against the evidence ledger Merkle root on each access; and invalidates cache entries when the underlying evidence artefacts are modified or retracted.
ASCII Architecture
Reasoning Session Knowledge Request
                    |
                    v
+---------------------------------------------+
|  Provenance-Bound Knowledge Cache (PBKC)    |
|                                             |
|  Cache lookup (item + provenance chain)     |
|  Freshness validation (ledger root check)   |
|  Cache hit  → return item + provenance      |
|  Cache miss → fetch from LTM/CWSG           |
|               + load provenance chain       |
|               + admit to cache              |
|  Eviction: LRU + provenance compaction      |
+---------------------+-----------------------+
                      |
                      v
    Knowledge item + provenance chain
    → Reasoning session (no full ledger traversal)
Detailed Description
[0005] Cache Entry Structure. Each cache entry holds: the knowledge item content; a compressed provenance chain (using RLCC, P254); the evidence ledger Merkle root at the time of caching; and a staleness flag.
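The cache entry record of paragraph [0005] can be sketched as a simple data structure. This is illustrative only: the specification defines the record's contents but not a concrete layout, and the field names below are assumptions.

```python
from dataclasses import dataclass


@dataclass
class CacheEntry:
    """One PBKC entry: knowledge content plus its evidential basis."""
    content: bytes                  # knowledge item content
    provenance_rlcc: bytes          # provenance chain compressed with RLCC (P254)
    ledger_root_at_caching: bytes   # evidence ledger Merkle root when cached
    stale: bool = False             # staleness flag, cleared on re-verification
```

On this layout, admission captures the current ledger root alongside the compressed chain, so a later access can test freshness without touching the ledger.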
[0006] Freshness Validation. On each cache access, the item’s ledger root at caching is compared to the current ledger root. If the root has advanced and the item’s underlying evidence has been flagged as modified or retracted, the entry is invalidated and the item is re-fetched with an updated provenance chain.
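The freshness check of paragraph [0006] reduces to a two-stage test: a cheap root comparison, then an evidence-modification check only when the root has advanced. A minimal sketch, in which `Entry` and the `evidence_flagged` predicate are hypothetical stand-ins for the cache entry record and the ledger's modified/retracted lookup:

```python
from dataclasses import dataclass


@dataclass
class Entry:
    ledger_root_at_caching: bytes
    stale: bool = False


def validate_freshness(entry, current_root, evidence_flagged):
    """Return the entry if still usable, or None if it must be re-fetched."""
    # Root unchanged since caching: the provenance chain is still current.
    if entry.ledger_root_at_caching == current_root:
        return entry
    # Root has advanced: check whether any artefact in this entry's
    # provenance chain has been flagged as modified or retracted.
    if evidence_flagged(entry):
        entry.stale = True   # set staleness flag; caller re-fetches
        return None
    return entry             # ledger grew, but this evidence is untouched
```

Note the asymmetry: a ledger root that has merely advanced does not by itself invalidate the entry; only a flagged modification or retraction of the underlying evidence does.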
[0007] Provenance Compaction. On eviction, the provenance chain is further compacted using the RLCC codec before the entry is written to the cold store, reducing storage overhead for infrequently accessed items.
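The LRU eviction path of paragraph [0007] can be sketched as follows. This is a schematic only: `zlib` stands in for the RLCC codec (P254), and a plain dict stands in for the cold store.

```python
import zlib
from collections import OrderedDict


class LRUWithCompaction:
    """Bounded LRU cache that compacts provenance on eviction (sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> (content, provenance_chain)
        self.cold_store = {}           # stand-in for the cold store

    def admit(self, key, content, provenance):
        self.entries[key] = (content, provenance)
        self.entries.move_to_end(key)  # mark as most recently used
        while len(self.entries) > self.capacity:
            victim, (c, prov) = self.entries.popitem(last=False)  # LRU victim
            # Further compact the provenance chain before the cold-store write.
            self.cold_store[victim] = (c, zlib.compress(prov, 9))
```

The design point is that compaction cost is paid once, at eviction time, rather than on the hot access path.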
[0008] Cache Warming. At session start, the PBKC pre-warms cache entries for the N highest-utility knowledge items (P232 scores) relevant to the session’s active goals.
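The session-start warming of paragraph [0008] is a top-N selection over utility-scored, goal-relevant items. A minimal sketch, assuming `utility` maps item identifiers to P232 scores and `relevant` holds the identifiers matching the session's active goals (both hypothetical interfaces):

```python
def warm_cache(admit, utility, relevant, n):
    """Pre-warm the N highest-utility items relevant to the session."""
    ranked = sorted((k for k in utility if k in relevant),
                    key=utility.__getitem__, reverse=True)
    for key in ranked[:n]:
        admit(key)   # fetch item + provenance chain and admit to cache
```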
Technical Effect
[0009] The invention provides fast access to distilled knowledge items whilst preserving provenance traceability and evidence freshness guarantees, eliminating the latency cost of full provenance chain reconstruction on each access. By validating cache entry freshness using ledger Merkle root comparison on each access, the cache detects evidence retraction and modification events that would invalidate cached knowledge without requiring per-access evidence ledger traversal. Session-start cache warming for high-utility items reduces cold-start latency at the beginning of computationally intensive reasoning sessions.
Claims
1. A method of providing provenance-bound knowledge caching in an evidence-bound artificial intelligence system, comprising the steps of: (a) storing each cache entry as a record comprising: the knowledge item content; a compressed provenance chain encoded using the Reasoning Lineage Compression Codec; the evidence ledger Merkle root at the time of caching; and a staleness flag; (b) on each cache access, comparing the entry's caching-time ledger Merkle root to the current ledger Merkle root; if the root has advanced, checking whether any evidence artefact in the provenance chain has been modified or retracted; if so, invalidating the entry and re-fetching the knowledge item with an updated provenance chain; (c) on cache eviction, applying further provenance chain compaction using the RLCC codec before writing to the cold store, reducing cold-store overhead for infrequently accessed items; (d) at session start, pre-warming cache entries for the N highest-utility knowledge items relevant to the session's active goals, ranked by Knowledge Utility Scoring Engine scores.
2. The method of claim 1, wherein the evidence modification check at step (b) uses Merkle membership proofs to confirm that specific artefacts in the provenance chain remain current ledger members.
3. The method of claim 1, wherein the staleness flag is set on invalidation and cleared only when the entry has been re-fetched and its provenance chain re-verified.
4. The method of claim 1, wherein N for cache warming is configurable per session class, with higher N for sessions with known compute-intensive reasoning requirements.
5. The method of claim 1, wherein the cache exposes a provenance query API returning the full provenance chain of any cached item, enabling downstream components to verify evidentiary lineage without re-fetching from the ledger.
6. A provenance-bound knowledge cache for an evidence-bound artificial intelligence system, comprising: a cache store holding knowledge items with compressed provenance chains and caching-time ledger roots; a freshness validator performing Merkle-root-based evidence modification detection; an eviction compactor applying the RLCC codec to evicted entries; and a session-start cache warmer.
7. A computer-readable medium carrying instructions for implementing the method of any preceding method claim.
Abstract
A provenance-bound knowledge cache for evidence-bound artificial intelligence stores knowledge items together with their RLCC-compressed provenance chains and the evidence ledger Merkle root captured at caching time. On each access, entry freshness is validated by comparing the caching-time root to the current root and checking for evidence artefact retraction or modification, invalidating affected entries. Evicted entries are further compacted before cold-store writing. At session start, the N highest-utility knowledge items relevant to the session’s active goals are pre-warmed into cache.
Dependencies
- P208
- P200
- P232
- P237
- P254