
P236 — AIEP — Federated Knowledge Trust Weighting Engine

Applicant: Neil Grassby
Classification: Patent Application — Confidential
Priority: Claims priority from GB2519711.2 filed 20 November 2025
Architecture Layer: AIEP Phase 2 Support Layer


Framework Context

[0001] This specification operates within an AIEP environment as defined in GB2519711.2 and GB2519798.9. The present specification defines the mechanism by which the AIEP system assigns differential trust weights to knowledge artefacts received from federated nodes, based on the trust history, governance alignment, and evidence quality record of the source node.


Field of the Invention

[0002] The present invention relates to federated knowledge trust assessment and differential weighting for multi-node evidence-bound AI systems.


Background

[0003] Federated AIEP nodes share evidence artefacts under the Federated Knowledge Protocol (P230). Knowledge received from remote nodes must be integrated into the local world state with an appropriate trust weight reflecting the reliability of the source. Without differential trust weighting, all remote evidence is treated as equivalent, regardless of source quality differences.


Summary of the Invention

[0004] The invention provides a Federated Knowledge Trust Weighting Engine (FKTWE) that computes an inter-node trust weight for each federated knowledge source. Trust weight computation considers: attestation history (the track record of successful trust attestation cycles for the source node, P229); evidence accuracy record (proportion of admitted evidence artefacts that have not been subsequently challenged or retracted); governance alignment coefficient (similarity between source node’s governance policy hash and local policy hash); and evidence ledger integrity (no detected anomalies in the source’s ledger Merkle root sequence).

[0005] Knowledge artefacts received from a source are stored with the computed trust weight as metadata. Reasoning sessions weight evidence contributions in proportion to source trust weights when integrating multi-source evidence.
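A minimal sketch of the integration step described in [0005]. The artefact metadata shape (a `trust_weight` field alongside an evidential `value`) and the trust-weighted-average combination rule are illustrative assumptions; the specification only requires that contributions be weighted in proportion to source trust weights.

```python
def integrate_evidence(contributions: list[dict]) -> float:
    """Combine multi-source evidence, scaling each artefact's contribution
    by the trust_weight stored in its metadata ([0005]).

    Assumed shape: [{"trust_weight": float, "value": float}, ...].
    Here the combination rule is a trust-weighted average (an assumption).
    """
    total_weight = sum(c["trust_weight"] for c in contributions)
    if total_weight == 0.0:
        # All sources untrusted (e.g. governance contradiction): no signal.
        return 0.0
    weighted_sum = sum(c["trust_weight"] * c["value"] for c in contributions)
    return weighted_sum / total_weight
```

Under this rule an artefact from a zero-trust source contributes nothing, and a high-trust source dominates a low-trust one reporting conflicting values.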


ASCII Architecture

Federated Knowledge Artefact (FKP, P230)
               |
               v
+--------------------------------------------+
| Federated Knowledge Trust Weighting Engine |
|   (FKTWE)                                  |
|                                            |
|  Attestation history lookup (P229)         |
|  Evidence accuracy record lookup           |
|  Governance alignment scoring              |
|  Ledger integrity check                    |
|  Composite trust weight computation        |
+----------------------+---------------------+
                       |
                       v
       Artefact admitted with trust_weight metadata
       → Reasoning session uses trust_weight in
         evidence integration

Detailed Description

[0006] Attestation History Weight. For each source node, the FKTWE queries the Federated Trust System (P229) for the node’s consecutive successful attestation count. Higher counts produce higher attestation history weights. First-contact nodes with zero history receive minimum trust weight (PROVISIONAL).
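A sketch of the mapping in [0006], assuming a linear ramp that saturates at a fixed count. The `saturation` constant and the numeric value of the PROVISIONAL floor are illustrative assumptions; the specification only requires that higher consecutive counts yield higher weights and that first-contact nodes receive the minimum.

```python
def attestation_history_weight(consecutive_successes: int,
                               saturation: int = 20,
                               provisional_floor: float = 0.05) -> float:
    """Map a node's consecutive successful attestation count (P229) to [0, 1].

    Zero history -> PROVISIONAL minimum; weight then rises with the count
    and saturates at 1.0 once `saturation` consecutive cycles are reached.
    Both constants are assumptions, not values from the specification.
    """
    if consecutive_successes <= 0:
        return provisional_floor  # first-contact node: PROVISIONAL minimum
    ramp = (1.0 - provisional_floor) * consecutive_successes / saturation
    return min(1.0, provisional_floor + ramp)
```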

[0007] Evidence Accuracy Record. The FKTWE maintains a per-source record of: total artefacts received; artefacts subsequently challenged by local evidence; artefacts retracted or corrected by the source. Accuracy record weight = 1 - (challenged + retracted) / total.
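The accuracy formula in [0007] can be written directly; the only addition here is the handling of a source with no admitted artefacts yet, which is an assumption (the PROVISIONAL path is covered by the attestation component).

```python
def accuracy_record_weight(total: int, challenged: int, retracted: int) -> float:
    """Accuracy record weight = 1 - (challenged + retracted) / total ([0007]).

    A source with zero admitted artefacts has no record; returning 0.0 for
    that case is an assumption outside the specification's formula.
    """
    if total <= 0:
        return 0.0
    return max(0.0, 1.0 - (challenged + retracted) / total)
```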

[0008] Governance Alignment Coefficient. The similarity between the source’s most recently attested governance policy hash and the local governance policy hash is evaluated. An exact match yields maximum alignment; divergent policies yield reduced alignment weight. Policies in contradiction with local policy yield zero alignment weight.
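A sketch of [0008]. The specification fixes the two extremes (exact hash match yields maximum alignment; contradiction yields zero) but does not define the intermediate function for merely divergent policies, so the single `divergence_penalty` constant below is an illustrative assumption, as is the boolean contradiction flag.

```python
def governance_alignment(local_policy_hash: str,
                         peer_policy_hash: str,
                         contradicts_local: bool,
                         divergence_penalty: float = 0.5) -> float:
    """Governance alignment coefficient ([0008]).

    Exact policy-hash match -> 1.0; contradiction with local policy -> 0.0;
    divergent-but-compatible policies -> a reduced weight (assumed constant).
    """
    if contradicts_local:
        return 0.0  # contradictory policy: zero alignment, per [0008]
    if peer_policy_hash == local_policy_hash:
        return 1.0  # exact match: maximum alignment
    return divergence_penalty  # divergent: reduced alignment (assumption)
```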

[0009] Composite Trust Weight. Composite weight = (0.3 * attestation_history) + (0.4 * accuracy_record) + (0.2 * governance_alignment) + (0.1 * ledger_integrity). Weights range from 0.0 (untrusted) to 1.0 (fully trusted).
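The composite formula in [0009] combines the four component weights with the stated coefficients; the clamp to [0.0, 1.0] simply enforces the stated range.

```python
def composite_trust_weight(attestation_history: float,
                           accuracy_record: float,
                           governance_alignment: float,
                           ledger_integrity: float) -> float:
    """Composite trust weight per [0009]:
    0.3 * attestation + 0.4 * accuracy + 0.2 * alignment + 0.1 * ledger,
    clamped to the stated [0.0, 1.0] range."""
    w = (0.3 * attestation_history
         + 0.4 * accuracy_record
         + 0.2 * governance_alignment
         + 0.1 * ledger_integrity)
    return min(1.0, max(0.0, w))
```

Because the accuracy record carries the largest coefficient (0.4), a source whose artefacts are frequently challenged loses trust faster than one with a merely short attestation history.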



Technical Effect

[0010] The invention provides evidence-grounded, dynamically updated trust weighting for knowledge sourced from federated nodes in multi-node AIEP deployments. By computing trust weights from verifiable historical data — attestation records, accuracy history, governance alignment, and ledger integrity — rather than static reputation scores, the engine ensures that trust weights reflect demonstrated behaviour. By embedding trust weights in admitted artefact metadata and applying them in downstream reasoning, the engine enables reasoning sessions to appropriately discount evidence from nodes with poor accuracy records or governance misalignment.


Claims

  1. A computer-implemented method for federated knowledge trust weighting, the method comprising: (a) computing an attestation history weight for each peer node from the proportion of successful attestation cycles over a rolling history window; (b) computing an accuracy record weight from the proportion of non-challenged and non-retracted artefacts received from the node; (c) computing a governance alignment coefficient by comparing the peer node’s attested governance policy hash to the local policy, with exact match yielding maximum alignment and contradictory policies yielding zero; (d) computing a ledger integrity weight from the track record of consistent Merkle root advancement; and (e) computing a composite trust weight as a weighted sum of component weights, embedding the composite weight in the metadata of each artefact admitted from the peer node.

  2. The method of claim 1, wherein governance policy contradiction between the peer node and the local node sets the composite trust weight to zero, preventing any evidence from the contradicting node from contributing to reasoning sessions.

  3. The method of claim 1, wherein trust weights are applied in reasoning sessions by scaling the evidential contribution of each artefact by its source node trust weight.

  4. The method of claim 1, wherein trust weight computations are updated on each successful or failed attestation cycle and on each challenged or retracted artefact event.

  5. The method of claim 1, wherein composite trust weights are stored persistently in admitted artefact metadata and used for all subsequent reasoning sessions citing the artefact.

  6. A Federated Knowledge Trust Weighting Engine comprising: one or more processors; memory storing per-node accuracy records, attestation history logs, and trust weight metadata store; wherein the processors are configured to execute the method of claim 1.

  7. A non-transitory computer-readable medium storing instructions that, when executed by a processor, implement the method of claim 1.


Abstract

A federated knowledge trust weighting engine for multi-node evidence-bound AI deployments computes composite trust weights for peer AIEP nodes from four evidence-grounded dimensions: attestation history, evidence accuracy record, governance policy alignment, and ledger integrity. Composite trust weights are embedded in admitted artefact metadata and applied in downstream reasoning to scale evidential contributions proportionally to demonstrated peer reliability. Governance policy contradiction produces a zero trust weight, excluding contradicting nodes from contributing to any reasoning session.

Dependencies