
Assurance / Meta Engines (AME)

1. Overview

Assurance / Meta Engines (AME) are a Tier-0 capability class of Micro Engines (MICE) responsible for evaluating trust, confidence, and admissibility of signals produced elsewhere in the platform.

AME engines:

  • do not compute metrics,
  • do not transform values,
  • do not infer new facts,

but instead operate on lineage, quality, and evidence metadata to produce assurance meta-signals.

AME is a capability class, not an identity.
Concrete engines are identified by MEID, versioned via ZAR, and emit authoritative assurance USOs.


2. Position in the MICE Model

Tier-0: Outside normal computation

AME engines sit above and across the normal computational tiers.

They are not part of Tier-1 / Tier-2 / Tier-3 pipelines. Instead, they:

  • observe execution outputs,
  • evaluate evidence,
  • emit meta-signals used for disclosure, audit, and governance.

  • Capability type: AME
  • Common sub-capabilities: CFIL (confidence filtering), proof checks
  • Tier: Tier-0 (Assurance layer)
  • Typical classification: Assurance-Engine
  • Versioning: ZAR (CMI-level)
  • Lineage impact: emits new USO meta-signals

3. Core Assurance Capability: CFIL

Within AME, the most common capability is CFIL — Confidence Filtering.

CFIL engines:

  • assign confidence scores,
  • evaluate evidence completeness,
  • apply explicit acceptance thresholds,
  • flag or gate downstream usage.

CFIL evaluates trustworthiness, not correctness.
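A CFIL-style gate can be pictured as a pure function over a signal's confidence metadata. The sketch below is illustrative only: the signal dict shape, the `min_confidence` parameter, and the output field names are assumptions, not platform API.

```python
def cfil_gate(signal: dict, min_confidence: float = 0.8) -> dict:
    """Apply an explicit acceptance threshold to a signal's confidence
    metadata and flag it for downstream usage. The input signal is never
    mutated (AME engines are non-mutating)."""
    confidence = signal.get("confidence", 0.0)
    return {
        "type": "assurance.cfil",
        "subject": signal["id"],
        "confidence": confidence,
        "threshold": min_confidence,
        "admissible": confidence >= min_confidence,
    }

meta = cfil_gate({"id": "uso:calc:ghg.scope1.total@2025", "confidence": 0.93})
# meta["admissible"] is True at the default 0.8 threshold
```

Note that the gate emits a new meta-signal rather than editing the evaluated signal, which is exactly the non-mutating contract described above.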


4. Design Principles

  1. Evidence-Driven
    All assurance decisions are derived from explicit evidence: lineage, validation results, provenance, and quality metadata.

  2. Deterministic Evaluation
    Given the same inputs and rules, assurance outcomes are reproducible.

  3. Non-Mutating
    AME engines never modify underlying data values.

  4. Separation from Policy
    AME engines assess trust; policy engines decide what to do with that assessment.


5. Scope of Responsibility

What AME engines do

  • Aggregate confidence indicators from upstream engines
  • Assign composite confidence or assurance scores
  • Evaluate completeness of lineage and evidence
  • Flag signals as:
    • admissible
    • provisional
    • restricted
    • non-assurable
  • Emit assurance meta-signals consumable by governance and reporting layers

Typical evidence sources:

  • EXTR confidence scores
  • SENS quality flags
  • TSER anomaly annotations
  • VALI pass/fail results
  • Provenance completeness (CMI / ZAR coverage)
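Aggregating those evidence sources into a composite score and one of the four flags could look like the following sketch. The evidence keys, weights, and thresholds are invented for illustration; real rulesets are declared, versioned artifacts.

```python
def classify_assurance(evidence: dict) -> tuple[float, str]:
    """Combine upstream evidence indicators into a composite score and
    map it onto the four assurance flags."""
    score = (
        0.5 * evidence.get("extr_confidence", 0.0)             # EXTR confidence
        + 0.2 * (1.0 if evidence.get("vali_passed") else 0.0)  # VALI pass/fail
        + 0.2 * evidence.get("provenance_coverage", 0.0)       # CMI / ZAR coverage
        + 0.1 * (0.0 if evidence.get("tser_anomalies") else 1.0)  # TSER annotations
    )
    if score >= 0.9:
        flag = "admissible"
    elif score >= 0.7:
        flag = "provisional"
    elif score >= 0.4:
        flag = "restricted"
    else:
        flag = "non-assurable"
    return round(score, 3), flag

score, flag = classify_assurance({
    "extr_confidence": 0.9,
    "vali_passed": True,
    "provenance_coverage": 1.0,
    "tser_anomalies": [],
})
# flag == "admissible"
```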

6. What AME Engines Do Not Do

AME engines explicitly do not:

  • ❌ compute values or indicators (CALC, SCORE)
  • ❌ validate schemas or domain logic (VALI)
  • ❌ transform or normalize data (TRFM, NORM)
  • ❌ estimate missing data (SEM)
  • ❌ enforce routing or policy decisions (ZSSR, ZADIF)

They produce assurance signals, not actions.


7. Inputs

AME engines consume:

  • fully executed signals with lineage
  • confidence and quality metadata
  • declared assurance rulesets
  • threshold definitions

Inputs typically originate from:

  • EXTR
  • SENS
  • TSER / TSYN
  • VALI
  • AGGR
  • SCORE (for composite indicators)

8. Outputs

AME engines emit:

  • assurance meta-signals (new USOs)
  • confidence or admissibility scores
  • assurance classifications (e.g. audit-ready, provisional)
  • provenance references to evaluated evidence

These outputs are consumed by:

  • reporting and disclosure pipelines
  • audit and verification workflows
  • governance and trust propagation layers
  • ZSSR as decision inputs, not routing rules
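Put together, an emitted assurance meta-signal might carry a shape like the one below. Every field name here is hypothetical, not a normative schema; the point is that classification, confidence, evidence references, and engine identity travel together.

```python
# Hypothetical assurance meta-signal (illustrative field names only).
assurance_uso = {
    "type": "assurance.meta",
    "subject": "uso:calc:ghg.scope3.total@2025",
    "engine": {"meid": "meid:ame.cfil.example", "version": "1.4.0"},
    "classification": "audit-ready",
    "confidence": 0.97,
    "evidence": [
        "uso:vali:schema.check@2025",
        "uso:extr:supplier.data@2025",
    ],
}
```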

9. Audit & Provenance

Every AME execution records:

  • evaluated evidence set
  • assurance rules and thresholds applied
  • resulting confidence score or classification
  • engine version and execution context

This enables:

  • regulator and auditor inspection,
  • replayable assurance decisions,
  • cross-partner trust verification.

Assurance outcomes are inspectable artifacts, not opaque judgments.
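Because every execution records its evidence, rules, and result, a decision can be replayed deterministically. The sketch below assumes a simplified record layout; the point is that re-running the same rules over the same evidence must reproduce the recorded outcome.

```python
def evaluate(evidence: dict, rules: dict) -> str:
    """Toy deterministic evaluation: same inputs, same outcome."""
    if evidence["confidence"] >= rules["min_confidence"]:
        return "audit-ready"
    return "provisional"

# A recorded AME execution (layout assumed for illustration).
record = {
    "evidence": {"confidence": 0.95},
    "rules": {"min_confidence": 0.9},
    "result": "audit-ready",
    "engine_version": "1.4.0",
}

# An auditor replays the evaluation and compares against the recorded result.
replayed = evaluate(record["evidence"], record["rules"])
assert replayed == record["result"]
```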


10. Link Engines (LINK)

Link Engines (LINK) are a specialized class of Assurance / Meta engines responsible for asserting, validating, and publishing explicit relationships between signals, executions, and entities.

LINK engines do not compute values and do not assign confidence scores. Instead, they turn implicit lineage and contextual relationships into explicit, auditable, machine-queryable graph facts.

LINK engines make structural dependencies first-class citizens in the system.


In complex reporting and assurance pipelines, many relationships exist implicitly:

  • which computations support a disclosure,
  • which signals were covered by an assurance statement,
  • how metrics relate across periods,
  • which engines or versions a reported number depends on.

LINK engines make these relationships explicit.

They answer questions like:

  • What exactly supports this disclosed number?
  • Which metrics were assured — and which were not?
  • What breaks if this engine or input changes?
  • How do two reported values relate across time or scope?

LINK engines perform deterministic structural reasoning and emit linkage meta-signals.

They can:

  • Assert derivation relationships
  • Bind assurance coverage to concrete signals
  • Declare temporal continuity or comparison
  • Publish dependency graphs for impact analysis
  • Validate consistency between declared lineage and observed structure
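The last capability, checking declared lineage against observed structure, can be sketched as a set comparison. The stamp and link shapes here are invented for the example; only the `derived_from` relation is taken from the JSON examples below in this section.

```python
def lineage_mismatches(declared_parents: set[str], links: list[dict]) -> dict:
    """Compare lineage parents declared in CMI stamps against the targets
    of observed `derived_from` link assertions."""
    observed = {l["target"] for l in links if l["rel"] == "derived_from"}
    return {
        "undeclared": sorted(observed - declared_parents),  # linked but not in lineage
        "unlinked": sorted(declared_parents - observed),    # in lineage but never linked
    }

result = lineage_mismatches(
    {"uso:calc:a@v1", "uso:calc:b@v1"},
    [{"rel": "derived_from", "target": "uso:calc:a@v1"}],
)
# result == {"undeclared": [], "unlinked": ["uso:calc:b@v1"]}
```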

LINK engines operate on:

  • USO identifiers,
  • CMI lineage stamps,
  • registry metadata (MEID, version, domain),
  • contextual scope (period, entity, framework).

LINK engines explicitly do not:

  • ❌ compute numeric values (CALC)
  • ❌ validate data correctness (VALI)
  • ❌ assign confidence or trust scores (CFIL)
  • ❌ route or orchestrate execution (ZSSR)
  • ❌ infer meaning or fill gaps (SEM)

They reason about relationships, not data values.


10.4 Example 1 — Disclosure Dependency Linking

Problem
A CSRD disclosure metric (e.g. Total Scope 3 Emissions, FY2025) depends on many upstream computations. That dependency is often implicit and hard to audit.

LINK Engine Action
A LINK engine consumes:

  • the disclosure-level signal,
  • its lineage stamps,
  • registry metadata.

It emits explicit dependency links:

dep-link-example.json
{
  "type": "link.assertion",
  "subject": "uso:disclosure:esrs.e1.scope3.total@2025",
  "links": [
    { "rel": "derived_from", "target": "uso:calc:ghg.scope3.total@v2" },
    { "rel": "covers_scope", "value": "Scope3" },
    { "rel": "covers_categories", "value": ["Cat1", "Cat2", "Cat3"] }
  ]
}

This allows the platform (and auditors) to deterministically answer:

“What exactly supports this disclosed number?”


10.5 Example 2 — Assurance Coverage Binding

Problem
An assurance statement claims that “Scope 1–3 emissions are assurance-ready”. But which exact metrics, versions, and periods does that cover?

LINK Engine Action
A LINK engine binds the assurance statement to a precise coverage set:

assurance-example.json
{
  "type": "link.coverage",
  "assurance_statement": "uso:aae:assurance.statement@v1",
  "covers": [
    "uso:calc:ghg.scope1.total@2025",
    "uso:calc:ghg.scope2.total@2025",
    "uso:calc:ghg.scope3.total@2025"
  ],
  "excludes": [
    "uso:calc:ghg.scope3.cat15@draft"
  ]
}

This makes assurance inspectable, verifiable, and non-ambiguous.


10.6 Example 3 — Temporal Continuity & Comparison

Problem
An auditor asks how FY2025 emissions relate to FY2024 emissions.

LINK Engine Action
The LINK engine asserts explicit temporal relationships:

relationships-example.json
{
  "type": "link.temporal",
  "relation": "period_comparison",
  "current": "uso:calc:ghg.scope1.total@2025",
  "previous": "uso:calc:ghg.scope1.total@2024",
  "delta": "uso:calc:delta.ghg.scope1@v1"
}

This establishes what was compared, not just that a delta exists.


10.7 Example 4 — Change Impact Analysis

Problem
A calculation engine (e.g. a Scope 3 category engine) is updated. Which downstream outputs are affected?

LINK Engine Role
Because dependency links are explicit, the platform can query:

“Find all signals linked (directly or indirectly) to this MEID.”

This enables:

  • safe upgrades,
  • controlled recomputation,
  • audit-safe change management.
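Over explicit dependency links, that query reduces to a graph traversal. The sketch below assumes a simple adjacency map from each node to the signals that depend on it; the identifiers are invented for the example.

```python
from collections import deque

def impacted(dependents: dict[str, list[str]], root: str) -> set[str]:
    """Breadth-first walk collecting everything directly or indirectly
    dependent on `root` (e.g. a MEID)."""
    seen: set[str] = set()
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in dependents.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

graph = {
    "meid:calc.scope3.cat1": ["uso:calc:ghg.scope3.total@v2"],
    "uso:calc:ghg.scope3.total@v2": ["uso:disclosure:esrs.e1.scope3.total@2025"],
}
# impacted(graph, "meid:calc.scope3.cat1") yields both downstream signals
```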

10.8 Relationship to CFIL and AAE

  • CFIL evaluates trustworthiness (confidence thresholds).
  • LINK evaluates structural correctness and dependency truth.
  • AAE may invoke LINK engines as part of assurance orchestration.

LINK engines provide the structural substrate that makes assurance claims precise.


10.9 Canonical Identification

  • Engine Type: LINK
  • USO Code: LINK
  • Category: Assurance / Meta Engine (AME sub-capability)
  • Layer: Computation Hub / Assurance Layer

LINK engines emit meta-signals that describe relationships, not values.


Without LINK:

  • dependencies are implicit,
  • assurance is vague,
  • impact analysis is manual.

With LINK:

  • relationships are explicit,
  • assurance is provable,
  • federation and verification become scalable.

LINK engines turn the system from a pipeline into a verifiable graph.


11. Interaction with ZSSR

ZSSR does not treat AME engines as normal computation steps.

Instead:

  • AME outputs are consumed as signals,
  • ZSSR rules may reference assurance results,
  • routing decisions remain explicitly rule-driven.

Example:

  • “Only route to disclosure if assurance.level >= audit_ready”
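A ZSSR rule like this can be read as a predicate over an ordered set of assurance levels. The ordering below is an assumption for illustration, not the platform's canonical scale.

```python
# Assumed ordering, weakest to strongest.
LEVELS = ["non-assurable", "restricted", "provisional", "audit_ready"]

def may_route_to_disclosure(assurance_level: str) -> bool:
    """Only route to disclosure if assurance.level >= audit_ready."""
    return LEVELS.index(assurance_level) >= LEVELS.index("audit_ready")
```

The assurance result supplies the level; the routing decision itself stays in the rule, which is the separation between AME and ZSSR described here.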

AME informs governance — it does not replace it.


12. Example Assurance / Meta Engines


13. Summary

Assurance / Meta Engines:

  • make trust explicit and machine-readable,
  • separate confidence from computation,
  • enable audit-grade disclosure pipelines,
  • form the backbone of regulatory assurance in ZAYAZ.

They are the platform’s truth referees, not its calculators.

