
Normalization Engines

1. Overview

Normalization Engines (NORM) are micro-engines that rescale or normalize numeric values to enable comparability across entities, time periods, or benchmarks.

NORM engines preserve the semantic meaning of a metric while expressing it relative to a declared reference base (e.g. revenue, production volume, area, FTE).

Normalization is a mathematical operation, not a representational one.

NORM is an engine type in the MICE taxonomy; concrete normalizers are implemented as MEID-registered micro engines.
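The idea of expressing a value relative to a declared reference base can be sketched in a few lines. This is an illustrative sketch only; the `NormalizedValue` type, `normalize` function, and rule identifier are assumptions, not part of the MEID registry.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NormalizedValue:
    """Hypothetical payload: a value expressed relative to a declared base."""
    value: float       # the normalized (intensity) figure
    base_type: str     # declared reference base, e.g. "revenue"
    base_value: float  # the denominator actually used
    rule_id: str       # identifier of the normalization rule applied


def normalize(value: float, base_type: str, base_value: float,
              rule_id: str) -> NormalizedValue:
    """Express an absolute value relative to a declared base."""
    if base_value <= 0:
        raise ValueError("normalization base must be positive")
    return NormalizedValue(value / base_value, base_type, base_value, rule_id)
```

Note that the base is carried in the output rather than discarded: the semantic meaning of the metric is preserved, only its scale changes.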


2. Design Principles

  1. Comparability Over Absolutes
    NORM engines exist to support comparison, benchmarking, and analysis — not raw reporting.

  2. Explicit Reference Bases
    Every normalization explicitly declares its denominator or baseline.

  3. Deterministic Scaling
    Given identical inputs and bases, normalization always yields the same result.

  4. Lineage Preservation
    Normalized outputs always reference their source values and bases.
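The determinism and lineage principles above can be illustrated together: identical inputs always yield identical results, and the output records where both operands came from. The signal identifiers and field names below are illustrative assumptions, not ZAYAZ APIs.

```python
def normalize_with_lineage(value: float, value_signal_id: str,
                           base: float, base_signal_id: str,
                           rule_id: str) -> dict:
    """Deterministic scaling that records where both operands came from."""
    return {
        "value": value / base,
        "rule_id": rule_id,
        # Lineage preservation: links back to source and base signals.
        "sources": {
            "value_signal": value_signal_id,
            "base_signal": base_signal_id,
        },
    }


# Deterministic scaling: the same inputs and bases give the same result.
a = normalize_with_lineage(120.0, "sig:energy:2024", 60.0,
                           "sig:area:2024", "norm.per_m2.v1")
b = normalize_with_lineage(120.0, "sig:energy:2024", 60.0,
                           "sig:area:2024", "norm.per_m2.v1")
```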


3. What NORM engines do

NORM engines perform value rescaling without changing semantic intent:

  • Convert absolute values to intensities
  • Normalize metrics for peer comparison
  • Express metrics relative to capacity, revenue, or activity

Typical examples:

  • tCO₂e per revenue
  • energy per m²
  • water per unit produced
  • waste per FTE

Normalization may be applied:

  • after CALC,
  • after AGGR,
  • or at disclosure-preparation stage.
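The intensity examples above amount to dividing an absolute metric by its declared base. A minimal sketch, with metric names and figures chosen purely for illustration:

```python
# Absolute values (e.g. produced by CALC or AGGR) and declared bases.
absolutes = {"tco2e": 1200.0, "energy_kwh": 90_000.0}
bases = {"revenue_meur": 48.0, "floor_area_m2": 3000.0}

# Intensities: each absolute value expressed relative to a declared base.
intensities = {
    "tco2e_per_meur": absolutes["tco2e"] / bases["revenue_meur"],
    "kwh_per_m2": absolutes["energy_kwh"] / bases["floor_area_m2"],
}
```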

4. What NORM engines do not do

NORM engines explicitly do not perform:

  • ❌ Primary computation (CALC)
  • ❌ Unit or format conversion (TRFM)
  • ❌ Aggregation across entities or time (AGGR)
  • ❌ Metadata tagging or classification (META)
  • ❌ Estimation or extrapolation (SEM)
  • ❌ Policy or routing decisions (ZSSR)

Normalization changes scale, not structure, meaning, or policy.


5. Inputs

NORM engines consume:

  • Validated numeric values (absolute or aggregated)
  • A declared normalization base, such as:
    • revenue
    • production volume
    • floor area
    • headcount
  • Optional benchmark or peer group context

Inputs must be numerically valid and temporally aligned.
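The input contract (numerically valid, temporally aligned) can be sketched as a guard a NORM engine might apply before scaling. The function and its period representation are assumptions for illustration:

```python
import math


def check_inputs(value: float, value_period: str,
                 base: float, base_period: str) -> bool:
    """Reject inputs that violate the NORM input contract."""
    if math.isnan(value) or math.isnan(base):
        raise ValueError("inputs must be numerically valid")
    if base <= 0:
        raise ValueError("normalization base must be positive")
    if value_period != base_period:
        raise ValueError("value and base must be temporally aligned")
    return True
```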


6. Outputs

NORM engines produce:

  • Normalized value payload
  • Explicit reference metadata, including:
    • base type
    • base value
    • normalization rule identifier
  • Provenance links to source signals and base signals

Normalized outputs are typically consumed by:

  • reporting and disclosure pipelines,
  • benchmarking and analytics,
  • scenario and scoring engines.
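A hypothetical shape for a NORM output payload, following the fields listed above (base type, base value, rule identifier, provenance); the concrete keys and signal identifiers are assumptions, not a ZAYAZ schema:

```python
payload = {
    "value": 0.4,  # normalized value
    "reference": {
        "base_type": "production_volume",
        "base_value": 250.0,
        "rule_id": "norm.per_unit.v1",  # normalization rule identifier
    },
    "provenance": {
        "source_signal": "sig:waste:2024",  # link to the source value
        "base_signal": "sig:units:2024",    # link to the base value
    },
}
```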

7. Position in the computation chain

Normalization typically occurs after computation and aggregation:

INPUT
→ VALI
→ CALC
→ TRFM
→ AGGR
→ NORM
→ META
→ Reporting / Analysis

This ordering ensures that normalization operates on trusted, consolidated values.
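The ordering above can be captured as a simple sequence check; the stage names come from the chain shown, while the helper function is an illustrative stand-in:

```python
# Canonical stage ordering from the computation chain (Reporting omitted).
PIPELINE = ["VALI", "CALC", "TRFM", "AGGR", "NORM", "META"]


def stage_order(stage: str) -> int:
    """Position of a stage in the canonical ordering."""
    return PIPELINE.index(stage)
```

NORM runs on consolidated values: after aggregation, before metadata tagging.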


8. Canonical identification

  • Engine Type: NORM
  • USO role: expresses values relative to a declared base
  • Category: Micro Engine (MICE)

NORM identifiers appear in:

  • signal lineage (CMI stamps),
  • audit trails,
  • explainability layers.

9. Registry view

All micro engines tagged as norm are listed in the MEID registry.

10. Design rationale

Normalization is not a formatting concern — it is a semantic scaling operation.

By isolating normalization into its own engine type, ZAYAZ ensures:

  • clear separation between absolute truth and relative insight,
  • auditability of intensity metrics,
  • consistent treatment of denominators across reports,
  • safe reuse of normalization logic across domains.

11. Related engine types

  • CALC — Calculation Engines
  • AGGR — Aggregation Engines
  • TRFM — Transformation Engines
  • META — Metadata Enrichment Engines
  • SCEN — Scenario Engines
