
Sensor Logic Units

1. Overview

Sensor Logic Units (SENS) are a capability class of Micro Engines (MICE) responsible for device-aware conditioning and plausibility control of sensor/IoT data streams.

SENS engines:

  • apply sensor-specific constraints (calibration, saturation, expected operating envelope),
  • produce conditioned measurements plus quality metadata,
  • act as the trust boundary between physical acquisition and downstream pipelines.

SENS is a type, not an identity.
Concrete engines are identified by MEID, versioned via ZAR, and leave CMI lineage stamps at runtime.


2. Position in the MICE Model

Capability, not identity

A Sensor Logic Unit:

  • does not define which engine runs (MEID does),
  • does not define version/deployment (ZAR does),
  • does define what kind of responsibility the engine carries (device-bound plausibility + conditioning).

Multiple MEIDs may implement SENS capabilities across:

  • energy meters (electricity, fuels),
  • water meters and discharge monitors,
  • environmental sensors (air quality, temp, humidity),
  • industrial telemetry.

Typical placement

Dimension                 Value
Capability type           SENS
Common tiers              Pre-Tier (upstream of Tier-1–3 computation)
Typical classification    Contract-Engine (deterministic rule application)
Versioning                ZAR (CMI-level)
Lineage impact            Appends CMI stamp to USO

3. Design Principles

  1. Device awareness
    SENS engines are explicitly aware of sensor type, measurement method, calibration state, and acquisition context.

  2. Physical plausibility
    Readings are evaluated against physical and operational constraints (ranges, rate-of-change, saturation, impossible states).

  3. Deterministic conditioning
    Conditioning rules must be deterministic and auditable (no probabilistic estimation).

  4. Separation of concerns
    SENS is device-bound; generic time-series processing belongs in TSER/TSYN; inference/gap filling belongs in SEM.


4. Scope of Responsibility

What SENS engines do

  • Validate sensor-originated payload structure at the device layer (minimal checks)
  • Apply device-specific plausibility rules:
    • min/max bounds
    • rate-of-change limits
    • “stuck sensor” detection
    • calibration expiry handling
    • saturation / clipping detection
  • Condition raw sensor noise (basic device-level smoothing/cleanup when explicitly defined)
  • Annotate signals with:
    • quality flags
    • confidence scores (rule-derived, not ML-inferred)
    • device health indicators
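The plausibility rules above can be sketched as deterministic checks over a window of readings. This is an illustrative sketch only: the type names, flag strings, and thresholds (`RuleProfile`, `rate_exceeded`, `stuck_window`) are hypothetical, not part of the SENS specification.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    value: float
    timestamp: float  # seconds since epoch

@dataclass
class RuleProfile:
    min_value: float
    max_value: float
    max_rate_per_sec: float  # rate-of-change limit
    stuck_window: int        # identical consecutive readings before "stuck" flag

def apply_plausibility(readings: list[Reading], profile: RuleProfile) -> list[set[str]]:
    """Return a set of rule-derived quality flags per reading."""
    flags: list[set[str]] = []
    for i, r in enumerate(readings):
        f: set[str] = set()
        # min/max bounds
        if not (profile.min_value <= r.value <= profile.max_value):
            f.add("out_of_range")
        # rate-of-change limit against the previous sample
        if i > 0:
            dt = r.timestamp - readings[i - 1].timestamp
            if dt > 0 and abs(r.value - readings[i - 1].value) / dt > profile.max_rate_per_sec:
                f.add("rate_exceeded")
        # "stuck sensor": the last N values are identical
        window = readings[max(0, i - profile.stuck_window + 1): i + 1]
        if len(window) == profile.stuck_window and len({w.value for w in window}) == 1:
            f.add("stuck")
        flags.append(f)
    return flags
```

Because every check is a pure function of the readings and the rule profile, the same inputs always yield the same flags, which is what makes the conditioning replayable under audit.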

Typical sources:

  • smart meters
  • BMS / SCADA feeds
  • industrial IoT devices
  • environmental monitoring systems
  • telemetry from operational equipment

5. What SENS Engines Do Not Do

SENS engines intentionally do not:

  • ❌ compute ESG metrics (CALC)
  • ❌ aggregate across entities or time windows (AGGR)
  • ❌ perform source-agnostic time-series conditioning (TSER)
  • ❌ synchronize multiple series to a common grid (TSYN)
  • ❌ estimate missing values probabilistically (SEM)
  • ❌ apply governance/routing decisions (ZSSR / assurance)

6. Inputs

Typical inputs include:

  • raw sensor measurements or streams
  • device metadata:
    • sensor_type / model
    • calibration coefficients + last calibration timestamp
    • expected operating ranges
    • unit and resolution
  • acquisition context:
    • sampling interval expectations
    • site/asset binding
    • known downtime windows
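As a rough illustration of how these inputs might be carried together, the sketch below groups device metadata and acquisition context into two records. All field names are assumptions for illustration; the actual input contract is engine-specific.

```python
from dataclasses import dataclass

@dataclass
class DeviceMetadata:
    sensor_type: str
    model: str
    calibration_coefficients: tuple[float, float]  # e.g. (gain, offset)
    last_calibration: str                          # ISO-8601 timestamp
    operating_range: tuple[float, float]           # expected min/max
    unit: str
    resolution: float

@dataclass
class AcquisitionContext:
    sampling_interval_s: float                 # expected cadence
    site_id: str                               # site/asset binding
    downtime_windows: list[tuple[str, str]]    # known outages (ISO-8601 pairs)
```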

7. Outputs

SENS engines emit:

  • conditioned measurement payload (values preserved unless explicit clipping/cleaning rule applies)
  • quality metadata, e.g.:
    • quality.status (ok|warning|invalid)
    • quality.flags (saturation|drift|gap|stuck|out_of_range)
    • quality.confidence (0–1, rule-derived)
  • provenance links:
    • source device ID
    • applied rule profile reference
    • execution metadata for lineage
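A minimal sketch of such an output envelope, assuming a simple rule-derived confidence scheme (deduct a fixed amount per flag); the exact schema and scoring are engine-specific, and the key names are illustrative only.

```python
def make_output(value: float, flags: list[str], device_id: str, profile_ref: str) -> dict:
    """Wrap a conditioned value with quality metadata and provenance links."""
    status = "invalid" if "out_of_range" in flags else ("warning" if flags else "ok")
    confidence = max(0.0, 1.0 - 0.25 * len(flags))  # rule-derived, not ML-inferred
    return {
        "value": value,  # preserved unless an explicit clipping/cleaning rule applied
        "quality": {"status": status, "flags": flags, "confidence": confidence},
        "provenance": {"device_id": device_id, "rule_profile": profile_ref},
    }
```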

Outputs are commonly consumed by:

  • TSER (generic time-series cleaning, anomaly detection)
  • VALI (schema + domain constraints at the data-contract level)
  • TRFM (unit normalization where required)
  • CALC (domain computations once conditioned)

8. Audit & Provenance

Every SENS execution records:

  • device identifiers + calibration context
  • applied plausibility ruleset ID/version
  • anomalies/flags and their trigger conditions
  • execution timestamp and engine version (CMI/ZAR)

This enables:

  • forensic traceability (“why was this reading excluded/flagged?”)
  • reproducible conditioning under audit
  • partner/verifier portability when shipped with registry proofs
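One way such an execution record could be made portable is to hash its deterministic fields so a verifier can detect tampering during replay. The real CMI/ZAR stamp format is not specified here, so every key below is an assumption.

```python
import hashlib
import json
import time

def audit_record(device_id: str, calibration_ref: str, ruleset: str,
                 ruleset_version: str, flags: list[str], engine_version: str) -> dict:
    """Build an execution record with a content digest over its deterministic fields."""
    record = {
        "device_id": device_id,
        "calibration_ref": calibration_ref,
        "ruleset": ruleset,
        "ruleset_version": ruleset_version,
        "flags": flags,
        "engine_version": engine_version,
        "executed_at": time.time(),
    }
    # Hash everything except the wall-clock timestamp, so identical
    # executions yield identical digests under replay.
    payload = json.dumps({k: v for k, v in record.items() if k != "executed_at"},
                         sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record
```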

9. Interaction with ZSSR

ZSSR does not route to “SENS” directly unless explicitly designed to.

Preferred routing is:

  • rule-based selection of a specific MEID for a known device type, or
  • tag-based fallback selection (e.g. sens + energy + meter), if no explicit mapping exists.

Tags are selectors and hints, not identity.
Explicit rules still take precedence over tag-driven discovery.


10. Example Sensor Logic Units


11. Summary

Sensor Logic Units:

  • make physical acquisition trust explicit and auditable,
  • protect downstream computation from unreliable device behavior,
  • remain deterministic and replayable by design,
  • support federation by exporting portable conditioning provenance.

They are the device-bound trust gate of the Computation Hub.

