The technical foundations of the NEPA platform are documented here to support procurement review, technical due diligence, and regulatory evaluation. Every architectural decision serves the platform's core requirement: deterministic, auditable, replay-verifiable inspection output.
The NEPA engine is designed to produce identical output for identical input, independent of execution time, hardware state, or concurrent load. This property is enforced by architectural constraints, not by convention.
Determinism is not a feature that can be toggled. It is the foundational constraint from which the entire processing architecture is derived. Any deviation from deterministic behavior is treated as a critical fault.
Every inspection run is cryptographically bound to a unique, immutable system state. The binding mechanism ensures that findings cannot be attributed to an ambiguous or unverifiable configuration.
Every engine build is SHA-256 fingerprinted before deployment. The fingerprint is embedded in the audit record and cannot be altered post-hoc.
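A build fingerprint of this kind can be sketched as a streaming SHA-256 over the deployed artifact. The function name and chunk size here are illustrative, not the platform's actual tooling:

```python
import hashlib

def fingerprint_build(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 fingerprint of an engine build artifact.

    Streams the file in chunks so large binaries are hashed without
    loading them fully into memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

The resulting hex digest is what would be embedded in the audit record; any post-hoc modification of the binary changes the digest and is therefore detectable.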
The complete analytical parameter set — thresholds, lane weights, fusion coefficients — is hashed at run-start and recorded in the audit chain.
Camera intrinsics, LiDAR extrinsics, and thermal calibration records are linked to the run ID before the first frame is processed.
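The binding of engine fingerprint, parameter hash, and calibration records to a run ID can be illustrated with a canonical-JSON manifest. The field names and `bind_run` helper are assumptions for the sketch; hashing the sorted JSON form makes the digest independent of key order:

```python
import hashlib
import json

def canonical_hash(obj) -> str:
    """Hash an object over its canonical JSON form (sorted keys, no whitespace)."""
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def bind_run(run_id: str, engine_fingerprint: str,
             parameters: dict, calibration_ids: list) -> dict:
    """Build a run manifest that binds findings to one verifiable system state."""
    manifest = {
        "run_id": run_id,
        "engine_fingerprint": engine_fingerprint,
        "config_hash": canonical_hash(parameters),
        "calibration_ids": sorted(calibration_ids),
    }
    # Seal the manifest itself so any later alteration is detectable.
    manifest["manifest_hash"] = canonical_hash(manifest)
    return manifest
```

Because every field is fixed before the first frame is processed, a finding can always be traced back to exactly one engine build, one parameter set, and one set of calibration records.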
Analytical configuration is treated as a versioned artifact subject to formal change control. No configuration change may enter a production run without a corresponding version record.
Configuration governance is not optional. Any inspection run that cannot be attributed to a known, versioned configuration state is flagged as non-compliant and excluded from the auditable evidence set.
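The exclusion rule above can be expressed as a simple partition of runs against a registry of known configuration versions. This is a minimal sketch, assuming each run record carries a `config_hash` field and the registry is a set of approved hashes:

```python
def filter_auditable(runs: list, version_registry: set) -> tuple:
    """Partition runs into auditable and non-compliant sets.

    A run is auditable only if its configuration hash maps to a known,
    versioned configuration state; everything else is flagged.
    """
    auditable, flagged = [], []
    for run in runs:
        if run.get("config_hash") in version_registry:
            auditable.append(run)
        else:
            flagged.append(run)
    return auditable, flagged
```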
The replay system allows any inspection finding to be independently re-derived from preserved sensor data. Replay is not a debugging tool — it is a formal validation mechanism with its own audit trail.
Before replay begins, the system verifies that the requested engine version, configuration hash, and sensor calibration records match the stored run manifest. Any discrepancy halts the replay and generates a compliance incident.
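The precondition check can be sketched as a field-by-field comparison of the replay request against the stored run manifest. The field names follow the hypothetical manifest structure used here, not a documented NEPA schema:

```python
def verify_replay_preconditions(requested: dict, manifest: dict) -> list:
    """Return the list of fields that diverge from the stored run manifest.

    An empty list means replay may proceed; any entry should halt the
    replay and raise a compliance incident.
    """
    discrepancies = []
    for field in ("engine_fingerprint", "config_hash", "calibration_ids"):
        if requested.get(field) != manifest.get(field):
            discrepancies.append(field)
    return discrepancies
```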
The replay engine reconstructs every intermediate processing state at the byte level. Output is compared to stored findings using a structured diff that flags any divergence as a critical validation failure.
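A structured diff of this kind can be sketched as a keyed comparison of replayed findings against stored findings, reporting every divergent field with both values. The flat-dictionary shape of a findings record is an assumption for illustration:

```python
def structured_diff(replayed: dict, stored: dict) -> dict:
    """Compare replayed findings to stored findings field by field."""
    keys = set(replayed) | set(stored)
    divergent = {
        k: {"replayed": replayed.get(k), "stored": stored.get(k)}
        for k in keys
        if replayed.get(k) != stored.get(k)
    }
    # Any divergence at all is treated as a critical validation failure.
    return {"match": not divergent, "divergent_fields": divergent}
```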
Auditors receive a signed attestation package containing the run manifest, replay output, diff report, and chain seal verification. No internal tooling is required on the auditor side.
Every replay attempt is itself logged in the audit chain, including who initiated it, when, the result, and whether any discrepancies were found.
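Logging a replay attempt into a tamper-evident chain can be sketched with hash linking: each entry records who, when, and the result, plus the hash of the previous entry. The record fields and `append_replay_record` helper are illustrative assumptions:

```python
import hashlib
import json

def append_replay_record(chain: list, initiator: str, timestamp: str,
                         result: str, discrepancies: list) -> dict:
    """Append a replay-attempt record to a hash-linked audit chain."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "initiator": initiator,
        "timestamp": timestamp,
        "result": result,
        "discrepancies": discrepancies,
        "prev_hash": prev_hash,
    }
    # Seal the entry; altering any field breaks the chain from here forward.
    blob = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(blob).hexdigest()
    chain.append(entry)
    return entry
```

Because each entry commits to its predecessor's hash, removing or editing any historical replay record invalidates every later entry in the chain.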
All changes to the NEPA engine, configuration system, and evidence framework follow a structured change control process. No change reaches production without a signed release record and regression validation.
| Change Type | Control Mechanism | Status |
|---|---|---|
| Engine Binary Updates (core processing logic modifications) | Git-tagged signed release + automated regression suite + fingerprint update | Active |
| Configuration Contract Changes (analytical parameter modifications) | Version increment + structured diff + approval gate before deployment | Active |
| Audit Chain Schema Changes (evidence storage format modifications) | Schema versioning + migration validation + backward compatibility test | Active |
| Sensor Calibration Updates (hardware calibration record changes) | Certificate versioning + binding validation before next run | Active |
| Deployment Architecture Changes (infrastructure and topology modifications) | Architecture review board sign-off + integration test suite | In Review |
Detailed architecture documentation is available for procurement reviewers, technical due diligence teams, and regulatory evaluators upon request.