Node-RED Implementation Spec
MQTT-to-OPC-UA Protocol Bridge
SYS Real-Time Steel Product Tracking
Project: Siam Yamato Steel Vision Tracking
Prepared by: Appomax Engineering
Version: 1.0 | April 2026
Classification: Internal – Engineering
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | April 2026 | Appomax | Initial spec based on Node-RED Note v04 |
1. Purpose & Scope
This document is the implementation specification for the Node-RED component in the SYS Real-Time Steel Product Tracking system. It provides everything an engineer needs to build, test, and deploy the Node-RED flows that bridge Tapway SamurAI Copilot (MQTT) to Ignition Edge (OPC-UA).
1.1 Role of Node-RED
Node-RED acts as a stateless protocol bridge between the raw MQTT event stream from SamurAI Copilot and the OPC-UA tag tree hosted by Ignition Edge. Its responsibilities are strictly limited to:
• Subscribe to MQTT topics from SamurAI Copilot
• Parse and validate incoming JSON payloads
• Flatten nested structures into canonical event records
• Write normalized values to Ignition Edge OPC-UA tags (as an OPC-UA client)
• Maintain diagnostics counters for observability
1.2 What Node-RED Must NOT Do
To keep the architecture clean and maintainable, Node-RED must not take on responsibilities that belong to Ignition:
• No long-term data persistence — Ignition owns all historical data
• No tracking logic or state machine — Ignition handles product lifecycle
• No enterprise reporting or alerting
• No OPC-UA server hosting — Node-RED is an OPC-UA client only
• No forwarding of raw, dynamically shaped JSON — always normalize first
1.3 Architecture Overview
The data flow is unidirectional:
Tapway SamurAI Copilot
        |
  [MQTT Broker]
        |
  [Node-RED] ---- OPC-UA Client Write ----> [Ignition Edge OPC-UA Server]
  (parse, validate,                          (tag storage, historian,
   normalize)                                 state machine, HMI, reporting)
INFO: Node-RED does NOT host an OPC-UA server. Ignition Edge runs the OPC-UA server natively, and Node-RED connects to it as a client to write tag values.
2. Prerequisites & Key Concepts
2.1 Technology Stack
| Component | Technology | Role |
|-----------|------------|------|
| Detection Sensor | Tapway SamurAI Copilot | Vision AI detection of steel products at camera positions |
| Message Transport | MQTT Broker | Event delivery from SamurAI to Node-RED |
| Protocol Bridge | Node-RED | Parse MQTT, normalize, write to OPC-UA (this spec) |
| Tracking Backbone | Ignition Edge | OPC-UA server, historian, state machine, HMI |
2.2 Key Terminology
| Term | Meaning |
|------|---------|
| Detection Event | A single MQTT message from SamurAI when a product enters or exits a camera zone |
| Canonical Event | The normalized, flat JSON object that Node-RED produces from a raw detection event |
| Zone | A detection point in the production line (e.g., Cooling Bed Entry). One camera may cover multiple zones |
| EventSequence | An integer counter per zone that increments on every new event, used by Ignition to detect changes |
| EventPulse | A boolean toggle per zone that flips on every new event, as a secondary change-detection signal |
| OPC-UA Tag | A named data point in Ignition Edge's OPC-UA server that Node-RED writes to |
3. Processing Flow
The Node-RED flow processes each incoming MQTT message through a linear pipeline of 6 steps. Each step is a distinct function node (or group of nodes) in the flow.
3.1 Step 1: MQTT Subscribe
Subscribe to the SamurAI Copilot MQTT topic(s). The exact topic pattern will be provided by Tapway during integration setup.
| Parameter | Value | Notes |
|-----------|-------|-------|
| Broker | TBD — from Tapway | IP/hostname of MQTT broker |
| Port | 1883 (default) | Or 8883 for TLS |
| Topic | TBD — from Tapway | e.g., samurai/events/# or similar |
| QoS | 1 (at least once) | Ensures no missed messages; Node-RED handles dedup |
| Client ID | node-red-sys-bridge | Unique per deployment |
3.2 Step 2: JSON Parse + Validation
Parse the raw MQTT payload as JSON. If parsing fails, the message is dropped and diagnostics are updated.
3.2.1 Required Fields
Every valid message must contain the following fields. If any are missing, the message is treated as invalid:
• cameraID — identifies which camera sent the event
• uniqueEventID — globally unique identifier for deduplication
• eventTimeStamp — ISO 8601 timestamp from SamurAI
• inout — event direction (IN or OUT)
• At least one zone event object
3.2.2 Validation Failure Behavior
Be lenient: a bad message must never halt the flow. When validation fails:
• Log the raw payload to Diagnostics/LastRawPayload and the error to Diagnostics/LastParseError
• Increment InvalidPayloadCounter in System diagnostics
• Increment per-camera ParseErrorCount in camera diagnostics
• Continue processing the next message — never crash or stop the flow
NOTE: Validation failures should be rare if Tapway implements their schema correctly. These diagnostics are safety nets for debugging during integration, not expected runtime paths.
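The required-field check from Section 3.2.1 can be sketched as a function-node helper. Field names come from this spec; the assumption that zone event objects live under the `inout` key is inferred from the examples in Section 3.3 and must be confirmed against Tapway's schema:

```javascript
// Validate a parsed SamurAI payload. Returns { ok, errors } so the caller
// can route valid messages onward and log failures to
// Diagnostics/LastParseError. Field names per Section 3.2.1; the shape of
// the zone-event container ("inout") is an assumption.
function validateEvent(payload) {
  if (!payload || typeof payload !== "object") {
    return { ok: false, errors: ["payload is not a JSON object"] };
  }
  const errors = [];
  for (const field of ["cameraID", "uniqueEventID", "eventTimeStamp", "inout"]) {
    if (payload[field] === undefined || payload[field] === null) {
      errors.push(`missing required field: ${field}`);
    }
  }
  // At least one zone event object must be present.
  if (payload.inout && typeof payload.inout === "object" &&
      Object.keys(payload.inout).length === 0) {
    errors.push("no zone event objects in payload");
  }
  return { ok: errors.length === 0, errors };
}
```

On failure, the function node would increment the counters listed above and return `null` to stop the message from propagating.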
3.3 Step 3: Flatten / Normalize
The raw MQTT payload from SamurAI Copilot contains nested structures (e.g., inout.zone5.event_direction, model-1.className[], model-1.trackID[]). Node-RED must flatten these into a canonical event structure.
Do not forward nested JSON to Ignition. The OPC-UA tag tree expects flat, typed values.
3.3.1 Canonical Event Structure
For each normalized event, Node-RED should produce an object with the following fields:
| Field | Type | Source | Description |
|-------|------|--------|-------------|
| event_id | String | uniqueEventID | Unique event identifier from Tapway |
| event_ts | DateTime | eventTimeStamp | ISO 8601 timestamp |
| site_id | String | config | e.g., "APPOMAX" or "SYS" |
| subgroup_id | String | payload | Template/subgroup identifier |
| camera_id | String | cameraID | Camera identifier |
| camera_ip | String | payload | Camera IP address |
| camera_stage | Int | config/payload | Production stage number |
| zone_id | String | mapping | Appomax zone ID, e.g., C05_Z01 |
| event_type | String | inout | IN or OUT |
| object_direction | String | event_direction | LEFT, RIGHT, UP, DOWN |
| class_name | String | className | Detected object class |
| class_id | Int | classID | Numeric class identifier |
| model_name | String | model name | AI model identifier |
| model_version | String | model version | AI model version string |
| logic_type | String | config | INOUT, IN_ONLY, OUT_ONLY |
| track_id | Int | trackID | Tapway track ID |
| unique_track_id | String | derived | Unique track identifier |
| occupancy | Int | occupancy count | Number of objects in zone |
| zone_occupancy | Boolean | derived | True if occupancy > 0 |
| dwell_duration | Float | dwell time | Time object spent in zone (seconds) |
| speed | Float | speed | Object speed if available |
| confidence | Float | confidence | Detection confidence score 0–1 |
| image_filename | String | image path | Path to captured image |
| clip_filename | String | clip path | Path to captured video clip |
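A normalization sketch for one zone event, producing a subset of the canonical fields above. The raw payload shape (`inout.<zone>`, per-zone occupancy and direction fields) is inferred from the examples in Section 3.3 and is an assumption until Tapway's schema documentation is available:

```javascript
// Flatten one zone event from a raw SamurAI payload into a (partial)
// canonical event per Section 3.3.1. zoneCfg comes from the zone mapping
// table (Section 6). The raw field layout is an assumption.
function normalizeZoneEvent(raw, sourceZone, zoneCfg) {
  const zone = raw.inout[sourceZone];
  const occupancy = Number(zone.occupancy ?? 0);
  return {
    event_id: raw.uniqueEventID,
    event_ts: raw.eventTimeStamp,
    camera_id: raw.cameraID,
    zone_id: zoneCfg.zoneID,           // mapped Appomax ID, e.g. C05_Z01
    event_type: zone.event_type,       // IN or OUT
    object_direction: zone.event_direction,
    occupancy: occupancy,
    zone_occupancy: occupancy > 0,     // derived boolean
  };
}
```

The full implementation would extend this object with the model, track, confidence, and media fields from the table; the flat shape is the contract, not the exact code.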
3.4 Step 4: Deduplication
Before writing to OPC-UA, check if this event_id has already been processed. Duplicate events must be silently dropped — they should not be logged to Ignition or update any OPC-UA tags.
3.4.1 Dedup Mechanism
• Maintain an in-memory cache of recently processed uniqueEventIDs
• Recommended: per-zone cache of the last 200 event IDs (covers ~20 seconds at peak load)
• If an incoming event_id exists in the cache, drop the message silently
• Increment DuplicateEventCounter in System diagnostics and per-camera DuplicateCount
NOTE: The dedup cache is in-memory only. On Node-RED restart, the cache is empty and a few duplicates may pass through. This is acceptable — Ignition's historian handles idempotent writes via the uniqueEventID.
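The per-zone cache described above can be sketched as a Set for O(1) lookups plus an array for FIFO eviction; the 200-entry limit matches the recommendation in Section 3.4.1:

```javascript
// Per-zone dedup cache (Section 3.4.1): remembers the last N event IDs.
const DEDUP_CACHE_SIZE = 200;

function makeDedupCache(maxSize = DEDUP_CACHE_SIZE) {
  const seen = new Set();   // fast membership test
  const order = [];         // insertion order, for eviction
  return {
    // Returns true if eventId was already processed (i.e., a duplicate).
    checkAndAdd(eventId) {
      if (seen.has(eventId)) return true;
      seen.add(eventId);
      order.push(eventId);
      if (order.length > maxSize) seen.delete(order.shift());
      return false;
    },
  };
}
```

In a function node, one cache per zone would be kept in flow context (e.g., `flow.get`/`flow.set`), which matches the in-memory, restart-clears-it behavior described above.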
3.5 Step 5: Write to Ignition Edge OPC-UA Tags
For each valid, non-duplicate canonical event, Node-RED writes the normalized values to the corresponding OPC-UA tags in Ignition Edge. Node-RED connects as an OPC-UA client.
3.5.1 OPC-UA Client Configuration
| Parameter | Value | Notes |
|-----------|-------|-------|
| Endpoint | opc.tcp://&lt;ignition-ip&gt;:62541 | Ignition Edge OPC-UA server endpoint |
| Security Mode | TBD | None for development; Sign or SignAndEncrypt for production |
| Authentication | TBD | Anonymous or username/password per Ignition config |
| Tag Root Path | Appomax/SteelTracking/SamurAI/ | Root of all tags written by Node-RED |
3.5.2 Write Targets per Event
Each canonical event triggers writes to multiple tag groups. The write order should be:
1. Update Camera tags (e.g., Cameras/C05/LastEventID, LastEventTimestamp)
2. Update Zone live-state tags (e.g., Zones/C05/C05_Z01/LastEventType, Occupancy, etc.)
3. Read current EventSequence from Ignition, increment by 1, and write back
4. Toggle EventPulse (read current value, write the inverse)
5. Update Events branch if implemented (e.g., Events/C05/C05_Z01/EventID, etc.)
6. Update Diagnostics counters
INFO: EventSequence and EventPulse require a read-before-write pattern. Node-RED reads the current value from Ignition's OPC-UA server, modifies it, and writes it back. This ensures continuity across Node-RED restarts since Ignition persists the tag values.
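The read-before-write pattern splits cleanly into a pure helper that computes the next values and an async wrapper that performs the tag I/O. `readTag` and `writeTag` are hypothetical helpers standing in for the node-red-contrib-opcua read/write nodes:

```javascript
// Compute the next EventSequence/EventPulse values from the current ones
// (Section 3.5.2, steps 3-4). A missing sequence defaults to 0 so the
// first event after tag creation writes 1.
function nextSignals(current) {
  return {
    eventSequence: (current.eventSequence ?? 0) + 1,
    eventPulse: !current.eventPulse,
  };
}

// readTag/writeTag are assumed async helpers wrapping the OPC-UA client.
async function bumpZoneSignals(readTag, writeTag, zonePath) {
  const current = {
    eventSequence: await readTag(`${zonePath}/EventSequence`),
    eventPulse: await readTag(`${zonePath}/EventPulse`),
  };
  const next = nextSignals(current);
  await writeTag(`${zonePath}/EventSequence`, next.eventSequence);
  await writeTag(`${zonePath}/EventPulse`, next.eventPulse);
  return next;
}
```

Because the current values come from Ignition on every event, the counter and pulse survive Node-RED restarts, as the INFO note above explains.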
3.5.3 OPC-UA Write Failure Handling
If an OPC-UA write fails (Ignition down, network issue, tag path does not exist):
• Log the error to Node-RED's internal log and to Diagnostics/LastErrorMessage
• Do not buffer or retry indefinitely — Node-RED is stateless
• Retry the write once after a short delay (1–2 seconds). If it fails again, drop the event and continue
• Update System/Online to false if repeated failures indicate Ignition is unreachable
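The try-once, retry-once policy above can be sketched as a wrapper; `writeFn` is a hypothetical async helper around the OPC-UA write node, and the default delay matches the 1–2 second window from the spec:

```javascript
// Single-retry write (Section 3.5.3): attempt, wait, attempt once more,
// then drop the event and surface the error for
// Diagnostics/LastErrorMessage. Never retries indefinitely.
async function writeWithRetry(writeFn, tagPath, value, delayMs = 1500) {
  try {
    await writeFn(tagPath, value);
    return { ok: true, retried: false };
  } catch (firstErr) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    try {
      await writeFn(tagPath, value);
      return { ok: true, retried: true };
    } catch (secondErr) {
      return { ok: false, retried: true, error: String(secondErr) };
    }
  }
}
```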
3.6 Step 6: Update Diagnostics
After every processed message (valid or invalid, duplicate or not), update the relevant diagnostics tags in Ignition. This provides real-time observability for the integration team.
4. OPC-UA Tag Contract
This section defines the OPC-UA tag paths and data types that must exist in Ignition Edge before Node-RED can write to them. This serves as the interface contract between the Node-RED spec and the Ignition Edge spec.
INFO: Tag creation and UDT definition are the responsibility of the Ignition Edge engineer. This section documents what Node-RED expects to write to.
4.1 Tag Tree Root
All tags live under a common root path:
Appomax/SteelTracking/SamurAI/
System/
Cameras/
Zones/
Events/ (recommended)
Diagnostics/
4.2 System Tags
Global health and status indicators for the integration.
| Tag Path (under System/) | Type | Written By | Description |
|--------------------------|------|------------|-------------|
| Online | Boolean | Node-RED | True when Node-RED is connected to both MQTT and OPC-UA |
| MQTTConnected | Boolean | Node-RED | True when MQTT subscription is active |
| OPCServerStatus | String | Node-RED | Connection status to Ignition OPC-UA server |
| LastMQTTMessageTs | DateTime | Node-RED | Timestamp of last MQTT message received |
| LastNormalizedEventTs | DateTime | Node-RED | Timestamp of last successfully normalized event |
| NormalizedEventCounter | Int64 | Node-RED | Total events successfully normalized and written |
| InvalidPayloadCounter | Int64 | Node-RED | Total messages that failed validation |
| DuplicateEventCounter | Int64 | Node-RED | Total duplicate events dropped |
| OutOfOrderEventCounter | Int64 | Node-RED | Total events received out of timestamp order |
| LastRestartTs | DateTime | Node-RED | Timestamp of last Node-RED restart |
| SchemaVersion | String | Node-RED | Version of this spec (e.g., "1.0") |
| LastErrorMessage | String | Node-RED | Most recent error message for quick debugging |
4.3 Camera Tags
Per-camera context and status. Path pattern: Cameras/{CameraID}/
| Tag Path (under Cameras/C05/) | Type | Description |
|-------------------------------|------|-------------|
| CameraID | String | Camera identifier (e.g., "05") |
| CameraName | String | Display name (e.g., "CoolingBed_Entry_View") |
| CameraIP | String | IP address of camera |
| CameraStage | Int32 | Production stage number |
| SiteID | String | Site identifier |
| SubgroupID | String | Subgroup/template identifier |
| CamResolution_Height | Int32 | Camera resolution height in pixels |
| CamResolution_Width | Int32 | Camera resolution width in pixels |
| CamResolution_Channel | Int32 | Camera color channels |
| LastEventID | String | Last event ID processed from this camera |
| LastEventTimestamp | DateTime | Timestamp of last event from this camera |
| LastEventClipFilename | String | Path to last event's video clip |
| EventCount | Int64 | Total events received from this camera |
| Online | Boolean | Camera health status (see Section 5) |
4.4 Zone Tags
Per-zone live state and event detection signals. Path pattern: Zones/{CameraID}/{ZoneID}/
This is the most important tag group because it carries business meaning — each zone represents a detection point in the production line.
4.4.1 Zone Configuration Tags
| Tag Path (under Zones/C05/C05_Z01/) | Type | Description |
|-------------------------------------|------|-------------|
| ZoneSourceKey | String | Composite key for lookups |
| ZoneID | String | Appomax zone ID (e.g., C05_Z01) |
| ZoneName | String | Business name (e.g., CoolingBed_Entry) |
| SourceZoneName | String | Tapway's zone name (e.g., zone5) |
| LogicType | String | INOUT, IN_ONLY, or OUT_ONLY |
| ExpectedObjectDirection | String | Expected direction for this zone |
| INMeaning | String | Business meaning of IN event |
| OUTMeaning | String | Business meaning of OUT event |
4.4.2 Zone Live-State Tags
| Tag Path (under Zones/C05/C05_Z01/) | Type | Description |
|-------------------------------------|------|-------------|
| LastEventID | String | Most recent event ID in this zone |
| LastEventTimestamp | DateTime | Timestamp of most recent event |
| LastEventType | String | IN or OUT |
| LastObjectDirection | String | LEFT, RIGHT, UP, DOWN |
| LastUniqueTrackID | String | Track ID of last detected object |
| LastTrackID | Int32 | Numeric track ID |
| LastClassName | String | Class of last detected object |
| LastClassID | Int32 | Numeric class ID |
| LastConfidence | Float | Confidence score of last detection |
| OccupancyCount | Int32 | Number of objects currently in zone |
| IsOccupied | Boolean | True if OccupancyCount > 0 |
| DwellDuration | Float | Time last object spent in zone (seconds) |
| LastImageFilename | String | Image path of last event |
| LastClipFilename | String | Video clip path of last event |
4.4.3 Zone Event Detection Tags (Critical)
These tags are how Ignition detects that a new event has occurred, even if the event values look identical to the previous event:
| Tag Path | Type | Behavior | Description |
|----------|------|----------|-------------|
| EventSequence | Int64 | Read → Increment → Write | Monotonically increasing counter; Ignition triggers historian/logic on change |
| EventPulse | Boolean | Read → Toggle → Write | Flips true/false on every event; redundant change signal |
| EventCount | Int64 | Read → Increment → Write | Total events in this zone since system start |
| QualityStatus | String | Write | "GOOD" if latest event passed all validation checks |
INFO: EventSequence is the primary mechanism for Ignition to detect new events. Because Node-RED reads the current value from Ignition and increments it, the counter survives Node-RED restarts without persistence.
4.5 Events Branch (Recommended)
An optional but recommended branch that stores the last normalized event per zone as individual tags. This makes it easier for Ignition to read full event details for historian logging and query.
Path pattern: Events/{CameraID}/{ZoneID}/
| Tag Path (under Events/C05/C05_Z01/) | Type | Description |
|--------------------------------------|------|-------------|
| EventID | String | Event identifier |
| EventTimestamp | DateTime | Event timestamp |
| EventType | String | IN or OUT |
| ObjectDirection | String | Direction of detected object |
| ClassName | String | Detected class name |
| ClassID | Int32 | Numeric class ID |
| UniqueTrackID | String | Unique track identifier |
| TrackID | Int32 | Numeric track ID |
| Confidence | Float | Detection confidence |
| OccupancyCount | Int32 | Occupancy count at time of event |
| DwellDuration | Float | Dwell time in seconds |
| ImageFilename | String | Image file path |
| ClipFilename | String | Video clip file path |
| RawEventJSON | String | Full raw JSON payload for debugging |
| EventSequence | Int64 | Mirrors zone EventSequence |
| EventPulse | Boolean | Mirrors zone EventPulse |
4.6 Diagnostics Tags
Diagnostics are split into global (under Diagnostics/) and per-camera (under Diagnostics/{CameraID}/).
4.6.1 Global Diagnostics
| Tag Path (under Diagnostics/) | Type | Description |
|-------------------------------|------|-------------|
| LastRawTopic | String | MQTT topic of last received message |
| LastRawPayload | String | Raw JSON of last received message (for debugging) |
| LastParseStatus | String | OK, PARSE_ERROR, VALIDATION_ERROR |
| LastParseError | String | Error message if last parse failed |
| LastDuplicateEventID | String | Event ID of last dropped duplicate |
| LastDuplicateTimestamp | DateTime | When the last duplicate was detected |
| LastOutOfOrderEventID | String | Event ID of last out-of-order event |
| LastOutOfOrderTimestamp | DateTime | When the last out-of-order event was detected |
4.6.2 Per-Camera Diagnostics
Path pattern: Diagnostics/{CameraID}/
| Tag Path (under Diagnostics/C05/) | Type | Description |
|-----------------------------------|------|-------------|
| LastReceiveTs | DateTime | Last MQTT message timestamp from this camera |
| MessageCount | Int64 | Total messages received from this camera |
| ParseErrorCount | Int64 | Messages that failed validation |
| DuplicateCount | Int64 | Duplicate events dropped |
| OutOfOrderCount | Int64 | Out-of-order events detected |
| LastModelName | String | AI model name from last message |
| LastModelVersion | String | AI model version from last message |
5. Camera Online/Offline Detection
NOTE: This section depends on Tapway's MQTT broker capabilities. The approach must be confirmed with Tapway during integration setup.
5.1 Preferred: MQTT Last Will / Death Certificate
If Tapway's MQTT broker supports Last Will and Testament (LWT), each camera can publish a "death certificate" message when it disconnects. Node-RED subscribes to the LWT topic and sets Cameras/{CameraID}/Online = false immediately.
5.2 Fallback: Heartbeat Timeout
If LWT is not available, Node-RED infers camera status from event flow:
• Track the timestamp of the last received event per camera (Diagnostics/{CameraID}/LastReceiveTs)
• If no event is received from a camera within the configurable timeout (default: 60 seconds), set Cameras/{CameraID}/Online = false
• When a new event arrives from that camera, set Online = true
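The fallback check can be sketched as a pure function, driven periodically (e.g., by an Inject node) against the per-camera `LastReceiveTs` values; the 60-second default matches `CAMERA_TIMEOUT_SEC` in Section 8.2:

```javascript
// Heartbeat-timeout fallback (Section 5.2): decide online/offline per
// camera from the last receive time. lastReceiveTs maps cameraID to
// epoch milliseconds of the last MQTT message from that camera.
const CAMERA_TIMEOUT_SEC = 60;

function cameraStatuses(lastReceiveTs, nowMs, timeoutSec = CAMERA_TIMEOUT_SEC) {
  const statuses = {};
  for (const [cameraId, ts] of Object.entries(lastReceiveTs)) {
    // Online if the last message arrived within the timeout window.
    statuses[cameraId] = nowMs - ts <= timeoutSec * 1000;
  }
  return statuses;
}
```

The caller would write each result to `Cameras/{CameraID}/Online`, ideally only on change to avoid redundant OPC-UA writes.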
5.3 Decision Required
| Option | Pros | Cons | Status |
|--------|------|------|--------|
| MQTT LWT | Immediate detection, standard MQTT feature | Requires Tapway broker support | Pending Tapway confirmation |
| Heartbeat timeout | No Tapway dependency, works with any broker | Delayed detection (up to timeout period), false positives during low-activity periods | Available as fallback |
6. Zone Mapping Configuration
Node-RED must maintain a mapping between Tapway's zone names (SourceZoneName) and Appomax's business zone identifiers (ZoneID). This mapping determines which OPC-UA tag path to write to for each incoming event.
6.1 Mapping Table
Example mapping for Camera 05:
| Tapway SourceZoneName | Appomax ZoneID | ZoneName | LogicType |
|-----------------------|----------------|----------|-----------|
| zone5 | C05_Z01 | CoolingBed_Entry | INOUT |
The full mapping table will be populated during integration setup with Tapway and the SYS plant operations team.
6.2 Naming Convention
• Camera OPC-UA node path: use camera ID (e.g., C05)
• Camera display/label: use descriptive name (e.g., CoolingBed_Entry_View)
• Zone ID: {CameraID}_{ZoneSequence} (e.g., C05_Z01)
• SourceZoneName: preserved exactly as Tapway sends it (e.g., zone5)
• ZoneName: Appomax business name (e.g., CoolingBed_Entry)
6.3 Why Both SourceZoneName and ZoneID?
• SourceZoneName = what Tapway sends. Preserves the upstream naming for debugging.
• ZoneID = Appomax's business identifier. Decouples downstream tracking logic from Tapway's naming model.
This allows Tapway to rename their zones without breaking the Ignition tag tree or tracking logic.
6.4 Unknown Zone Handling
If Node-RED receives an event with a zone name that is not in the mapping table:
• Log the unknown zone name to Diagnostics/LastParseError
• Increment InvalidPayloadCounter
• Drop the event — do not write to OPC-UA with an unmapped zone path
• The engineering team should add the new mapping and redeploy
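A lookup sketch combining Sections 6.1 and 6.4: the mapping is keyed per camera, and an unmapped zone returns `null` so the caller can drop the event and update diagnostics. The table contents here are only the single example row from Section 6.1:

```javascript
// Zone mapping (Section 6): Tapway SourceZoneName -> Appomax zone config,
// keyed by camera ID. Populated from the full mapping table during
// integration setup; this is the example row only.
const ZONE_MAP = {
  "05": {
    zone5: { zoneID: "C05_Z01", zoneName: "CoolingBed_Entry", logicType: "INOUT" },
  },
};

// Returns the zone config, or null for an unmapped camera/zone so the
// caller can drop the event per Section 6.4.
function resolveZone(cameraId, sourceZoneName, zoneMap = ZONE_MAP) {
  const cam = zoneMap[cameraId];
  return (cam && cam[sourceZoneName]) || null;
}
```

In deployment the map would be loaded from the externalized configuration (Section 8.2) rather than hardcoded in the flow.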
7. Startup & Restart Behavior
Node-RED is designed to be stateless. On startup or restart, it should:
1. Connect to the MQTT broker and subscribe to SamurAI topics
2. Connect to Ignition Edge's OPC-UA server as a client
3. Write System/Online = true once both connections are established
4. Write System/LastRestartTs with the current timestamp
5. Write System/SchemaVersion with the spec version ("1.0")
6. Begin processing incoming MQTT messages through the pipeline
There is no state recovery step. EventSequence values are read from Ignition on each event write, so continuity is automatic. The dedup cache starts empty, which means a few post-restart duplicates may pass through — this is acceptable since Ignition handles idempotency.
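Steps 3–5 of the startup sequence reduce to a fixed, ordered set of tag writes; a small helper makes them testable. The tag root matches `OPCUA_TAG_ROOT` from Section 8.2, and the actual writes would go through a hypothetical async OPC-UA write helper:

```javascript
// Startup tag writes (Section 7, steps 3-5), emitted once both the MQTT
// and OPC-UA connections are established. Returns ordered [path, value]
// pairs for the caller to write via the OPC-UA client.
const TAG_ROOT = "Appomax/SteelTracking/SamurAI";

function startupWrites(nowIso, schemaVersion = "1.0") {
  return [
    [`${TAG_ROOT}/System/Online`, true],
    [`${TAG_ROOT}/System/LastRestartTs`, nowIso],
    [`${TAG_ROOT}/System/SchemaVersion`, schemaVersion],
  ];
}
```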
7.1 Connection Loss Scenarios
| Scenario | Behavior |
|----------|----------|
| MQTT broker disconnects | Set System/MQTTConnected = false. Attempt reconnection with exponential backoff. Events are lost during disconnect (SamurAI may buffer depending on QoS). |
| Ignition OPC-UA unreachable | Set System/Online = false (in Node-RED internal state). Continue receiving MQTT to keep the connection alive, but drop events since there is nowhere to write them. Retry the OPC-UA connection with exponential backoff. |
| Both disconnect | Attempt reconnection for both independently. No local buffering. |
| Node-RED crashes/restarts | Container orchestrator (Docker) should auto-restart. On startup, follow the sequence above. |
8. Deployment & Configuration
8.1 Deployment Target
Node-RED should run as a Docker container on the same network as Ignition Edge and the MQTT broker. This minimizes latency for OPC-UA writes.
8.2 Environment Configuration
All configurable parameters should be externalized as environment variables or a configuration file — not hardcoded in the flow:
| Variable | Example | Description |
|----------|---------|-------------|
| MQTT_BROKER_HOST | 192.168.6.100 | MQTT broker hostname/IP |
| MQTT_BROKER_PORT | 1883 | MQTT broker port |
| MQTT_TOPIC | samurai/events/# | MQTT subscription topic |
| MQTT_CLIENT_ID | node-red-sys-bridge | MQTT client identifier |
| OPCUA_ENDPOINT | opc.tcp://192.168.6.10:62541 | Ignition OPC-UA server endpoint |
| OPCUA_TAG_ROOT | Appomax/SteelTracking/SamurAI | Root path for all tag writes |
| CAMERA_TIMEOUT_SEC | 60 | Seconds before marking camera offline |
| DEDUP_CACHE_SIZE | 200 | Max event IDs per zone in dedup cache |
| SCHEMA_VERSION | 1.0 | Spec version to write to System/SchemaVersion |
8.3 Required Node-RED Nodes
The following community nodes are required in addition to the core Node-RED palette:
| Node Package | Purpose |
|--------------|---------|
| node-red-contrib-opcua | OPC-UA client for writing to Ignition Edge |
| (core) MQTT In | MQTT subscription (built into Node-RED) |
| (core) JSON | JSON parsing (built into Node-RED) |
| (core) Function | Custom JavaScript for normalization and mapping |
9. Testing & Acceptance Criteria
9.1 Unit Tests (Node-RED Flow)
| Test Case | Input | Expected Result |
|-----------|-------|-----------------|
| Valid single-zone event | Well-formed MQTT JSON with 1 zone | All zone tags updated in Ignition, EventSequence incremented |
| Valid multi-zone event | MQTT JSON with 3 zones from same camera | All 3 zone tag groups updated independently |
| Malformed JSON | Non-JSON string | InvalidPayloadCounter incremented, Diagnostics updated, no OPC-UA writes |
| Missing required field | Valid JSON missing cameraID | InvalidPayloadCounter incremented, message dropped |
| Duplicate event | Same uniqueEventID sent twice | Second message silently dropped, DuplicateEventCounter incremented |
| Out-of-order event | Event with older timestamp than last | Event written normally, OutOfOrderEventCounter incremented |
| Unknown zone name | Zone not in mapping table | InvalidPayloadCounter incremented, event dropped |
| OPC-UA write failure | Ignition Edge offline | Error logged, System/Online set to false, events dropped |
| Node-RED restart | Kill and restart Node-RED | EventSequence continues from last value, System/LastRestartTs updated |
9.2 Integration Tests (with Tapway)
• Verify correct mapping of all camera + zone combinations
• Verify EventSequence increments continuously under sustained event load
• Verify EventPulse toggles on every event
• Verify Diagnostics counters match expected values after a known test sequence
• Verify camera Online status reflects actual camera connectivity
10. Open Items & Tapway Dependencies
| Item | Owner | Status | Impact |
|------|-------|--------|--------|
| MQTT broker hostname and topic pattern | Tapway | Pending | Blocks MQTT configuration |
| Camera health mechanism (LWT vs heartbeat) | Tapway + Appomax | Pending | Determines camera Online detection approach |
| Full zone mapping table (all cameras + zones) | SYS Plant Ops + Appomax | Pending | Blocks zone mapping configuration |
| MQTT payload schema documentation | Tapway | Pending | Required for validation rules |
| OPC-UA security mode for production | Appomax + SYS IT | Pending | Affects OPC-UA client config |
| Dynamic camera/zone provisioning approach | Appomax | Pending | Defines what happens when Tapway adds new cameras or zones |
End of Document — Node-RED Implementation Spec v1.0