From PLCnext to the Cloud: Streaming OPC UA Data to Azure Event Hub and AWS Kinesis with Rust
This article explores how to stream industrial OPC UA data from PLCnext controllers to Azure Event Hub and AWS Kinesis using Rust for seamless cloud integration and real-time analytics.
When the line manager noticed a sudden temperature spike on a CNC machine, the only thing he could do was stare at the PLC screen and wait for an alarm that arrived minutes later.
Imagine that same spike being pushed to a cloud dashboard in real‑time, triggering an automated shutdown before damage occurs.
What if every critical sensor reading could be streamed to the cloud in milliseconds?
Why Bring PLCnext Data to the Cloud?
Industrial plants generate a flood of sensor readings, alarm states, and diagnostic logs. Keeping that data locked inside a factory‑floor controller creates several practical bottlenecks. Moving the data to the cloud solves those problems and adds new business value.
- Detect anomalies early – cloud‑based analytics can spot patterns that a local PLC would never notice. By using an event‑driven subscription that pushes updates only when values change beyond configured thresholds, we reduce network traffic and minimize the controller’s CPU load, while still preserving enough granularity for effective monitoring.
- Scale insights – a single Event Hub or Kinesis stream can ingest millions of events per day, far beyond the storage capacity of a PLC. This means you can keep historic data for months without expanding on‑prem storage.
- Learn across sites – combine data from many plants to benchmark performance, roll out best‑practice settings, and update firmware centrally.
Bottom line: You keep the safety‑critical control on the PLC, but you unlock the strategic value of the data in the cloud to gain early insights, scalability, compliance, and new business opportunities.
Solving those challenges inspired us to build the end‑to‑end Rust‑powered OPC UA streaming solution that the remainder of this article walks you through.
Why Rust, Azure Event Hub, and AWS Kinesis?
Before diving into the architecture, it helps to understand why these particular technologies were selected. The decision balances three key dimensions: performance & safety, cloud‑native integration, and total cost of ownership.
Why Rust on the Edge?
When a PLC needs to push a temperature spike within microseconds, there’s no room for a garbage collector to pause the thread.
- Near‑C performance: Rust’s zero‑cost abstractions give deterministic, sub‑millisecond latency while keeping the CPU footprint tiny.
- Memory safety: compile‑time checks eradicate buffer‑overflows and data races, which is crucial for safety‑critical industrial environments.
- Small, single‑binary deployment: the final binary is ~12 MiB, well under the 20 MiB quota of a PLCnext container, and it runs on both x86_64 and ARM without modification.
- Rich async ecosystem: Tokio and the Cargo package manager make it trivial to add new async tasks (e.g., edge‑AI inference) without blowing up the codebase.
Why Azure Event Hub?
Azure‑centric customers usually already have Stream Analytics and Power BI pipelines; they just need a reliable, low‑latency input.
- Native AMQP 1.0 over TLS 1.3: industry‑standard, high‑throughput messaging that the fe2o3‑amqp crate abstracts cleanly.
- Sub‑10 ms ingress latency and built‑in geo‑replication give you real‑time visibility across sites.
- Managed service: no servers to provision, patch, or scale; Azure handles throttling, partitioning, and durability for you.
- Pay‑as‑you‑go pricing: $0.028 per million events (first 1 TB) means you only pay for what you ingest, with predictable OPEX.
Why AWS Kinesis Data Stream?
For teams that live in the AWS ecosystem, the downstream analytics already run in Lambda, QuickSight, and Redshift.
- Low‑latency, auto‑sharding ingestion: records are spread across shards automatically, delivering typical < 15 ms intra‑region latency.
- Official Rust SDK: handles retries, exponential back‑off, and IAM credential rotation out‑of‑the‑box, so you don’t have to reinvent robust networking logic.
- Seamless downstream integration: Kinesis feeds directly into Lambda, Kinesis Data Analytics, or S3 Data Lakes for historic storage.
- Cost‑effective consumption model: $0.015 per million PUT records (first 1 TB) plus a generous free tier for development.
Bottom line: Rust gives us a rock‑solid, safe edge runtime; Azure Event Hub and AWS Kinesis Data Stream services provide the scalable, low‑latency highways that carry the data to the cloud. With those choices cemented, the next step is to see how the pieces fit together.
High‑Level Architecture
At the heart of the project lies a modern, event-driven architecture designed for industrial IoT applications: a containerized Rust application runs directly on the PLCnext device and acts as an OPC UA client. Instead of polling, it uses event-driven OPC UA subscriptions to receive real-time sensor data automatically when values change, then securely streams this data to the cloud (Azure Event Hub or AWS Kinesis Data Stream, whichever you choose).
This setup leverages the PLCnext’s embedded OPC UA server to expose sensor data as standardized nodes, enabling efficient real-time data collection without the overhead of continuous polling. The event-driven architecture ensures minimal latency and network traffic while maintaining secure, encrypted transmission to cloud infrastructure.
- PLCnext Controller: physical manufacturing equipment with sensors and actuators that generates real-time operational data; its embedded server standardizes and exposes process variables through the OPC UA protocol on port 4840.
- Rust backend: high-performance application core that runs as a container on the PLCnext device, acting as an OPC UA client, a message broker, and a thin HTTP API for the UI.
- Vue.js Web UI: web-based configuration interface for setting up connections and monitoring system status.
- SQLite Database: lightweight local database that persists configuration settings and active node subscriptions (see the sketch after this list).
- Internal Message Broker: asynchronous communication layer using Tokio channels to handle data flow between components.
- Cloud Manager: dedicated task responsible for managing cloud connections, authentication, and reliable data transmission.
- Azure Event Hubs Client: AMQP-based client that establishes secure connections to Azure’s messaging infrastructure.
- AWS Kinesis Client: HTTPS-based SDK client that connects securely to AWS streaming services.
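The SQLite piece is small but important: it is what lets the backend restore its node subscriptions after a restart. That code is not reproduced in this article, so the snippet below is only a minimal sketch of how such persistence could look, assuming the rusqlite crate and an invented subscriptions table:

use rusqlite::{params, Connection, Result};

// Hypothetical record for one monitored OPC UA node.
struct Subscription {
    node_id: String,               // e.g. "ns=1;s=Temperature"
    publishing_interval_ms: u32,
}

// Open (or create) the local database and make sure the table exists.
fn open_db(path: &str) -> Result<Connection> {
    let conn = Connection::open(path)?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS subscriptions (
             node_id TEXT PRIMARY KEY,
             publishing_interval_ms INTEGER NOT NULL
         )",
        [],
    )?;
    Ok(conn)
}

// Persist a subscription so it can be re-created after a restart.
fn save_subscription(conn: &Connection, sub: &Subscription) -> Result<()> {
    conn.execute(
        "INSERT OR REPLACE INTO subscriptions (node_id, publishing_interval_ms) VALUES (?1, ?2)",
        params![sub.node_id, sub.publishing_interval_ms],
    )?;
    Ok(())
}

// Load all subscriptions on startup so the OPC UA client can re-subscribe.
fn load_subscriptions(conn: &Connection) -> Result<Vec<Subscription>> {
    let mut stmt = conn.prepare("SELECT node_id, publishing_interval_ms FROM subscriptions")?;
    let rows = stmt.query_map([], |row| {
        Ok(Subscription {
            node_id: row.get(0)?,
            publishing_interval_ms: row.get(1)?,
        })
    })?;
    rows.collect()
}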
Each version of the application follows the same pattern, differing only in how the data is sent to the cloud. Two independent containers (one for the Azure implementation, one for the AWS implementation) can be started side by side; each talks to its respective cloud service.
Because the two cloud connectors are isolated containers, you can deploy only the one you need or run both for redundancy or multi‑cloud strategies.
Diving Into the Code
The application comes in two versions - one for Azure Event Hubs and another for AWS Kinesis - that share the same codebase and OPC UA client logic but differ only in their cloud connection implementation. Both versions are built from the same Rust workspace as separate binaries, allowing you to deploy the specific cloud connector you need.
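We do not show here how the shared workspace keeps the two binaries apart, but the natural shape for it is a small trait that both connectors implement, so that the cloud-manager loop stays identical. The sketch below illustrates that idea; the CloudSink name, the async-trait and anyhow dependencies, and the generic loop are illustrative assumptions rather than the exact production code:

use async_trait::async_trait;
use tokio::sync::mpsc;

// Hypothetical abstraction both cloud connectors could implement.
#[async_trait]
trait CloudSink {
    async fn send(&mut self, json_payload: String) -> anyhow::Result<()>;
}

// The cloud-manager loop is then identical in both binaries; only the
// concrete CloudSink handed in differs between the Azure and AWS builds.
async fn run_cloud_manager(
    mut sink: impl CloudSink,
    mut receiver: mpsc::Receiver<String>,
) -> anyhow::Result<()> {
    while let Some(opcua_json) = receiver.recv().await {
        sink.send(opcua_json).await?;
    }
    Ok(())
}

Each binary then simply constructs its own sink (the Event Hub client or the Kinesis client shown below) and hands it to the same loop.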
OPC UA Client
Purpose: Establish an asynchronous OPC UA subscription that pushes changed values into a JSON channel.
use open62541::{AsyncClient, ClientBuilder};
use tokio::sync::mpsc;

// Basic OPC UA connection
async fn connect_opcua(hostname: &str, port: u16) -> Result<AsyncClient, Error> {
    // Load certificates and configure security...
    let endpoint_url = format!("opc.tcp://{}:{}", hostname, port);
    let client = ClientBuilder::default_encryption(/* certificates */)
        .connect(&endpoint_url)?
        .into_async();
    Ok(client)
}

// Event-driven subscription mechanism
async fn subscribe_to_changes(
    client: &AsyncClient,
    node_id: &str,
    cloud_sender: mpsc::Sender<String>,
) -> Result<(), Error> {
    // Create subscription with publishing interval
    let subscription = client.create_subscription(/* publishing interval */).await?;

    // Monitor specific node for changes
    let mut monitored_item = subscription
        .create_monitored_item(/* parse node_id */)
        .await?;

    // Spawn task to handle value changes
    tokio::spawn(async move {
        while let Some(data_value) = monitored_item.next().await {
            // Format data as JSON
            let json_message = format!(/* node_id, value, timestamp */);
            // Send to cloud via internal message channel
            let _ = cloud_sender.send(json_message).await;
        }
    });
    Ok(())
}

// Usage
#[tokio::main]
async fn main() -> Result<(), Error> {
    let client = connect_opcua("localhost", 4840).await?;
    let (sender, receiver) = mpsc::channel(64);

    // Subscribe to temperature sensor
    subscribe_to_changes(&client, "ns=1;s=Temperature", sender).await?;

    // Cloud manager processes messages from receiver...
    Ok(())
}
Why this matters: The async subscription model means no polling – the PLC pushes data only when it changes, saving bandwidth and CPU cycles.
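The format! placeholder above hides the actual payload layout, but each message essentially carries the node identity, the new value, and a timestamp. As an illustration, a serde-based payload could look like this (field names are an assumption, not the exact wire format):

use serde::Serialize;

// Hypothetical shape of one OPC UA change event as it travels
// through the channel and on to Event Hub / Kinesis.
#[derive(Serialize)]
struct OpcUaEvent<'a> {
    node_id: &'a str,          // e.g. "ns=1;s=Temperature"
    value: f64,                // the changed value
    source_timestamp: String,  // ISO 8601, e.g. "2024-05-07T14:31:05.123Z"
}

fn to_json(node_id: &str, value: f64, source_timestamp: String) -> serde_json::Result<String> {
    serde_json::to_string(&OpcUaEvent { node_id, value, source_timestamp })
}

// A resulting message would look like:
// {"node_id":"ns=1;s=Temperature","value":73.4,"source_timestamp":"2024-05-07T14:31:05.123Z"}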
Azure Event Hub Client
Purpose: Consume JSON messages from the channel and forward them to Azure Event Hub via AMQP 1.0
use fe2o3_amqp::{Connection, Session, Sender};
use tokio::sync::mpsc;

// Azure Event Hub connector
struct EventHubClient {
    session: Session,
    config: EventHubConfig,
}

impl EventHubClient {
    // Connect using AMQP + SAS authentication
    async fn connect(config: EventHubConfig) -> Result<Self, Error> {
        let mut connection = Connection::builder()
            .sasl_profile(/* SAS credentials */)
            .open(/* amqps://hostname */)
            .await?;
        let session = Session::begin(&mut connection).await?;
        Ok(Self { session, config })
    }

    // Send JSON data as binary message
    // (Message is provided by the fe2o3-amqp-types crate)
    async fn send(&mut self, json_data: String) -> Result<(), Error> {
        let mut sender = Sender::attach(&mut self.session, /* link name, event hub name */).await?;
        let message = Message::builder()
            .data(/* json_data as bytes */)
            .build();
        sender.send(message).await?;
        sender.close().await?;
        Ok(())
    }
}

// Cloud manager task - processes OPC UA data
async fn cloud_manager(mut receiver: mpsc::Receiver<String>) {
    let mut client = EventHubClient::connect(/* config */).await.unwrap();
    while let Some(opcua_data) = receiver.recv().await {
        // Send OPC UA JSON to Event Hub
        client.send(opcua_data).await.unwrap();
    }
}

// Main integration
#[tokio::main]
async fn main() {
    let (opcua_sender, cloud_receiver) = mpsc::channel(64);

    // Start cloud manager
    tokio::spawn(cloud_manager(cloud_receiver));

    // OPC UA sends data via opcua_sender...
}
Business angle: Using AMQP 1.0 means you can leverage Azure’s built‑in throttling and geo‑replication without writing custom HTTP logic. The connector is a stand‑alone container, so you can spin it up on any edge device that supports Docker (including the PLCnext’s ARM environment).
AWS Kinesis Client
Purpose: Send the same JSON payloads to an AWS Kinesis stream using the official Rust SDK
use aws_sdk_kinesis::{Client, types::Blob};
use tokio::sync::mpsc;

// AWS Kinesis connector
struct KinesisClient {
    client: Client,
    stream_name: String,
}

impl KinesisClient {
    // Connect using AWS SDK + IAM authentication
    async fn connect(stream_name: String) -> Result<Self, Error> {
        let config = aws_config::load_from_env().await;
        let client = Client::new(&config);
        Ok(Self { client, stream_name })
    }

    // Send JSON data as record
    async fn send(&self, json_data: String) -> Result<(), Error> {
        let blob = Blob::new(json_data.into_bytes());
        self.client
            .put_record()
            .stream_name(&self.stream_name)
            .partition_key(/* node_id or timestamp */)
            .data(blob)
            .send()
            .await?;
        Ok(())
    }
}

// Cloud manager task - processes OPC UA data
async fn cloud_manager(mut receiver: mpsc::Receiver<String>) {
    let client = KinesisClient::connect(/* stream_name */).await.unwrap();
    while let Some(opcua_data) = receiver.recv().await {
        // Send OPC UA JSON to Kinesis
        client.send(opcua_data).await.unwrap();
    }
}

// Main integration
#[tokio::main]
async fn main() {
    let (opcua_sender, cloud_receiver) = mpsc::channel(64);

    // Start cloud manager
    tokio::spawn(cloud_manager(cloud_receiver));

    // OPC UA sends data via opcua_sender...
}
Business angle: The AWS SDK for Rust handles retries, exponential back‑off, and credential rotation automatically. You pay only for the records you ingest, and you can later hook the stream into Lambda, Redshift, or QuickSight for analytics.
Cross-compiling the Rust Binary for PLCnext (ARM Linux)
PLCnext controllers run on ARM architecture, requiring cross-compilation from x86 development machines. The challenge is compiling a complex Rust application with native dependencies (OPC UA with mbedTLS) for the target platform.
The solution uses the armv7-unknown-linux-musleabihf target for static linking, ensuring compatibility across different PLCnext device variants. The OPC UA library requires special handling due to its C dependencies and mbedTLS integration.
Cross-compile for ARM with required flags for mbedTLS
CFLAGS=-fomit-frame-pointer cargo build --release --target=armv7-unknown-linux-musleabihf
The build process produces a statically-linked binary that runs independently on the PLCnext runtime without external dependencies.
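One practical detail: for the musl target to link on an x86 development machine, Cargo also has to know which cross linker (and C compiler, for the mbedTLS build scripts) to use. A .cargo/config.toml along the following lines is typically needed; the exact toolchain name depends on the musl cross toolchain you install, so treat this as a sketch rather than the project's checked-in configuration:

[target.armv7-unknown-linux-musleabihf]
# Linker from a musl cross toolchain; the binary name is illustrative
linker = "arm-linux-musleabihf-gcc"

[env]
# C compiler picked up by build scripts that compile the native mbedTLS code
CC_armv7_unknown_linux_musleabihf = "arm-linux-musleabihf-gcc"

Alternatively, the rustembedded/cross Docker image mentioned in the lessons-learned section bundles a matching toolchain and avoids this host setup entirely.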
Packaging and Deploying the App to PLCnext
PLCnext uses a containerized application model where apps are distributed as SquashFS packages (.app files). This approach provides isolation, dependency management, and consistent deployment across different controller models.
The packaging follows a two-stage process:
Stage 1: Containerization
The ARM binary and Vue.js frontend are packaged into a minimal container using a distroless base image:
Build ARM container with application
podman build --platform linux/arm -f Containerfile-armv7a --tag plc2cloudapp:latest .
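The Containerfile itself is only a handful of lines: it copies the cross-compiled binary and the built Vue.js assets onto a minimal base. The version below is an illustrative sketch; the base image, paths, and file names are assumptions, not the file shipped with the app:

# Containerfile-armv7a (illustrative sketch, not the shipped file)
FROM gcr.io/distroless/static-debian12

# Cross-compiled, statically linked ARM binary
COPY target/armv7-unknown-linux-musleabihf/release/plc2cloudapp /plc2cloudapp

# Pre-built Vue.js frontend served by the backend's thin HTTP API
COPY frontend/dist /ui

ENTRYPOINT ["/plc2cloudapp"]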
Stage 2: Packaging
The container is then packaged with PLCnext-specific metadata into a SquashFS file:
Create complete PLCnext app package
mksquashfs ./plc2cloudapp plc2cloudapp.app -force-uid 1001 -force-gid 1002
The final package contains:
- Compressed container image
- App metadata for PLCnext runtime
- Docker Compose configuration
- Lifecycle management scripts
Deployment is handled through PLCnext’s Web-Based Management interface or the PLCnext Store. The runtime automatically manages container lifecycle, volume mounting for persistent data, and integration with the device’s certificate store for HTTPS.
This containerized approach enables complex applications to run on industrial controllers while maintaining system stability and security isolation.
Security
| Layer | Mechanism | Why It Matters |
|---|---|---|
| Transport | TLS 1.3 (rustls) for HTTPS UI; TLS for OPC UA and AMQP | Prevents eavesdropping on proprietary process data |
| Authentication | OPC UA certificates; Azure SAS token or AWS IAM credentials stored in env vars (or injected via Kubernetes secrets) | Guarantees only authorized edge devices can publish |
| Session Management | 2‑hour cookie‑based sessions for the UI | Limits exposure if a browser is left unattended |
| Credential Storage | Secrets never written to disk; only kept in memory; optional HashiCorp Vault integration | Reduces risk of credential leakage on the PLC |
| Hardening | Minimal base image (debian‑testing‑slim); no SSH daemon; read‑only filesystem for the binary | Smaller attack surface, aligns with IEC 62443 Level 1 |
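In code, the "secrets never written to disk" rule simply means reading the tokens from the container environment once at startup and keeping them in memory. A minimal sketch of that pattern (the variable name is illustrative; the AWS SDK reads its standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables by itself):

use std::env;

// Credentials live only in this in-memory struct for the lifetime of the process.
struct Secrets {
    eventhub_sas_token: String,
}

fn load_secrets() -> Result<Secrets, env::VarError> {
    Ok(Secrets {
        eventhub_sas_token: env::var("EVENTHUB_SAS_TOKEN")?,
    })
}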
Lessons Learned & Tips for Production
During development we hit a few snags; here’s what we discovered and how to avoid them:
- Certificate rotation: self‑signed certs expired after 90 days, causing OPC UA disconnects. Mitigation: automate renewal with a cron job that regenerates the cert and sends SIGHUP to the Rust process.
- AWS SDK throttling: a burst of 10k events/s triggered ProvisionedThroughputExceededException. Mitigation: enable enhanced fan‑out or batch records (max 500 per PutRecords call); see the batching sketch after this list.
- Azure AMQP session timeout: long idle periods closed the AMQP link. Mitigation: send a keep‑alive ping every 30 s (fe2o3‑amqp supports idle_timeout).
- Memory pressure on PLCnext: verbose debug logging filled the 20 MiB quota. Mitigation: switch to log4rs with size‑based rotation (256 KB, keep 3 archives).
- Cross‑compilation hiccups: missing musl‑dev for ARM caused linker errors. Mitigation: use the official rustembedded/cross Docker image for reproducible builds.
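To make the PutRecords mitigation concrete, the cloud-manager loop can drain the channel into batches instead of sending one record per message. The sketch below shows only the batching logic around the Tokio channel; the batch size and flush interval are illustrative, and the actual put_records call is left as a comment because it mirrors the single-record code shown earlier:

use std::time::Duration;
use tokio::sync::mpsc;
use tokio::time::timeout;

const MAX_BATCH: usize = 500;                       // Kinesis PutRecords upper limit
const FLUSH_EVERY: Duration = Duration::from_millis(200);

// Collect messages so that one PutRecords call carries up to 500 records.
async fn batching_cloud_manager(mut receiver: mpsc::Receiver<String>) {
    let mut batch: Vec<String> = Vec::with_capacity(MAX_BATCH);
    loop {
        match timeout(FLUSH_EVERY, receiver.recv()).await {
            Ok(Some(msg)) => {
                batch.push(msg);
                if batch.len() < MAX_BATCH {
                    continue; // keep filling the batch
                }
            }
            Ok(None) => {
                // Channel closed: flush whatever is left and stop.
                flush(&mut batch).await;
                return;
            }
            Err(_) => {} // flush timer elapsed with a partial batch
        }
        flush(&mut batch).await;
    }
}

async fn flush(batch: &mut Vec<String>) {
    if batch.is_empty() {
        return;
    }
    // Build the PutRecords entries from `batch` and call
    // client.put_records() here (analogous to put_record above).
    batch.clear();
}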
Extending the Solution
Why extend? Once the basic pipeline is stable, you can extract more value:
- Real‑time dashboards: pipe the Event Hub/Kinesis stream into Azure Stream Analytics or AWS Kinesis Data Analytics, then visualize with Power BI or QuickSight. Provides instant operational visibility.
- Edge AI: add a TinyML inference step in the Rust pipeline (e.g., anomaly detection on vibration data) before sending to the cloud. Reduces false positives and bandwidth usage.
- Multi‑cloud failover: run both containers simultaneously, tag each record with a cloud field, and let downstream consumers choose the best source. Improves resilience against regional outages.
The Bottom Line
By marrying PLCnext’s open‑runtime, Rust’s safety and performance, and cloud‑native event hubs, you get a future‑proof, vendor‑agnostic bridge that streams raw telemetry to Azure or AWS with zero‑copy, TLS‑secured connections, minimal edge footprint, and predictable pay‑as‑you‑go costs. The result is real‑time insight without compromising the deterministic control loop - a win for engineers, operators, and business leaders alike.
Try it yourself…
Ready to get started? Install the solution instantly from the PLCnext Store or, if you prefer, upload the package straight through the PLCnext device’s Web‑Based Management Interface (WBM) and have it up and running in minutes.