Change Data Capture from HPE NonStop to Kafka
Move Data in Real Time from HPE NonStop to Kafka
For streaming data from HPE NonStop to Apache Kafka, log-based change data capture (CDC) offers several advantages over traditional bulk extract, transform, load (ETL) solutions and in-house custom-scripted solutions. Striim’s enterprise-grade streaming integration platform performs real-time CDC non-intrusively and with exactly-once processing guarantees. Here are the top reasons to consider the CDC method for HPE NonStop to Kafka integration:
- Change data capture turns HPE NonStop database operations (inserts, updates, deletes) into an event stream for Kafka consumers. Because Kafka is designed for event-driven processing, streaming HPE NonStop database events in real time to Kafka—versus doing bulk data extracts—helps you get more value from Kafka and from the downstream consumers that use the low-latency data.
- Non-intrusive CDC from HPE NonStop to Kafka—including for SQL/MX, SQL/MP, and Enscribe—minimizes the impact on source systems because it reads the database audit trails. It avoids performance degradation of, and modifications to, your production HPE NonStop databases while streaming real-time data to Kafka.
- When you move only the change data continuously, versus moving large sets of data in batches, you utilize your network bandwidth more efficiently.
- When you move change data continuously, versus using database snapshots, you capture more granular data about what occurred between snapshots. This granular data flow enables more accurate and richer intelligence from downstream analytics systems.
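To make the first point concrete, the sketch below shows what a CDC-style change event might look like on its way to a Kafka topic. The envelope fields (`table`, `op`, `before`, `after`, and so on) are illustrative assumptions, not Striim’s actual output format; a real CDC reader emits richer metadata such as transaction IDs and audit-trail positions.

```python
import json
from datetime import datetime, timezone

def to_change_event(table, op, key, before=None, after=None):
    """Wrap a database operation in a CDC-style event envelope.

    Hypothetical format for illustration only; real CDC products
    define their own schemas with additional metadata.
    """
    if op not in ("INSERT", "UPDATE", "DELETE"):
        raise ValueError(f"unsupported operation: {op}")
    return {
        "table": table,
        "op": op,
        "key": key,
        "before": before,   # row image prior to the change (None for INSERT)
        "after": after,     # row image after the change (None for DELETE)
        "ts": datetime.now(timezone.utc).isoformat(),
    }

# Serialize as it might be published to a Kafka topic keyed by primary key,
# so downstream consumers see each insert/update/delete as a discrete event.
event = to_change_event(
    table="ACCOUNTS",
    op="UPDATE",
    key="42",
    before={"balance": 100},
    after={"balance": 75},
)
payload = json.dumps(event).encode("utf-8")
```

A Kafka consumer subscribed to the topic can then react to each change as it happens, rather than waiting for the next batch load.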
Download this white paper to learn how to non-intrusively move real-time data from HPE NonStop to Kafka. (No registration required)