MySQL to Kafka

Move Data Continuously, in Real Time, from MySQL to Kafka

Log-based change data capture (CDC) has become the preferred approach for moving data in real time from MySQL to Kafka, thanks to several advantages over traditional bulk extract, transform, load (ETL) solutions and in-house, custom-scripted alternatives. Striim’s enterprise-grade streaming integration platform performs CDC non-intrusively and with exactly-once processing guarantees. Here are the top reasons to consider the CDC method for your MySQL to Kafka integration:

  • Change data capture turns MySQL database operations (inserts, updates, deletes) into an event stream for Kafka consumers. Because Kafka is designed for event-driven processing, streaming MySQL database events to Kafka in real time, rather than performing bulk data extracts, helps users gain more value from Kafka and from downstream consumers that rely on low-latency data.
  • Log-based CDC from MySQL to Kafka minimizes the impact on source systems and is non-intrusive because it reads the database transaction logs (in MySQL’s case, the binary log) rather than querying production tables; a minimal sketch of this pattern follows this list. It avoids degrading the performance of, or requiring changes to, your production MySQL databases while streaming real-time data to Kafka.
  • When you continuously move only the change data, rather than moving large data sets in batches, you use your network bandwidth more efficiently.
  • When you move change data continuously, rather than relying on database snapshots, you capture granular detail about what occurred between snapshots. That granular data flow enables more accurate and richer intelligence from downstream analytics systems.
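
To make the log-based CDC pattern above concrete, here is a minimal sketch (not Striim’s implementation) that tails the MySQL binary log and publishes each row change to a Kafka topic, using the open-source python-mysql-replication and kafka-python libraries. The host, credentials, topic name, and server ID are placeholder assumptions, and this sketch provides only at-least-once delivery rather than the exactly-once guarantees described above.

```python
# Minimal log-based CDC sketch: tail the MySQL binlog and publish row changes to Kafka.
# Assumes a MySQL server configured with binlog_format=ROW, a reachable Kafka broker,
# and the python-mysql-replication and kafka-python packages installed.
# Hostnames, credentials, topic name, and server_id below are illustrative placeholders.
import json

from kafka import KafkaProducer
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent,
    UpdateRowsEvent,
    WriteRowsEvent,
)

MYSQL_SETTINGS = {"host": "127.0.0.1", "port": 3306, "user": "repl", "passwd": "secret"}
KAFKA_TOPIC = "mysql.cdc.events"

# Serialize each change record as JSON before sending it to Kafka.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v, default=str).encode("utf-8"),
)

# Connect to MySQL as a replication client and stream only row-level binlog events.
stream = BinLogStreamReader(
    connection_settings=MYSQL_SETTINGS,
    server_id=100,          # must be unique among replication clients
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    blocking=True,          # keep waiting for new events instead of exiting
    resume_stream=True,
)

try:
    for event in stream:
        for row in event.rows:
            if isinstance(event, WriteRowsEvent):
                change = {"op": "insert", "data": row["values"]}
            elif isinstance(event, UpdateRowsEvent):
                change = {"op": "update", "before": row["before_values"], "after": row["after_values"]}
            else:  # DeleteRowsEvent
                change = {"op": "delete", "data": row["values"]}
            change.update({"schema": event.schema, "table": event.table})
            # At-least-once delivery: no binlog-position checkpointing is done here.
            producer.send(KAFKA_TOPIC, change)
finally:
    producer.flush()
    stream.close()
```

In a production pipeline, exactly-once delivery also requires checkpointing the binlog position in step with the records published to Kafka; that bookkeeping, along with recovery and schema handling, is the kind of work a platform such as Striim manages for you.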

Watch the demo video to learn how to continuously move real-time data from MySQL to Kafka using Striim’s wizard-based, drag-and-drop UI, with exactly-once processing guarantees.