STRIIM BLOG

New Quick Start Tutorial for Streaming Kafka Integration

Irem Radzik
May 21, 2020 · 3 minute read

If you have adopted Apache Kafka as your high-performance, fault-tolerant messaging system, real-time integration between Kafka and your critical data sources and consumers is essential to getting the most business value from it. With real-time, streaming Kafka integration, you can build modern applications that access timely data in the right format, enabling time-sensitive operational decisions. However, because Kafka was designed for developers, organizations typically rely on a team of developers’ manual effort and specialized skillsets to stream data into and out of Kafka and to automate data processing for its consumers.

Striim has offered SQL-query-based processing and analytics for Kafka since 2015. A drag-and-drop UI, pre-built applications, wizards for configuring Kafka integration, and custom utilities make Striim the easiest way to build streaming integration pipelines with built-in analytics applications.

Our new tech guide, “Quick Start Tutorial for Streaming Kafka Integration,” details Striim’s capabilities for Kafka integration and stream processing. It illustrates how users can get optimal value from Kafka by simplifying the real-time ingestion, stream processing, and delivery of a wide range of data types, including transactional data from enterprise databases, without impacting the performance of those databases. The guide offers step-by-step instructions for building a Kafka integration solution that moves data from a MySQL database to Kafka using log-based change data capture and in-flight data processing, and the same steps apply readily to the other data sources Striim supports.
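To give a flavor of what the guide builds, here is a minimal, illustrative sketch of such a pipeline in Striim’s SQL-like TQL. Every connection detail, table name, and property value below is a placeholder, and adapter property names can vary by product version; the tutorial walks through the exact configuration step by step, largely via the wizards rather than hand-written TQL.

    CREATE APPLICATION MySQLToKafka;

    -- Log-based change data capture from MySQL (all values are placeholders)
    CREATE SOURCE OrdersCDCSource USING MysqlReader (
      ConnectionURL: 'mysql://localhost:3306',
      Username: 'striim',
      Password: '********',
      Tables: 'mydb.orders'
    )
    OUTPUT TO OrdersCDCStream;

    -- Deliver the change stream to a Kafka topic as JSON
    CREATE TARGET OrdersKafkaTarget USING KafkaWriter VERSION '0.11.0' (
      brokerAddress: 'localhost:9092',
      Topic: 'orders_cdc'
    )
    FORMAT USING JSONFormatter ()
    INPUT FROM OrdersCDCStream;

    END APPLICATION MySQLToKafka;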


Some of the key areas covered in this tech guide include how to:

  • Ingest data into Kafka in a streaming fashion from enterprise databases such as Oracle, SQL Server, MySQL, PostgreSQL, and HPE NonStop, using low-impact change data capture. Other data sources, such as system logs, sensors, Hadoop, and cloud data stores, are also discussed in this section.
  • Use SQL-based stream processing to put Kafka data into a consumable format before delivering it in sub-seconds. Data formatting and the use of an in-memory data cache for in-flight enrichment are explained as well (see the first sketch after this list).
  • Support mission-critical applications with built-in scalability, security, exactly-once processing (E1P), and high availability.
  • Perform SQL-based in-memory analytics as the data flows through, and rapidly visualize the results of that analysis without manual coding.
  • Deliver real-time data from Kafka to other systems, including cloud solutions, databases, data warehouses, other messaging systems, and files, using pre-built adapters (see the second sketch after this list).
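As a taste of the SQL-based stream processing in the second bullet, the sketch below shows what an in-flight enrichment query can look like in TQL. The field positions, the ZipCache lookup component, and all names here are assumptions for illustration; the guide covers the actual syntax and cache configuration.

    -- Illustrative continuous query: enrich the raw change stream in flight.
    -- ZipCache stands in for an in-memory lookup cache that would be defined
    -- separately (for example, with CREATE CACHE over a file or database table).
    CREATE CQ EnrichOrdersCQ
    INSERT INTO EnrichedOrdersStream
    SELECT o.data[0] AS orderId,
           o.data[1] AS amount,
           z.city    AS city
    FROM   OrdersCDCStream o
    JOIN   ZipCache z
      ON   o.data[2] = z.zip;

In the pipeline sketched earlier, the Kafka target would then take INPUT FROM EnrichedOrdersStream instead of the raw stream, so every change event is transformed and enriched in memory before it reaches the topic.
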
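The last bullet, delivery from Kafka to downstream systems, follows the same source-to-target pattern in reverse. Below is another hedged sketch with placeholder connection details, assuming Striim’s KafkaReader and DatabaseWriter adapters; check the product documentation for the exact property names in your version.

    -- Read JSON events back off a Kafka topic (illustrative properties)
    CREATE SOURCE OrdersKafkaSource USING KafkaReader VERSION '0.11.0' (
      brokerAddress: 'localhost:9092',
      Topic: 'orders_cdc'
    )
    PARSE USING JSONParser ()
    OUTPUT TO OrdersFromKafka;

    -- Deliver to a downstream database over JDBC (placeholder URL and table)
    CREATE TARGET OrdersWarehouseTarget USING DatabaseWriter (
      ConnectionURL: 'jdbc:postgresql://localhost:5432/analytics',
      Username: 'striim',
      Password: '********',
      Tables: 'public.orders'
    )
    INPUT FROM OrdersFromKafka;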

Striim addresses the complexity of Kafka integration with an end-to-end, enterprise-grade software platform. By downloading the new tech guide, “Quick Start Tutorial for Streaming Kafka Integration,” you can rapidly start building integration and analytics solutions without extensive coding or specialized skillsets. That capability frees your data scientists, business analysts, and other data professionals to focus on delivering the fast business value that transforms your operations.

For more resources on integrating high volumes of data into and out of Kafka, please visit our Kafka Integration and Stream Processing solution page. If you prefer to discuss your specific requirements, we would be happy to provide you with a customized demo of streaming Kafka integration or other relevant use cases.