As Kafka Summit SF begins today, the Striim team is honored to sponsor this premier event for anyone interested in streaming data platforms. In this blog post, we would like to share with you several key new features that Striim recently introduced for Apache Kafka integration and stream processing.
Automated mapping of partitions when ingesting data from Kafka
Striim has added new capabilities that multiply performance and simplify setup for ingesting real-time data from Kafka message queues. A key new feature for Kafka integration is our Kafka Reader: the Striim platform now automatically maps partitions when ingesting data from Kafka. This both increases developer productivity and accelerates time to market.
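For context on what partition mapping involves: Kafka assigns each record to a partition, typically by hashing the record key, and keeping that mapping consistent is what preserves per-key ordering. The sketch below is a generic illustration of key-hash partitioning in plain Python (it is not Striim's API; the function name and hashing choice are ours for illustration):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a record key to a partition, similar in
    spirit to Kafka's default key-hash partitioner (illustrative only)."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records with the same key always land on the same partition,
# which is what preserves per-key ordering across the pipeline.
keys = ["customer-17", "customer-42", "customer-17"]
partitions = [partition_for(k, num_partitions=6) for k in keys]
```

Automating this mapping means developers do not have to reason about partition assignment by hand when wiring Kafka topics into a pipeline.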
Multi-threaded delivery with automated data distribution and thread management
The Striim platform also employs multi-threaded delivery, with automated data distribution and thread management, within a single Apache Kafka Writer. As a result, it scales more easily to support high-volume data environments and delivers a significant performance increase by fully utilizing many-core, single-node architectures.
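To illustrate the idea of multi-threaded delivery with automated distribution (this is a generic sketch, not Striim's implementation), events can be fanned out to worker threads by key hash so that events sharing a key keep their relative order while the threads deliver in parallel:

```python
import queue
import threading

def deliver(events, num_threads=4):
    """Distribute (key, value) events across worker threads by key hash,
    so same-key events stay in order within one thread's lane.
    Appending to a list stands in for an actual Kafka send."""
    queues = [queue.Queue() for _ in range(num_threads)]
    delivered = [[] for _ in range(num_threads)]

    def worker(i):
        while True:
            item = queues[i].get()
            if item is None:          # poison pill: shut the worker down
                break
            delivered[i].append(item) # stand-in for producer.send(...)

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for key, value in events:
        queues[hash(key) % num_threads].put((key, value))
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
    return delivered
```

The design point is that parallelism comes from spreading keys across threads, while ordering guarantees come from pinning each key to exactly one thread.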
Enhanced pipeline monitoring for the Kafka adapters
One of the key differentiators of Striim in streaming data integration is its comprehensive and real-time pipeline monitoring capabilities. In this area, we also introduced broader and deeper metrics to enhance pipeline monitoring for the Kafka adapters. This feature allows Kafka users to easily identify bottlenecks and rapidly fine-tune for even higher performance.
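As a simplified illustration of the kind of metric that makes bottlenecks visible (the metric names here are hypothetical, not Striim's actual monitoring schema), comparing read and write counters over time exposes where a pipeline is falling behind:

```python
from dataclasses import dataclass

@dataclass
class PipelineMetrics:
    """Hypothetical snapshot of a source/target adapter pair."""
    events_read: int
    events_written: int
    elapsed_seconds: float

    @property
    def read_rate(self) -> float:
        """Ingest throughput in events per second."""
        return self.events_read / self.elapsed_seconds

    @property
    def backlog(self) -> int:
        """Events ingested but not yet delivered; a growing backlog
        points at a bottleneck on the writer side of the pipeline."""
        return self.events_read - self.events_written
```

Watching a per-adapter backlog grow (or stay flat) over successive snapshots is one straightforward way such metrics support fine-tuning for higher performance.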
Schema Registry support for Apache Kafka
Last but not least, Striim introduced Schema Registry support for Apache Kafka. With this feature, users can seamlessly track and store schema evolution, and make schema changes without impacting existing applications.
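The reason schema-registry-style tracking lets schemas evolve without breaking existing applications is compatibility checking between schema versions. A much-simplified sketch of one common rule, backward compatibility (consumers on the new schema can still read data written with the old one), might look like this — the schema format and function are illustrative, not the registry's actual API:

```python
def backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    """Simplified backward-compatibility check: any field that exists
    only in the new schema must carry a default, so records written
    under the old schema can still be read by new consumers."""
    old_names = {f["name"] for f in old_schema["fields"]}
    for field in new_schema["fields"]:
        if field["name"] not in old_names and "default" not in field:
            return False  # a new required field can't be filled from old data
    return True
```

A registry applies checks like this at registration time, so an incompatible change is rejected before it can break a running consumer.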
Along with these new features around Apache Kafka integration, we have shared various tutorials and technical deep dives on how Striim supports Kafka integration and stream processing. I invite you to check out:
- the overview of our solutions for Kafka
- the tutorials that walk you through implementing Striim for Kafka
- the technical deep dive blog post with performance benchmarks
If you are not yet familiar with Striim: the platform is used by leading organizations that rely on Apache Kafka for high-speed, fault-tolerant messaging to continuously ingest real-time data from enterprise databases, logs, sensors, and message queues. It enables them to process data in-flight, without coding, using wizards and a drag-and-drop UI. With in-memory SQL stream processing capabilities, Striim delivers filtered, transformed, aggregated, masked, and enriched data to Kafka within milliseconds.
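To make the filter/mask/enrich pattern concrete, here is a generic in-flight pipeline sketch in plain Python — the kind of work a continuous SQL query expresses declaratively. The field names, masking rule, and lookup table are illustrative assumptions, not Striim syntax:

```python
def mask_card(number: str) -> str:
    """Mask all but the last four digits of a card number."""
    return "*" * (len(number) - 4) + number[-4:]

def process(stream, regions):
    """Illustrative in-flight pipeline over a stream of event dicts:
    filter out small transactions, mask sensitive data, and enrich
    each event via a reference-data lookup before delivery."""
    for event in stream:
        if event["amount"] <= 100:                    # filter (WHERE amount > 100)
            continue
        event = dict(event, card=mask_card(event["card"]))           # mask
        event["region"] = regions.get(event["store_id"], "unknown")  # enrich
        yield event
```

Because each event is handled as it arrives rather than in batches, transformations like these add only in-memory processing latency on the path to Kafka.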
In addition, Kafka users turn to the Striim software to analyze and visualize their data in real time as it streams through Kafka, and to deliver data and insights to cloud-based or on-premises targets. What makes Striim unique is that, with built-in security, scalability, reliability, exactly-once processing, and manageability in production environments, it is well suited for teams that want an enterprise-grade solution without spending the hours and money necessary to wire together multiple different open source products.