Making the Most of Apache Kafka with Striim

See how using Striim can make the most of your investment in Apache Kafka.
Blog Series: https://www.striim.com/blog/2017/08/making-the-most-apache-kafka-make-kafka-easy/
LinkedIn Article: https://www.linkedin.com/pulse/make-most-kafka-real-world-steve-wilkes

If you’re using, or plan to use, Apache Kafka as a way to distribute streaming data around your enterprise, Striim can help in the following ways:

1) Continuously collecting data from enterprise and cloud sources (including databases, via change data capture) and delivering it to Kafka in real time (a sketch of such a pipeline follows this list).

2) Delivering streaming data continuously from Kafka to enterprise and cloud targets.

3) Processing data through continuous in-memory SQL-based queries, including filtering, transformation, aggregation, and enrichment, before delivering it to Kafka or to enterprise and cloud targets.

4) Performing real-time streaming analytics on Kafka data including correlations, statistical analysis, anomaly detection, pattern matching (CEP), and machine learning integration, as well as building dashboards to visualize the results.

5) Doing all of this in an easy-to-use, end-to-end, enterprise-grade platform with a drag-and-drop UI and an interactive dashboard builder.
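To make point 1 concrete, here is a minimal sketch of what such a pipeline can look like in TQL, Striim's SQL-like application language. The adapter names follow Striim's conventions (OracleReader, KafkaWriter), but the specific properties, credentials, table, and topic below are illustrative assumptions rather than a tested configuration:

    -- Sketch: continuous CDC from a database table into a Kafka topic.
    -- Property names and values are illustrative, not a verbatim config.
    CREATE APPLICATION OrdersToKafka;

    -- Capture change events from an Oracle table via change data capture
    CREATE SOURCE OrdersCDC USING OracleReader (
      Username: 'striim',
      Password: '********',
      ConnectionURL: 'localhost:1521:orcl',
      Tables: 'SHOP.ORDERS'
    )
    OUTPUT TO OrdersStream;

    -- Deliver the change stream to Kafka continuously, formatted as JSON
    CREATE TARGET OrdersToTopic USING KafkaWriter (
      brokerAddress: 'localhost:9092',
      Topic: 'shop-orders'
    )
    FORMAT USING JSONFormatter ()
    INPUT FROM OrdersStream;

    END APPLICATION OrdersToKafka;

Once deployed, every insert, update, and delete on the source table flows to the topic as it happens; swapping in a different writer as the target covers point 2 in the opposite direction.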

Transcript:

Today, we’re going to see how using Striim can make the most of your investment in Apache Kafka. So you’re using Apache Kafka as a way to distribute streaming data around your enterprise, but how do you ingest data continuously from across your enterprise systems and deliver it to targets on-premises and in the cloud? How do you process and prepare data, or correlate and analyze it, and visualize and alert on the results in real time, all in an enterprise-grade fashion, without having to hire an army of developers to code to APIs? It doesn’t have to be this hard. The Striim platform can help you make the most of Apache Kafka. Striim ships with Kafka out of the box and can start a Kafka cluster as part of the initial installation. Kafka alone can’t do processing and analytics, so we integrate it with many other in-memory components, including high-speed messaging, in-memory data grids, distributed results storage, and SQL-based processing and analytics.
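As a rough illustration of that SQL-based processing, Striim expresses it as continuous queries (CQs) over streams and windows. The shape of the statements follows TQL conventions, but the window, stream, and field names below are invented for the example:

    -- Sketch: a continuous query over a one-minute sliding window.
    -- Stream, window, and field names are hypothetical.
    CREATE WINDOW OrdersLastMinute
    OVER OrdersStream KEEP WITHIN 1 MINUTE;

    -- Filter and aggregate continuously; results update as events arrive
    CREATE CQ HighValueOrderStats
    INSERT INTO OrderStatsStream
    SELECT productId,
           COUNT(*) AS orderCount,
           SUM(amount) AS totalAmount
    FROM OrdersLastMinute
    WHERE amount > 100
    GROUP BY productId;

Because the query runs in memory against a moving window, OrderStatsStream always reflects the last minute of activity without any batch jobs.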

Using Kafka for streams is as easy as toggling a slow down in the UI to choose between high-speed and persistent messaging. We also include Kafka sources and targets when various into Kafka wizards help with collecting data from common sources and configuring how to rate the data. This results in a data flow that can now be extended. In this example, we’re adding a cache of product information to enrich a CDC data stream before writing the results to Kafka. If you’re using Kafka for data distribution, it is just as easy to read from Kafka and write data to multiple targets, including files or do in cloud platforms like Azure Blob Storage, but it doesn’t stop there. Data on Apache Kafka can also be used to drive analytics and visualizations. Here we’re using Striim to distribute weblog data on Kafka. Then using another data flow to read that data process and analyze it and produce a dashboard and showing real time web activity. Striim really does make it easy to use Kafka and allows to make the most of your investment. Talk to us today to find out more or download Striim and try for yourself.