Blog

Category: Striim Platform

Kafka Stream Processing + Cloud Integration Lead 47 Features/Enhancements in Striim v3.8

Striim is proud to announce the launch of version 3.8 of the Striim platform, with 47 new and enhanced capabilities! Since May 2016, Striim has offered Apache Kafka users both change data capture (CDC) and SQL-based stream processing. With more deployments than any other solution, the Striim platform provides the most comprehensive, battle-tested, enterprise-grade integration

Speeding GDPR Compliance with a Streaming Data Architecture

Striim Delivers Data Pseudonymization for Streaming Data to Accelerate GDPR Compliance The clock is ticking fast for the General Data Protection Regulation (GDPR) to go into effect for all EU citizens on May 25th, 2018. After this commencement date, not just organizations in the EU, but any company doing business with EU citizens and collecting their

Making the Most of Apache Kafka – Streaming Analytics for Kafka

In Part 4 of this blog series, we shared how the Striim platform facilitates processing and preparation of data, both as it streams into Kafka, and as it streams out of Kafka to enterprise targets. In this 5th and final post in the “Making the Most of Apache Kafka” series, we will focus on

The Best of Both Worlds: Hybrid Open Source Data Management Platforms

Please join our upcoming webinar with Tony Baer from Ovum where we will discuss the pros and cons of a “hybrid open source” strategy vs. an “open source first” strategy. In the Big Data era, open source technologies have seen increased adoption, with an enormous degree of impact on the entire technology ecosystem. For example,

Making the Most of Apache Kafka – Data Processing and Preparation for Kafka

In Part 3 of this blog series, we discussed how the Striim platform facilitates moving Kafka data to a wide variety of enterprise targets, including Hadoop and Cloud environments. In this post, we focus on in-stream data processing and preparation for Kafka, whether streaming data to Kafka, or from Kafka to enterprise targets. Data Processing

Making the Most of Apache Kafka – Delivering Kafka Data

Delivering Kafka Data In Part 2 of this blog series, we looked at how the Striim platform is able to continuously ingest data from a wide variety of data sources, including enterprise databases via change data capture (CDC), and format and deliver that data in real time into Apache Kafka. Here we’ll take a look

Making the Most of Apache Kafka – Ingestion into Kafka

Ingestion into Kafka In Part 1 of this blog series, we highlighted how Striim’s SQL-based platform makes it easy to deliver processing and analytics of Apache Kafka data. We will now turn our focus toward real-time data ingestion into Kafka from a wide variety of enterprise sources. Getting Data into Kafka When you are considering

Making the Most of Apache Kafka – Make Kafka Easy

Apache Kafka has proven itself as a fast, scalable, fault-tolerant messaging system, and has been chosen by many leading organizations as the standard for moving data around in a reliable way. In this blog series, I would like to share how to make the most of Kafka when building streaming integration or analytics applications, including

The Critical Role of a “Streaming First” Data Architecture – Webinar

Please join us for our upcoming webinar on The Critical Role of a “Streaming First” Data Architecture in 2017, presented by our co-founder and CTO, Steve Wilkes. IDC’s recent prediction that the world will be creating 163 zettabytes (or 163 trillion gigabytes) of data a year by 2025 was shocking. What’s more astounding is how

The Rise of Real-Time Data: How Striim Helps You Prepare for Exponential Growth

In a recent contributed article for RTInsights, The Rise of Real-Time Data: Prepare for Exponential Growth, I explained how the predicted huge increase in data sources and data volumes will impact the way we need to think about data. The key takeaway is that, if we can’t possibly store all the data being generated, “the
