Striim CTO Steve Wilkes in Denver; Why streaming analytics is a must!
This past week I made the trip down the major Colorado arterial highway that runs north-south from Wyoming to New Mexico. Striim was giving a presentation in the Denver Technical Center, and Striim CTO, Steve Wilkes, was the keynote presenter. Organized by Radiant Advisors, an independent research and advisory firm that “delivers practical, innovative research and thought-leadership to transform today’s organizations into tomorrow’s data-centric industry leaders,” the event was too close to my office to miss. I have known Steve for more than a decade, and hearing him speak always proves stimulating; as much as he has presented of late, he confided to me, the enthusiasm and interaction of the audience made the fast trip down to Denver worthwhile.
The presentation centered on “Modernizing Data Platforms with Streaming Pipelines,” which Steve then expanded on, highlighting why “Streaming analytics and machine learning are essential for (such things as) Cybersecurity, AI, IoT, and much more!” However, what struck me most was the material Steve used to highlight just how many open source projects are out there and how many layers of middleware are involved simply to provide a way to ingest, process and then highlight important information gleaned from the multitude of transactions in flight at any point in time.
For instance, noted Steve, even with all of the open source projects out there, building a streaming analytics platform from scratch with open source alone is just too daunting a project. For an effective streaming engine to provide value, you have to think about your choices for a distributed, high-speed message infrastructure, where the options include ActiveMQ, Kafka and the like. Complementing your choice of messaging infrastructure, you then need to select an appropriate in-memory data grid, a role that products already familiar to the NonStop community, such as Redis, can fill.
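To make the two layers Steve described concrete, here is a minimal in-process sketch of the pattern: events flow over a message bus and the latest state lands in an in-memory grid. The stand-ins below (a plain queue and a dict) are illustrative only; a real deployment would use a Kafka producer and a Redis client, and the event fields shown are invented for the example.

```python
import json
import queue

# Stand-ins for the two infrastructure layers:
message_bus = queue.Queue()  # stands in for a Kafka/ActiveMQ topic
data_grid = {}               # stands in for an in-memory grid such as Redis

def ingest(event: dict) -> None:
    """Producer side: serialize a transaction event onto the bus."""
    message_bus.put(json.dumps(event))

def process_one() -> dict:
    """Consumer side: pull one event and cache the latest value per account."""
    event = json.loads(message_bus.get())
    data_grid[event["account"]] = event["amount"]  # latest value wins
    return event

ingest({"account": "A-100", "amount": 250.0})
process_one()
print(data_grid["A-100"])  # -> 250.0
```

The point of the sketch is the separation of concerns: the producer never talks to the grid, and the consumer never talks to the source, which is exactly why each layer has to be chosen, and integrated, separately.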
From here it gets really messy for the average systems programmer: what about data collection, processing and analytics and, yes, final data delivery? Products like Kafka might be reused, but you also need to look at Cassandra and Hive, as well as Flink, Storm, NiFi and Logstash. Finally, you may want to store the results of the capture / analytics / delivery process, and your choices here include offerings like Cassandra and HBase. Many of these are Apache projects, of course, but even with all the work Apache has fostered under its umbrella, these are only the basics. Still more has to be done once you have walked through all your options for building your own streaming analytics platform.
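The processing-and-analytics step in the middle of that list is the kind of in-flight computation engines like Flink and Storm provide out of the box. As a hedged illustration of what that computation looks like, here is a toy sliding-window average over transaction amounts; the class name and window size are invented for the example, and a real engine would handle partitioning, fault tolerance and time semantics that this sketch ignores.

```python
from collections import deque

class SlidingWindowAverage:
    """Toy rolling average over the last `size` transaction amounts."""

    def __init__(self, size: int):
        # deque with maxlen automatically evicts the oldest amount
        self.window = deque(maxlen=size)

    def add(self, amount: float) -> float:
        """Add one amount and return the current window average."""
        self.window.append(amount)
        return sum(self.window) / len(self.window)

avg = SlidingWindowAverage(size=3)
for amount in (100.0, 200.0, 300.0, 400.0):
    current = avg.add(amount)
print(current)  # average of the last three: (200 + 300 + 400) / 3 = 300.0
```

Multiply this by every aggregate, join and alert a business needs, across a cluster, and the appeal of a ready-made engine becomes obvious.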
A topic even more familiar to the NonStop community, and one the NonStop platform addresses out of the box, involves developing what Steve referred to as the “glue code”: all that is needed to cluster, to scale, to be reliable and secure and, yes, to have all the associated monitoring and management any crucial subsystem is expected to have. Even with this architecture fully embraced, you still need to add the human element: the UI. Both graphical and command line interfaces are important, as is a comprehensive set of dashboards to display the all-important in-memory analytics produced by your streaming analytics platform. Getting the picture? You can’t simply say you are going to build out the equivalent platform solely with Kafka, as so many pieces are missing once you get past the rudimentary components on offer today with Kafka. And this is where Striim provides businesses with value today: it comes with all of the above addressed via a mix of open source and all of the integration, or glue, components included.
In the article just published in the October 2017 issue of NonStop Insider, “Out of time but not out of options,” we make the observation that what Striim brings to the NonStop community is truly remarkable. Unfortunately, even as the core message of HPE revolves around “Simplify Hybrid IT,” it would seem that HPE really isn’t getting the message about the importance of analytics, particularly when it comes to transaction processing and the actionable data generated in real time. Streaming analytics platforms do not require a rip-and-replace attitude when it comes to implementation; they truly can be bolted onto what exists today, which should appeal to HPE and to the NonStop community. On the other hand, HPE’s own IT department is already using Striim, so perhaps the message about the value proposition of Striim is beginning to percolate within HPE.
At the upcoming NonStop Technical Boot Camp there will be a very strong presence from the Striim team. The customer base has expanded considerably since the last event, and the scope of the business problems being addressed by Striim is remarkable. Steve gave an informative presentation on Striim in Denver this week, but should you want to know more about just how well Striim can integrate with your NonStop solutions today, and how its Change Data Capture capabilities allow it to be phased into your production environment without disrupting your traditional approaches to transaction processing, then stop by the Striim booth at the Boot Camp exhibition. We look forward to seeing you there!
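The reason Change Data Capture can be phased in without disruption is that it reads change events downstream of the database rather than inside the application path. This is a hedged, much-simplified sketch of that replay idea, not Striim’s actual implementation; the event shape (`op`, `key`, `value`) is invented for illustration.

```python
from typing import Iterable

def apply_changes(change_log: Iterable[dict], replica: dict) -> dict:
    """Replay insert/update/delete events onto a downstream copy.

    The source system keeps processing transactions untouched; only the
    change log is read, which is what makes CDC non-disruptive.
    """
    for event in change_log:
        if event["op"] in ("insert", "update"):
            replica[event["key"]] = event["value"]
        elif event["op"] == "delete":
            replica.pop(event["key"], None)
    return replica

log = [
    {"op": "insert", "key": "acct-1", "value": 500},
    {"op": "update", "key": "acct-1", "value": 450},
    {"op": "delete", "key": "acct-1", "value": None},
]
print(apply_changes(log, {}))  # all changes replayed; account removed -> {}
```

In practice the change log would be the database’s own journal or audit trail, streamed continuously rather than replayed as a list, but the one-way flow from source log to downstream consumer is the essence of a non-disruptive bolt-on.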