In this webinar, Striim co-founder and CTO Steve Wilkes and Pyalla Technologies CEO Richard Buckle have an in-depth discussion on how to prepare your NonStop transactional data for a real-time, hybrid IT and analytical data infrastructure.

To learn more about Striim for HPE NonStop, visit our partner page.


Unedited Transcript: 

Welcome and thank you for joining us for today's webinar. My name is Katherine and I will be your moderator. The presentation today is entitled "Embracing Hybrid IT and Advanced Analytics in HPE NonStop Environments." Our first presenter today is Richard Buckle, CEO of Pyalla Technologies. Richard has over 25 years' experience with HPE NonStop, both directly and supporting NonStop vendors. He is a regular speaker and blogger in support of the HPE user communities. Joining Richard is Steve Wilkes, co-founder and CTO of Striim. Steve is also a key industry influencer on the topic of real-time data integration, having led the data integration efforts at GoldenGate Software and the cloud data integration strategy for Oracle. Throughout the event, please feel free to submit your questions in either the Q&A or chat panel located on the right-hand side of your screen. With that, it is my pleasure to introduce Richard Buckle. Richard.

Well, good afternoon. We've got an interesting presentation and one that I've been kicking around for some time. As Katherine said, I've been in the NonStop community a long time, starting with Tandem Computers, where I finished up as a product manager looking after comms and networking products. I joined Insession, later acquired by ACI, and I've been part of GoldenGate, so naturally enough many of the people at Striim are known to me from other technologies. In addition, I have been a volunteer for many years with different user groups and user communities, best known, I think, for being on the ITUG board and serving as its chairman through 2004 and 2005. Probably lesser known is that a few years later I was invited to join the board of the IBM SHARE mainframe group, where I was director of marketing from 2007 to 2009. So my association with the NonStop community spans not just the processors and the systems, but also the application and solution software, infrastructure software, and key middleware components. And I've been on both sides of the connections in the data center when it comes to NonStop and the IBM mainframe. All of which is to say I've seen a lot, and what I'm seeing of late is this whole discussion going on today at HPE concerning digital transformation and the creation of a digital core, and I see a circumstance that I think is advantageous for the NonStop community, because in reality NonStop is playing a very important part.

It is at the heart of mission critical computing, and it is very much a focal point within HPE for those users looking for the solutions with the best possible uptime. But no matter how you look at it, whether the discussion takes you down a road into virtualized workloads, including running virtualized NonStop workloads, or into private clouds inside the data center, adjacent to NonStop and the other traditional systems, or possibly into the public clouds, what we find is that all of this has got to be integrated. The whole thing is complicated, and yet we need the data on all of these systems. Last week, for me and for many of us, was spent in Germany at the GTUG event, and at that event Randy Meyer showed up on very short notice.

He elected to come in on the Monday, speak to vendors, and then give a presentation on the Tuesday before jumping on a plane back to Houston. We were very appreciative of him dropping by, and he had a very strong message for the community. What I found incredibly relevant were the two circles at the bottom of the slide I captured with my mobile phone as he was presenting. So if there's anyone from HPE NonStop in the audience, I will give credit that this is a slide that Randy used and created. The importance of the two circles is that, as we talk about the NonStop fundamentals being retained no matter where we are going with hybrid IT, whenever there's a reference to NonStop, it is still the NonStop we know today: fault tolerance, scalability, integrity, and security as key attributes.

But what is really happening here is that we're going to see HPE step up to give us a lot of options when it comes to how we consume NonStop, and it is this consumption of NonStop that's really playing well into the hands of the hybrid IT message. As he presented the slide, Randy also expressed the interesting observation that when we get to transforming to digital, you know, hybrid IT with digital transformation and the digital core, when it comes to real-time business, it's all about time and data, now the new currency of IT, which I found a very intriguing observation. NonStop is on this slide for a very simple reason, too: it is the subject of considerable investment by HPE. And when you look at the subsequent slides HPE presents in this forum, you'll see that there are only Linux, Windows, and NonStop left that HPE is investing in, and I think that's encouraging when you start to think about mission critical computing and the heritage of NonStop.

But I think the point about the fundamentals not changing really helps us transition into what I'd like to talk about next, because it's not only about time and data; I like the concept that data is the new oil. It's a useful analogy. And I thought the follow-on statement was really incredible: payments, the world of transactional data, is just one piece of the picture. When it comes to the financial verticals, the industries so well versed in NonStop, we've now come to realize that the data being produced by their most mission critical of applications still needs to be integrated. It needs to make it into the data warehouses, data lakes, you name it, wherever the technology is being used in support of enriching data so that the business can run its analytics. The trick here is the integration. If time and data are the new currency, then the reality is that integration is the glue that holds it all together.

I was caught off guard, too, by a little remark that Randy Meyer made while he was at GTUG. He said, you know, it's really all about the data, and I couldn't help thinking of all the political nonsense that's been going on: whichever side of the divide the parties were on, they always dragged the conversation back to the economy. But in our field it's all about the data. And Randy said to us at the time that data becomes the center of the universe, irrespective of how you deploy your systems today, how far into the transformation to hybrid IT you've progressed, or whether you've moved everything to clouds, including public clouds. At the end of the day, it's the data that differentiates the way we run our business. It is data that determines, in many respects, whether we stay in business. And so I looked at, well, where are we going?

I'd just like to think about hybrid IT in the context of the role it plays in collapsing, integrating, and transforming the systems inside our data center. I wrote in a blog only a few days ago about commentary Randy Meyer made on why we are transforming to address a changing landscape, and I rather like that concept of a changing landscape, because nothing is going to stay static in today's world; nothing ever stays the same. The world is hybrid today. It's recognized, and it's happening when it comes to NonStop as well. We've been blessed with a strong partner ecosystem in the NonStop community for many years, but now we're starting to see a new generation of vendors, emerging partners if you like. We're all familiar with those that monitor systems and back them up

and provide security, but in the world of hybrid IT, with the dominant role that data is playing, we're starting to see the creation of a new ecosystem, because as Randy said, he doesn't want to be constrained by the tools or the environment that exist at the moment. So we're now starting to see reference made to new vendors that maybe we're not all that familiar with. We're looking at partners that know data and know the value of data. And I suspect that in time these are going to become the prevalent vendors that take up the most mind share when we're talking about how best to leverage NonStop in hybrid IT.

We're on a journey. The journey is a cliche, not so much because it's overused, but because every vendor, every solution, every business in IT likes to invoke it whenever they talk about the steps they're taking down a pathway toward implementing any kind of vision. But what I wanted to point out here is that empowering the journey to a data-driven enterprise is tied up with doing integration. We accept now that we're in a data-driven enterprise, and I put rings around three of the key elements on this slide. I think one of the things you'll see is that converged systems, the type of traditional system we are so used to in the NonStop arena, are going to be with us for quite some time.

In fact, I was talking to one hedge fund manager about how financial institutions embrace changing technology, and he quietly informed me that if they were moving at glacial speed, that would be a vast improvement. So yes, we have people and enterprises that embrace the traditional systems and will continue to use them, and hopefully we'll continue to supply them. What's influencing this is the arrival of cloud. I'm not so much endorsing this, but I would suggest that the providers of server farms, like Rackspace and the like, are a transitional step; at the same time I certainly see those server farms being embraced inside the data center, where you will reach a point with virtualized NonStop. Another HPE executive, walking me through a data center two or three years ago, once told me that he'd like to be able to wave his hands and say, NonStop,

it's in here somewhere, but I don't know where it is; it's running somewhere. And that's really the prospect that comes from running virtualized NonStop workloads. But I think the third one is also going to attract considerable attention, and it's already under debate within the HPE community: NonStop as a service, whether that's the NonStop database as a service, a solution running on NonStop with its database as a service, or the blockchain that's now been implemented on NonStop, on top of NonStop SQL. A lot of what people are now talking about is an option to consume. It's no longer a capital investment but more an operational investment: I just want NonStop as I need it. So here you have three different directions for the product, and these new choices for NonStop users, with these kinds of radical approaches, take us on a journey far from where we started. But at the end of the day, it's still NonStop, and that was a message conveyed more than once.

If we're running virtualized NonStop, it's not merely virtualized; it's virtualized NonStop. If we're consuming a service, it's a NonStop service being consumed, not something simply based on or leveraging NonStop. NonStop is a very potent brand within HPE, and it means a very specific thing. As we saw just a short time ago, yesterday HPE announced its financial results for the quarter, which were pretty upbeat, but among the questions the financial community asked HPE CEO Antonio Neri was about the progress being made on HPE Next, an initiative Neri launched to very much streamline the company. What I found intriguing was the separation between volume platforms and value platforms: on the value side, in a very short period of time, he has moved from 26 platforms down to just seven, and one of those platforms is NonStop.

So for mission critical computing demands, users deploy NonStop. The converged systems, the traditional systems, will always be an option; if you want to stay to the left, with a converged system, that's fine. If you want to move to the right and run a private cloud with virtualized NonStop, or maybe even move into public clouds, that's an option you can take. At the same time, none of these options takes away your responsibility to open up and have your data integrated with the rest of the data center. Hybrid by definition is more than one, and whichever way we cut this cake, there will be NonStop. Yes, we are mission critical, but there are so many other solutions and systems that businesses depend on today that, whether we take the left, the traditional path, or the right, the more aggressive path, we still have to integrate the data.

I think there is no better example of that challenge being accepted than within HPE itself, not just in the reduced number of platforms, but in the fact that HPE has elected to put NonStop at the heart of its own operations. I'm not going to talk too much about this, as I believe Steve is going to get into it in a little more detail, but suffice to say it's at the heart of HPE Inside. It's running NonStop SQL/MX, it now has multitenancy support, and it's actually collecting and integrating the information from many disparate systems onto NonStop SQL/MX, which is then responsible for providing that content to any of the user applications HPE happens to run. It's a substantial investment by HPE in its own technology, and it's working well. But here's the thing that comes from integration.

If you like, this is the special sauce in all of this. When we start doing this, as Randy said, we're going to see crazy new insights into how things work. And the word things was very deliberate. I went up and asked him, what things? He said, everything. Because what we're going to do is find out things about topics where we didn't even know we were going to ask the question. When we get all the data and put it under the microscope, when we let the analytics have at it, we're going to see new things. We're going to see changes in behavior. We're going to see changes in location. Yes, technology I believe is on a journey, but let's not forget that the people are also on a journey, as is the business.

It might be an overused cliche: the journey, the path, the steps we're taking, the baby steps, the incremental moves. But at the same time we are moving forward, and to remain in business it is so important that we consider aggressively collapsing the traditional IT data center silos we created in the past. Remember all the arguments for standard APIs and network protocols so we could buy best of breed? As we solved one business problem with a best-of-breed solution in one area, we'd add another one, and then another, and then another, and over time the data center was populated with disparate systems, each with its own ownership of data. A terrible model in hindsight, but we are dismantling it, and on the path we're taking to data integration I think it's very important to recall that, as we break the silos down, we have actually got the mechanisms in place. And so I think the companies best placed to do the integration are those that have been involved in replication for some time.

I'm not going to touch on this topic much more than to say that, centrally, my understanding is it comes down to change data capture, and when you get to change data capture, products like Striim will allow us to take NonStop deep down this journey into integration. Forrester Research once asked, is your digital transformation driving seamless customer journeys? And this is where I'd like to end my intro, because it is not only a journey but a customer-driven one. Customers want a better interaction, a better association with us, so as HPE takes NonStop customers on a digital transformation, as I say, it's all about better integration. The data is front and center. We know about time; we know about data.

So embracing hybrid IT, and creating an environment to perform the best analytics, is inclusive of NonStop. Let's not forget the role that NonStop plays in mission critical computing and its significance in giving us the data we need when we want to drive the business. With that, I'd like to hand over to Steve. Thanks, Steve, I'll let you take over.

Thank you, Richard. It's always hard to follow such a wonderful introduction. We'll start by talking about NonStop as a database. Obviously it can be used for other things; it can be utilized as an amazing, reliable processing environment, supporting microservices and all those kinds of new things. But it's primarily used as a database environment. So when you're thinking about integration, and in the new world of integration we're talking about real-time streaming integration, you have to ask the question: how do you continuously collect real-time data from a database?

A database is a historical record of what's happened in the past; that's true of NonStop, and it's true of other databases as well. But you can't run SQL queries against the database continually, using timestamps or something, because that puts a lot of pressure on the database, and you can't use triggers because they're typically very high overhead as well. So you need a different technology. And as Richard implied, the technology of choice is change data capture, which is a nonintrusive way of collecting streaming database changes. It works fundamentally because databases write transactions to logs: it could be the TMF audit trail on NonStop, or it could be any other type of transaction log, and databases need to do this inherently for recovery and other purposes. So instead of using timestamps or triggers, change data capture reads the transaction logs and converts the DML operations into real-time events.

So you get change events directly from the transaction logs, and those change events contain the data that changed, all the data that was inserted, updated, or deleted, along with additional metadata, which can then be enriched and utilized in a whole host of different streaming integration use cases. And this is inherently the nature of data. Data was never created in a batch. We got to batch processing because of technology limitations: storage was cheap while memory and CPU were expensive, so it was easier to store data on disk and then process it in a batch at the end of the day. But now there's no more end of day. There are no more batch windows. Data is produced continuously, and it is created because things happen: every single row in a database, every log line in a machine log, every device sending you data exists because something happened in the real world.
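To make the shape of such a change event concrete, here is a minimal Python sketch. The field names and structure are illustrative assumptions, not Striim's actual schema; the point is simply that a CDC event carries the operation type, the before/after row images, and transaction metadata pulled from the log.

```python
# Illustrative sketch only: turning a parsed transaction-log record into a
# streaming change event. Field names are hypothetical, not Striim's schema.

def to_change_event(log_record):
    """Convert a parsed transaction-log entry into a change event."""
    return {
        "operation": log_record["op"],       # INSERT / UPDATE / DELETE
        "table": log_record["table"],
        "before": log_record.get("before"),  # prior row image (UPDATE/DELETE)
        "after": log_record.get("after"),    # new row image (INSERT/UPDATE)
        "metadata": {                        # context carried by the log itself
            "txn_id": log_record["txn_id"],
            "timestamp": log_record["ts"],
        },
    }

# Example: an UPDATE captured from an audit trail
raw = {
    "op": "UPDATE", "table": "ACCOUNTS",
    "txn_id": 42, "ts": "2018-11-28T10:00:00Z",
    "before": {"id": 7, "balance": 100},
    "after": {"id": 7, "balance": 250},
}
event = to_change_event(raw)
```

Because the before and after images both travel with the event, downstream processing can enrich, filter, or audit the change without ever querying the source database.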

So to capture the real nature of data you need to utilize streaming technologies, streaming data out as it's being created. Streaming integration has emerged as a major infrastructure requirement. It is all about continuously moving any enterprise data: handling extreme volumes of data at scale, maintaining high throughput and low latency, while also being able to process, analyze, and correlate that data while it's moving, in order to make it valuable, give you visibility into it, and do all of this in a verifiable fashion. Streaming integration can be used for lots and lots of different use cases. We have customers taking data from databases and integrating it onto Kafka as a way of distributing data around the enterprise, adopting cloud technologies, or integrating into Hadoop to keep it continually up to date so they can have continuous analytics, or even doing the analytics in memory.

Having real-time analytics happening within the platform gives you instant visibility into that data and real-time alerts. And of course, given the inherent streaming nature of device data, Internet of Things edge processing, edge analytics, on-premise analytics, and cloud analytics for IoT are also crucial uses of streaming integration. And it's all about modernization, as Richard said. HPE NonStop is part of your ecosystem, an essential part of the ecosystem, but different technologies are suitable for different things. It may be more difficult to build analytics in the NonStop environment and easier to build them in, say, a Hadoop or Kafka environment. It all comes down to the questions you want to ask of the data. A data warehouse is good for certain types of analytics, but if you want to store huge amounts of data for training machine learning models, then you're probably looking at something like Hadoop, or blob storage in the cloud, or some other scalable data storage mechanism.

If you want to understand connections in data, you may want to use a graph database. If you want to do spatial queries, you probably want to use a spatial database. So integrating these newer technologies, as a way of leveraging your data and getting value out of it, is essential, and streaming integration is the way of doing that; it's the foundation for data modernization. It allows you to build the shiny aluminum-and-glass structure of all these new technologies alongside your existing castles, and HPE NonStop is a great example of a castle. The Striim platform was built for streaming integration, and where it starts is real-time data collection: continuously collecting data wherever it is created. Things like sensors and message queues can push data, so we collect that data as it's being pushed to us. With log files,

we will read at the end of the file and stream new data as it's being written, handling multiple files in parallel and log rollover, and doing that across multiple machines in parallel, as well as taking data from cloud and big data environments in real time. And then, importantly, for databases such as HPE NonStop, Oracle, MySQL, SQL Server, et cetera, we have change data capture technologies that enable us to see the changes happening in the database, all the inserts, updates, and deletes, in real time as they happen. This gives you real-time streams of data. Those streams can be moved anywhere within the enterprise or cloud and then delivered to suitable targets. You may want to deliver change data from a database into Kafka, or log files into Hadoop, or sensor data into the cloud.

We move any data from anywhere to anywhere in real time, with really low latency. We ship Kafka as part of our product, and that enables you to do this in a way that can be rewound: you can go back in time and replay utilizing Kafka, and you can also decouple applications using Kafka. But pretty often people aren't just moving raw data from one place to another; they're doing some processing on it, and this is all done through SQL-based in-memory stream processing: continuous queries running on the data as it's being collected, in data pipelines that can be arbitrarily complex, where at each step you could be doing some filtering, some transformation, some aggregation of that data. You can also enrich it: you can load large amounts of reference data into our in-memory platform as a distributed data grid and then join it in real time with the streaming data to enrich it and give it additional context.
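The filter-transform-enrich step just described can be sketched in plain Python as a generator pipeline. In Striim this would be a SQL-based continuous query joining a stream against an in-memory data grid; here the reference data, field names, and part numbers are all hypothetical, used only to show the pattern.

```python
# Illustrative sketch of one stream-processing step: filter events, then
# enrich them by joining against cached reference data. Data is hypothetical.

reference = {  # reference data held in memory (standing in for a data grid)
    "P100": {"name": "Hydraulic pump", "supplier": "Acme"},
    "P200": {"name": "Fan blade", "supplier": "Globex"},
}

def process(stream):
    """Filter out zero-quantity events and enrich the rest with part details."""
    for event in stream:
        if event["qty"] <= 0:          # filter: drop no-op records
            continue
        part = reference.get(event["part_id"], {})
        yield {                        # transform + enrich with reference data
            "part_id": event["part_id"],
            "qty": event["qty"],
            "part_name": part.get("name"),
            "supplier": part.get("supplier"),
        }

events = [{"part_id": "P100", "qty": 3}, {"part_id": "P200", "qty": 0}]
enriched = list(process(events))
```

Because the join happens in memory as events flow through, each output record already carries the context a downstream dashboard or alert would need, with no lookup against the source system.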

You can also correlate data across multiple data streams, do statistical analysis and anomaly detection, and do complex event processing, where you're looking for patterns and sequences of events over time in the data. You can build real-time dashboards using our dashboard builder, generate alerts, and integrate with other systems. A common use case these days is integrating the real-time streaming data with a prebuilt machine learning model, which allows you to make real-time predictions, do real-time anomaly detection, et cetera. And this is all done in an enterprise-grade platform that is inherently distributed, scalable, reliable, and secure. We support a lot of sources and targets. This slide is invariably out of date because we release new sources and targets with every release; as you can see, it's even missing a target we announced very recently. But you can go to our website and check out our sources and targets to find everything we support.

All of the processing in the platform is done through data flows. These data flows are basically the steps you want to perform on the data, and after any step you can write the data out into an intermediate stream. By default those streams are in-memory only: very, very fast, running at almost network speed when you're going between nodes. But you can switch any of those streams to use Kafka, so that they offer persistence and you can rewind into them, for reliability and application-decoupling purposes. As I mentioned, each one of those black boxes can be a SQL-based query doing transformation, filtering, correlation, etc. on the data, and at any point, from any one of those streams, you can write the data to any of the targets. So this is how you work with our platform, through this UI.

You can build these data flows really quickly. We also have a scripting language you can use if you prefer text editors, and you can mix and match: start with the scripts and move into the UI, or start with the UI and move into the scripts. We also have a dashboard builder that allows you to drag and drop visualizations into a dashboard and then configure them to talk to the back-end data streams, so you can very quickly get visibility into your data. We're used for a lot of different things with HPE NonStop. In addition to reading the transactional data from the audit trails, we can also read log files in real time, including web-service-type logs, and stream all of those into the platform. We can then do processing and analytics on the data and deliver it to any of the targets we mentioned.

We support change data capture for HPE NonStop, nonintrusively collecting real-time data and reading in parallel across multiple audit trails. We have exactly-once guarantees for zero data loss, with recovery and reliability built in, and we can spread these NonStop processes across the CPUs to really scale. We handle lots of different failure scenarios, such as a CDC process going down or the Striim platform going down, for truly reliable processing. We support CDC for SQL/MX and SQL/MP, and we strive to support more. You can also read in a batch fashion, so in cases where you want to do an initial load of data from NonStop into some target, you can do that, and you can also write into HPE NonStop SQL/MX through our database writer.

This is a use case where you can basically do real-time database replication: take data from any of the source databases we support and deliver it to any of the targets we support. The general process is that you start change data capture and then do an initial load from the source, delivered to the target tables. The reason for that order is that an initial load usually takes some time, so while it's happening, while you're doing that initial replication of one database to another, changes will happen on the source, unless you have the luxury of quiescing your database. So you start change data capture first, and once the initial load is done, you apply all of those captured changes to the target, and now it's up to date. If you keep that going, any changes that happen at the source will continue to be replicated to the target.

So not only do you have an up-to-date database, you also have one that's continually being updated to represent the original source. And if you add Kafka streams into this, you can have mission critical reliability and exactly-once processing. You can also do this bidirectionally: you can go from HPE NonStop to any of the other databases we support CDC for, keeping two databases in sync and having them both apply changes to each other, again with exactly-once processing guarantees.
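The ordering described above (start CDC first, buffer changes while the initial load runs, then apply the buffered changes so the target catches up) can be sketched as follows. This is a conceptual toy with hypothetical data, not Striim's replication engine; it just shows why starting capture before the load avoids losing changes made during the copy.

```python
# Conceptual sketch: CDC started before the initial load, with changes
# buffered during the load and applied afterward. Data is hypothetical.

def replicate(source_rows, changes_during_load):
    target = {}
    buffered = []

    # 1. CDC is already running: changes arriving during the load are buffered.
    buffered.extend(changes_during_load)

    # 2. Initial load: copy the source snapshot into the target.
    for row in source_rows:
        target[row["id"]] = row

    # 3. Apply the buffered changes so the target catches up to the source.
    for ch in buffered:
        if ch["op"] == "DELETE":
            target.pop(ch["id"], None)
        else:                          # INSERT or UPDATE: upsert the row image
            target[ch["id"]] = ch["row"]
    return target

snapshot = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
changes = [{"op": "UPDATE", "id": 1, "row": {"id": 1, "v": "a2"}},
           {"op": "DELETE", "id": 2}]
result = replicate(snapshot, changes)
```

If capture were started only after the load finished, the UPDATE and DELETE above would be lost and the target would silently diverge from the source; capturing first makes the changes replayable once the snapshot is in place.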

We also have use cases where people are delivering into Hadoop technologies: taking the data from HPE NonStop, again with an initial load, then with ongoing changes captured using CDC, to keep Hadoop continuously up to date. It's the same with cloud technologies. We have customers delivering into things like Microsoft Azure SQL DB, Amazon Redshift, and Google BigQuery, the cloud analytics databases, and again you can do the initial load into the cloud, followed by ongoing change delivery to keep the cloud up to date. And with all of these integration solutions, it's actually pretty trivial to build dashboards using our platform so you can continuously monitor what's going on: what's happening across different source tables, what the load is over time, what the lag is over time, so you truly understand what's happening in your integration flows.

This can be scaled across multiple tables and customized to really meet your business operations requirements. Let's talk about a couple of use cases. We have a major aerospace company that wanted much faster insight into its parts databases, in order to understand what's going on with parts and to optimize the parts delivery process, which is essential for keeping planes in the air. We enable them to deliver data into Hadoop from HPE NonStop so they can do operational reporting, understand in real time what's going on, and deliver alerts. The architecture is reasonably complicated. You have data coming from a parts repository running on HPE NonStop, which initially populated Hadoop through an initial load and is subsequently kept up to date using change data capture. There's also data coming from an Oracle database, an ERP system, which is used to enrich the streaming data with additional information about the parts before it's actually moved into Hadoop.

Then there's some additional clickstream data, coming from web service logs, that indicates parts requests, and that's being distributed to Kafka for notification purposes. So there's a path here where you're taking data from the transactional systems and moving it into a technology appropriate for the types of analytics, forecasting, and parts management they actually want to do, along with real-time notifications. The next use case is a global technology provider that wanted insight into its order processing. There was no real-time visibility into order processing; they couldn't understand the path of orders, or whether there were delayed orders, until a day or two later, because the architecture wasn't designed for real time. It was designed for batch analytics on a kind of ETL-cycle operation.

So instead of the way they had built the BI before, to get real-time analytics they built an architecture where they took data from an Oracle database, in this case delivering it into HPE NonStop. So they’re taking data from Oracle into HPE NonStop because that was the technology of choice; this was the technology chosen for doing the analytics. And so being able to move that data in real time into NonStop and into Hadoop, with some of it also going on to Kafka for notifications and alerting, enables you to get visibility into this order processing immediately.

So if you’re a NonStop user, and I imagine almost everyone on here is, the key value propositions are these: we can deliver data in real time from HPE NonStop to the rest of the enterprise and expand the impact of HPE NonStop by combining that data with other enterprise data for any purpose. We can deliver other data into HPE NonStop if it’s the most suitable technology for the types of analytics that you want to do. We can deliver the HPE NonStop data into analytics targets, whether that is Hadoop or Kafka or some other target, in the right format without having to do extensive coding, enabling you to get immediate insights into what’s happening in HPE NonStop in real time through the dashboards, including alerts. And you can build these dashboards to visualize your streaming data without needing another product. So I hope that was clear; I’ve tried to be fast here because there’s a lot of content. But if you want more information about the solutions you can go to our website, you can contact us, or you can just ask questions right now. So I’ll hand back over to Katherine, who I think probably has some questions for us.

Great. Thanks so much, Richard and Steve. As a reminder, you can submit your questions via the Q&A or chat panel on the right-hand side of your screen. While we’re waiting, I’ll mention that we will be sending a link to the recording of this webinar within the next day or so. Please feel free to share this link with your colleagues. So let’s go ahead and turn to our questions. Our first question, and I’m not exactly sure when this came in during the presentation, is: does this include non-audited Enscribe files? I’m not sure if that was an ingestion question. Steve, does that ring a bell for you, where that might’ve come in?

Yeah, I’m not quite sure exactly where that came in. As I mentioned, in addition to reading the audit trails, we can read additional files and stream from those. But I think the question is: if you have a non-audited Enscribe system, where the changes aren’t being audited, can we handle that? And honestly, I’m not 100% sure. I’d have to look into that and get back on the question. That sounds good; Ray, we’ll get back to you on that one. The next question: has CDC been used to perform stress testing of a target production platform by providing a live transaction feed to it from the source?

That’s certainly a use case you could do. We have obviously tested our platform on how well we scale with CDC, and we were able to handle 30 to 50 megabytes per second of data being generated, so we can handle quite large volumes of CDC. But that exact use case we’d have to investigate; no one’s actually done that, utilizing CDC for stress testing a target that way. That’s an interesting use case though. Okay.

The next question: does Striim run on-premises only, or not? What cloud environments does Striim run on?

I see, that’s a great question. Striim is actually a Java-based platform; it runs anywhere Java runs. So yes, you can run it on premises, on actual machines or on VMs, but we also have offerings in the AWS Marketplace and Azure Marketplace, so you can spin up instances of it there. You can also spin up a VM yourself and install Striim on that VM. So those are some options. We also have a Docker image, so with just a couple of commands you can download the Striim Docker image and run that as well. That would run anywhere Docker runs, including cloud environments. So it’s a very flexible platform with a hybrid architecture that spans enterprise and cloud.

Excellent. The next question, and I may not have the exact wording: is there a straight port of Kafka to the HPE NonStop JVM, and does Striim support both producer and consumer endpoints?

So yes, Striim supports producer and consumer endpoints. In addition, we have built Kafka into our product so it can run transparently as the backing for a stream. As I mentioned, our streams are by default in-memory, high-speed streams, but with a click in the UI you can switch that transparently to a Kafka-based stream. In addition to that, we have specific readers for Kafka, so you can read data as part of a data flow from almost any version of Kafka. We support Kafka 0.8 through to the current version, because they change the APIs continually between versions, so we can support all of them, and we can write into Kafka as well. When you’re reading and writing from Kafka, we support lots of different formats: anything from delimited data to JSON, XML, Avro, etc. So we have pretty comprehensive support for Kafka.
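
The point about emitting the same event in several formats can be illustrated with a small sketch. This is a toy, not Striim's formatter API: the event fields and the two functions are assumptions for the example, covering the delimited and JSON cases (XML and Avro would follow the same shape).

```python
# Illustrative sketch: render one change event in two of the wire
# formats mentioned above (delimited and JSON).
import json

def to_delimited(event, sep="|"):
    """Render the event's values in a stable (sorted-key) field order."""
    return sep.join(str(event[k]) for k in sorted(event))

def to_json(event):
    """Render the event as a JSON object with sorted keys."""
    return json.dumps(event, sort_keys=True)

event = {"op": "INSERT", "table": "ORDERS", "id": 42}
print(to_delimited(event))  # 42|INSERT|ORDERS
print(to_json(event))       # {"id": 42, "op": "INSERT", "table": "ORDERS"}
```

Decoupling the event content from its serialization is what lets the same stream feed Kafka consumers that expect different formats without extra coding.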

Great. It looks like we’ve answered all of the questions. On behalf of Steve Wilkes and Richard Buckle and the entire Striim team, I would like to thank you again for joining us for today’s discussion. Have a great rest of your day.