Take Kafka to the Next Level with Streaming Integration

See how easy it can be to use pre-built wizards to ingest and integrate data into Kafka from a wide variety of sources, and provide SQL-based stream processing on Kafka.

Transcript:

We're going to be talking about taking Kafka to the next level utilizing Striim. We'll start off by talking about what streaming integration is, then give a quick overview of the Striim platform. Then we'll go into our particular solutions for Kafka, including some customer examples and what our differentiating features are. Then we'll do a ten-minute demo of how you can get data out of databases, put it into Kafka, do some integration processing on it, and build visualizations, ending up with a Q&A.

So the first thing to bear in mind is that streaming data is the natural way data is supposed to be. It's a quirk of history and technology that we started dealing with data in batches. Data is never created in batches. Batch processing occurred because storage was cheap while memory and CPU were expensive. But now we're in a world where storage could potentially be overwhelmed by the amount of data we have, and memory and CPU have become a lot cheaper. Which means that streaming in general, and streaming integration in particular, has emerged as a major infrastructure requirement.

It's not enough, though, just to be able to move data around. You also need to be able to collect it, process it, analyze it, visualize it, and deliver that data, or the results of processing that data, somewhere else as it's happening, instantaneously. And you need to be able to do this by running something familiar, something people can work with like SQL, on that data in motion before it ever lands on disk; that is, to do the processing in memory. This is all part of a broader modernization. We have quite a number of customers that are modernizing. They're moving to new technologies, and they're not just doing that for the fun of it. They're doing it because there's a recognition that in order to ask the right questions of data, that data needs to be present in the correct technology.

In the old way of doing things, you would have data in a database and put it into a data warehouse for analytics. But a data warehouse might not be the most appropriate technology for building machine learning models, for example. So you may want to put that data into Hadoop, or push it into the cloud, or into Hadoop on the cloud. You may want to use other technologies as well, like graph databases for making connections, or in-memory caches and distributed data grids to speed up applications. So it's important to be able to get your data into the right place, the most suitable place. But that doesn't mean you're ripping out or replacing legacy systems. These things may not be replaceable, or they may take a long time to replace. We have customers with databases that are 30-plus years old that are still doing transaction processing.

So in order to do modern analytics on something like that, you need to be able to bridge the old and new worlds, and streaming integration is the foundation by which you can do that. It's the shiny glass-and-aluminum structure built on the side of the old castle of your legacy systems. That brings us to Striim. Striim provides streaming integration with intelligence. The company was founded in 2012 by the four of us, who came out of GoldenGate Software; we've been working in the enterprise middleware and enterprise data markets for a long time. We're backed by some really great investors and partners, including Intel Capital, Atlantic Bridge, and Dell EMC, and our customers range across most industries: financial services, telco, logistics, healthcare, retail, and IoT. So what is streaming integration with intelligence? It's all about continuously (that word is very important) moving any enterprise data; ensuring that you can handle extreme volumes of data at scale with very high throughput; doing processing, analytics, and correlation of that data in flight, in memory, while it's moving, before it ever hits disk, in order to extract the maximum value; making that data visible so you can interact with it; and doing all of this in a verifiable fashion.

Companies are using streaming integration for lots of different use cases. We have customers that are taking data in databases and pushing it out onto Kafka. There are others that are doing analytics or processing on data already in Kafka. Other customers are using us for cloud adoption: being able to go from an on-premise database into the cloud continuously, in real time, to keep that cloud database instance, whether it's Azure SQL Database, Amazon Redshift, Google BigQuery, or any of the RDS databases, continuously up to date, so it's always in sync with the on-premise database. Similarly with Hadoop: we work with all of the major Hadoop distributions (Cloudera, Hortonworks, MapR), and we enable you to collect data in real time, prepare it and get it into the right format, and then continuously deliver it into Hadoop.

And then from there you may be building machine learning models or doing deeper analysis on long-term stores of data. But of course we also have customers that are doing real-time analytics. These customers are using streaming integration not only to collect, move, and process data in real time, but also to visualize it, to interact with it, to spot anomalies, to make predictions, to send alerts, and to present all of this in dashboards. And then we have customers on the Internet of Things side wanting to do edge processing on high-scale data. That edge processing also holds true for things like security data, the large amounts of security logs being generated within the enterprise. So there is processing and analytics on the edge, and analytics in the cloud, for IoT. Streaming integration can be utilized for a large number of different use cases.

We'll talk about some of those in more detail, and we'll also talk about the Striim platform and how it provides all the aspects of streaming integration: how streaming integration with intelligence is defined, what the pieces are, and how Striim provides those pieces. The first piece of streaming integration is being able to continuously collect data, being able to get the data wherever it's generated within the enterprise and turn it into streams of data. Now, things like sensors and message queues, like Kafka, push data to you, so that's streaming already and you're getting real-time data. But with things like log files, you can't treat those as a batch. You have to continuously read at the end of the file, you have to handle file rollovers, and you have to be able to deal with the parallel reading of multiple files across multiple servers.

As soon as new data is written to any of those files, you're streaming that data into a data stream. Similarly with databases: databases are thought of as a historical record of what happened in the past. In order to get real-time data out of a database, you need to use a technology called change data capture, which listens to the database transaction log, sees the inserts, updates, and deletes as they're happening in the database, and streams those out in real time. Once you get past this continuous data collection, you have real-time data streams. Those real-time data streams can be used to move data anywhere within the enterprise or cloud and then deliver it continuously to a variety of different targets. And it's mix and match, like a big bag of multicolored Skittles: you can go from databases to Kafka, you can go from Kafka to the cloud, you can go from log files into Hadoop.

What you can do here is very varied, but the simple case is one source and one target: you're delivering whatever is created, in real time, continuously from one side to the other. Pretty often that is utilizing Kafka in some way to distribute data around an enterprise, and that can be the Kafka we have built into the product. We actually have two ways of moving data around in the product. We have our own high-speed message bus that is in-memory only, so it's very, very fast. And then we also have Kafka, and Kafka is a persistent message bus. Typically you don't want to use Kafka between every piece of stream processing, because that would slow things down by doing unnecessary I/O. So most of our customers use Kafka close to the sources to create a persistent, rewindable data stream that can be used for multiple applications.

Most of the rest of the processing will be in-memory only, using our own streams, and we'll see in a little bit how easily you can switch between the two within the platform. But typically, just moving data around isn't enough. Most of our customers are doing some degree of in-memory stream processing. This is through a SQL-based language. It's a big extension of SQL, very rich; it includes time-series capabilities, Java integration, and other extensions, but at the core it's SQL. So it's very easy to understand and easy to work with, and it allows you to do things like filtering, aggregation, enrichment, and transformation of data as it's moving through the platform. And this isn't just a single query applied to a source; it can be an entire complex pipeline of processing that you're doing before you deliver, potentially, to multiple targets.
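To give a flavor of that SQL, here is a minimal sketch of a continuous query that filters and lightly transforms events on a stream. The stream, type, and field names are hypothetical, and the exact Striim dialect may differ by release.

```sql
-- Minimal sketch of an in-memory continuous query (CQ); names are illustrative.
CREATE CQ FilterLargeOrdersCQ
INSERT INTO LargeOrderStream
SELECT o.orderId,
       o.customerId,
       o.amount,
       o.amount * 0.1 AS estimatedFee   -- simple in-flight transformation
FROM   OrderStream o
WHERE  o.amount > 1000;                 -- filter as data moves through the platform
```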

On top of this, we can do more advanced analytics functionality that brings intelligence. These are things like correlating multiple data streams in real time, and more advanced versions of that through complex event processing, where you're looking for sequences of events over time, as well as statistical analysis, anomaly detection, and that kind of thing. This can all be visualized: we have a built-in dashboard builder that allows you to build visualizations, and you can also generate and send alerts within the enterprise. The platform can integrate with other things as well; we have seen integrations with machine learning and Neo4j, for example. And it's all done in an enterprise-grade way, on an enterprise-grade platform that is inherently distributed, scalable, reliable, and secure.

The Striim platform integrates with a lot of different things. These are the sources and targets within the platform, at least as of when we created this slide. You can see it's a pretty varied bunch of things that you can read from on the continuous ingestion side and write to on the continuous data delivery side, also supporting a lot of different formats. So there's out-of-the-box support for things like Apache access logs, event logs, or system logs; you just configure that you're using that type of log, and we already have the means of parsing that data. And similarly on delivery, there are lots of varied ways you can deliver, supporting things like delimited data, JSON, Avro, XML, etc., to targets across the enterprise and cloud. You can go to our website for a full, up-to-date list of the sources and targets; they get added to during almost every release of the product.

This is what data flows look like within the platform. You read data flows from the top, and you can have multiple sources in a single data flow. At each step of the way you can be doing some processing, and that processing is through the SQL-based language. As you can see, the SQL is very easy for anyone to understand, whether they are developers, business analysts, or data scientists. And of course you can build extensions to the SQL, and that's pretty often where developers who write Java code come in: they can add business-specific functionality, written in Java for example, into our platform and call it directly from our SQL.

Similarly, dashboards can be built very easily: drag and drop visualizations into the dashboard, configure them with a query, and now that visualization is showing real-time data. So let's talk about Striim for Kafka. Kafka by itself is not a destination or a source; it's a road, right? It's for moving data around. In order to make use of it, you need to be able to push data into it, take data out of it, do some kind of processing on that data, visualize and surface it in some way, and do all of this in an enterprise-grade fashion. The way of doing that with Apache Kafka alone would be through the various APIs. There are lots of different APIs that allow you to produce data, consume data, process that data, and configure it.

But that involves a lot of developers, a lot of coding, and a lot of knowledge, and you end up building a platform around Kafka as opposed to building the applications, because you have to put your own framework in place. It doesn't have to be this hard. What we have done is surround Kafka with the Striim platform: being able to do the ingestion and preprocessing of data, so collecting from all those sources and delivering it into Kafka; being able to take data out of Kafka, do further processing, preparation, and formatting, and then deliver it where it needs to go; being able to do analytics on top of Kafka, things like correlation, complex event processing, anomaly detection, and integration with machine learning; and visualizing all of that through dashboards. And crucially, this can be done through a drag-and-drop UI.

We also have a scripting language for those people who like working with vi and text, but you can build these applications very quickly using our UI, allowing developers to focus on business value as opposed to reinventing the wheel and building all of the platform components that are already part of our platform. Kafka is built into the Striim platform: you can start a Kafka broker, or a set of brokers, during the installation process. And then very easily you can switch any of our streams from being a memory-only high-speed stream to being a Kafka-backed stream, just with the click of a switch in the UI or a keyword in the scripting language.
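For a rough idea of what the scripting-language side of that looks like, the sketch below declares an in-memory stream and a Kafka-persisted variant of it. The type name, stream names, the PERSIST keyword usage, and the 'KafkaProps' property set are illustrative assumptions and may not match current Striim syntax exactly.

```sql
-- Hypothetical sketch: an in-memory stream versus a Kafka-persisted stream.
CREATE TYPE OrderType (
    orderId    String,
    customerId String,
    amount     Double
);

-- Memory-only, high-speed stream.
CREATE STREAM OrderStream OF OrderType;

-- The same kind of stream persisted to the built-in Kafka for rewindability;
-- 'KafkaProps' stands in for a property set naming brokers and topic.
CREATE STREAM PersistedOrderStream OF OrderType
PERSIST USING KafkaProps;
```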

So we run on top of Kafka; we allow you to take data from any source and put it into Kafka. Kafka is both a source and a target within our platform, and it's also built into our platform. Here are some customer examples. We have a leading credit card network that is using Kafka as a real-time security hub, where all of the activity around security analytics is taking place. They're using Striim to ingest logs from a huge number of different security devices and correlate those in memory, joining them on things like a session ID or IP address in order to join records together. For example, you might want to say: I want to join all of the activity from each IP address across all the different logs within one second, and push that out onto Kafka.
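A minimal sketch of that kind of one-second, per-IP correlation might look like the following. The window and stream definitions, field names, and exact window syntax are illustrative assumptions rather than the customer's actual application.

```sql
-- Hypothetical sketch: correlate one second of activity per IP address
-- across a combined security-log stream, then feed the result toward Kafka.
CREATE JUMPING WINDOW LogActivityWindow
OVER SecurityLogStream
KEEP WITHIN 1 SECOND
PARTITION BY srcIp;

CREATE CQ CorrelateByIpCQ
INSERT INTO CorrelatedSecurityStream
SELECT srcIp,
       COUNT(*)        AS eventCount,   -- how much activity this IP generated
       MIN(eventTime)  AS windowStart,
       MAX(eventTime)  AS windowEnd
FROM   LogActivityWindow
GROUP BY srcIp;
```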

So now you have these correlated, joined records that are much richer than the original raw data, and you can, for example, spot IP addresses that are doing bad stuff really quickly. But they're also taking that data off Kafka, doing more processing and more analytics on it, then pushing the results back onto Kafka and to multiple other targets, and doing in-memory analytics and visualizations on that data. They're running at around 20 terabytes a day of security data right now. Another example is a media technology company. They are doing change data capture from Oracle and SQL Server databases, doing processing on that, and pushing it onto Kafka; they're also reading some data from Kafka, doing additional processing, and pushing that back onto Kafka, as well as doing encryption on the data as it's moving. And this is all to enable the line-of-business people to be continually up to date

on what's happening with customers. It's all about customer service, making sure that you have continually up-to-date data in the cloud serving the line of business. And a final example: this is a top online retailer. They already had website data on Kafka, and they're using us for real-time analytics and dashboards. The great thing about this was that the engineers responsible just downloaded our product because they wanted to show the value of this real-time data they had on Kafka. They built some dashboards really, really quickly as justification to their management that they needed budget for the project. That really increased the productivity of the teams and gave them insights into what was happening in real time.

The key things around Kafka are that you can expand the reach of Kafka, and expand the type of data you have on Kafka, by ingesting data from any relevant enterprise source and pushing it onto Kafka. You can enrich and pre-process that data without having to do extensive coding, get insights into the data on Kafka through analytics immediately, generate alerts, visualize the data on Kafka very easily, and deliver the data, and the insights you got from it, from Kafka to any enterprise or cloud target. So, key features: we have this SQL-based development language that makes it much easier to work with the platform. That frees up developers to do real, business-specific value-add as opposed to reinventing the wheel on platform tasks, and it means the people who know the data can work with the data. You can do in-flight enrichment of the data using a built-in cache.

We also have time-series analysis and time windows, and we do guarantee exactly-once processing of data, even with relatively complex data flows that may involve time windows, complex event processing, and other things that make this harder. That means every source event will be processed once and only once; this is part of the enterprise-grade promise. It is an inherently distributed, scalable architecture that automatically self-heals and automatically parallelizes its use of Kafka. We have the change data capture capability to get real-time data out of databases through the inserts, updates, and deletes as they are happening in real time. You can do in-memory joins and correlation of data, not just on Kafka but from any of the other sources that we have. And the built-in analytics and visualizations let you take Kafka to the next level.

Just as an example of how we continually enhance how we work with Kafka: in the last release of the product, we added the ability to scale delivery when writing into Kafka even further. You could already scale by adding additional nodes; now you can also scale by using multiple threads within each single Striim node to write in parallel across multiple Kafka partitions. And similarly on reading from Kafka: as you scale the Striim cluster, it will automatically map out the partitions and scale reading from Kafka. We also added our own monitoring of Kafka metrics, which helps you pinpoint issues and bottlenecks in Kafka itself, as opposed to just within our product. And obviously we are continually supporting new versions of Kafka as they're being deployed at customer sites. This is what some of the Kafka monitoring looks like, and we have more detailed drill-down on this as well.

You'll see how many acknowledgements there were, the average time between calls, and how long it has taken to do various things with Kafka. This helps engineers and support quickly target bottlenecks and see: is this a problem in the way you configured the application? Is it a problem in the way you configured Kafka? Can we help you with that? Can we help you get better performance? So now we'll go into a quick demo where we'll walk through building things within the Striim platform, and we're going to start with doing change data capture onto Apache Kafka.

The way this works is you start by choosing a template, and the template is going to define a source and a target. We're going to go from MySQL into Kafka, but we also support Microsoft SQL Server, Oracle, HPE NonStop, and MariaDB for change data capture. That creates an application, and then we configure how to connect to the database. You enter the properties for how to connect to the database, and we'll actually look at the database and see: is it ready for change data capture? If it's not, we show you how to fix it. Once you've done that, you then select the tables that you're interested in doing change data capture on, and that will build data streams that include the changes for all those tables. The final step is configuring Kafka. You choose what topic you want to write to, where Kafka is, and also what format; we're going to do this in JSON, so we configure the writer to write JSON onto Kafka.
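For a rough sense of what the wizard generates under the hood, here is a minimal sketch in Striim-style scripting, assuming hypothetical connection details. The adapter and property names (MysqlReader, KafkaWriter, brokerAddress, and so on) are illustrative and may differ by environment and release.

```sql
-- Hypothetical sketch of a MySQL CDC -> Kafka data flow.
CREATE APPLICATION MySQLToKafka;

CREATE SOURCE OrdersCDC USING MysqlReader (
    ConnectionURL: 'mysql://dbhost:3306/sales',  -- illustrative
    Username:      'striim',
    Password:      '********',
    Tables:        'sales.ORDERS'
)
OUTPUT TO OrdersChangeStream;

CREATE TARGET OrdersToKafka USING KafkaWriter (
    brokerAddress: 'kafkahost:9092',             -- illustrative
    Topic:         'orders_cdc'
)
FORMAT USING JSONFormatter ()
INPUT FROM OrdersChangeStream;

END APPLICATION MySQLToKafka;
```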

Once we save that, it's going to create a data flow, and the data flow in this case is very simple: we are sourcing from change data capture and we are writing into Kafka. We can test this out by deploying the application, which turns this definition into real runtime objects, and starting it, which will start things in the correct order so you don't lose any data, and then taking a look at the data stream. If we take a look at this data stream, you'll see the raw change data that we're getting from the database. Because these are updates, you can see the actual data, and you can also see the before image and a lot of metadata. But pretty often you don't just want the raw data pushed out onto Kafka, you want to do some processing on it. So the first thing we're going to do is extract some of the fields.

We can do that by adding in a query. We'll add a continuous in-memory SQL query that is going to do some processing; initially it's just going to extract the major fields from the data, plus one from the before image as well. You can see the result is very different from what we had before: it's basically just the fields we chose, without all the rest of the metadata and everything else. We can modify the Kafka writer to now read from that stream instead of directly from the change data capture. Now we have a linear flow where we're doing change data capture, we're processing it, and we're writing to Kafka. We may also want to enrich it, so we're going to add an in-memory cache, an in-memory data grid, that's going to contain some reference data.
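Before moving on to the cache, here is a minimal sketch of the kind of field-extraction query just described. The data[] and before[] accessors and the field positions are assumptions about the CDC event structure, not exact syntax.

```sql
-- Hypothetical sketch: pull selected fields out of raw CDC events.
-- Field positions and the data[]/before[] accessors are illustrative.
CREATE CQ ExtractOrderFieldsCQ
INSERT INTO OrderFieldsStream
SELECT o.data[0]   AS orderId,
       o.data[1]   AS productId,
       o.data[2]   AS amount,
       o.before[2] AS previousAmount   -- one field from the before image
FROM   OrdersChangeStream o;
```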

In this case we're going to load product info reference data from a database, by specifying a data type that matches the database table we're looking at, keyed by product ID, and then the database configuration: where you're getting it from, MySQL in this case, and what the query is. We'll save that, and then we have a definition of this in-memory cache. We can utilize this cache to do enrichment. We modify the SQL here to join with the cache in real time, and you can see the data now includes the description, brand, and category from that in-memory cache. And that's how easy it was; everything is SQL, which makes processing and enriching the data very, very easy. Then what we're going to do is take the data that's been written to Kafka and do the reverse: we're going to use a Kafka reader.
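Staying with the enrichment step for a moment, a sketch of the cache definition and the join described above might look like this. The DatabaseReader adapter, its properties, and the keytomap setting are illustrative assumptions.

```sql
-- Hypothetical sketch: an in-memory cache of product reference data,
-- then a CQ that joins the change stream with it.
CREATE TYPE ProductType (
    productId   String,
    description String,
    brand       String,
    category    String
);

CREATE CACHE ProductCache USING DatabaseReader (
    ConnectionURL: 'mysql://dbhost:3306/sales',   -- illustrative
    Username:      'striim',
    Password:      '********',
    Query:         'SELECT productId, description, brand, category FROM PRODUCT'
)
QUERY (keytomap: 'productId')
OF ProductType;

CREATE CQ EnrichOrdersCQ
INSERT INTO EnrichedOrderStream
SELECT o.orderId, o.productId, o.amount,
       p.description, p.brand, p.category
FROM   OrderFieldsStream o
JOIN   ProductCache p
ON     o.productId = p.productId;
```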

You might already have data on Kafka, so you can use a Kafka reader, potentially do some processing on that data, and then write it out to any of the targets that we support. So we'll add in the Kafka reader, configure it with the same configuration we used for the Kafka writer, and since we know the data is in JSON format we use the JSON parser and output it into a data stream. Now if we start this up, what you'll see is the JSON version of the data that we created in the previous step, including the enrichment from the cache. Now we can write a query that's going to pull data out of the JSON, and this is where some of the extensions of the platform come in: you don't just work with tuples within the platform.
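A rough sketch of that Kafka-reader step is shown below; the KafkaReader and JSONParser names, their properties, and the JSON field accessors are illustrative assumptions.

```sql
-- Hypothetical sketch: read the enriched JSON records back off Kafka.
CREATE SOURCE OrdersFromKafka USING KafkaReader (
    brokerAddress: 'kafkahost:9092',   -- illustrative
    Topic:         'orders_cdc'
)
PARSE USING JSONParser ()
OUTPUT TO KafkaJSONStream;

-- Pull individual JSON fields out into a typed stream; the data.get()
-- accessor is illustrative of how nested JSON might be addressed.
CREATE CQ ParseOrderJSONCQ
INSERT INTO ParsedOrderStream
SELECT data.get('orderId')    AS orderId,
       data.get('productId')  AS productId,
       data.get('amount')     AS amount,
       data.get('locationId') AS locationId,
       data.get('brand')      AS brand
FROM   KafkaJSONStream;
```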

You can work with nested, structured data like JSON. You can see here we're getting JSON values and turning those into Striim elements. Now if we run this again, you can see we've pulled out all the bits of the JSON into individual fields within the data stream. So that's the first part of the processing. Now we're going to take that data and write it out to a file. We'll connect in the target and write it in delimited form; the default for delimited is the CSV format, which can be read by, say, spreadsheets. We configure where we're going to write the data and what the format is, and then we can save that. When we start up the application, that data is going to be continuously written to a set of rolling files, rolled based on the file rolling policy.
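That file target could be sketched roughly as follows; the FileWriter and DSVFormatter names and the rollover property are illustrative assumptions.

```sql
-- Hypothetical sketch: continuously write the parsed stream to rolling CSV files.
CREATE TARGET OrdersToFile USING FileWriter (
    filename:       'orders.csv',
    rolloverpolicy: 'EventCount:10000'   -- illustrative rolling policy
)
FORMAT USING DSVFormatter ()             -- default delimited output is CSV-style
INPUT FROM ParsedOrderStream;
```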

We can open that up in Excel, and we can see the same data we were just looking at in the preview, right there in the Excel document. Of course, that's just one example, writing into a file. You could also use the same mechanisms to write to things like Hadoop or the cloud, so we've set up a connection to do that. We're going to first put in a query that limits the data we're writing; we're just filtering here, filtering out everything apart from location 10. Now we're going to use a Hadoop target, so we'll choose the Hadoop target, drag that into the data flow, and then configure the connectivity. You can see it's all the usual things you'll be used to if you're writing into HDFS, and we're going to write in Avro format.
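Below is a rough sketch of that filter plus Hadoop target; the HDFSWriter and AvroFormatter adapter names, the HDFS URL, and the schema-file property are illustrative assumptions.

```sql
-- Hypothetical sketch: keep only location 10, then deliver to HDFS as Avro.
CREATE CQ FilterLocation10CQ
INSERT INTO Location10Stream
SELECT *
FROM   ParsedOrderStream o
WHERE  o.locationId = 10;

CREATE TARGET OrdersToHDFS USING HDFSWriter (
    hadoopurl: 'hdfs://namenode:8020/striim/orders/',  -- illustrative
    filename:  'orders'
)
FORMAT USING AvroFormatter (
    schemaFileName: 'orders_schema.avsc'                -- generated schema file
)
INPUT FROM Location10Stream;
```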

You can specify the name of a schema file that you want us to generate, which tells someone else the structure of this data, and save that out. If that were started, you would be writing data to Hadoop as well. But we're also going to show how you'd write data to, say, Azure Blob Storage. The Azure target is taking data straight from the preprocessed data stream. We get the configuration for the Azure Blob Storage from the cloud, enter that configuration information, and say we want this to be written in JSON. Now when you deploy this and start it running, you're writing into Hadoop and into a file in the cloud. So now we're going to start looking at some more analysis. In this case, what we've done is put in an application that's reading web logs in real time.

It's doing some processing on those web logs, again through the SQL language, and writing that web log data to Kafka. Imagine this running on web logs across multiple servers, turning them into a real-time, continuous web stream. If we start this up, you can see what the data looks like: you can see the IP address, what the request is, and then some things we pulled out of the URL to make processing easier later on. Now what we're going to do is use that web log data, and this is quite a complex application. It's taking the data from Kafka, extracting all the key values, and then putting it through a whole bunch of analytics: it's doing time-series analysis, it's grouping the data together, it's slicing and dicing the data over multiple variables, and it's doing all of that in real time.
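As one illustrative example of that kind of time-series processing, the sketch below counts requests per URL over a one-minute window. The window syntax, stream names, and fields are assumptions, not the actual demo application.

```sql
-- Hypothetical sketch: rolling per-URL request statistics over the last minute.
CREATE WINDOW WebLogLastMinute
OVER WebLogStream
KEEP WITHIN 1 MINUTE
PARTITION BY url;

CREATE CQ RequestsPerUrlCQ
INSERT INTO UrlStatsStream
SELECT w.url,
       COUNT(*)     AS requestCount,
       AVG(w.bytes) AS avgResponseBytes
FROM   WebLogLastMinute w
GROUP BY w.url;
```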

You can utilize that processing, that type of analytics, for alerts and for writing data out, but also for visualizations. This is an example of a dashboard in the platform visualizing the data that was on Kafka. You can drag and drop visualizations into the dashboard, configure them with queries, and then you have a full, rich, real-time streaming dashboard built on that back end. And these are some additional visualizations and analytics applications we've built, looking at things like security in factories, monitoring transaction data in real time, making predictions based on current values and a machine learning model, and monitoring traffic through an airport in real time.

So that's a very quick demo of some of the things you can do and how easy it is to work with Kafka. Why would you want to use Striim for Kafka? Because we were designed for real-time data, and for business-critical real-time data, as a distributed, clustered, in-memory platform that has high availability, scalability, exactly-once processing guarantees, and built-in security, designed out of the box for working with Kafka data. It can deliver very immediate ROI: you can get up and running very quickly, you can start providing value almost straight away, and you can build multiple use cases, applications, and data flows within a single Striim cluster. It enables you to get operational value from the data while it is still relevant, because data can lose value very quickly, and to let the people who know the data, data scientists and business analysts, work directly with it, allowing developers to focus on adding business value with specific algorithms and code.

And of course it is easily extensible; it integrates with a lot of things, machine learning and any other technology choices you may have made, and we're continually increasing the sources and targets that we support. This helps you speed your time to market and speed up building things that provide value to your business, taking real-time data that is on Kafka, or putting real-time data onto Kafka, and helping your business work much more quickly. It is designed to be an enterprise-grade solution that you can trust with mission-critical applications, and it reduces the total cost of ownership of applications: not only do you not need a lengthy development or evaluation effort to put lots of bits and pieces together on top of something like Kafka in order to get value, you also only need one vendor to support it.

So it is very easy to iterate and build out new applications. Striim is recognized not only as a cool vendor and an innovative vendor, but we're actually a really great place to work as well, and we're very proud of the awards that recognize that. So the things to take away from this presentation are that you can use Striim to continuously ingest data into Kafka from any enterprise source; you can use Striim to pre-process and prepare data for different users and different targets; you can analyze and visualize data running through Kafka; and you can deliver data from Kafka to enterprise and cloud targets and keep those continuously up to date, all while using an enterprise-grade platform that is an end-to-end solution and enables your developers to focus on adding business value.

So to get started, you can go to our website, read more, and watch the videos we have on getting the most out of the platform and about different use cases as well. And feel free to interact with us. So, let's see, it's 11:36. Katherine, do you think we have time for some questions? Yes, we are at the bottom of the hour, so for those who can stay on, we'll go ahead and answer a few questions. I'll go ahead and jump right in. The first question is: how does Striim handle highly sensitive data? Does it encrypt data before moving it? So of course, as with everything, you have options. But yes, Striim can encrypt data across data streams. Those could be our own internal in-memory data streams or Kafka data streams. In fact, we had security on Kafka before Kafka had security.

If you remember, there was no security in version 0.8 of Kafka. We can protect any data stream within our platform through a role-based security policy that allows you to work with the results of processing a data stream but not actually get to, or see, the data on a previous stream. You can turn on encryption in the UI or through the scripting language with a simple flag, and that works both on the in-memory-only streams and on Kafka. So it's very easy to lock things down. We've also recently added some data masking capabilities that enable you to mask sensitive data as it's flowing through the platform in real time as well.

Great. Our next question: does it run on premise only? If not, what cloud environments does Striim run on? Striim runs anywhere you have a JVM. It can run on on-premise servers, it can run on cloud servers, and you can run hybrid across the two. It can run on VMs, it can run on physical machines, it can run on laptops; in fact, a lot of people test it on laptops. As far as supporting cloud is concerned, we are in both the AWS and Azure marketplaces, so you can spin up instances there. You can also ask us for images that you can spin up in those clouds. So you can give us a try in the cloud or on premise.

Thank you. How is change data capture from a database handled? How do we configure the parameters for which changed data has to be captured? I know we stepped through the wizard pretty quickly, but as you are configuring it, you can choose which tables, and which columns for which tables. That is going to work with the database in the right way, using the correct APIs, etc., to turn on and access change data capture for those particular tables you've selected. So the first step is saying: what data do I want to collect? Now, if you look at replication solutions, they will basically just let you capture changes from one database and deliver those changes into another database, but they are not streaming integration solutions.

Striim is a full streaming integration solution, so we don't just do the change data capture and the change delivery. You can also do a lot of processing: filtering, transformation, enrichment, correlation, all of that in memory, with those data flows I showed you, which you can build through the drag-and-drop UI or through the scripting language. You do all of that processing in memory before you deliver things out. So it's a much more flexible solution that allows you to get exactly the data that you want, wherever you want it, in your enterprise or in the cloud. If you're working with change data capture, we have customers using it for delivering onto Kafka, for cloud migrations, for data replication, and for data consolidation, very many different use cases.

Great, thanks Steve. I'm going to read the next one; I think I understand what the question is about. Does this capability require that the Striim project be enabled by Kafka? Hmm, I'm not quite sure what that's about; we might take that one offline. Yeah, we'll think about that one.

How does Striim support exactly-once processing when there are time windows? The way Striim deals with recovery in general, ensuring that if you have a system outage you don't lose anything, is through a consistent checkpointing mechanism across the entire cluster. This enables you to recover state by rewinding into the sources and rebuilding that state. That can be quite a complex task, because not only do you need to know the earliest point you have to go back to in order to rebuild the whole state, you also need to know: have I already emitted results? You don't want to emit more than one result, because that's not exactly-once, that's at-least-once. You need to know whether you have already generated results for this data as you're rebuilding state, and know that for every output. So it's quite a complex thing to actually achieve, but we have achieved it.

Great, thanks so much, Steve. In the interest of time, if we did not get to your specific question, Steve will follow up with you directly via email within the next few hours. On behalf of Steve Wilkes and the Striim team, I'd like to thank you again for joining us today. Have a great rest of your day.